query_id             stringlengths    32        32
query                stringlengths    6         5.38k
positive_passages    listlengths      1         17
negative_passages    listlengths      9         100
subset               stringclasses    7 values
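For reference, a minimal sketch of iterating rows that follow this schema. It assumes the rows are stored one JSON object per line (JSON Lines); the file name retrieval_rows.jsonl is a placeholder, while the per-passage keys "docid", "text", and "title" and the subset label "scidocsrr" are taken from the rows shown below.

import json

# Assumed storage: one JSON object per line (JSON Lines); the path is a placeholder.
ROWS_PATH = "retrieval_rows.jsonl"

with open(ROWS_PATH, encoding="utf-8") as fh:
    for line in fh:
        row = json.loads(line)
        query_id = row["query_id"]              # 32-character identifier
        query = row["query"]                    # query text, 6 to ~5.38k characters
        positives = row["positive_passages"]    # 1-17 dicts with "docid", "text", "title"
        negatives = row["negative_passages"]    # 9-100 dicts with "docid", "text", "title"
        subset = row["subset"]                  # one of 7 subset labels, e.g. "scidocsrr"
        print(query_id, subset, len(positives), len(negatives))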
f45a0ec2b2e046a73bf0926755c048cc
Automatic recognition of fingerspelled words in British Sign Language
[ { "docid": "93afb696fa395a7f7c2a4f3fc2ac690d", "text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.", "title": "" }, { "docid": "4b4a3eb0e24f48bab61d348f61b31f32", "text": "In recent years, gesture recognition has received much attention from research communities. Computer vision-based gesture recognition has many potential applications in the area of human-computer interaction as well as sign language recognition. Sign languages use a combination of hand shapes, motion and locations as well as facial expressions. Finger-spelling is a manual representation of alphabet letters, which is often used where there is no sign word to correspond to a spoken word. In Australia, a sign language called Auslan is used by the deaf community and and the finger-spelling letters use two handed motion, unlike the well known finger-spelling of American Sign Language (ASL) that uses static shapes. This thesis presents the Auslan Finger-spelling Recognizer (AFR) that is a real-time system capable of recognizing signs that consists of Auslan manual alphabet letters from video sequences. The AFR system has two components: the first is the feature extraction process that extracts a combination of spatial and motion features from the images. Which classifies a sequence of features using Hidden Markov Models (HMMs). Tests using a vocabulary of twenty signed words showed the system could achieve 97% accuracy at the letter level and 88% at the word level using a finite state grammar network and embedded training.", "title": "" }, { "docid": "188d9e1b0244aa7f68610dab9d852ab9", "text": "We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user’s unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.", "title": "" } ]
[ { "docid": "d4793c300bca8137d0da7ffdde75a72b", "text": "The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.", "title": "" }, { "docid": "64d755d95353a66ec967c7f74aaf2232", "text": "Purpose: Platinum-based drugs, in particular cisplatin (cis-diamminedichloridoplatinum(II), CDDP), are used for treatment of squamous cell carcinoma of the head and neck (SCCHN). Despite initial responses, CDDP treatment often results in chemoresistance, leading to therapeutic failure. The role of primary resistance at subclonal level and treatment-induced clonal selection in the development of CDDP resistance remains unknown.Experimental Design: By applying targeted next-generation sequencing, fluorescence in situ hybridization, microarray-based transcriptome, and mass spectrometry-based phosphoproteome analysis to the CDDP-sensitive SCCHN cell line FaDu, a CDDP-resistant subline, and single-cell derived subclones, the molecular basis of CDDP resistance was elucidated. The causal relationship between molecular features and resistant phenotypes was determined by siRNA-based gene silencing. The clinical relevance of molecular findings was validated in patients with SCCHN with recurrence after CDDP-based chemoradiation and the TCGA SCCHN dataset.Results: Evidence of primary resistance at clonal level and clonal selection by long-term CDDP treatment was established in the FaDu model. Resistance was associated with aneuploidy of chromosome 17, increased TP53 copy-numbers and overexpression of the gain-of-function (GOF) mutant variant p53R248L siRNA-mediated knockdown established a causal relationship between mutant p53R248L and CDDP resistance. Resistant clones were also characterized by increased activity of the PI3K-AKT-mTOR pathway. The poor prognostic value of GOF TP53 variants and mTOR pathway upregulation was confirmed in the TCGA SCCHN cohort.Conclusions: Our study demonstrates a link of intratumoral heterogeneity and clonal evolution as important mechanisms of drug resistance in SCCHN and establishes mutant GOF TP53 variants and the PI3K/mTOR pathway as molecular targets for treatment optimization. Clin Cancer Res; 24(1); 158-68. 
©2017 AACR.", "title": "" }, { "docid": "88d554d6ce6cc9dbcf80a4a4039b2bdf", "text": "In display advertising, click through rate (CTR) prediction is the problem of estimating the probability that an advertisement (ad) is clicked when displayed to a user in a specific context. Due to its easy implementation and promising performance, logistic regression (LR) model has been widely used for CTR prediction, especially in industrial systems. However, it is not easy for LR to capture the nonlinear information, such as the conjunction information, from user features and ad features. In this paper, we propose a novel model, called coupled group lasso (CGL), for CTR prediction in display advertising. CGL can seamlessly integrate the conjunction information from user features and ad features for modeling. Furthermore, CGL can automatically eliminate useless features for both users and ads, which may facilitate fast online prediction. Scalability of CGL is ensured through feature hashing and distributed implementation. Experimental results on real-world data sets show that our CGL model can achieve state-of-the-art performance on webscale CTR prediction tasks. Proceedings of the 31 st International Conference on Machine Learning, Beijing, China, 2014. JMLR: W&CP volume 32. Copyright 2014 by the author(s).", "title": "" }, { "docid": "081da5941b0431d00b4058c26987d43f", "text": "Artificial bee colony algorithm simulating the intelligent foraging behavior of honey bee swarms is one of the most popular swarm based optimization algorithms. It has been introduced in 2005 and applied in several fields to solve different problems up to date. In this paper, an artificial bee colony algorithm, called as Artificial Bee Colony Programming (ABCP), is described for the first time as a new method on symbolic regression which is a very important practical problem. Symbolic regression is a process of obtaining a mathematical model using given finite sampling of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems are solved using artificial bee colony programming and then its performance is compared with the very well-known method evolving computer programs, genetic programming. The simulation results indicate that the proposed method is very feasible and robust on the considered test problems of symbolic regression. 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "2bb0b89491015f124e4b244954508234", "text": "In recent years, deep neural networks have achieved significant success in Chinese word segmentation and many other natural language processing tasks. Most of these algorithms are end-to-end trainable systems and can effectively process and learn from large scale labeled datasets. However, these methods typically lack the capability of processing rare words and data whose domains are different from training data. Previous statistical methods have demonstrated that human knowledge can provide valuable information for handling rare cases and domain shifting problems. In this paper, we seek to address the problem of incorporating dictionaries into neural networks for the Chinese word segmentation task. Two different methods that extend the bi-directional long short-term memory neural network are proposed to perform the task. 
To evaluate the performance of the proposed methods, state-of-the-art supervised models based methods and domain adaptation approaches are compared with our methods on nine datasets from different domains. The experimental results demonstrate that the proposed methods can achieve better performance than other state-of-the-art neural network methods and domain adaptation approaches in most cases.", "title": "" }, { "docid": "66c49b0dbdbdf29ace0f60839b867e43", "text": "The job shop scheduling problem with the makespan criterion is a certain NP-hard case from OR theory having excellent practical applications. This problem, having been examined for years, is also regarded as an indicator of the quality of advanced scheduling algorithms. In this paper we provide a new approximate algorithm that is based on the big valley phenomenon, and uses some elements of so-called path relinking technique as well as new theoretical properties of neighbourhoods. The proposed algorithm owns, unprecedented up to now, accuracy, obtainable in a quick time on a PC, which has been confirmed after wide computer tests.", "title": "" }, { "docid": "7170110b2520fb37e282d08ed8774d0f", "text": "OBJECTIVE\nTo examine the performance of the 11-13 weeks scan in detecting non-chromosomal abnormalities.\n\n\nMETHODS\nProspective first-trimester screening study for aneuploidies, including basic examination of the fetal anatomy, in 45 191 pregnancies. Findings were compared to those at 20-23 weeks and postnatal examination.\n\n\nRESULTS\nAneuploidies (n = 332) were excluded from the analysis. Fetal abnormalities were observed in 488 (1.1%) of the remaining 44 859 cases; 213 (43.6%) of these were detected at 11-13 weeks. The early scan detected all cases of acrania, alobar holoprosencephaly, exomphalos, gastroschisis, megacystis and body stalk anomaly, 77% of absent hand or foot, 50% of diaphragmatic hernia, 50% of lethal skeletal dysplasias, 60% of polydactyly, 34% of major cardiac defects, 5% of facial clefts and 14% of open spina bifida, but none of agenesis of the corpus callosum, cerebellar or vermian hypoplasia, echogenic lung lesions, bowel obstruction, most renal defects or talipes. Nuchal translucency (NT) was above the 95th percentile in 34% of fetuses with major cardiac defects.\n\n\nCONCLUSION\nAt 11-13 weeks some abnormalities are always detectable, some can never be and others are potentially detectable depending on their association with increased NT, the phenotypic expression of the abnormality with gestation and the objectives set for such a scan.", "title": "" }, { "docid": "a7cdfc27dbc704140ef5b3199469898f", "text": "This technical report updates the 2004 American Academy of Pediatrics technical report on the legalization of marijuana. Current epidemiology of marijuana use is presented, as are definitions and biology of marijuana compounds, side effects of marijuana use, and effects of use on adolescent brain development. Issues concerning medical marijuana specifically are also addressed. Concerning legalization of marijuana, 4 different approaches in the United States are discussed: legalization of marijuana solely for medical purposes, decriminalization of recreational use of marijuana, legalization of recreational use of marijuana, and criminal prosecution of recreational (and medical) use of marijuana. These approaches are compared, and the latest available data are presented to aid in forming public policy. 
The effects on youth of criminal penalties for marijuana use and possession are also addressed, as are the effects or potential effects of the other 3 policy approaches on adolescent marijuana use. Recommendations are included in the accompanying policy statement.", "title": "" }, { "docid": "c2a59be58131149dcddfec02214423b8", "text": "Complex structures manufactured using low-pressure vacuum bag-only (VBO) prepreg processing are more susceptible to defects than flat laminates due to complex compaction conditions present at sharp corners. Consequently, effective defect mitigation strategies are required to produce structural parts. In this study, we investigated the relationships between laminate properties, processing conditions`, mold designs and part quality in order to develop science-based guidelines for the manufacture of complex parts. Generic laminates consisting of a central corner and two flanges were fabricated in a multi-part study that considered variation in corner angle and local curvature radius, the applied pressure during layup and cure, and the prepreg material and laminate thickness. The manufactured parts were analyzed in terms of microstructural fiber bed and resin distribution, thickness variation, and void content. The results indicated that defects observed in corner laminates were influenced by both mold design and processing conditions, and that optimal combinations of these factors can mitigate defects and improve quality.", "title": "" }, { "docid": "74b13257e74b79cc74774f54ffbe1ed2", "text": "Earlier studies have suggested that higher education institutions could harness the predictive power of Learning Management System (LMS) data to develop reporting tools that identify at-risk students and allow for more timely pedagogical interventions. This paper confirms and extends this proposition by providing data from an international research project investigating which student online activities accurately predict academic achievement. Analysis of LMS tracking data from a Blackboard Vista-supported course identified 15 variables demonstrating a significant simple correlation with student final grade. Regression modelling generated a best-fit predictive model for this course which incorporates key variables such as total number of discussion messages posted, total number of mail messages sent, and total number of assessments completed and which explains more than 30% of the variation in student final grade. Logistic modelling demonstrated the predictive power of this model, which correctly identified 81% of students who achieved a failing grade. Moreover, network analysis of course discussion forums afforded insight into the development of the student learning community by identifying disconnected students, patterns of student-to-student communication, and instructor positioning within the network. This study affirms that pedagogically meaningful information can be extracted from LMS-generated student tracking data, and discusses how these findings are informing the development of a customizable dashboardlike reporting tool for educators that will extract and visualize real-time data on student engagement and likelihood of success. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ca768eb654b323354b7d78969162cb81", "text": "Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. 
Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which-by applying a vacuum to enclosed grains-causes the grains to transition between solid-like states and liquid-like ones. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. This novel design for a manipulator-and use of jamming for robotic applications in general-could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.", "title": "" }, { "docid": "8ea5ed93c3c162c99fe329d243906712", "text": "This paper describes the design, simulation and measurement of a dual-band slotted waveguide antenna array for adaptive 5G networks, operating in the millimeter wave frequency range. Its structure is composed by two groups of slots milled onto the opposite faces of a rectangular waveguide, enabling antenna operation over two different frequency bands, namely 28 and 38 GHz. Measured and numerical results, obtained using ANSYS HFSS, demonstrate two bandwidths of approximately 26.36% and 9.78% for 28 GHz and 38 GHz, respectively. The antenna gain varies from 12.6 dBi for the lower frequency band to 15.6dBi for the higher one.", "title": "" }, { "docid": "34855c90155970485094829edb6bc3cb", "text": "We present an approach for navigating in unknown environments while, simultaneously, gathering information for inspecting underwater structures using an autonomous underwater vehicle (AUV). To accomplish this, we first use our pipeline for mapping and planning collision-free paths online, which endows an AUV with the capability to autonomously acquire optical data in close proximity. With that information, we then propose a reconstruction pipeline to create a photo-realistic textured 3D model of the inspected area. These 3D models are also of particular interest to other fields of study in marine sciences, since they can serve as base maps for environmental monitoring, thus allowing change detection of biological communities and their environment over time. Finally, we evaluate our approach using the Sparus II, a torpedo-shaped AUV, conducting inspection missions in a challenging, real-world and natural scenario.", "title": "" }, { "docid": "e2f7084f75a7b77602c113501dcd384d", "text": "The fuzzy vault is a promising cryptosystem for the protection of fingerprint templates. However it requires the alignment of the query template to the enrolled one. Most existing implementations either rely on the extraction of the core which is not a guaranteed task, or use publicly available helper data which may leak information about the protected minutiae. We propose a new alignment approach for the fingerprint fuzzy vault based on distortion invariant minutiae features. 
Experimental results show improvements in genuine and false accept rates over previous implementations.", "title": "" }, { "docid": "1351b9d778da2821362a1b4caa35e7e4", "text": "Though designing a data warehouse requires techniques completely different from those adopted for operational systems, no significant effort has been made so far to develop a complete and consistent design methodology for data warehouses. In this paper we outline a general methodological framework for data warehouse design, based on our Dimensional Fact Model (DFM). After analyzing the existing information system and collecting the user requirements, conceptual design is carried out semi-automatically starting from the operational database scheme. A workload is then characterized in terms of data volumes and expected queries, to be used as the input of the logical and physical design phases whose output is the final scheme for the data warehouse.", "title": "" }, { "docid": "e28c2662f3948d346a00298976d9b37c", "text": "Analysts engaged in real-time monitoring of cybersecurity incidents must quickly and accurately respond to alerts generated by intrusion detection systems. We investigated two complementary approaches to improving analyst performance on this vigilance task: a graph-based visualization of correlated IDS output and defensible recommendations based on machine learning from historical analyst behavior. We tested our approach with 18 professional cybersecurity analysts using a prototype environment in which we compared the visualization with a conventional tabular display, and the defensible recommendations with limited or no recommendations. Quantitative results showed improved analyst accuracy with the visual display and the defensible recommendations. Additional qualitative data from a \"talk aloud\" protocol illustrated the role of displays and recommendations in analysts' decision-making process. Implications for the design of future online analysis environments are discussed.", "title": "" }, { "docid": "3ce6c3b6a23e713bf9af419ce2d7ded3", "text": "Two measures of financial performance that are being applied increasingly in investor-owned and not-for-profit healthcare organizations are market value added (MVA) and economic value added (EVA). Unlike traditional profitability measures, both MVA and EVA measures take into account the cost of equity capital. MVA is most appropriate for investor-owned healthcare organizations and EVA is the best measure for not-for-profit organizations. As healthcare financial managers become more familiar with MVA and EVA and understand their potential, these two measures may become more widely accepted accounting tools for assessing the financial performance of investor-owned and not-for-profit healthcare organizations.", "title": "" }, { "docid": "d8be338cbe411c79905f108fbbe55814", "text": "Head-up displays (HUD) permit augmented reality (AR) information in cars. Simulation is a convenient way to design and evaluate the benefit of such innovation for the driver. For this purpose, we have developed a virtual HUD that we compare to real AR HUDs from depth perception features. User testing was conducted with 24 participants in a stereoscopic driving simulator. It showed the ability of the virtual HUD to reproduce the perception of the distance between real objects and their augmentation. 
Three AR overlay designs to highlight the car ahead were compared: the trapezoid shape was perceived as more congruent that the U shape overlay.", "title": "" }, { "docid": "12d05bc19380bce526194dd5ff4629ed", "text": "Deep learning architectures have proved versatile in a number of drug discovery applications, including the modeling of in vitro compound activity. While controlling for prediction confidence is essential to increase the trust, interpretability, and usefulness of virtual screening models in drug discovery, techniques to estimate the reliability of the predictions generated with deep learning networks remain largely underexplored. Here, we present Deep Confidence, a framework to compute valid and efficient confidence intervals for individual predictions using the deep learning technique Snapshot Ensembling and conformal prediction. Specifically, Deep Confidence generates an ensemble of deep neural networks by recording the network parameters throughout the local minima visited during the optimization phase of a single neural network. This approach serves to derive a set of base learners (i.e., snapshots) with comparable predictive power on average that will however generate slightly different predictions for a given instance. The variability across base learners and the validation residuals are in turn harnessed to compute confidence intervals using the conformal prediction framework. Using a set of 24 diverse IC50 data sets from ChEMBL 23, we show that Snapshot Ensembles perform on par with Random Forest (RF) and ensembles of independently trained deep neural networks. In addition, we find that the confidence regions predicted using the Deep Confidence framework span a narrower set of values. Overall, Deep Confidence represents a highly versatile error prediction framework that can be applied to any deep learning-based application at no extra computational cost.", "title": "" }, { "docid": "bae6a214381859ac955f1651c7df0c0f", "text": "The fastcluster package is a C++ library for hierarchical, agglomerative clustering. It provides a fast implementation of the most efficient, current algorithms when the input is a dissimilarity index. Moreover, it features memory-saving routines for hierarchical clustering of vector data. It improves both asymptotic time complexity (in most cases) and practical performance (in all cases) compared to the existing implementations in standard software: several R packages, MATLAB, Mathematica, Python with SciPy. The fastcluster package presently has interfaces to R and Python. Part of the functionality is designed as a drop-in replacement for the methods hclust and flashClust in R and scipy.cluster.hierarchy.linkage in Python, so that existing programs can be effortlessly adapted for improved performance.", "title": "" } ]
scidocsrr
6e0836aaa11dce049d8fd95b22da4a2b
Novelty or Surprise?
[ { "docid": "2392f00a979a484b969763d2360007ab", "text": "Computational learning models are critical for understanding mechanisms of adaptive behavior. However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability and involve complicated integrals, making online learning difficult. Here, we introduce a generic hierarchical Bayesian framework for individual learning under multiple forms of uncertainty (e.g., environmental volatility and perceptual uncertainty). The model assumes Gaussian random walks of states at all but the first level, with the step size determined by the next highest level. The coupling between levels is controlled by parameters that shape the influence of uncertainty on learning in a subject-specific fashion. Using variational Bayes under a mean-field approximation and a novel approximation to the posterior energy function, we derive trial-by-trial update equations which (i) are analytical and extremely efficient, enabling real-time learning, (ii) have a natural interpretation in terms of RL, and (iii) contain parameters representing processes which play a key role in current theories of learning, e.g., precision-weighting of prediction error. These parameters allow for the expression of individual differences in learning and may relate to specific neuromodulatory mechanisms in the brain. Our model is very general: it can deal with both discrete and continuous states and equally accounts for deterministic and probabilistic relations between environmental events and perceptual states (i.e., situations with and without perceptual uncertainty). These properties are illustrated by simulations and analyses of empirical time series. Overall, our framework provides a novel foundation for understanding normal and pathological learning that contextualizes RL within a generic Bayesian scheme and thus connects it to principles of optimality from probability theory.", "title": "" } ]
[ { "docid": "912305c77922b8708c291ccc63dae2cd", "text": "Customer satisfaction and loyalty is a well known and established concept in several areas like marketing, consumer research, economic psychology, welfare-economics, and economics. And has long been a topic of high interest in both academia and practice. The aim of the study was to investigate whether customer satisfaction is an indicator of customer loyalty. The findings of the study supported the contention that strong relationship exist between customer satisfaction and loyalty. However, customer satisfaction alone cannot achieve the objective of creating a loyal customer base. Some researchers also argued, that customer satisfaction and loyalty are not directly correlated, particularly in competitive business environments because there is a big difference between satisfaction, which is a passive customer condition, and loyalty, which is an active or proactive relationship with the organization.", "title": "" }, { "docid": "af09c54a7ac34e59888aee3231469958", "text": "Due to the sheer volume of opinion rich web resources such as discussion forum, review sites , blogs and news corpora available in digital form, much of the current research is focusing on the area of sentiment analysis. People are intended to develop a system that can identify and classify opinion or sentiment as represented in an electronic text. An accurate method for predicting sentiments could enable us, to extract opinions from the internet and predict online customer’s preferences, which could prove valuable for economic or marketing research. Till now, there are few different problems predominating in this research community, namely, sentiment classification, feature based classification and handling negations. This paper presents a survey covering the techniques and methods in sentiment analysis and challenges appear in the field.", "title": "" }, { "docid": "bb999acceac5f0bc1f21879529746546", "text": "How do real graphs evolve over time? What are normal growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.\n Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n)).\n Existing graph generation models do not exhibit these types of behavior even at a qualitative level. We provide a new graph generator, based on a forest fire spreading process that has a simple, intuitive justification, requires very few parameters (like the flammability of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.\n We also notice that the forest fire model exhibits a sharp transition between sparse graphs and graphs that are densifying. 
Graphs with decreasing distance between the nodes are generated around this transition point.\n Last, we analyze the connection between the temporal evolution of the degree distribution and densification of a graph. We find that the two are fundamentally related. We also observe that real networks exhibit this type of relation between densification and the degree distribution.", "title": "" }, { "docid": "5855428c40fd0e25e0d05554d2fc8864", "text": "When the landmark patient Phineas Gage died in 1861, no autopsy was performed, but his skull was later recovered. The brain lesion that caused the profound personality changes for which his case became famous has been presumed to have involved the left frontal region, but questions have been raised about the involvement of other regions and about the exact placement of the lesion within the vast frontal territory. Measurements from Gage's skull and modern neuroimaging techniques were used to reconstitute the accident and determine the probable location of the lesion. The damage involved both left and right prefrontal cortices in a pattern that, as confirmed by Gage's modern counterparts, causes a defect in rational decision making and the processing of emotion.", "title": "" }, { "docid": "d65ccb1890bdc597c19d11abad6ae7af", "text": "The traditional view of agent modelling is to infer the explicit parameters of another agent’s strategy (i.e., their probability of taking each action in each situation). Unfortunately, in complex domains with high dimensional strategy spaces, modelling every parameter often requires a prohibitive number of observations. Furthermore, given a model of such a strategy, computing a response strategy that is robust to modelling error may be impractical to compute online. Instead, we propose an implicit modelling framework where agents aim to estimate the utility of a fixed portfolio of pre-computed strategies. Using the domain of heads-up limit Texas hold’em poker, this work describes an end-to-end approach for building an implicit modelling agent. We compute robust response strategies, show how to select strategies for the portfolio, and apply existing variance reduction and online learning techniques to dynamically adapt the agent’s strategy to its opponent. We validate the approach by showing that our implicit modelling agent would have won the heads-up limit opponent exploitation event in the 2011 Annual Computer Poker Competition.", "title": "" }, { "docid": "34bd41f7384d6ee4d882a39aec167b3e", "text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.", "title": "" }, { "docid": "813dc09107865fdd23b8d20c6293686c", "text": "Special Issue Despite increasing organizational interest and investment in virtual worlds (VWs), there is a lack of research on the benefits of VWs. When and how does the use of VW systems engender better organizational outcomes than traditional collaborative technologies? 
This paper investigates the value of VWs for team collaboration. Team collaboration is particularly relevant in studying VWs given the rich interactive nature of VWs and an increasing organizational reliance on virtual teamwork. To understand the value of VW use for team collaboration, we examine the relationship between a team's disposition toward IT, their general disposition (personality) and VW use in influencing team cohesion and performance. We conducted a field study that compares two collaborative technology systems – one that is based on a traditional desktop metaphor and one that is grounded in the principles of a virtual world. We tracked the use of the systems for one year. We analyzed data at the team level and the results generally support our model, with agreeableness, conscientiousness, extraversion, openness, and computer self-efficacy interacting with time and technology type to positively influence team technology use. We also found that the use of the virtual world system positively influenced the relationship between technology use and team cohesion, which, in turn, predicts team performance. The model explains 57 percent, 21 percent, and 24 percent of the variance in team technology use, team cohesion, and team performance, respectively.", "title": "" }, { "docid": "af4106bc4051e01146101aeb58a4261f", "text": "In recent years a great amount of research has focused on algorithms that learn features from unlabeled data. In this work we propose a model based on the Self-Organizing Map (SOM) neural network to learn features useful for the problem of automatic natural images classification. In particular we use the SOM model to learn single-layer features from the extremely challenging CIFAR-10 dataset, containing 60.000 tiny labeled natural images, and subsequently use these features with a pyramidal histogram encoding to train a linear SVM classifier. Despite the large number of images, the proposed feature learning method requires only few minutes on an entry-level system, however we show that a supervised classifier trained with learned features provides significantly better results than using raw pixels values or other handcrafted features designed specifically for image classification. Moreover, exploiting the topological property of the SOM neural network, it is possible to reduce the number of features and speed up the supervised training process combining topologically close neurons, without repeating the feature learning process.", "title": "" }, { "docid": "1879cae7f67fe249f2c25b2eebb13065", "text": "SystemC is widely used for modeling and simulation in hardware/software co-design. Due to the lack of a complete formal semantics, it is not possible to verify SystemC designs. In this paper, we present an approach to overcome this problem by defining the semantics of SystemC by a mapping from SystemC designs into the well-defined semantics of Uppaal timed automata. The informally defined behavior and the structure of SystemC designs are completely preserved in the generated Uppaal models. The resulting Uppaal models allow us to use the Uppaal model checker and the Uppaal tool suite, including simulation and visualization tools. The model checker can be used to verify important properties such as liveness, deadlock freedom or compliance with timing constraints. 
We have implemented the presented transformation, applied it to two examples and verified liveness, safety and timing properties by model checking, thus showing the applicability of our approach in practice.", "title": "" }, { "docid": "f4d040ba9ee379111c572ea96807eeb5", "text": "In this paper, a systematic design technique for quadruple-ridged flared horn antennas is presented, to enhance the radiation properties through the profiling of the ridge taper. The technique relies on control of the cutoff frequencies of specific modes inside the horn, instead of brute-force optimization. This is used to design a prototype antenna as a feed for an offset Gregorian reflector system, such as considered for the Square Kilometer Array (SKA) radio telescope, to achieve an optimized aperture efficiency from 2 to 12 GHz. The antenna is employed with a quadraxial feeding network that allows the excitation of the fundamental TE11 mode, while suppressing all other modes that causes phase errors in the aperture. Measured results confirm the validity of this approach, where good agreement is found with the simulated results.", "title": "" }, { "docid": "535ebbee465f6a009a2a85c47115a51b", "text": "Online social networks (OSNs) are increasingly threatened by social bots which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior while mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations. We also propose the corresponding countermeasures and evaluate their effectiveness. Our results can help understand the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems.", "title": "" }, { "docid": "7f84e215df3d908249bde3be7f2b3cab", "text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. 
In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.", "title": "" }, { "docid": "34fdd06eb5e5d2bf9266c6852710bed2", "text": "If subjects are shown an angry face as a target visual stimulus for less than forty milliseconds and are then immediately shown an expressionless mask, these subjects report seeing the mask but not the target. However, an aversively conditioned masked target can elicit an emotional response from subjects without being consciously perceived,. Here we study the mechanism of this unconsciously mediated emotional learning. We measured neural activity in volunteer subjects who were presented with two angry faces, one of which, through previous classical conditioning, was associated with a burst of white noise. In half of the trials, the subjects' awareness of the angry faces was prevented by backward masking with a neutral face. A significant neural response was elicited in the right, but not left, amygdala to masked presentations of the conditioned angry face. Unmasked presentations of the same face produced enhanced neural activity in the left, but not right, amygdala. Our results indicate that, first, the human amygdala can discriminate between stimuli solely on the basis of their acquired behavioural significance, and second, this response is lateralized according to the subjects' level of awareness of the stimuli.", "title": "" }, { "docid": "f32187a3253c9327c26f83826e0b03b8", "text": "Spatiotemporal forecasting has significant implications in sustainability, transportation and health-care domain. Traffic forecasting is one canonical example of such learning task. This task is challenging due to (1) non-linear temporal dynamics with changing road conditions, (2) complex spatial dependencies on road networks topology and (3) inherent difficulty of long-term time series forecasting. To address these challenges, we propose Graph Convolutional Recurrent Neural Network to incorporate both spatial and temporal dependency in traffic flow. We further integrate the encoder-decoder framework and scheduled sampling to improve long-term forecasting. When evaluated on real-world road network traffic data, our approach can accurately capture spatiotemporal correlations and consistently outperforms state-of-the-art baselines by 12% 15%.", "title": "" }, { "docid": "fd4bddf9a5ff3c3b8577c46249bec915", "text": "In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory in effect creating a neural network pushdown automata (NNPDA). 
This paper discusses in detail this NNPDA its construction, how it can be trained and how useful symbolic information can be extracted from the trained network. In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automata (PDA). Simulations show that in learning deterministic context-free grammars the balanced parenthesis language, 1 n0n, and the deterministic Palindrome the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.", "title": "" }, { "docid": "daa6bef4038654f73a6489c03b131740", "text": "Interpreters have been used in many contexts. They provide portability and ease of development at the expense of performance. The literature of the past decade covers analysis of why interpreters are slow, and many software techniques to improve them. A large proportion of these works focuses on the dispatch loop, and in particular on the implementation of the switch statement: typically an indirect branch instruction. Folklore attributes a significant penalty to this branch, due to its high misprediction rate. We revisit this assumption, considering state-of-the-art branch predictors and the three most recent Intel processor generations on current interpreters. Using both hardware counters on Haswell, the latest Intel processor generation, and simulation of the ITTAGE, we show that the accuracy of indirect branch prediction is no longer critical for interpreters. We further compare the characteristics of these interpreters and analyze why the indirect branch is less important than before.", "title": "" }, { "docid": "1de30db68b41c0e29320397ca464bb75", "text": "In software development, bug reports provide crucial information to developers. However, these reports widely differ in their quality. We conducted a survey among developers and users of APACHE, ECLIPSE, and MOZILLA to find out what makes a good bug report.\n The analysis of the 466 responses revealed an information mismatch between what developers need and what users supply. Most developers consider steps to reproduce, stack traces, and test cases as helpful, which are at the same time most difficult to provide for users. Such insight is helpful to design new bug tracking tools that guide users at collecting and providing more helpful information.\n Our CUEZILLA prototype is such a tool and measures the quality of new bug reports; it also recommends which elements should be added to improve the quality. We trained CUEZILLA on a sample of 289 bug reports, rated by developers as part of the survey. 
In our experiments, CUEZILLA was able to predict the quality of 31--48% of bug reports accurately.", "title": "" }, { "docid": "acf77011955c0920d76b523e9a145227", "text": "Current methods for combining two different images produce visible artifacts when the sources have very different textures and structures. We present a new method for synthesizing a transition region between two source images, such that inconsistent color, texture, and structural properties all change gradually from one source to the other. We call this process image melding. Our method builds upon a patch-based optimization foundation with three key generalizations: First, we enrich the patch search space with additional geometric and photometric transformations. Second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver. And third, we propose a new energy based on mixed L2/L0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, all three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization. In several cases, our unified method outperforms previous state-of-the-art methods specifically designed for those applications.", "title": "" }, { "docid": "4f5b26ab2d8bd68953d473727f6f5589", "text": "OBJECTIVE\nThe study assessed the impact of mindfulness training on occupational safety of hospital health care workers.\n\n\nMETHODS\nThe study used a randomized waitlist-controlled trial design to test the effect of an 8-week mindfulness-based stress reduction (MBSR) course on self-reported health care worker safety outcomes, measured at baseline, postintervention, and 6 months later.\n\n\nRESULTS\nTwenty-three hospital health care workers participated in the study (11 in immediate intervention group; 12 in waitlist control group). The MBSR training decreased workplace cognitive failures (F [1, 20] = 7.44, P = 0.013, (Equation is included in full-text article.)) and increased safety compliance behaviors (F [1, 20] = 7.79, P = 0.011, (Equation is included in full-text article.)) among hospital health care workers. Effects were stable 6 months following the training. The MBSR intervention did not significantly affect participants' promotion of safety in the workplace (F [1, 20] = 0.40, P = 0.54, (Equation is included in full-text article.)).\n\n\nCONCLUSIONS\nMindfulness training may potentially decrease occupational injuries of health care workers.", "title": "" }, { "docid": "8ce97c23c5714b2032cfd8098a59a8b4", "text": "In psychodynamic theory, trauma is associated with a life event, which is defined by its intensity, by the inability of the person to respond adequately and by its pathologic longlasting effects on the psychic organization. In this paper, we describe how neurobiological changes link to psychodynamic theory. Initially, Freud believed that all types of neurosis were the result of former traumatic experiences, mainly in the form of sexual trauma. According to the first Freudian theory (1890–1897), hysteric patients suffer mainly from relevant memories. 
In his later theory of ‘differed action’, i.e., the retroactive attribution of sexual or traumatic meaning to earlier events, Freud links the consequences of sexual trauma in childhood with the onset of pathology in adulthood (Boschan, 2008). The transmission of trauma from parents to children may take place from one generation to the other. The trauma that is being experienced by the child has an interpersonal character and is being reinforced by the parents’ own traumatic experience. The subject’s interpersonal exposure through the relationship with the direct victims has been recognized as a risk factor for the development of a post-traumatic stress disorder. Trauma may be transmitted from the mother to the foetus during the intrauterine life (Opendak & Sullivan, 2016). Empirical studies also demonstrate that in the first year of life infants that had witnessed violence against their mothers presented symptoms of a posttraumatic disorder. Traumatic symptomatology in infants includes eating difficulties, sleep disorders, high arousal level and excessive crying, affect disorders and relational problems with adults and peers. Infants that are directly dependant to the caregiver are more vulnerable and at a greater risk to suffer interpersonal trauma and its neurobiological consequences (Opendak & Sullivan, 2016). In older children symptoms were more related to the severity of violence they had been exposed to than to the mother’s actual emotional state, which shows that the relationship between mother’s and child’s trauma is different in each age stage. The type of attachment and the quality of the mother-child interactional relationship contribute also to the transmission of the trauma. According to Fonagy (2003), the mother who is experiencing trauma is no longer a source of security and becomes a source of danger. Thus, the mentalization ability may be destroyed by an attachment figure, which caused to the child enough stress related to its own thoughts and emotions to an extent, that the child avoids thoughts about the other’s subjective experience. At a neurobiological level, many studies have shown that the effects of environmental stress on the brain are being mediated through molecular and cellular mechanisms. More specifically, trauma causes changes at a chemical and anatomical level resulting in transforming the subject’s response to future stress. The imprinting mechanisms of traumatic experiences are directly related to the activation of the neurobiological circuits associated with emotion, in which amygdala play a central role. The traumatic experiences are strongly encoded in memory and difficult to be erased. Early stress may result in impaired cognitive function related to disrupted functioning of certain areas of the hippocampus in the short or long term. Infants or young children that have suffered a traumatic experience may are unable to recollect events in a conscious way. However, they may maintain latent memory of the reactions to the experience and the intensity of the emotion. The neurobiological data support the ‘deferred action’ of the psychodynamic theory according which when the impact of early interpersonal trauma is so pervasive, the effects can transcend into later stages, even after the trauma has stopped. The two approaches, psychodynamic and neurobiological, are not opposite, but complementary. 
Psychodynamic psychotherapists and neurobiologists, based on extended theoretical bases, combine data and enrich the understanding of psychiatric disorders in childhood. The study of interpersonal trauma offers a good example of how different approaches, biological and psychodynamic, may come closer and possibly be unified into a single model, which could result in more effective therapeutic approaches.", "title": "" } ]
scidocsrr
161013313e8a24ed5d849132918e933c
An effective approach to document retrieval via utilizing WordNet and recognizing phrases
[ { "docid": "28b2bbcfb8960ff40f2fe456a5b00729", "text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation", "title": "" } ]
[ { "docid": "2d953dda47c80304f8b2fa0d6e08c2f8", "text": "A facial recognition system is an application which is used for identifying or verifying a person from a digital image or a video frame. One of the ways to do this is by comparing selected facial features from the image and a facial database. It is generally used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. Areas such as network security, content indexing and retrieval, and video compression benefit from face recognition technology since people themselves are the main source of interest. Network access control via face recognition not only makes hackers virtually impossible to steal one's \"password\", but also increases the user friendliness in human-computer interaction. Although humans have always had the innate ability to recognize and distinguish between faces, yet computers only recently have shown the same ability. In the mid 1960s, scientists began work on using the computer to recognize human faces. Since then, facial recognition software has come a long way. In this article, I have explored the reasons behind using facial recognition, the products developed to implement this biometrics technique and also the criticisms and advantages that are bounded with it.", "title": "" }, { "docid": "955ac5745d8cd0e0a8aea812b3d65dd8", "text": "Over the past few decades, various computational methods have become increasingly important for discovering and developing novel drugs. Computational prediction of chemical reactions is a key part of an efficient drug discovery process. In this review, we discuss important parts of this field, with a focus on utilizing reaction data to build predictive models, the existing programs for synthesis prediction, and usage of quantum mechanics and molecular mechanics (QM/MM) to explore chemical reactions. We also outline potential future developments with an emphasis on pre-competitive collaboration opportunities.", "title": "" }, { "docid": "36342d65aaa9dff0339f8c1c8cb23f30", "text": "Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred against parametric ones, due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they allow to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. 
All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.", "title": "" }, { "docid": "c0f9beca3504e15c74cd160287db78c7", "text": "The impact of restructuring in the field of communication sector has brought an evolutionary change in power sector too. This revolutionary idea has brought about competition in this sector with an aim of reduction in the electricity price. The competitive environment not only benefits the utilities and customers however it kindles some of the technical issues, typical one being the transmission congestion. It is considered to be tenacious since it admonish system security andmay result in inflation of electricity prices effecting in feeblemarket condition.The explication to the dispute of congestion has been furnished in this paper. To minimize the congestion cost, an effective multi objective approach is proposed to endorse generator rescheduling and FACTS technology using a metaheurisitc optimization algorithm, symbiotic organic search algorithm. The choice of most sensitive generators to reschedule real and reactive power is realized using real power transmission congestion distribution factor. The proposed method has been tested on IEEE 14 bus system and IEEE 30 bus system.", "title": "" }, { "docid": "b8b02f98f21b81ad5e25a73f5f95598f", "text": "Datalog is a family of ontology languages that combine good computational properties with high expressive power. Datalog languages are provably able to capture many relevant Semantic Web languages. In this paper we consider the class of weakly-sticky (WS) Datalog programs, which allow for certain useful forms of joins in rule bodies as well as extending the well-known class of weakly-acyclic TGDs. So far, only nondeterministic algorithms were known for answering queries on WS Datalog programs. We present novel deterministic query answering algorithms under WS Datalog. In particular, we propose: (1) a bottom-up grounding algorithm based on a query-driven chase, and (2) a hybrid approach based on transforming a WS program into a so-called sticky one, for which query rewriting techniques are known. We discuss how our algorithms can be optimized and effectively applied for query answering in real-world scenarios.", "title": "" }, { "docid": "ada8c64a2e5c7be58a2200e8d1f64063", "text": "Nitrogen-containing bioactive alkaloids of plant origin play a significant role in human health and medicine. Several semisynthetic antimitotic alkaloids are successful in anticancer drug development. Gloriosa superba biosynthesizes substantial quantities of colchicine, a bioactive molecule for gout treatment. Colchicine also has antimitotic activity, preventing growth of cancer cells by interacting with microtubules, which could lead to the design of better cancer therapeutics. Further, several colchicine semisynthetics are less toxic than colchicine. Research is being conducted on effective, less toxic colchicine semisynthetic formulations with potential drug delivery strategies directly targeting multiple solid cancers. This article reviews the dynamic state of anticancer drug development from colchicine semisynthetics and natural colchicine production and briefly discusses colchicine biosynthesis.", "title": "" }, { "docid": "4fc9eba84ec8ace75ff4fab2df66b3c5", "text": "Feature representation and object category classification are two key components of most object detection methods. 
While significant improvements have been achieved for deep feature representation learning, traditional SVM/softmax classifiers remain the dominant methods for the final object category classification. However, SVM/softmax classifiers lack the capacity of explicitly exploiting the complex structure of deep features, as they are purely discriminative methods. The recently proposed discriminative dictionary pair learning (DPL) model involves a fidelity term to minimize the reconstruction loss and a discrimination term to enhance the discriminative capability of the learned dictionary pair, and thus is appropriate for balancing the representation and discrimination to boost object detection performance. In this paper, we propose a novel object detection system by unifying DPL with the convolutional feature learning. Specifically, we incorporate DPL as a Dictionary Pair Classifier Layer (DPCL) into the deep architecture, and develop an end-to-end learning algorithm for optimizing the dictionary pairs and the neural networks simultaneously. Moreover, we design a multi-task loss for guiding our model to accomplish the three correlated tasks: objectness estimation, categoryness computation, and bounding box regression. From the extensive experiments on PASCAL VOC 2007/2012 benchmarks, our approach demonstrates the effectiveness to substantially improve the performances over the popular existing object detection frameworks (e.g., R-CNN [13] and FRCN [12]), and achieves new state-of-the-arts.", "title": "" }, { "docid": "e4ebb6d41393f0bd672f1f5985af98b4", "text": "We propose a new framework to rank image attractiveness using a novel pairwise deep network trained with a large set of side-by-side multi-labeled image pairs from a web image index. The judges only provide relative ranking between two images without the need to directly assign an absolute score, or rate any predefined image attribute, thus making the rating more intuitive and accurate. We investigate a deep attractiveness rank net (DARN), a combination of deep convolutional neural network and rank net, to directly learn an attractiveness score mean and variance for each image and the underlying criteria the judges use to label each pair. The extension of this model (DARN-V2) is able to adapt to individual judge's personal preference. We also show the attractiveness of search results are significantly improved by using this attractiveness information in a real commercial search engine. We evaluate our model against other state-of-the-art models on our side-by-side web test data and another public aesthetic data set. With much less judgments (1M vs 50M), our model outperforms on side-by-side labeled data, and is comparable on data labeled by absolute score.", "title": "" }, { "docid": "8d5a356f4f4e36f4dc796158566d69d5", "text": "This study aims to design a distance warning system to help drive safely and avoid collisions. The system design is made in two stages, i.e. detecting vehicles and estimating the detected vehicles distance. The method used to detect vehicles is Haar method and HOG method. Whereas, the method used to estimate distance is Width-based method. The result showed that Haar method was better in detecting vehicles with the average of TPR by 75%. 
Based on the result of distance estimation, the value MSE (Mean Squared Error) obtained was 0.333 for van type vehicle.", "title": "" }, { "docid": "70622607a75305882251c073536aa282", "text": "a r t i c l e i n f o", "title": "" }, { "docid": "4b249892383502155243585934f1af3e", "text": "Modeling a student's knowledge state while she is solving exercises is a crucial stepping stone towards providing better personalized learning experiences at scale. This task, also referred to as \"knowledge tracing\", has been explored extensively on exercises where student submissions fall into a finite discrete solution space, e.g. a multiple-choice answer. However, we believe that rich information about a student's learning is captured within their responses to open-ended problems with unbounded solution spaces, such as programming exercises. In addition, sequential snapshots of a student's progress while she is solving a single exercise can provide valuable insights into her learning behavior. In this setting, creating representations for a student's knowledge state is a challenging task, but with recent advances in machine learning, there are more promising techniques to learn representations for complex entities. In our work, we feed the embedded program submissions into a recurrent neural network and train it on the task of predicting the student's success on the subsequent programming exercise. By training on this task, the model learns nuanced representations of a student's knowledge, and reliably predicts future student performance.", "title": "" }, { "docid": "f958c7d3d27ee79c9dee944716139025", "text": "We present a tunable flipflop-based frequency divider and a fully differential push-push VCO designed in a 200GHz fT SiGe BiCMOS technology. A new technique for tuning the sensitivity of the divider in the frequency range of interest is presented. The chip works from 60GHz up to 113GHz. The VCO is based on a new topology which allows generating differential push-push outputs. The VCO shows a tuning range larger than 7GHz. The phase noise is 75dBc/Hz at 100kHz offset. The chip shows a frequency drift of 12.3MHz/C. The fundamental signal suppression is larger than 50dB. The output power is 2×5dBm. At a 3.3V supply, the circuits consume 35mA and 65mA, respectively.", "title": "" }, { "docid": "c70abd8598ef360dc6e9a10f46622003", "text": "Removal of baseline wander is a crucial step in the signal conditioning stage of photoplethysmography signals. Hence, a method for removing the baseline wander from photoplethysmography based on two-stages of median filtering is proposed in this paper. Recordings from Physionet database are used to validate the proposed method. In this paper, the two-stage moving average filtering is also applied to remove baseline wander in photoplethysmography signals for comparison with our novel two-stage median filtering method. Our experiment results show that the performance of two-stage median filtering method is more effective in removing baseline wander from photoplethysmography signals. This median filtering method effectively improves the cross correlation with minimal distortion of the signal of interest. Although the method is proposed for baseline wander in photoplethysmography signals, it can be applied to other biomedical signals as well.", "title": "" }, { "docid": "962903d0c559fbddb0921fcbf2d948f6", "text": "In mining frequent itemsets, one of most important algorithm is FP-growth. 
FP-growth proposes an algorithm to compress information needed for mining frequent itemsets in FP-tree and recursively constructs FP-trees to find all frequent itemsets. In this paper, we propose the EFP-growth (enhanced FPgrowth) algorithm to achieve the quality of FP-growth. Our proposed method implemented the EFPGrowth based on MapReduce framework using Hadoop approach. New method has high achieving performance compared with the basic FP-Growth. The EFP-growth it can work with the large datasets to discovery frequent patterns in a transaction database. Based on our method, the execution time under different minimum supports is decreased..", "title": "" }, { "docid": "231d7797961326974ca3a3d2271810ae", "text": "Agile methods form an alternative to waterfall methodologies. Little is known about activity composition, the proportion of varying activities in agile processes and the extent to which the proportions of activities differ from \"waterfall\" processes. In the current study, we examine the variation in per formative routines in one large agile and traditional lifecycle project using an event sequencing method. Our analysis shows that the enactment of waterfall and agile routines differ significantly suggesting that agile process is composed of fewer activities which are repeated iteratively1.", "title": "" }, { "docid": "e18b08d7f7895339b432a9f9faf5a923", "text": "We present a parallelized navigation architecture that is capable of running in real-time and incorporating long-term loop closure constraints while producing the optimal Bayesian solution. This architecture splits the inference problem into a low-latency update that incorporates new measurements using just the most recent states (filter), and a high-latency update that is capable of closing long loops and smooths using all past states (smoother). This architecture employs the probabilistic graphical models of Factor Graphs, which allows the low-latency inference and highlatency inference to be viewed as sub-operations of a single optimization performed within a single graphical model. A specific factorization of the full joint density is employed that allows the different inference operations to be performed asynchronously while still recovering the optimal solution produced by a full batch optimization. Due to the real-time, asynchronous nature of this algorithm, updates to the state estimates from the highlatency smoother will naturally be delayed until the smoother calculations have completed. This architecture has been tested within a simulated aerial environment and on real data collected from an autonomous ground vehicle. In all cases, the concurrent architecture is shown to recover the full batch solution, even while updated state estimates are produced in real-time.", "title": "" }, { "docid": "e6f28d4bd8cbbc67acdbb06cc84a8c40", "text": "• Regularization: To force the label embedding as the anchor points for each classes, we regularize the learned label embeddings to be on its corresponding manifold Model Yahoo DBPedia AGNews Yelp P. Yelp F. 
Bag-ofwords 68.9 96.6 88.8 92.2 58 CNN 70.94 98.28 91.45 95.11 59.48 LSTM 70.84 98.55 86.06 94.74 58.17 Deep CNN 73.43 98.71 91.27 95.72 64.26 SWEM 73.53 98.42 92.24 93.76 61.11 fastText 72.3 98.6 92.5 95.7 63.9 HAN 75.8 Bi-BloSAN 76.28 98.77 93.32 94.56 62.13 LEAM 77.42 99.02 92.45 95.31 64.09 Test Accuracy on document classification tasks, in percentage", "title": "" }, { "docid": "4b28bc08ebeaf9be27ce642e622e064d", "text": "Homogeneity analysis combines the idea of maximizing the correlations between variables of a multivariate data set with that of optimal scaling. In this article we present methodological and practical issues of the R package homals which performs homogeneity analysis and various extensions. By setting rank constraints nonlinear principal component analysis can be performed. The variables can be partitioned into sets such that homogeneity analysis is extended to nonlinear canonical correlation analysis or to predictive models which emulate discriminant analysis and regression models. For each model the scale level of the variables can be taken into account by setting level constraints. All algorithms allow for missing values.", "title": "" }, { "docid": "890236dc21eef6d0523ee1f5e91bf784", "text": "Perhaps the most amazing property of these word embeddings is that somehow these vector encodings effectively capture the semantic meanings of the words. The question one might ask is how or why? The answer is that because the vectors adhere surprisingly well to our intuition. For instance, words that we know to be synonyms tend to have similar vectors in terms of cosine similarity and antonyms tend to have dissimilar vectors. Even more surprisingly, word vectors tend to obey the laws of analogy. For example, consider the analogy ”Woman is to queen as man is to king”. It turns out that", "title": "" }, { "docid": "cbcac82390f0d66d9f8f0ee7f1166693", "text": "The working environment of managers can be viewed as a technological ecosystem with more and more convergent information and communication technologies (ICT) and social and collaborative systems. Our objective is to study how the process of technology acceptance occurs in this population. By using various models and theories, we propose a systemic and iterative model of the process of technology acceptance in the professional context. It is characterized by consideration of the user experience (UX), formed during the use of ICT and social and collaborative systems, according to the degree to which fundamental needs are satisfied or not satisfied. We use the model to analyse acceptance in the case of a sample of 1768 managers. This study presents empirical values of the use of various ICTs by this population, the importance of eight UX criteria, and their impacts on the general process of technology acceptance.", "title": "" } ]
scidocsrr
d09b72087382faa9de5d5ee4c7e08e32
Tolerating Malicious Device Drivers in Linux
[ { "docid": "6ca4d0021c11906bae4dbd5db9b47c80", "text": "Writing code to interact with external devices is inherently difficult, and the added demands of writing device drivers in C for kernel mode compounds the problem. This environment is complex and brittle, leading to increased development costs and, in many cases, unreliable code. Previous solutions to this problem ignore the cost of migrating drivers to a better programming environment and require writing new drivers from scratch or even adopting a new operating system. We present Decaf Drivers, a system for incrementally converting existing Linux kernel drivers to Java programs in user mode. With support from programanalysis tools, Decaf separates out performance-sensitive code and generates a customized kernel interface that allows the remaining code to be moved to Java. With this interface, a programmer can incrementally convert driver code in C to a Java decaf driver. The Decaf Drivers system achieves performance close to native kernel drivers and requires almost no changes to the Linux kernel. Thus, Decaf Drivers enables driver programming to advance into the era of modern programming languages without requiring a complete rewrite of operating systems or drivers. With five drivers converted to Java, we show that Decaf Drivers can (1) move the majority of a driver’s code out of the kernel, (2) reduce the amount of driver code, (3) detect broken error handling at compile time with exceptions, (4) gracefully evolve as driver and kernel code and data structures change, and (5) perform within one percent of native kernel-only drivers.", "title": "" }, { "docid": "38015405cee6dd933bcc4fb8897aecf5", "text": "Computers are notoriously insecure, in part because application security policies do not map well onto traditional protection mechanisms such as Unix user accounts or hardware page tables. Recent work has shown that application policies can be expressed in terms of information flow restrictions and enforced in an OS kernel, providing a strong assurance of security. This paper shows that enforcement of these policies can be pushed largely into the processor itself, by using tagged memory support, which can provide stronger security guarantees by enforcing application security even if the OS kernel is compromised. We present the Loki tagged memory architecture, along with a novel operating system structure that takes advantage of tagged memory to enforce application security policies in hardware. We built a full-system prototype of Loki by modifying a synthesizable SPARC core, mapping it to an FPGA board, and porting HiStar, a Unix-like operating system, to run on it. One result is that Loki allows HiStar, an OS already designed to have a small trusted kernel, to further reduce the amount of trusted code by a factor of two, and to enforce security despite kernel compromises. Using various workloads, we also demonstrate that HiStar running on Loki incurs a low performance overhead.", "title": "" } ]
[ { "docid": "93347ca2b0e76b442b39ea518eebf551", "text": "For tackling thewell known cold-start user problem inmodel-based recommender systems, one approach is to recommend a few items to a cold-start user and use the feedback to learn a pro€le. Œe learned pro€le can then be used to make good recommendations to the cold user. In the absence of a good initial pro€le, the recommendations are like random probes, but if not chosen judiciously, both bad recommendations and too many recommendations may turn o‚ a user. We formalize the cold-start user problem by asking what are the b best items we should recommend to a cold-start user, in order to learn her pro€le most accurately, where b , a given budget, is typically a small number. We formalize the problem as an optimization problem and present multiple non-trivial results, including NP-hardness as well as hardness of approximation. We furthermore show that the objective function, i.e., the least square error of the learned pro€lew.r.t. the true user pro€le, is neither submodular nor supermodular, suggesting ecient approximations are unlikely to exist. Finally, we discuss several scalable heuristic approaches for identifying the b best items to recommend to the user and experimentally evaluate their performance on 4 real datasets. Our experiments show that our proposed accelerated algorithms signi€cantly outperform the prior art in runnning time, while achieving similar error in the learned user pro€le as well as in the rating predictions. ACM Reference format: Sampoorna Biswas, Laks V.S. Lakshmanan, and Senjuti Basu Ray. 2016. Combating the Cold Start User Problem in Model Based Collaborative Filtering. In Proceedings of ACM Conference, Washington, DC, USA, July 2017 (Conference’17), 11 pages. DOI: 10.1145/nnnnnnn.nnnnnnn", "title": "" }, { "docid": "12014f235a197a4fa94e217c50e3433d", "text": "a r t i c l e i n f o Since the early 1990s, South Korea has been expanding its expressways. As of July 2013, a total of 173 expressway service areas (ESAs) have been established. Among these, 31 ESAs were closed due to financial deficits. To address this challenge, this study aimed to develop a decision support system for determining the optimal size of a new ESA, focusing on the profitability of the ESA. This study adopted a case-based reasoning approach as the main research method because it is necessary to provide the historical data as a reference in determining the optimal size of a new ESA, which is more suitable for the decision-making process from the practical perspective. This study used a total of 106 general ESAs to develop the proposed system. Compared to the conventional process (i.e., direction estimation), the prediction accuracy of the improved process (i.e., three-phase estimation process) was improved by 9.84%. The computational time required for the optimization of the proposed system was determined to be less than 10 min (from 1.75 min to 9.93 min). 
The proposed system could be useful for the final decision-maker as the following purposes: (i) the probability estimation model for determining the optimal size of a new ESA during the planning stage; (ii) the approximate initial construction cost estimation model for a new ESA by using the estimated sales in the ESA; and (iii) the comparative assessment model for evaluating the sales per the building area of the existing ESA.", "title": "" }, { "docid": "e1e836fe6ff690f9c85443d26a1448e3", "text": "■ We describe an apparatus and methodology to support real-time color imaging for night operations. Registered imagery obtained in the visible through nearinfrared band is combined with thermal infrared imagery by using principles of biological opponent-color vision. Visible imagery is obtained with a Gen III image intensifier tube fiber-optically coupled to a conventional charge-coupled device (CCD), and thermal infrared imagery is obtained by using an uncooled thermal imaging array. The two fields of view are matched and imaged through a dichroic beam splitter to produce realistic color renderings of a variety of night scenes. We also demonstrate grayscale and color fusion of intensified-CCD/FLIR imagery. Progress in the development of a low-light-sensitive visible CCD imager with high resolution and wide intrascene dynamic range, operating at thirty frames per second, is described. Example low-light CCD imagery obtained under controlled illumination conditions, from full moon down to overcast starlight, processed by our adaptive dynamic-range algorithm, is shown. The combination of a low-light visible CCD imager and a thermal infrared microbolometer array in a single dualband imager, with a portable image-processing computer implementing our neuralnet algorithms, and color liquid-crystal display, yields a compact integrated version of our system as a solid-state color night-vision device. The systems described here can be applied to a large variety of military operations and civilian needs.", "title": "" }, { "docid": "7a1186fb86da864564bb2cf08369ff4f", "text": "Sound-event classification often utilizes time–frequency analysis, which produces an image-like spectrogram. Recent approaches such as spectrogram image features and subband power distribution image features extract the image local statistics such as mean and variance from the spectrogram. They have demonstrated good performance. However, we argue that such simple image statistics cannot well capture the complex texture details of the spectrogram. Thus, we propose to extract the local binary pattern (LBP) from the logarithm of the Gammatone-like spectrogram. However, the LBP feature is sensitive to noise. After analyzing the spectrograms of sound events and the audio noise, we find that the magnitude of pixel differences, which is discarded by the LBP feature, carries important information for sound-event classification. We thus propose a multichannel LBP feature via pixel difference quantization to improve the robustness to the audio noise. In view of the differences between spectrograms and natural images, and the reliability issues of LBP features, we propose two projection-based LBP features to better capture the texture information of the spectrogram. To validate the proposed multichannel projection-based LBP features for robot hearing, we have built a new sound-event classification database, the NTU-SEC database, in the context of social interaction between human and robot. 
It is publicly available to promote research on sound-event classification in a social context. The proposed approaches are compared with the state of the art on the RWCP database and the NTU-SEC database. They consistently demonstrate superior performance under various noise conditions.", "title": "" }, { "docid": "0153f2fbf53c3919e22f08a60c5c6d5b", "text": "Data mining aims at extracting knowledge from data. Information rich datasets, such as EBSCOhost Newspaper Source [3], carry a significant amount of multi-typed current and archived news data. This extracted information can easily be constructed into a heterogeneous information network. Through use of many mining techniques, deeper comprehension can then be unearthed from the underlying relationships between article, authors, tags, etc. This paper focuses on building on two such techniques, classification and embedding. GNetMine [1] is a common classification algorithm that is able to label entities of different types through a small set of training data. Node2Vec [5] is an embedding approach, using the many nodes and edges in a heterogeneous network, and converting them into a lowdimensional vector space, that can quickly and easily allow for comparison between nodes of any type. The goal of this paper is to combine these methods and compare and contrast the quality of output of Node2Vec with unlabeled data, directly from the heterogeneous network of EBSCOhost sports news data, as well as adding learned classification labels. Can nodes labeled using classification as an input to network embedding, improve the outcome of the embedding results?", "title": "" }, { "docid": "04e198c1c9982acc7153a1cf1be77a0f", "text": "Digital images can be easily tampered with image editing tools. The detection of tampering operations is of great importance. Passive digital image tampering detection aims at verifying the authenticity of digital images without any a prior knowledge on the original images. There are various methods proposed in this filed in recent years. In this paper, we present an overview of these methods in three levels, that is low level, middle level, and high level in semantic sense. The main ideas of the proposed approaches at each level are described in detail, and some comments are given.", "title": "" }, { "docid": "7200c6c09c38e2fb363360ae8bb473ff", "text": "This work describes autofluorescence of the mycelium of the dry rot fungus Serpula lacrymans grown on spruce wood blocks impregnated with various metals. Live mycelium, as opposed to dead mycelium, exhibited yellow autofluorescence upon blue excitation, blue fluorescence with ultraviolet (UV) excitation, orange-red and light-blue fluorescence with violet excitation, and red fluorescence with green excitation. Distinctive autofluorescence was observed in the fungal cell wall and in granula localized in the cytoplasm. In dead mycelium, the intensity of autofluorescence decreased and the signal was diffused throughout the cytoplasm. Metal treatment affected both the color and intensity of autofluorescence and also the morphology of the mycelium. The strongest yellow signal was observed with blue excitation in Cd-treated samples, in conjunction with increased branching and the formation of mycelial loops and protrusions. For the first time, we describe pink autofluorescence that was observed in Mn-, Zn-, and Cu-treated samples with UV, violet or. blue excitation. The lowest signals were obtained in Cu- and Fe-treated samples. 
Chitin, an important part of the fungal cell wall exhibited intensive primary fluorescence with UV, violet, blue, and green excitation.", "title": "" }, { "docid": "f1be0b8037d5ab3d2a962a08ddc9a388", "text": "This paper presents ZHT, a zero-hop distributed hash table, which has been tuned for the requirements of high-end computing systems. ZHT aims to be a building block for future distributed systems, such as parallel and distributed file systems, distributed job management systems, and parallel programming systems. The goals of ZHT are delivering high availability, good fault tolerance, high throughput, and low latencies, at extreme scales of millions of nodes. ZHT has some important properties, such as being light-weight, dynamically allowing nodes join and leave, fault tolerant through replication, persistent, scalable, and supporting unconventional operations such as append (providing lock-free concurrent key/value modifications) in addition to insert/lookup/remove. We have evaluated ZHT's performance under a variety of systems, ranging from a Linux cluster with 512-cores, to an IBM Blue Gene/P supercomputer with 160K-cores. Using micro-benchmarks, we scaled ZHT up to 32K-cores with latencies of only 1.1ms and 18M operations/sec throughput. This work provides three real systems that have integrated with ZHT, and evaluate them at modest scales. 1) ZHT was used in the FusionFS distributed file system to deliver distributed meta-data management at over 60K operations (e.g. file create) per second at 2K-core scales. 2) ZHT was used in the IStore, an information dispersal algorithm enabled distributed object storage system, to manage chunk locations, delivering more than 500 chunks/sec at 32-nodes scales. 3) ZHT was also used as a building block to MATRIX, a distributed job scheduling system, delivering 5000 jobs/sec throughputs at 2K-core scales. We compared ZHT against other distributed hash tables and key/value stores and found it offers superior performance for the features and portability it supports.", "title": "" }, { "docid": "a845a36fb352f347224e9902087d9625", "text": "Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. 
Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research.", "title": "" }, { "docid": "c58aaa7e1b197a1ee95fb343b0de8664", "text": "Natural language understanding (NLU) is an important module of spoken dialogue systems. One of the difficulties when it comes to adapting NLU to new domains is the high cost of constructing new training data for each domain. To reduce this cost, we propose a zero-shot learning of NLU that takes into account the sequential structures of sentences together with general question types across different domains. Experimental results show that our methods achieve higher accuracy than baseline methods in two completely different domains (insurance and sightseeing).", "title": "" }, { "docid": "ae2d295f84026ea83c74fa5e1b650385", "text": "We consider learning to generalize and extrapolate with limited data to harder compositional problems than a learner has previously seen. We take steps toward this challenge by presenting a characterization, algorithm, and implementation of a learner that programs itself automatically to reflect the structure of the problem it faces. Our key ideas are (1) transforming representations with modular units of computation is a solution for decomposing problems in a way that reflects their subproblem structure; (2) learning the structure of a computation can be formulated as a sequential decision-making problem. Experiments on solving various multilingual arithmetic problems demonstrate that our method generalizes out of distribution to unseen problem classes and extrapolates to harder versions of the same problem. Our paper provides the first element of a framework for learning general-purpose, compositional and recursive programs that design themselves.", "title": "" }, { "docid": "f1fe8a9d2e4886f040b494d76bc4bb78", "text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.", "title": "" }, { "docid": "48d2c1b5edba779a7f1b0e2a509a496c", "text": "We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result -- if there exists one -- into a source code patch. 
Nopol repairs object oriented code and allows the patches to contain nullness checks as well as specific method calls.", "title": "" }, { "docid": "b02dcd4d78f87d8ac53414f0afd8604b", "text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.", "title": "" }, { "docid": "caa35f58e9e217fd45daa2e49c4a4cde", "text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 2 Morphological processingn 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. 
Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “twolevel” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. “.o.” represents composition of FSTs, “+” concatenation of FSTs. to lexical output) to one that performs generation (lexical input to surface output). This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a “guesser” analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants. For example, the Amharic verb root sbr ‘break’ can combine with roughly 50 different templates to form stems in words such as y ̃b•l y1-sEbr-al ‘he breaks’, ° ̃¤’ tEsEbbEr-E ‘it was broken’, ‰ ̃b’w l-assEbb1r-Ew , ‘let me cause him to break something’, ̃§§” sEbabar-i ‘broken into many pieces’. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. 
The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup’s approach, each of the arcs in a transducer may be “weighted” with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word. In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.", "title": "" }, { "docid": "74ef26e332b12329d8d83f80169de5c0", "text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria. Sales transaction data from a fullyautomated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.", "title": "" }, { "docid": "2b7ac1941127e1d47401d67e6d7856de", "text": "Alert correlation is an important technique for managing large the volume of intrusion alerts that are raised by heterogenous Intrusion Detection Systems (IDSs). The recent trend of research in this area is towards extracting attack strategies from raw intrusion alerts. It is generally believed that pure intrusion detection no longer can satisfy the security needs of organizations. 
Intrusion response and prevention are now becoming crucially important for protecting the network and minimizing damage. Knowing the real security situation of a network and the strategies used by the attackers enables network administrators to launches appropriate response to stop attacks and prevent them from escalating. This is also the primary goal of using alert correlation technique. However, most of the current alert correlation techniques only focus on clustering inter-connected alerts into different groups without further analyzing the strategies of the attackers. Some techniques for extracting attack strategies have been proposed in recent years, but they normally require defining a larger number of rules. This paper focuses on developing a new alert correlation technique that can help to automatically extract attack strategies from a large volume of intrusion alerts, without specific prior knowledge about these alerts. The proposed approach is based on two different neural network approaches, namely, Multilayer Perceptron (MLP) and Support Vector Machine (SVM). The probabilistic output of these two methods is used to determine with which previous alerts this current alert should be correlated. This suggests the causal relationship of two alerts, which is helpful for constructing attack scenarios. One of the distinguishing feature of the proposed technique is that an Alert Correlation Matrix (ACM) is used to store correlation strengthes of any two types of alerts. ACM is updated in the training process, and the information (correlation strength) is then used for extracting high level attack strategies.", "title": "" }, { "docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a", "text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053", "title": "" }, { "docid": "340f64ed182a54ef617d7aa2ffeac138", "text": "Compared with animals, plants generally possess a high degree of developmental plasticity and display various types of tissue or organ regeneration. This regenerative capacity can be enhanced by exogenously supplied plant hormones in vitro, wherein the balance between auxin and cytokinin determines the developmental fate of regenerating organs. Accumulating evidence suggests that some forms of plant regeneration involve reprogramming of differentiated somatic cells, whereas others are induced through the activation of relatively undifferentiated cells in somatic tissues. We summarize the current understanding of how plants control various types of regeneration and discuss how developmental and environmental constraints influence these regulatory mechanisms.", "title": "" }, { "docid": "c2756af71724249b458ffdf7a49c4060", "text": "Objectives. Cooccurring psychiatric disorders influence the outcome and prognosis of gender dysphoria. The aim of this study is to assess psychiatric comorbidities in a group of patients. Methods. Eighty-three patients requesting sex reassignment surgery (SRS) were recruited and assessed through the Persian Structured Clinical Interview for DSM-IV Axis I disorders (SCID-I). Results. Fifty-seven (62.7%) patients had at least one psychiatric comorbidity. 
Major depressive disorder (33.7%), specific phobia (20.5%), and adjustment disorder (15.7%) were the three most prevalent disorders. Conclusion. Consistent with most earlier research, the majority of patients with gender dysphoria had psychiatric Axis I comorbidity.", "title": "" } ]
scidocsrr
2eea883530a1e3b58c5968d5136f856c
Large scale multi-label classification via metalabeler
[ { "docid": "2ad76db05382d5bbdae27d5192cccd72", "text": "Very large-scale classification taxonomies typically have hundreds of thousands of categories, deep hierarchies, and skewed category distribution over documents. However, it is still an open question whether the state-of-the-art technologies in automated text categorization can scale to (and perform well on) such large taxonomies. In this paper, we report the first evaluation of Support Vector Machines (SVMs) in web-page classification over the full taxonomy of the Yahoo! categories. Our accomplishments include: 1) a data analysis on the Yahoo! taxonomy; 2) the development of a scalable system for large-scale text categorization; 3) theoretical analysis and experimental evaluation of SVMs in hierarchical and non-hierarchical settings for classification; 4) an investigation of threshold tuning algorithms with respect to time complexity and their effect on the classification accuracy of SVMs. We found that, in terms of scalability, the hierarchical use of SVMs is efficient enough for very large-scale classification; however, in terms of effectiveness, the performance of SVMs over the Yahoo! Directory is still far from satisfactory, which indicates that more substantial investigation is needed.", "title": "" }, { "docid": "40f21a8702b9a0319410b716bda0a11e", "text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.", "title": "" }, { "docid": "9a97ba6e4b4e80af129fdf48964017f2", "text": "Automatically categorizing documents into pre-defined topic hierarchies or taxonomies is a crucial step in knowledge and content management. Standard machine learning techniques like Support Vector Machines and related large margin methods have been successfully applied for this task, albeit the fact that they ignore the inter-class relationships. In this paper, we propose a novel hierarchical classification method that generalizes Support Vector Machine learning and that is based on discriminant functions that are structured in a way that mirrors the class hierarchy. Our method can work with arbitrary, not necessarily singly connected taxonomies and can deal with task-specific loss functions. All parameters are learned jointly by optimizing a common objective function corresponding to a regularized upper bound on the empirical loss. We present experimental results on the WIPO-alpha patent collection to show the competitiveness of our approach.", "title": "" } ]
[ { "docid": "5e182532bfd10dee3f8d57f14d1f4455", "text": "Camera calibrating is a crucial problem for further metric scene measurement. Many techniques and some studies concerning calibration have been presented in the last few years. However, it is still di1cult to go into details of a determined calibrating technique and compare its accuracy with respect to other methods. Principally, this problem emerges from the lack of a standardized notation and the existence of various methods of accuracy evaluation to choose from. This article presents a detailed review of some of the most used calibrating techniques in which the principal idea has been to present them all with the same notation. Furthermore, the techniques surveyed have been tested and their accuracy evaluated. Comparative results are shown and discussed in the article. Moreover, code and results are available in internet. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "24a3924f15cb058668e8bcb7ba53ee66", "text": "This paper presents a latest survey of different technologies used in medical image segmentation using Fuzzy C Means (FCM).The conventional fuzzy c-means algorithm is an efficient clustering algorithm that is used in medical image segmentation. To update the study of image segmentation the survey has performed. The techniques used for this survey are Brain Tumor Detection Using Segmentation Based on Hierarchical Self Organizing Map, Robust Image Segmentation in Low Depth Of Field Images, Fuzzy C-Means Technique with Histogram Based Centroid Initialization for Brain Tissue Segmentation in MRI of Head Scans.", "title": "" }, { "docid": "0fcefddfe877b804095838eb9de9581d", "text": "This paper examines the torque ripple and cogging torque variation in surface-mounted permanent-magnet synchronous motors (PMSMs) with skewed rotor. The effect of slot/pole combinations and magnet shapes on the magnitude and harmonic content of torque waveforms in a PMSM drive has been studied. Finite element analysis and experimental results show that the skewing with steps does not necessarily reduce the torque ripple but may cause it to increase for certain magnet designs and configurations. The electromagnetic torque waveforms, including cogging torque, have been analyzed for four different PMSM configurations having the same envelop dimensions and output requirements.", "title": "" }, { "docid": "91c024a832bfc07bc00b7086bcf77add", "text": "Topic-focused multi-document summarization aims to produce a summary biased to a given topic or user profile. This paper presents a novel extractive approach based on manifold-ranking of sentences to this summarization task. The manifold-ranking process can naturally make full use of both the relationships among all the sentences in the documents and the relationships between the given topic and the sentences. The ranking score is obtained for each sentence in the manifold-ranking process to denote the biased information richness of the sentence. Then the greedy algorithm is employed to impose diversity penalty on each sentence. The summary is produced by choosing the sentences with both high biased information richness and high information novelty. 
Experiments on DUC2003 and DUC2005 are performed and the ROUGE evaluation results show that the proposed approach can significantly outperform existing approaches of the top performing systems in DUC tasks and baseline approaches.", "title": "" }, { "docid": "5bce1b4fb024307bdad27d79f6e26b45", "text": "SMS-based One-Time Passwords (SMS OTP) were introduced to counter phishing and other attacks against Internet services such as online banking. Today, SMS OTPs are commonly used for authentication and authorization for many different applications. Recently, SMS OTPs have come under heavy attack, especially by smartphone trojans. In this paper, we analyze the security architecture of SMS OTP systems and study attacks that pose a threat to Internet-based authentication and authorization services. We determined that the two foundations SMS OTP is built on, cellular networks and mobile handsets, were completely different at the time when SMS OTP was designed and introduced. Throughout this work, we show why SMS OTP systems cannot be considered secure anymore. Based on our findings, we propose mechanisms to secure SMS OTPs against common attacks and specifically against smartphone trojans.", "title": "" }, { "docid": "7fadd4cafa4997c8af947cbdf26f4a43", "text": "This article presents a meta-analysis of the experimental literature that has examined the effect of performance and mastery achievement goals on intrinsic motivation. Summary analyses provided support for the hypothesis that the pursuit of performance goals has an undermining effect on intrinsic motivation relative to the pursuit of mastery goals. Moderator analyses were conducted in an attempt to explain significant variation in the magnitude and direction of this effect across studies. Results indicated that the undermining effect of performance goals relative to mastery goals was contingent on whether participants received confirming or nonconfirming competence feedback, and on whether the experimental procedures induced a performance-approach or performance-avoidance orientation. These findings provide conceptual clarity to the literature on achievement goals and intrinsic motivation and suggest numerous avenues for subsequent empirical work.", "title": "" }, { "docid": "8147143579de86a5eeb668037c2b8c5d", "text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. 
We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.", "title": "" }, { "docid": "8fa31615d2164e9146be35d046dd71cf", "text": "An empirical investigation of information retrieval (IR) using the MEDLINE 1 database was carried out to study user behaviour, performance and to investigate the reasons for sub-optimal searches. The experimental subjects were drawn from two groups of final year medical students who differed in their knowledge of the search system, i.e. novice and expert users. The subjects carried out four search tasks and their recall and precision performance was recorded. Data was captured on the search strategies used, duration and logs of submitted queries. Differences were found between the groups for the performance measure of recall in only one of the four experimental tasks. Overall performance was poor. Analysis of strategies, timing data and query logs showed that there were many different causes for search failure or success. Poor searchers either gave up too quickly, employed few search terms, used only simple queries or used the wrong search terms. Good searchers persisted longer, used a larger, richer set of terms, constructed more complex queries and were more diligent in evaluating the retrieved results. However, individual performances were not correlated with all of these factors. Poor performers frequently exhibited several factors of good searcher behaviour and failed for just one reason. Overall end-user searching behaviour is complex and it appears that just one factor can cause poor performance, whereas good performance can result from sub-optimal strategies that compensate for some difficulties. The implications of the results for the design of IR interfaces are discussed.", "title": "" }, { "docid": "73af8236cc76e386aa76c6d20378d774", "text": "Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase1. The constructed gazetteers contains approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types, person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. 
We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).", "title": "" }, { "docid": "a83b417c2be604427eacf33b1db91468", "text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.", "title": "" }, { "docid": "77278e6ba57e82c88f66bd9155b43a50", "text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.", "title": "" }, { "docid": "70eed1677463969a4ed443988d8d7521", "text": "Security, privacy, and fairness have become critical in the era of data science and machine learning. More and more we see that achieving universally secure, private, and fair systems is practically impossible. We have seen for example how generative adversarial networks can be used to learn about the expected private training data; how the exploitation of additional data can reveal private information in the original one; and how what looks like unrelated features can teach us about each other. Confronted with this challenge, in this paper we open a new line of research, where the security, privacy, and fairness is learned and used in a closed environment. The goal is to ensure that a given entity (e.g., the company or the government), trusted to infer certain information with our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce diagnosis on the patient (the positive task), without being able to infer the gender of the subject (negative task). Similarly, a company can guarantee that internally it is not using the provided data for any undesired task, an important goal that is not contradicting the virtually impossible challenge of blocking everybody from the undesired task. We design a system that learns to succeed on the positive task while simultaneously fail at the negative one, and illustrate this with challenging cases where the positive task is actually harder than the negative one being blocked. 
Fairness, to the information in the negative task, is often automatically obtained as a result of this proposed approach. The particular framework and examples open the door to security, privacy, and fairness in very important closed scenarios, ranging from private data accumulation companies like social networks to law-enforcement and hospitals.", "title": "" }, { "docid": "2c3566048334e60ae3f30bd631e4da87", "text": "The Indian Railways is world's fourth largest railway network in the world after USA, Russia and China. There is a severe problem of collisions of trains. So Indian railway is working in this aspect to promote the motto of "SAFE JOURNEY". A RFID based railway track finding system for railway has been proposed in this paper. In this system the RFID tags and reader are used which are attached in the tracks and engine consecutively. So Train engine automatically get the data of path by receiving it from RFID tag and detect it. If path is correct then train continue to run on track and if it is wrong then a signal is generated and sent to the control station and after this engine automatically stop in a minimum time and the display of LCD show the "WRONG PATH". So the collision and accident of train can be avoided. With the help of this system the train engine would be programmed to move according to the requirement. The another feature of this system is automatic track changer by which the track jointer would move automatically according to availability of trains.", "title": "" }, { "docid": "923a714ed2811e29647870a2694698b1", "text": "Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets", "title": "" }, { "docid": "e51d244f45cda8826dc94ba35a12d066", "text": "This article describes part of our contribution to the "BellKor's Pragmatic Chaos" final solution, which won the Netflix Grand Prize. The other portion of the contribution was created while working at AT&T with Robert Bell and Chris Volinsky, as reported in our 2008 Progress Prize report [3]. The final solution includes all the predictors described there. 

In this article we describe only the newer predictors. So what is new over last year's solution? First we further improved the baseline predictors (Sec. III). This in turn improves our other models, which incorporate those predictors, like the matrix factorization model (Sec. IV). In addition, an extension of the neighborhood model that addresses temporal dynamics was introduced (Sec. V). On the Restricted Boltzmann Machines (RBM) front, we use a new RBM model with superior accuracy by conditioning the visible units (Sec. VI). The final addition is the introduction of a new blending algorithm, which is based on gradient boosted decision trees (GBDT) (Sec. VII).", "title": "" }, { "docid": "2348652010d1dec37a563e3eed15c090", "text": "This study firstly examines the current literature concerning ERP implementation problems during implementation phases and causes of ERP implementation failure. A multiple case study research methodology was adopted to understand "why" and "how" these ERP systems could not be implemented successfully. Different stakeholders (including top management, project manager, project team members and ERP consultants) from these case studies were interviewed, and ERP implementation documents were reviewed for triangulation. An ERP life cycle framework was applied to study the ERP implementation process and the associated problems in each phase of ERP implementation. Fourteen critical failure factors were identified and analyzed, and three common critical failure factors (poor consultant effectiveness, project management effectiveness and poor quality of business process re-engineering) were examined and discussed. Future research on ERP implementation and critical failure factors is discussed. It is hoped that this research will help to bridge the current literature gap and provide practical advice for both academics and practitioners.", "title": "" }, { "docid": "1fb8701f0ad0a9e894e4195bc02d5c25", "text": "As graphics processing units (GPUs) are broadly adopted, running multiple applications on a GPU at the same time is beginning to attract wide attention. Recent proposals on multitasking GPUs have focused on either spatial multitasking, which partitions GPU resource at a streaming multiprocessor (SM) granularity, or simultaneous multikernel (SMK), which runs multiple kernels on the same SM. However, multitasking performance varies heavily depending on the resource partitions within each scheme, and the application mixes. In this paper, we propose GPU Maestro that performs dynamic resource management for efficient utilization of multitasking GPUs. GPU Maestro can discover the best performing GPU resource partition exploiting both spatial multitasking and SMK. Furthermore, dynamism within a kernel and interference between the kernels are automatically considered because GPU Maestro finds the best performing partition through direct measurements. Evaluations show that GPU Maestro can improve average system throughput by 20.2% and 13.9% over the baseline spatial multitasking and SMK, respectively.", "title": "" }, { "docid": "126a6d3308c0b4d1e17139cb16da867d", "text": "INTRODUCTION\n3D-printed anatomical models play an important role in medical and research settings. The recent successes of 3D anatomical models in healthcare have led many institutions to adopt the technology. However, there remain several issues that must be addressed before it can become more wide-spread. Of importance are the problems of cost and time of manufacturing. 

Machine learning (ML) could be utilized to solve these issues by streamlining the 3D modeling process through rapid medical image segmentation and improved patient selection and image acquisition. The current challenges, potential solutions, and future directions for ML and 3D anatomical modeling in healthcare are discussed. Areas covered: This review covers research articles in the field of machine learning as related to 3D anatomical modeling. Topics discussed include automated image segmentation, cost reduction, and related time constraints. Expert commentary: ML-based segmentation of medical images could potentially improve the process of 3D anatomical modeling. However, until more research is done to validate these technologies in clinical practice, their impact on patient outcomes will remain unknown. We have the necessary computational tools to tackle the problems discussed. The difficulty now lies in our ability to collect sufficient data.", "title": "" }, { "docid": "c1e39be2fa21a4f47d163c1407490dc8", "text": "Most existing anaphora resolution algorithms are designed to account only for anaphors with NP-antecedents. This paper describes an algorithm for the resolution of discourse deictic anaphors, which constitute a large percentage of anaphors in spoken dialogues. The success of the resolution is dependent on the classification of all pronouns and demonstratives into individual, discourse deictic and vague anaphora. Finally, the empirical results of the application of the algorithm to a corpus of spoken dialogues are presented.", "title": "" } ]
scidocsrr
4988cb8d224a6e94d102fa6d8841f27d
Implementing and Proving the TLS 1.3 Record Layer
[ { "docid": "98c64622f9a22f89e3f9dd77c236f310", "text": "After a development process of many months, the TLS 1.3 specification is nearly complete. To prevent past mistakes, this crucial security protocol must be thoroughly scrutinised prior to deployment. In this work we model and analyse revision 10 of the TLS 1.3 specification using the Tamarin prover, a tool for the automated analysis of security protocols. We specify and analyse the interaction of various handshake modes for an unbounded number of concurrent TLS sessions. We show that revision 10 meets the goals of authenticated key exchange in both the unilateral and mutual authentication cases. We extend our model to incorporate the desired delayed client authentication mechanism, a feature that is likely to be included in the next revision of the specification, and uncover a potential attack in which an adversary is able to successfully impersonate a client during a PSK-resumption handshake. This observation was reported to, and confirmed by, the IETF TLS Working Group. Our work not only provides the first supporting evidence for the security of several complex protocol mode interactions in TLS 1.3, but also shows the strict necessity of recent suggestions to include more information in the protocol's signature contents.", "title": "" } ]
[ { "docid": "e3db113a2b09ee8c7c093e696c85e6bf", "text": "Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.", "title": "" }, { "docid": "14724ca410a07d97857bf730624644a5", "text": "We introduce a highly scalable approach for open-domain question answering with no dependence on any data set for surface form to logical form mapping or any linguistic analytic tool such as POS tagger or named entity recognizer. We define our approach under the Constrained Conditional Models framework which lets us scale up to a full knowledge graph with no limitation on the size. On a standard benchmark, we obtained near 4 percent improvement over the state-of-the-art in open-domain question answering task.", "title": "" }, { "docid": "2b9fa788e7ccacf14fcdc295ba387e25", "text": "In this paper, two kinds of methods, namely additional momentum method and self-adaptive learning rate adjustment method, are used to improve the BP algorithm. Considering the diversity of factors which affect stock prices, Single-input and Multi-input Prediction Model (SIPM and MIPM) are established respectively to implement short-term forecasts for SDIC Electric Power (600886) shares and Bank of China (601988) shares in 2009. Experiments indicate that the improved BP model has superior performance to the basic BP model, and MIPM is also better than SIPM. However, the best performance is obtained by using MIPM and improved prediction model cohesively.", "title": "" }, { "docid": "0f8d4605dc76c4e198e1a3e5c372db1b", "text": "The purpose of this study was to compare the effectiveness of mesenchymal stem cells (MSCs) with platelet-rich plasma (PRP) as scaffold and autogenous cortical bone (ACB) graft with and without PRP in the regenerative treatment of class II furcation defects in dogs. The mandibular second, third, and fourth premolars (P2, P3, P4) and maxillary P3 and P4 of both sides in three dogs were selected for experimentation. Class II furcation defects (5 mm in height and 2 mm in depth) were surgically created. Five weeks after the first operation, scaling + root planning (group 1), PRP (group 2), ACB (group 3), combination of ACB/PRP (group 4), and combination of MSCs/PRP (group 5) treatments were performed during open flap debridement. The percentage of cementum and alveolar bone formation was evaluated by histomorphometric analysis after a healing period of 8 weeks. There was new cementum along with periodontal ligament and coronal growth of alveolar bone in all groups. 
Cementum formation was significantly higher in groups 3, 4, and 5 compared to the control group (P < 0.05) with no significant difference between groups 2, 3, 4, and 5. Alveolar bone formation was similar in all groups (P > 0.05). It can be concluded that periodontal regeneration with complete filling of class II furcation defects with cementum, alveolar bone, and periodontal ligament is obtained 8 weeks after ACB, ACB/PRP, and MSCs/PRP treatments; however, efficacy of none is higher than another.", "title": "" }, { "docid": "dcd2917cb5414e7b14d739a61f748359", "text": "Software-Defined Networking (SDN) has emerged as a framework for centralized command and control in cloud data centric environments. SDN separates data and control plane, which provides network administrator better visibility and policy enforcement capability compared to traditional networks. The SDN controller can assess reachability information of all the hosts in a network. There are many critical assets in a network which can be compromised by a malicious attacker through a multistage attack. Thus we make use of centralized controller to assess the security state of the entire network and pro-actively perform attack analysis and countermeasure selection. This approach is also known as Moving Target Defense (MTD). We use the SDN controller to assess the attack scenarios through scalable Attack Graphs (AG) and select necessary countermeasures to perform network reconfiguration to counter network attacks. Moreover, our framework has a comprehensive conflict detection and resolution module that ensures that no two flow rules in a distributed SDN-based cloud environment have conflicts at any layer; thereby assuring consistent conflict-free policy implementation and preventing information leakage.", "title": "" }, { "docid": "4d4f7352f87476ab6cc1528c9c7a3cea", "text": "We consider topic detection without any prior knowledge of category structure or possible categories. Keywords are extracted and clustered based on different similarity measures using the induced k-bisecting clustering algorithm. Evaluation on Wikipedia articles shows that clusters of keywords correlate strongly with the Wikipedia categories of the articles. In addition, we find that a distance measure based on the Jensen-Shannon divergence of probability distributions outperforms the cosine similarity. In particular, a newly proposed term distribution taking co-occurrence of terms into account gives best results.", "title": "" }, { "docid": "60ad412d0d6557d2a06e9914bbf3c680", "text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. 
Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bdf81fccbfa77dadcad43699f815475e", "text": "The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set).\n Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%.", "title": "" }, { "docid": "5f8956868216a6c85fadfaba6aed1413", "text": "Recent years have witnessed an incredibly increasing interest in the topic of incremental learning. Unlike conventional machine learning situations, data flow targeted by incremental learning becomes available continuously over time. Accordingly, it is desirable to be able to abandon the traditional assumption of the availability of representative training data during the training period to develop decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of stream raw data into information and knowledge representation, and accumulate experience over time to support future decision-making process. In this paper, we propose a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. Detailed system level architecture and design strategies are presented in this paper. Simulation results over several real-world data sets are used to validate the effectiveness of this method.", "title": "" }, { "docid": "ab225d45a47716b490b0f3ac1ff909b8", "text": "BACKGROUND\n  The dermatological aspects of male genital lichen sclerosus (MGLSc) have not received much prominence in the literature. Sexual morbidity appears under-appreciated, the role of histology is unclear, the relative places of topical medical treatment and circumcision are not established, the prognosis for sexual function, urinary function and penis cancer is uncertain and the pathogenesis has not been specifically studied although autoimmunity (as in women) and HPV infection have been mooted.\n\n\nOBJECTIVE\n  To illuminate the above by analysing the clinical parameters of a large series of patients with MGLSc.\n\n\nMETHODS\n  A total of 329 patients with a clinical diagnosis of MGLSc were identified retrospectively from a dermatology-centred multidisciplinary setting. Their clinical and histopathological features and outcomes have been abstracted from the records and analysed by simple descriptive statistics.\n\n\nRESULTS\n  The collation and analysis of clinical data derived from the largest series of men with MGLSc ever studied from a dermatological perspective has been achieved. 
These data allow the conclusions below to be drawn.\n\n\nCONCLUSIONS\n  MGLSc is unequivocally a disease of the uncircumcised male; the adult peak is late in the fourth decade; dyspareunia is a common presenting complaint; non-specific histology requires careful interpretation; most men are either cured by topical treatment with ultrapotent steroid (50-60%) or by circumcision (>75%); effective and definitive management appears to abrogate the risk of developing penile squamous cell carcinoma; urinary contact is implicated in the pathogenesis of MGLSc; HPV infection and autoimmunity seem unimportant.", "title": "" }, { "docid": "4fa0a60eb5ae8bd84e4a88c6eada4af4", "text": "Image retrieval can be considered as a classification problem. Classification is usually based on some image features. In the feature extraction image segmentation is commonly used. In this paper we introduce a new feature for image classification for retrieval purposes. This feature is based on the gray level histogram of the image. The feature is called binary histogram and it can be used for image classification without segmentation. Binary histogram can be used for image retrieval as such by using similarity calculation. Another approach is to extract some features from it. In both cases indexing and retrieval do not require much computational time. We test the similarity measurement and the feature-based retrieval by making classification experiments. The proposed features are tested using a set of paper defect images, which are acquired from an industrial imaging application.", "title": "" }, { "docid": "77c8dc928492524cbf665422bbcce60d", "text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org. The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. Copyright © 2016, INFORMS", "title": "" }, { "docid": "dbc57902c0655f1bdb3f7dbdcdb6fd5c", "text": "In this paper, a progressive learning technique for multi-class classification is proposed. This newly developed learning technique is independent of the number of class constraints and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for realworld applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior. 
Key Words—Classification, machine learning, multi-class, sequential learning, progressive learning.", "title": "" }, { "docid": "3ec9107c5d389425e1a89086948ea0c7", "text": "BACKGROUND\nA reduction in the reported incidence of malignant degeneration within nevus sebaceus has led many physicians to recommend serial clinical evaluation and biopsy of suspicious areas rather than prophylactic surgical excision. Unfortunately, no well-defined inclusion criteria, including lesion size and location, have been described for the management of nevus sebaceus.\n\n\nMETHODS\nTo assess whether the incidence or timing of malignant degeneration contraindicates surgical excision, the authors performed a PubMed literature search for any studies, excluding case reports, related to malignant change within nevus sebaceus since 1990. They then defined giant nevus sebaceus to consist of lesions greater than 20 cm or greater than 1 percent of the total body surface area and retrospectively examined their experience and outcomes treating giant nevus sebaceus.\n\n\nRESULTS\nData were pooled from six large retrospective institutional studies (2520 patients). The cumulative incidence of benign and malignant tumors was 6.1 and 0.5 percent, respectively. Of the authors' 195 patients with giant congenital nevi, only six (3.0 percent) met the definition of giant nevus sebaceus. All patients required tissue expansion for reconstruction, and two patients required concomitant skin grafting. Two complications required operative intervention.\n\n\nCONCLUSIONS\nEarly malignant degeneration within nevus sebaceus is rare. Management, however, must account for complex monitoring, particularly for lesions within the scalp, associated alopecia, involvement of multiple facial aesthetic subunits, and postpubertal transformation affecting both appearance and monitoring of the lesions. The latter considerations, rather than the reported incidence of malignant transformation, should form the bases for surgical intervention in giant nevus sebaceus.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" }, { "docid": "9cbd8a5ac00fc940baa63cf0fb4d2220", "text": "— The paper presents a technique for anomaly detection in user behavior in a smart-home environment. Presented technique can be used for a service that learns daily patterns of the user and proactively detects unusual situations. We have identified several drawbacks of previously presented models such as: just one type of anomaly-inactivity, intricate activity classification into hierarchy, detection only on a daily basis. Our novelty approach desists these weaknesses, provides additional information if the activity is unusually short/long, at unusual location. It is based on a semi-supervised clustering model that utilizes the neural network Self-Organizing Maps. The input to the system represents data primarily from presence sensors, however also other sensors with binary output may be used. The experimental study is realized on both synthetic data and areal database collected in our own smart-home installation for the period of two months.", "title": "" }, { "docid": "8a24f9d284507765e0026ae8a70fc482", "text": "The diagnosis of pulmonary tuberculosis in patients with Human Immunodeficiency Virus (HIV) is complicated by the increased presence of sputum smear negative tuberculosis. 
Diagnosis of smear negative pulmonary tuberculosis is made by an algorithm recommended by the National Tuberculosis and Leprosy Programme that uses symptoms, signs and laboratory results. The objective of this study is to determine the sensitivity and specificity of the tuberculosis treatment algorithm used for the diagnosis of sputum smear negative pulmonary tuberculosis. A cross-section study with prospective enrollment of patients was conducted in Dar-es-Salaam Tanzania. For patients with sputum smear negative, sputum was sent for culture. All consenting recruited patients were counseled and tested for HIV. Patients were evaluated using the National Tuberculosis and Leprosy Programme guidelines and those fulfilling the criteria of having active pulmonary tuberculosis were started on anti tuberculosis therapy. Remaining patients were provided appropriate therapy. A chest X-ray, mantoux test, and Full Blood Picture were done for each patient. The sensitivity and specificity of the recommended algorithm was calculated. Predictors of sputum culture positive were determined using multivariate analysis. During the study, 467 subjects were enrolled. Of those, 318 (68.1%) were HIV positive, 127 (27.2%) had sputum culture positive for Mycobacteria Tuberculosis, of whom 66 (51.9%) were correctly treated with anti-Tuberculosis drugs and 61 (48.1%) were missed and did not get anti-Tuberculosis drugs. Of the 286 subjects with sputum culture negative, 107 (37.4%) were incorrectly treated with anti-Tuberculosis drugs. The diagnostic algorithm for smear negative pulmonary tuberculosis had a sensitivity and specificity of 38.1% and 74.5% respectively. The presence of a dry cough, a high respiratory rate, a low eosinophil count, a mixed type of anaemia and presence of a cavity were found to be predictive of smear negative but culture positive pulmonary tuberculosis. The current practices of establishing pulmonary tuberculosis diagnosis are not sensitive and specific enough to establish the diagnosis of Acid Fast Bacilli smear negative pulmonary tuberculosis and over treat people with no pulmonary tuberculosis.", "title": "" }, { "docid": "569f8890a294b69d688977fc235aef17", "text": "Traditionally, voice communication over the local loop has been provided by wired systems. In particular, twisted pair has been the standard means of connection for homes and offices for several years. However in the recent past there has been an increased interest in the use of radio access technologies in local loops. Such systems which are now popular for their ease and low cost of installation and maintenance are called Wireless in Local Loop (WLL) systems. Subscribers' demands for greater capacity has grown over the years especially with the advent of the Internet. Wired local loops have responded to these increasing demands through the use of digital technologies such as ISDN and xDSL. Demands for enhanced data rates are being faced by WLL system operators too, thus entailing efforts towards more efficient bandwidth use. Multi-hop communication has already been studied extensively in Ad hoc network environments and has begun making forays into cellular systems as well. Multi-hop communication has been proven as one of the best ways to enhance throughput in a wireless network. Through this effort we study the issues involved in multi-hop communication in a wireless local loop system and propose a novel WLL architecture called Throughput enhanced Wireless in Local Loop (TWiLL). 
Through a realistic simulation model we show the tremendous performance improvement achieved by TWiLL over WLL. Traditional pricing schemes employed in single hop wireless networks cannot be applied in TWiLL -- a multi-hop environment. We also propose three novel cost reimbursement based pricing schemes which could be applied in such a multi-hop environment.", "title": "" }, { "docid": "748926afd2efcae529a58fbfa3996884", "text": "The purpose of this research was to investigate preservice teachers’ perceptions about using m-phones and laptops in education as mobile learning tools. A total of 1087 preservice teachers participated in the study. The results indicated that preservice teachers perceived laptops potentially stronger than m-phones as m-learning tools. In terms of limitations the situation was balanced for laptops and m-phones. Generally, the attitudes towards using laptops in education were not exceedingly positive but significantly more positive than m-phones. It was also found that such variables as program/department, grade, gender and possessing a laptop are neutral in causing a practically significant difference in preservice teachers’ views. The results imply an urgent need to grow awareness among participating student teachers towards the concept of m-learning, especially m-learning through m-phones. Introduction The world is becoming a mobigital virtual space where people can learn and teach digitally anywhere and anytime. Today, when timely access to information is vital, mobile devices such as cellular phones, smartphones, mp3 and mp4 players, iPods, digital cameras, data-travelers, personal digital assistance devices (PDAs), netbooks, laptops, tablets, iPads, e-readers such as the Kindle, Nook, etc have spread very rapidly and become common (El-Hussein & Cronje, 2010; Franklin, 2011; Kalinic, Arsovski, Stefanovic, Arsovski & Rankovic, 2011). Mobile devices are especially very popular among young population (Kalinic et al, 2011), particularly among university students (Cheon, Lee, Crooks & Song, 2012; Park, Nam & Cha, 2012). Thus, the idea of learning through mobile devices has gradually become a trend in the field of digital learning (Jeng, Wu, Huang, Tan & Yang, 2010). This is because learning with mobile devices promises “new opportunities and could improve the learning process” (Kalinic et al, 2011, p. 1345) and learning with mobile devices can help achieving educational goals if used through appropriate learning strategies (Jeng et al, 2010). As a matter of fact, from a technological point of view, mobile devices are getting more capable of performing all of the functions necessary in learning design (El-Hussein & Cronje, 2010). This and similar ideas have brought about the concept of mobile learning or m-learning. British Journal of Educational Technology Vol 45 No 4 2014 606–618 doi:10.1111/bjet.12064 © 2013 British Educational Research Association Although mobile learning applications are at their early days, there inevitably emerges a natural pressure by students on educators to integrate m-learning (Franklin, 2011) and so a great deal of attention has been drawn in these applications in the USA, Europe and Asia (Wang & Shen, 2012). Several universities including University of Glasgow, University of Sussex and University of Regensburg have been trying to explore and include the concept of m-learning in their learning systems (Kalinic et al, 2011). 
Yet, the success of m-learning integration requires some degree of awareness and positive attitudes by students towards m-learning. In this respect, in-service or preservice teachers’ perceptions about m-learning become more of an issue, since their attitudes are decisive in successful integration of m-learning (Cheon et al, 2012). Then it becomes critical whether the teachers, in-service or preservice, have favorable perceptions and attitudinal representations regarding m-learning. Theoretical framework M-learning M-learning has a recent history. When developed as the next phase of e-learning in early 2000s (Peng, Su, Chou & Tsai, 2009), its potential for education could not be envisaged (Attewell, 2005). However, recent developments in mobile and wireless technologies facilitated the departure from traditional learning models with time and space constraints, replacing them with Practitioner Notes What is already known about this topic • Mobile devices are very popular among young population, especially among university students. • Though it has a recent history, m-learning (ie, learning through mobile devices) has gradually become a trend. • M-learning brings new opportunities and can improve the learning process. Previous research on m-learning mostly presents positive outcomes in general besides some drawbacks. • The success of integrating m-learning in teaching practice requires some degree of awareness and positive attitudes by students towards m-learning. What this paper adds • Since teachers’ attitudes are decisive in successful integration of m-learning in teaching, the present paper attempts to understand whether preservice teachers have favorable perceptions and attitudes regarding m-learning. • Unlike much of the previous research on m-learning that handle perceptions about m-learning in a general sense, the present paper takes a more specific approach to distinguish and compare the perceptions about two most common m-learning tools: m-phones and laptops. • It also attempts to find out the variables that cause differences in preservice teachers’ perceptions about using these m-learning devices. Implications for practice and/or policy • Results imply an urgent need to grow awareness and further positive attitudes among participating student teachers towards m-learning, especially through m-phones. • Some action should be taken by the faculty and administration to pedagogically inform and raise awareness about m-learning among preservice teachers. Preservice teachers’ perceptions of M-learning tools 607 © 2013 British Educational Research Association models embedded into our everyday environment, and the paradigm of mobile learning emerged (Vavoula & Karagiannidis, 2005). Today it spreads rapidly and promises to be one of the efficient ways of education (El-Hussein & Cronje, 2010). Partly because it is a new concept, there is no common definition of m-learning in the literature yet (Peng et al, 2009). A good deal of literature defines m-learning as a derivation or extension of e-learning, which is performed using mobile devices such as PDA, mobile phones, laptops, etc (Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Riad & El-Ghareeb, 2008). Other definitions highlight certain characteristics of m-learning including portability through mobile devices, wireless Internet connection and ubiquity. 
For example, a common definition of m-learning in scholarly literature is “the use of portable devices with Internet connection capability in education contexts” (Kinash, Brand & Mathew, 2012, p. 639). In a similar vein, Park et al (2012, p. 592) defines m-learning as “any educational provision where the sole or dominant technologies are handheld or palmtop devices.” On the other hand, m-learning is likely to be simply defined stressing its property of ubiquity, referring to its ability to happen whenever and wherever needed (Peng et al, 2009). For example, Franklin (2011, p. 261) defines mobile learning as “learning that happens anywhere, anytime.” Though it is rather a new research topic and the effectiveness of m-learning in terms of learning achievements has not been fully investigated (Park et al, 2012), there is already an agreement that m-learning brings new opportunities and can improve the learning process (Kalinic et al, 2011). Moreover, the literature review by Wu et al (2012) notes that 86% of the 164 mobile learning studies present positive outcomes in general. Several perspectives of m-learning are attributed in the literature in association with these positive outcomes. The most outstanding among them is the feature of mobility. M-learning makes sense as an educational activity because the technology and its users are mobile (El-Hussein & Cronje, 2010). Hence, learning outside the classroom walls is possible (Nordin, Embi & Yunus, 2010; Şad, 2008; Saran, Seferoğlu & Çağıltay, 2009), enabling students to become an active participant, rather than a passive receiver of knowledge (Looi et al, 2010). This unique feature of m-learning brings about not only the possibility of learning anywhere without limits of classroom or library but also anytime (Çavuş & İbrahim, 2009; Hwang & Chang, 2011; Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Sha, Looi, Chen & Zhang, 2012; Sølvberg & Rismark, 2012). This especially offers learners a certain amount of “freedom and independence” (El-Hussein & Cronje, 2010, p. 19), as well as motivation and ability to “self-regulate their own learning” (Sha et al, 2012, p. 366). This idea of learning coincides with the principles of and meet the requirements of other popular paradigms in education including lifelong learning (Nordin et al, 2010), student-centeredness (Sha et al, 2012) and constructivism (Motiwalla, 2007). Beside the favorable properties referred in the m-learning literature, some drawbacks of m-learning are frequently criticized. The most pronounced one is the small screen sizes of the m-learning tools that makes learning activity difficult (El-Hussein & Cronje, 2010; Kalinic et al, 2011; Riad & El-Ghareeb, 2008; Suki & Suki, 2011). Another problem is the weight and limited battery lives of m-tools, particularly the laptops (Riad & El-Ghareeb, 2008). Lack of understanding or expertise with the technology also hinders nontechnical students’ active use of m-learning (Corbeil & Valdes-Corbeil, 2007; Franklin, 2011). Using mobile devices in classroom can cause distractions and interruptions (Cheon et al, 2012; Fried, 2008; Suki & Suki, 2011). Another concern seems to be about the challenged role of the teacher as the most learning activities take place outside the classroom (Sølvberg & Rismark, 2012). M-learning in higher education Mobile learning is becoming an increasingly promising way of delivering instruction in higher education (El-Hussein & Cronje, 2010). 
This is justified by the current statistics about the 608 British Journal of Educational Technology Vol 45 No 4 2014 © 2013 British Education", "title": "" }, { "docid": "0ed54ab8e575273a502f188fd2961ff5", "text": "This review examines gender identity issues in competitive sports, focusing on the evolution of policies relating to female gender verification and transsexual participation in sport. The issues are complex and continue to challenge sport governing bodies, including the International Olympic Committee, as they strive to provide a safe environment in which female athletes may compete fairly and equitably.", "title": "" }, { "docid": "620574da26151188171a91eb64de344d", "text": "Major security issues for banking and financial institutions are Phishing. Phishing is a webpage attack, it pretends a customer web services using tactics and mimics from unauthorized persons or organization. It is an illegitimate act to steals user personal information such as bank details, social security numbers and credit card details, by showcasing itself as a truthful object, in the public network. When users provide confidential information, they are not aware of the fact that the websites they are using are phishing websites. This paper presents a technique for detecting phishing website attacks and also spotting phishing websites by combines source code and URL in the webpage. Keywords—Phishing, Website attacks, Source Code, URL.", "title": "" } ]
scidocsrr
dfd3a900db62e1c9799634dd73fd0a23
LEARNING AN EMBEDDING SPACE FOR TRANSFERABLE ROBOT SKILLS
[ { "docid": "ec8bd6218fccc82deb23f5d52c10a7fe", "text": "The options framework provides a method for reinforcement learning agents to build new high-level skills. However, since options are usually learned in the same state space as the problem the agent is currently solving, they cannot be ported to other similar tasks that have different state spaces. We introduce the notion of learning options in agent-space, the portion of the agent’s sensation that is present and retains the same semantics across successive problem instances, rather than in problem-space. Agent-space options can be reused in later tasks that share the same agent-space but are sufficiently distinct to require different problem-spaces. We present experimental results that demonstrate the use of agent-space options in building reusable skills.", "title": "" }, { "docid": "c713b438bc86adea64bb34a1fa038b85", "text": "This paper introduces the Intentional Unintentional (IU) agent. This agent endows the deep deterministic policy gradients (DDPG) agent for continuous control with the ability to solve several tasks simultaneously. Learning to solve many tasks simultaneously has been a long-standing, core goal of artificial intelligence, inspired by infant development and motivated by the desire to build flexible robot manipulators capable of many diverse behaviours. We show that the IU agent not only learns to solve many tasks simultaneously but it also learns faster than agents that target a single task at-a-time. In some cases, where the single task DDPG method completely fails, the IU agent successfully solves the task. To demonstrate this, we build a playroom environment using the MuJoCo physics engine, and introduce a grounded formal language to automatically generate tasks.", "title": "" }, { "docid": "ecd8f70442aa40cd2088f4324fe0d247", "text": "Black box variational inference allows researchers to easily prototype and evaluate an array of models. Recent advances allow such algorithms to scale to high dimensions. However, a central question remains: How to specify an expressive variational distribution that maintains efficient computation? To address this, we develop hierarchical variational models (HVMs). HVMs augment a variational approximation with a prior on its parameters, which allows it to capture complex structure for both discrete and continuous latent variables. The algorithm we develop is black box, can be used for any HVM, and has the same computational efficiency as the original approximation. We study HVMs on a variety of deep discrete latent variable models. HVMs generalize other expressive variational distributions and maintains higher fidelity to the posterior.", "title": "" }, { "docid": "ddae1c6469769c2c7e683bfbc223ad1a", "text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. 
Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.", "title": "" } ]
[ { "docid": "7480b463612c919f7b0de5d9a12089f1", "text": "Importance\nAmid the current opioid epidemic in the United States, the enhanced recovery after surgery pathway (ERAS) has emerged as one of the best strategies to improve the value and quality of surgical care and has been increasingly adopted for a broad range of complex surgical procedures. The goal of this article was to outline important components of opioid-sparing analgesic regimens.\n\n\nObservations\nRegional analgesia, acetaminophen, nonsteroidal anti-inflammatory agents, gabapentinoids, tramadol, lidocaine, and/or the N-methyl-d-aspartate class of glutamate receptor antagonists have been shown to be effective adjuncts to narcotic analgesia. Nonsteroidal anti-inflammatory agents are not associated with an increase in postoperative bleeding. A meta-analysis of 27 randomized clinical trials found no difference in postoperative bleeding between the groups taking ketorolac tromethamine (33 of 1304 patients [2.5%]) and the control groups (21 of 1010 [2.1%]) (odds ratio [OR], 1.1; 95% CI, 0.61-2.06; P = .72). After adoption of the multimodal analgesia approach for a colorectal ERAS pathway, most patients used less opioids while in the hospital and many did not need opioids after hospital discharge, although approximately 50% of patients received some opioid during their stay.\n\n\nConclusions and Relevance\nMultimodal analgesia is readily available and the evidence is strong to support its efficacy. Surgeons should use this effective approach for patients both using and not using the ERAS pathway to reduce opioid consumption.", "title": "" }, { "docid": "eef453cb52a9bf77cde37a6beeb7ad01", "text": "MOTIVATION\nThere is a great need to develop analytical methodology to analyze and to exploit the information contained in gene expression data. Because of the large number of genes and the complexity of biological networks, clustering is a useful exploratory technique for analysis of gene expression data. Other classical techniques, such as principal component analysis (PCA), have also been applied to analyze gene expression data. Using different data analysis techniques and different clustering algorithms to analyze the same data set can lead to very different conclusions. Our goal is to study the effectiveness of principal components (PCs) in capturing cluster structure. Specifically, using both real and synthetic gene expression data sets, we compared the quality of clusters obtained from the original data to the quality of clusters obtained after projecting onto subsets of the principal component axes.\n\n\nRESULTS\nOur empirical study showed that clustering with the PCs instead of the original variables does not necessarily improve, and often degrades, cluster quality. In particular, the first few PCs (which contain most of the variation in the data) do not necessarily capture most of the cluster structure. We also showed that clustering with PCs has different impact on different algorithms and different similarity metrics. Overall, we would not recommend PCA before clustering except in special circumstances.", "title": "" }, { "docid": "5ed8c1b7efa827d9efcd537cd831142c", "text": "The fundamental role of the software defined networks (SDNs) is to decouple the data plane from the control plane, thus providing a logically centralized visibility of the entire network to the controller. This enables the applications to innovate through network programmability. 
To establish a centralized visibility, a controller is required to discover a network topology of the entire SDN infrastructure. However, discovering a network topology is challenging due to: 1) the frequent migration of the virtual machines in the data centers; 2) lack of authentication mechanisms; 3) scarcity of the SDN standards; and 4) integration of security mechanisms for the topology discovery. To this end, in this paper, we present a comprehensive survey of the topology discovery and the associated security implications in SDNs. This survey provides discussions related to the possible threats relevant to each layer of the SDN architecture, highlights the role of the topology discovery in the traditional network and SDN, presents a thematic taxonomy of topology discovery in SDN, and provides insights into the potential threats to the topology discovery along with its state-of-the-art solutions in SDN. Finally, this survey also presents future challenges and research directions in the field of SDN topology discovery.", "title": "" }, { "docid": "2e0b2bc23117bbe8d41f400761410638", "text": "Free radicals and other reactive species (RS) are thought to play an important role in many human diseases. Establishing their precise role requires the ability to measure them and the oxidative damage that they cause. This article first reviews what is meant by the terms free radical, RS, antioxidant, oxidative damage and oxidative stress. It then critically examines methods used to trap RS, including spin trapping and aromatic hydroxylation, with a particular emphasis on those methods applicable to human studies. Methods used to measure oxidative damage to DNA, lipids and proteins and methods used to detect RS in cell culture, especially the various fluorescent \"probes\" of RS, are also critically reviewed. The emphasis throughout is on the caution that is needed in applying these methods in view of possible errors and artifacts in interpreting the results.", "title": "" }, { "docid": "be86e50e71e8d8ede9e3c64ae510f1d0", "text": "The subscription covering optimization, whereby a general subscription quenches the forwarding of more specific ones, is a common technique to reduce network traffic and routing state in content-based routing networks. Such optimizations, however, leave the system vulnerable to unsubscriptions that trigger the immediate forwarding of all the subscriptions they had previously quenched. These subscription bursts can severely congest the network, and destabilize the system. This paper presents techniques to retain much of the benefits of subscription covering while avoiding bursty subscription traffic. Heuristics are used to estimate the similarity among subscriptions, and a distributed algorithm determines the portions of a subscription propagation tree that should be preserved. Evaluations show that these mechanisms avoid subscription bursts while maintaining relatively compact routing tables.", "title": "" }, { "docid": "7882226d49d9d932ddda38c428cd8f63", "text": "This paper outlines a framework for Internet banking security using multi-layered, feed-forward artificial neural networks. Such applications utilise anomaly detection techniques which can be applied for transaction authentication and intrusion detection within Internet banking security architectures. 
Such fraud 'detection' strategies have the potential to significantly limit present levels of financial fraud in comparison to existing fraud 'prevention' techniques", "title": "" }, { "docid": "13d7abc974d44c8c3723c3b9c8534fec", "text": "We propose a novel approach to automatically produce multiple colorized versions of a grayscale image. Our method results from the observation that the task of automated colorization is relatively easy given a low-resolution version of the color image. We first train a conditional PixelCNN to generate a low resolution color for a given grayscale image. Then, given the generated low-resolution color image and the original grayscale image as inputs, we train a second CNN to generate a high-resolution colorization of an image. We demonstrate that our approach produces more diverse and plausible colorizations than existing methods, as judged by human raters in a ”Visual Turing Test”.", "title": "" }, { "docid": "fb8e10632b8b9ad2cf772d20dbc95bda", "text": "Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms may not at all be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur.", "title": "" }, { "docid": "0449eaba0eea843d71008751de4cf452", "text": "Recent advances in bridging the semantic gap between virtual machines (VMs) and their guest processes have a dark side: They can be abused to subvert and compromise VM file system images and process images. To demonstrate this alarming capability, a context-aware, reactive VM Introspection (VMI) instrument is presented and leveraged to automatically break the authentication mechanisms of both Linux and Windows operating systems. By bridging the semantic gap, the attack is able to automatically identify critical decision points where authentication succeeds or fails at the binary level. It can then leverage the VMI to transparently corrupt the control-flow or data-flow of the victim OS at that point, resulting in successful authentication without any password-guessing or encryption-cracking. 
The approach is highly flexible (threatening a broad class of authentication implementations), practical (realizable against real-world OSes and VM images), and useful for both malicious attacks and forensics analysis of virtualized systems and software.", "title": "" }, { "docid": "98b0ce9e943ab1a22c4168ba1c79ceb6", "text": "Along with rapid advancement of power semiconductors, voltage multipliers have introduced new series of pulsed power generators. In this paper, current topologies of capacitor-diode voltage multipliers (CDVM) are investigated. Alternative structures for voltage multiplier based on power electronics switches are presented in high voltage pulsed power supplies application. The new topology is able to generate the desired high voltage output without increasing the voltage rating of semiconductor devices as well as capacitors. Finally, a comparative analysis is carried out between different CDVM topologies. Experimental and simulation results are presented to verify the analysis.", "title": "" }, { "docid": "6bc2837d4d1da3344f901a6d7d8502b5", "text": "Many researchers and professionals have reported nonsubstance addiction to online entertainments in adolescents. However, very few scales have been designed to assess problem Internet use in this population, in spite of their high exposure and obvious vulnerability. The aim of this study was to review the currently available scales for assessing problematic Internet use and to validate a new scale of this kind for use, specifically in this age group, the Problematic Internet Entertainment Use Scale for Adolescents. The research was carried out in Spain in a gender-balanced sample of 1131 high school students aged between 12 and 18 years. Psychometric analyses showed the scale to be unidimensional, with excellent internal consistency (Cronbach's alpha of 0.92), good construct validity, and positive associations with alternative measures of maladaptive Internet use. This self-administered scale can rapidly measure the presence of symptoms of behavioral addiction to online videogames and social networking sites, as well as their degree of severity. The results estimate the prevalence of this problematic behavior in Spanish adolescents to be around 5 percent.", "title": "" }, { "docid": "7681a78f2d240afc6b2e48affa0612c1", "text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. 
We've successfully tested our solution in an experiment with log files from INRIA Web sites.", "title": "" }, { "docid": "4d1eae0f247f1c2db9e3c544a65c041f", "text": "This paper presents a new system using circular markers to estimate the pose of a camera. Contrary to most marker-based systems using square markers, we advocate the use of circular markers, as we believe that they are easier to detect and provide a pose estimate that is more robust to noise. Unlike existing systems using circular markers, our method computes the exact pose from one single circular marker, and does not require specific points to be explicitly shown on the marker (like the center, or axes orientation). Indeed, the center and orientation are encoded directly in the marker’s code. We can thus use the entire marker surface for the code design. After solving the back projection problem for one conic correspondence, we end up with two possible poses. We show how to find the marker’s code, rotation and final pose in one single step, by using a pyramidal cross-correlation optimizer. The marker tracker runs at 100 frames/second on a desktop PC and 30 frames/second on a hand-held UMPC.", "title": "" }, { "docid": "b34beab849a50ff04a948f277643fb74", "text": "To cite: Hirai T, Koster M. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2013-009759 DESCRIPTION A 22-year-old man with a history of intravenous heroin misuse presented with 1 week of fatigue and fever. Blood cultures were positive for methicillin-sensitive Staphylococcus aureus. Physical examination showed multiple painful 1-2 mm macular rashes on the palms and soles bilaterally (figures 1 and 2). Splinter haemorrhages (figure 3) and conjunctival petechiae (figure 4) were also noted. A transoesophageal echocardiogram demonstrated a 16-mm vegetation on the mitral valve (figure 5). Vegetations >10 mm in diameter and infection involving the mitral valve are independently associated with an increased risk of embolisation. However, he opted for medical management after extensive discussion and was treated with intravenous nafcillin for 6 weeks. He returned 8 weeks later with acute shortness of breath and evidence of a perforated mitral valve, for which he subsequently underwent a successful mitral valve repair with an uneventful recovery.", "title": "" }, { "docid": "96c1da4e4b52014e4a9c5df098938c98", "text": "Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what is the source of their generalization ability. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.", "title": "" }, { "docid": "edefc6fde031f912a0993e88724c7e9f", "text": "Negative associations between birth order and intelligence level have been found in numerous studies. The explanation for this relation is not clear, and several hypotheses have been suggested. One family of hypotheses suggests that the relation is due to more-favorable family interaction and stimulation of low-birth-order children, whereas others claim that the effect is caused by prenatal gestational factors. 
We show that intelligence quotient (IQ) score levels among nearly 250,000 military conscripts were dependent on social rank in the family and not on birth order as such, providing support for a family interaction explanation.", "title": "" }, { "docid": "f67990c307da0b95628441e11ddfb70b", "text": "I shall present an overview of the Java language and a brief description of the Java virtual machine -- this will not be a tutorial. For a good Java tutorial and reference, I refer you to [Fla96]. There will be a brief summary and the odd snide remark about network computers. Chapter 1 History Java started off as a control language for embedded systems (c.1990) -- the idea was to unite the heterogeneous microcontrollers used in embedded appliances (initially mainly consumer electronics) with a common language (the Java Virtual Machine (JVM)) and API so that manufacturers wouldn't have to change their software whenever a new microcontroller came on the market. It was realised that the JVM would work just as well over the internet, and the JVM (and the Java language that was developed in association with it) was pushed as a vehicle for active web content. More recently, the Java bandwagon has acquired such initiatives as the network computer, personal and embedded Java (which take Java back to its roots as an embedded control system), and these are being pushed as tools for developing `serious' applications. Unfortunately, the Java bandwagon has also acquired a tremendous amount of hype, snake-oil, and general misinformation. It is being used by Sun as a weapon in its endless and byzantine war with Microsoft, and has pioneered the industry acceptance of vapourware sales (whereby people pay to licence technologies which don't exist yet). The typical lead time for a Java technology is six months (that is, the period between it being announced as available and shipping is usually circa six months). It will be useful to keep this in mind. Most terms to do with Java are trademarks of Sun Microsystems, who controls them in the somewhat vain hope of being able to maintain some degree of standardisation. I shall refer to the JDK, by which I mean Sun's Java Development Kit, the standard Java compiler and runtime environment. Chapter 2 The Java Language As mentioned in the previous section, Java is really a remote execution system, broken into two parts -- Java and the Java Virtual Machine. The two are pretty much inseparable. Java has a C/C++-based syntax, inheriting the nomenclature of classes, the private, public, and protected nomenclature, and its concepts of constructors and destructors, from C++. It also borrows heavily from the family of languages that spawned Modula-3 -- to these it owes garbage collection, threads, exceptions, safety, and much of its inheritance model. Java introduces some new features of its own: Ubiquitous classes -- classes and interfaces (which are like classes, but are used to describe specifications rather than type definitions) are the only real structure in Java: everything from your window system to an element of your linked list will be a class. Dynamic loading -- Java provides for dynamic class (and interface) loading (indeed, it would be difficult to produce a JVM implementation without it). Unicode (2.0) source format -- Java source code is written in Unicode 2.0 ([The96]) -- internationalisation at last? Labelled breaks -- help solve a typical problem with the break construct -- you sometimes want to exit more than one loop. 
We can write, eg: bool k = false; while (a) { while (b) { if (a->head==b->head) { k = true; break; } b=b->tail; } if (k) { break; }; a=a->tail; } Becomes: foo: while (a) { while (b) { if (a->head==b->head) { break foo; } b = b->tail; } a=a->tail; } Object-Orientated Synchronisation and exception handling -- every object can be locked, waited on and signalled. Every object which is a subclass (in Modula-3 terms, a subtype) of java.lang.Exception may be thrown as an exception. Documentation Comments -- There is a special syntax for `documentation comments' (/** ...*/), which may be used to automatically generate documentation from source files. Such tools are fairly primitive at present, and if you look at the automatically generated html documentation for the JDK libraries, you will find that you need to scoot up and down the object hierarchy several times before very much of it begins to make sense. Widely-used exceptions -- Java tends to raise an exception when it encounters a run-time error, rather than aborting your program -- so, for example, attempting an out-of-bounds array access throws ArrayIndexOutOfBoundsException rather than aborting your program. It will be useful to note here that Java has complete safety -- there are no untraced references, and no way to do pointer arithmetic. Anything unsafe must be done outside Java by another language, the results being communicated back via the foreign language interface mechanism, native methods, which we will consider later. 2.1 Types Java has a fairly typical type system. As in Modula-3, there are two classes of types -- base types and reference types. 2.1.1 Base types The following categories of base types are defined: Boolean: bool ∈ {true, false} [1]. Integral: byte ∈ {-2^7 ... 2^7 - 1}, short ∈ {-2^15 ... 2^15 - 1}, int ∈ {-2^31 ... 2^31 - 1}, long ∈ {-2^63 ... 2^63 - 1}, char ∈ {0 ... 0xFFFF}. Floating point: IEEE 754 single precision (float), and IEEE 754 double precision (double) floating point numbers. Note that there is no extended type, so you cannot use IEEE extended precision. You will observe a number of changes from C: No enumerations -- the intended methodology is to use class (or interface) variables, eg. static int RED=1;. Hopefully, this will become clearer later. 16-bit char -- char has been widened to 16 bits to accommodate Unicode characters. C programmers (and others who assume sizeof(char)==1) beware! No signed or unsigned -- this avoids the problems that unsigned types always cause: either LAST(unsigned k) = 2*LAST(k)+1 (C), in which case implicit conversions to signed types can fail, or LAST(unsigned k) = LAST(signed k) (Modula-3) in which case you can never subtract two signed types and put their results in an unsigned variable (try Rect.Horsize(Rect.Full) and watch the pretty value out of range errors abort your program...). 2.1.2 Reference types Reference types subsume classes, interfaces, objects and arrays. The Java equivalent of NIL is null, and the equivalent of ROOT is java.lang.Object. Note that we need no equivalent for ADDRESS, as there are no untraced references in Java, and we need no equivalent for REFANY as there are no records, and it turns out that arrays are also objects [2]. [1] This is the only type that doesn't exist in the JVM -- see section 3. [2] Though this is obviously not explicit, since it would introduce parametric polymorphism into the type system. It is, however, possible to introduce parametric polymorphism, as we shall see later in our discussion of Pizza. 
2.2 Operators and conversion With Java's syntax lifted mostly from C and C++, it is no surprise to find that it shares many of the same operators for base types: < <= > >= == != && || return a boolean. + - * / % << >> >>> ~ & | ^ ?: ++ -- += -= *= /= &= |= ^= %= <<= >>= >>>= instanceof is a binary operator (a instanceof T) which returns a boolean -- true if a is of type T, and false otherwise. Conversion is done by typecasting, as in C, using ( and ). + can also be used for strings (\"foo\" + \"bar\"). You will note that the comparison operators now return a boolean, and that Java has standardised (mainly through not having unary *) the behaviour of *=. There is also a new right shift operator, >>>, meaning `arithmetic shift right', some syntactic sugar for concatenating strings, and instanceof and casting replace ISTYPE and NARROW respectively. The `+' syntax for strings is similar to & in Modula-3; note, however, that Java distinguishes between constant strings (of class java.lang.String) and mutable strings (of class java.lang.StringBuffer). \"a\" + \"b\" produces a new String, \"ab\". Integer and floating-point operations with mixed-precision types (eg. int + long or float + double) implicitly convert all their arguments to the `widest' type present, and their results are of the type of their widest operand. Numerical analysts beware... There are actually several types of type conversion in Java: Identity conversions -- the identity conversion. Assignment conversion -- takes place when assigning a variable to the value of an expression. Primitive Widening conversion -- widens a value of a base type to another base type with a greater range, and may also convert integer types to floating point types. Primitive Narrowing conversion -- the inverse of primitive widening conversion (narrows to a type with a smaller range), and may also convert floating point types to integer types. Widening reference conversion -- intuitively, converts an object of a given type to one of its supertypes. Narrowing reference conversion -- the inverse of widening reference conversion (the reference conversions are like NARROW() in Modula-3). String conversion -- there is a conversion from any type to type String. Forbidden Conversions -- some conversions are forbidden. Assignment Conversion -- occurs during assignment. Method invocation conversion -- occurs during method invocation. Casting conversion -- occurs when the casting operator is used, eg. (Foo)bar. All of which are described in excruciating detail in §5 of [GS97]. The question of reference type equivalence is a little confused due to the presence of interfaces, but Java basically uses name-equivalence, in contrast with Modula-3's structural equivalence. 2.3 Imperative Constructs Java provides basically the same imperative constructs as C, but there are a few differences (and surprises): Scoping -- Java supports nested scopes (at last!), so { int i=1; if (i==1) { int k; k=4; } } now works properly (Java does not support implicit scoping in for...next loops, however, so your loop variables must still be declared in the enclosing scope, or the initialisation clause of the loop). Indeed, you may even declare variables half way through a scope (though it is considered bad practice to do so): { int i; foo; int k; bar; } is equivalent to: { int i; foo; { int k; bar; } } And it", "title": "" }, { "docid": "34f611bbc456d8d7476b4f5df38757d2", "text": "Global motion estimation (GME) is a key technology in unmanned aerial vehicle remote sensing (UAVRS). 
However, when a UAV’s motion and behavior change significantly or the image information is not rich, traditional image-based methods for GME often perform poorly. Introducing bottom metadata can improve precision in a large-scale motion condition and reduce the dependence on unreliable image information. GME is divided into coarse and residual GME through coordinate transformation and based on the study hypotheses. In coarse GME, an auxiliary image is built to convert image matching from a wide baseline condition to a narrow baseline one. In residual GME, a novel information and contrast feature detection algorithm is proposed for big-block matching to maximize the use of reliable image information and ensure that the contents of interest are well estimated. Additionally, an image motion monitor is designed to select the appropriate processing strategy by monitoring the motion scales of translation, rotation, and zoom. A medium-altitude UAV is employed to collect three types of large-scale motion datasets. Peak signal to noise ratio (PSNR) and motion scale are computed. This study’s result is encouraging and applicable to other mediumor high-altitude UAVs with a similar system structure.", "title": "" }, { "docid": "8ea6c4957443916c2102f8a173f9d3dc", "text": "INTRODUCTION\nOpioid overdose fatality has increased threefold since 1999. As a result, prescription drug overdose surpassed motor vehicle collision as the leading cause of unintentional injury-related death in the USA. Naloxone , an opioid antagonist that has been available for decades, can safely reverse opioid overdose if used promptly and correctly. However, clinicians often overestimate the dose of naloxone needed to achieve the desired clinical outcome, precipitating acute opioid withdrawal syndrome (OWS).\n\n\nAREAS COVERED\nThis article provides a comprehensive review of naloxone's pharmacologic properties and its clinical application to promote the safe use of naloxone in acute management of opioid intoxication and to mitigate the risk of precipitated OWS. Available clinical data on opioid-receptor kinetics that influence the reversal of opioid agonism by naloxone are discussed. Additionally, the legal and social barriers to take home naloxone programs are addressed.\n\n\nEXPERT OPINION\nNaloxone is an intrinsically safe drug, and may be administered in large doses with minimal clinical effect in non-opioid-dependent patients. However, when administered to opioid-dependent patients, naloxone can result in acute opioid withdrawal. Therefore, it is prudent to use low-dose naloxone (0.04 mg) with appropriate titration to reverse ventilatory depression in this population.", "title": "" }, { "docid": "47d8feb4c7ee6bc6e2b2b9bd21591a3b", "text": "BACKGROUND\nAlthough local anesthetics (LAs) are hyperbaric at room temperature, density drops within minutes after administration into the subarachnoid space. LAs become hypobaric and therefore may cranially ascend during spinal anesthesia in an uncontrolled manner. The authors hypothesized that temperature and density of LA solutions have a nonlinear relation that may be described by a polynomial equation, and that conversion of this equation may provide the temperature at which individual LAs are isobaric.\n\n\nMETHODS\nDensity of cerebrospinal fluid was measured using a vibrating tube densitometer. Temperature-dependent density data were obtained from all LAs commonly used for spinal anesthesia, at least in triplicate at 5 degrees, 20 degrees, 30 degrees, and 37 degrees C. 
The hypothesis was tested by fitting the obtained data into polynomial mathematical models allowing calculations of substance-specific isobaric temperatures.\n\n\nRESULTS\nCerebrospinal fluid at 37 degrees C had a density of 1.000646 +/- 0.000086 g/ml. Three groups of local anesthetics with similar temperature (T, degrees C)-dependent density (rho) characteristics were identified: articaine and mepivacaine, rho1(T) = 1.008-5.36 E-06 T2 (heavy LAs, isobaric at body temperature); L-bupivacaine, rho2(T) = 1.007-5.46 E-06 T2 (intermediate LA, less hypobaric than saline); bupivacaine, ropivacaine, prilocaine, and lidocaine, rho3(T) = 1.0063-5.0 E-06 T (light LAs, more hypobaric than saline). Isobaric temperatures (degrees C) were as follows: 5 mg/ml bupivacaine, 35.1; 5 mg/ml L-bupivacaine, 37.0; 5 mg/ml ropivacaine, 35.1; 20 mg/ml articaine, 39.4.\n\n\nCONCLUSION\nSophisticated measurements and mathematic models now allow calculation of the ideal injection temperature of LAs and, thus, even better control of LA distribution within the cerebrospinal fluid. The given formulae allow the adaptation on subpopulations with varying cerebrospinal fluid density.", "title": "" } ]
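The density fits quoted in the local-anesthetic passage above can be turned into a quick worked check. The sketch below uses the coefficients exactly as printed (rho in g/ml, T in degrees Celsius) and the reported CSF density of 1.000646 g/ml; only the first fit, rho1(T) for the articaine/mepivacaine group, is checked, since it is the one whose quoted coefficients line up cleanly with the reported "isobaric at body temperature" conclusion. Treat the numbers as illustrative rather than clinical.

```python
import math

RHO_CSF = 1.000646  # g/ml, CSF density at 37 C as reported in the passage

def rho_heavy(t_celsius):
    """Density fit rho1(T) = 1.008 - 5.36e-6 * T^2 quoted for the articaine/mepivacaine group."""
    return 1.008 - 5.36e-6 * t_celsius ** 2

# Isobaric temperature: the T at which the solution density equals CSF density.
t_isobaric = math.sqrt((1.008 - RHO_CSF) / 5.36e-6)

print(f"rho_heavy(37 C) = {rho_heavy(37.0):.6f} g/ml")  # ~1.00066, essentially matching CSF
print(f"isobaric temperature ~ {t_isobaric:.1f} C")     # ~37 C -> 'isobaric at body temperature'
```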
scidocsrr
ecb3c82a1941ceb8b284a34a83bb1700
Modeling POI Transition Network of Human Mobility
[ { "docid": "beccfe7a166ff39a3e70c64e06bf79f6", "text": "In this paper, we aim to estimate the similarity between users according to their GPS trajectories. Our approach first models a user's GPS trajectories with a semantic location history (SLH), e.g., shopping malls → restaurants → cinemas. Then, we measure the similarity between different users' SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user's interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. We evaluate our method based on a real-world GPS dataset collected by 109 users in a period of 1 year. As a result, SLH-MTM outperforms the related works [4].", "title": "" } ]
[ { "docid": "34ceb0e84b4e000b721f87bcbec21094", "text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.", "title": "" }, { "docid": "a75a1d34546faa135f74aa5e6142de05", "text": "Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance into this setting which generalizes existing work. We present the weak to strong learning guarantees for the existing gradient boosting work for strongly-smooth, strongly-convex objectives under this new measure of performance, and also demonstrate that this work fails for non-smooth objectives. To address this issue, we present new algorithms which extend this boosting approach to arbitrary convex loss functions and give corresponding weak to strong convergence results. In addition, we demonstrate experimental results that support our analysis and demonstrate the need for the new algorithms we present.", "title": "" }, { "docid": "7ad244791a1ef91495aa3e0f4cf43f0c", "text": "T he education and research communities are abuzz with new (or at least re-discovered) ideas about the nature of cognition and learning. Terms like situated cognition,\" \"distributed cognition,\" and \"communities of practice\" fill the air. Recent dialogue in Educational Researcher (Anderson, Reder, & Simon, 1996, 1997; Greeno, 1997) typifies this discussion. Some have argued that the shifts in world view that these discussions represent are even more fundamental than the now-historical shift from behaviorist to cognitive views of learning (Shuell, 1986). These new iaeas about the nature of knowledge, thinking, and learning--which are becoming known as the \"situative perspective\" (Greeno, 1997; Greeno, Collins, & Resnick, 1996)--are interacting with, and sometimes fueling, current reform movements in education. Most discussions of these ideas and their implications for educational practice have been cast primarily in terms of students. 
Scholars and policymakers have considered, for example, how to help students develop deep understandings of subject matter, situate students' learning in meaningful contexts, and create learning communities in which teachers and students engage in rich discourse about important ideas (e.g., National Council of Teachers of Mathematics, 1989; National Education Goals Panel, 1991; National Research Council, 1993). Less attention has been paid to teachers--either to their roles in creating learning experiences consistent with the reform agenda or to how they themselves learn new ways of teaching. In this article we focus on the latter. Our purpose in considering teachers' learning is twofold. First, we use these ideas about the nature of learning and knowing as lenses for understanding recent research on teacher learning. Second, we explore new issues about teacher learning and teacher education that this perspective brings to light. We begin with a brief overview of three conceptual themes that are central to the situative perspect ive-that cognition is (a) situated in particular physical and social contexts; (b) social in nature; and (c) distributed across the individual, other persons, and tools.", "title": "" }, { "docid": "b3450073ad3d6f2271d6a56fccdc110a", "text": "OBJECTIVE\nMindfulness-based therapies (MBTs) have been shown to be efficacious in treating internally focused psychological disorders (e.g., depression); however, it is still unclear whether MBTs provide improved functioning and symptom relief for individuals with externalizing disorders, including ADHD. To clarify the literature on the effectiveness of MBTs in treating ADHD and to guide future research, an effect-size analysis was conducted.\n\n\nMETHOD\nA systematic review of studies published in PsycINFO, PubMed, and Google Scholar was completed from the earliest available date until December 2014.\n\n\nRESULTS\nA total of 10 studies were included in the analysis of inattention and the overall effect size was d = -.66. A total of nine studies were included in the analysis of hyperactivity/impulsivity and the overall effect was calculated at d = -.53.\n\n\nCONCLUSION\nResults of this study highlight the possible benefits of MBTs in reducing symptoms of ADHD.", "title": "" }, { "docid": "37dbfc84d3b04b990d8b3b31d2013f77", "text": "Large projects such as kernels, drivers and libraries follow a code style, and have recurring patterns. In this project, we explore learning based code recommendation, to use the project context and give meaningful suggestions. Using word vectors to model code tokens, and neural network based learning techniques, we are able to capture interesting patterns, and predict code that that cannot be predicted by a simple grammar and syntax based approach as in conventional IDEs. We achieve a total prediction accuracy of 56.0% on Linux kernel, a C project, and 40.6% on Twisted, a Python networking library.", "title": "" }, { "docid": "f57cb1222f927b551030baaab771f167", "text": "The diameter dependence of the collapse of single- and double-walled carbon nanotubes to two- and four-walled graphene nanoribbons with closed edges (CE(x)GNRs) has been experimentally determined and compared to theory. TEM and AFM were used to characterize nanotubes grown from preformed 4.0 nm diameter aluminum-iron oxide particles. 
Experimental data indicate that the energy equivalence point (the diameter at which the energy of a round and fully collapsed nanotube is the same) is 2.6 and 4.0 nm for single- and double-walled carbon nanotubes, respectively. Molecular dynamics simulations predict similar energy equivalence diameters with the use of ε = 54 meV/pair to calculate the carbon-carbon van der Waals interaction.", "title": "" }, { "docid": "5271c96b0f42df93229fd99747712d1b", "text": "Though contact topology was born over two centuries ago, in the work of Huygens, Hamilton and Jacobi on geometric optics, and been studied by many great mathematicians, such as Sophus Lie, Elie Cartan and Darboux, it has only recently moved into the foreground of mathematics. The last decade has witnessed many remarkable breakthroughs in contact topology, resulting in a beautiful theory with many potential applications. More specifically, as a coherent – though sketchy – picture of contact topology has been developed, a surprisingly subtle relationship arose between contact structures and 3(and 4-) dimensional topology. In addition, the applications of contact topology have extended far beyond geometric optics to include non-holonomic dynamics, thermodynamics and more recently Hamiltonian dynamics [25, 40] and hydrodynamics [12]. Despite it long history and all the recent work in contact geometry, it is not overly accessible to those trying to get into the field for the first time. There are a few books giving a brief introduction to the more geometric aspects of the theory. Most notably the last chapter in [1], part of Chapter 3 in [34] and an appendix to the book [2]. There have not, however, been many books or survey articles (with the notable exception of [20]) giving an introduction to the more topological aspects of contact geometry. It is this topological approach that has lead to many of the recent breakthroughs in contact geometry and to which this paper is devoted. I planned these lectures when asked to give an introduction to contact geometry at the Georgia International Topology Conference in the summer of 2001. My idea was to give an introduction to the “classical” theory of contact topology, in which the characteristic foliation plays a central roll, followed by a hint at the more modern trends, where specific foliations take a back seat to dividing curves. This was much too ambitious for the approximately one and a half hours I had for these lectures, but I nonetheless decided to follow this outline in preparing these lecture notes. These notes begin with an introduction to contact structures in Section 2, here all the basic definitions are given and many examples are discussed. In the following section we consider contact structures near a point and near a surface. It is in", "title": "" }, { "docid": "7b25d1c4d20379a8a0fabc7398ea2c28", "text": "In this paper we introduce an efficient and stable implicit SPH method for the physically-based simulation of incompressible fluids. In the area of computer graphics the most efficient SPH approaches focus solely on the correction of the density error to prevent volume compression. However, the continuity equation for incompressible flow also demands a divergence-free velocity field which is neglected by most methods. 
Although a few methods consider velocity divergence, they are either slow or have a perceivable density fluctuation.\n Our novel method uses an efficient combination of two pressure solvers which enforce low volume compression (below 0.01%) and a divergence-free velocity field. This can be seen as enforcing incompressibility both on position level and velocity level. The first part is essential for realistic physical behavior while the divergence-free state increases the stability significantly and reduces the number of solver iterations. Moreover, it allows larger time steps which yields a considerable performance gain since particle neighborhoods have to be updated less frequently. Therefore, our divergence-free SPH (DFSPH) approach is significantly faster and more stable than current state-of-the-art SPH methods for incompressible fluids. We demonstrate this in simulations with millions of fast moving particles.", "title": "" }, { "docid": "1ccc1b904fa58b1e31f4f3f4e2d76707", "text": "When children and adolescents are the target population in dietary surveys many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report. Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.", "title": "" }, { "docid": "7a2c19e94d07afbfe81c7875aed1ff23", "text": "We combine linear discriminant analysis (LDA) and K-means clustering into a coherent framework to adaptively select the most discriminative subspace. We use K-means clustering to generate class labels and use LDA to do subspace selection. The clustering process is thus integrated with the subspace selection process and the data are then simultaneously clustered while the feature subspaces are selected. We show the rich structure of the general LDA-Km framework by examining its variants and their relationships to earlier approaches. Relations among PCA, LDA, K-means are clarified. 
Extensive experimental results on real-world datasets show the effectiveness of our approach.", "title": "" }, { "docid": "4936a07e1b6a42fde7a8fdf1b420776c", "text": "One of many advantages of the cloud is the elasticity, the ability to dynamically acquire or release computing resources in response to demand. However, this elasticity is only meaningful to the cloud users when the acquired Virtual Machines (VMs) can be provisioned in time and be ready to use within the user expectation. The long unexpected VM startup time could result in resource under-provisioning, which will inevitably hurt the application performance. A better understanding of the VM startup time is therefore needed to help cloud users to plan ahead and make in-time resource provisioning decisions. In this paper, we study the startup time of cloud VMs across three real-world cloud providers -- Amazon EC2, Windows Azure and Rackspace. We analyze the relationship between the VM startup time and different factors, such as time of the day, OS image size, instance type, data center location and the number of instances acquired at the same time. We also study the VM startup time of spot instances in EC2, which show a longer waiting time and greater variance compared to on-demand instances.", "title": "" }, { "docid": "473eebca6dccf4e242c87bbabfd4b8a5", "text": "Text analytics systems often rely heavily on detecting and linking entity mentions in documents to knowledge bases for downstream applications such as sentiment analysis, question answering and recommender systems. A major challenge for this task is to be able to accurately detect entities in new languages with limited labeled resources. In this paper we present an accurate and lightweight, multilingual named entity recognition (NER) and linking (NEL) system. The contributions of this paper are three-fold: 1) Lightweight named entity recognition with competitive accuracy; 2) Candidate entity retrieval that uses search click-log data and entity embeddings to achieve high precision with a low memory footprint; and 3) efficient entity disambiguation. Our system achieves state-of-the-art performance on TAC KBP 2013 multilingual data and on English AIDA CONLL data.", "title": "" }, { "docid": "d4cd0dabcf4caa22ad92fab40844c786", "text": "NA", "title": "" }, { "docid": "36c73f8dd9940b2071ad55ae1dd83c27", "text": "Current music recommender systems rely on techniques like collaborative filtering on user-provided information in order to generate relevant recommendations based upon users’ music collections or listening habits. In this paper, we examine whether better recommendations can be obtained by taking into account the music preferences of the user’s social contacts. We assume that music is naturally diffused through the social network of its listeners, and that we can propagate automatic recommendations in the same way through the network. In order to test this statement, we developed a music recommender application called Starnet on a Social Networking Service. It generated recommendations based either on positive ratings of friends (social recommendations), positive ratings of others in the network (nonsocial recommendations), or not based on ratings (random recommendations). The user responses to each type of recommendation indicate that social recommendations are better than non-social recommendations, which are in turn better than random recommendations. Likewise, the discovery of novel and relevant music is more likely via social recommendations than non-social. 
Social shuffle recommendations enable people to discover music through a serendipitous process powered by human relationships and tastes, exploiting the user’s social network to share cultural experiences.", "title": "" }, { "docid": "a478b6f7accfb227e6ee5a6b35cd7fa1", "text": "This paper presents the development of an ultra-high-speed permanent magnet synchronous motor (PMSM) that produces output shaft power of 2000 W at 200 000 rpm with around 90% efficiency. Due to the guaranteed open-loop stability over the full operating speed range, the developed motor system is compact and low cost since it can avoid the design complexity of a closed-loop controller. This paper introduces the collaborative design approach of the motor system in order to ensure both performance requirements and stability over the full operating speed range. The actual implementation of the motor system is then discussed. Finally, computer simulation and experimental results are provided to validate the proposed design and its effectiveness", "title": "" }, { "docid": "a505cc6496d2ccd64e16a2b0ad074a45", "text": "We tackle the problem of reducing the false positive rate of face detectors by applying a classifier after the detection step. We first define and study this post classification problem. To this end, we first consider the multiple-stage cascade structure which is the most common face detection architecture. Here, each cascade stage aims to solve a binary classification problem, denoted the Face/non-Face (FnF) problem. In this context, the post classification problem can be considered as the most challenging FnF problem, or the Hard FnF (HFnF) problem. To study the HFnF problem, we propose HFnF datasets derived from the recent face detection datasets. A baseline method utilizing the GIST features and Support Vector Machine (SVM) classifier is also proposed. In our evaluation, we found that it is possible to further improve the face detection performance by addressing the HFnF problem.", "title": "" }, { "docid": "829fcf6b704c62acb05c7399604faf78", "text": "A method for estimating the range between moving vehicles by using a monocular camera is proposed. Although most conventional methods use vertical triangulation, the proposed method uses both vertical and horizontal triangulation, which reduces errors due to vehicle's own pitching in the far distance. Unknown vehicle width is estimated by measuring three ranging parameters associated with an image captured by the camera, and the following distance is then computed by horizontal triangulation. Both vehicle width and following distance are dynamically updated during the vehicle-tracking process by single filtering. The proposed method runs in real time and can produce highly accurate estimation of following distance. The key contribution of this study is the coupled estimation of unknown vehicle width and following distance by sequential Bayesian estimation.", "title": "" }, { "docid": "39ccd0efd846c2314da557b73a326e85", "text": "We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. Different verbs have different roles (e.g. attacking has weapon), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. 
Experiments with different graph connectivities show that our approach that propagates information between roles significantly outperforms existing work, as well as multiple baselines. We obtain roughly 3-5% improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and influence of different roles in the verbs.", "title": "" }, { "docid": "4e5f08928f37624178e8e2380e91faf6", "text": "Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a conversation around a rumour as either supporting, denying or questioning the rumour. Using a Gaussian Process classifier, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will show both ordinary users of Twitter and professional news practitioners how others orient to the disputed veracity of a rumour, with the final aim of establishing its actual truth value.", "title": "" } ]
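The situation-recognition passage above hinges on propagating information between semantic-role nodes with a graph neural network. The NumPy sketch below shows one dense message-passing step over a fully connected role graph; the dimensions, random weights, and tanh update rule are illustrative choices, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

num_roles, dim = 4, 8                 # e.g. agent, tool, source, place
H = rng.normal(size=(num_roles, dim)) # initial role-node states (would come from image features)
A = np.ones((num_roles, num_roles)) - np.eye(num_roles)  # fully connected role graph, no self-loops
W_msg = rng.normal(scale=0.1, size=(dim, dim))
W_upd = rng.normal(scale=0.1, size=(dim, dim))

def propagate(H, steps=3):
    """A few rounds of summed neighbor messages followed by a tanh state update."""
    for _ in range(steps):
        messages = A @ (H @ W_msg)          # each role aggregates messages from all other roles
        H = np.tanh(H @ W_upd + messages)   # combine own state with the aggregated messages
    return H

H_out = propagate(H)
print(H_out.shape)  # (4, 8): refined role states, ready for per-role noun classifiers
```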
scidocsrr
6492856a78a049634bbffb1b02ff575b
Empirical study of PROXTONE and PROXTONE+ for fast learning of large scale sparse models
[ { "docid": "60f2baba7922543e453a3956eb503c05", "text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.", "title": "" } ]
[ { "docid": "b261d01c522488d6ced34d16b722aaa4", "text": "To successfully deploy femtocells underlaying the macrocell as a heterogeneous network, which has been proven to greatly improve indoor coverage and system capacity, the cross-tier interference among the macrocell and femtocells as well as the intratier interference among femtocells must be mitigated. However, some unique features present a challenge in interference mitigation in a two-tier heterogeneous network such as random deployment for femtocells, nonexistence of macro-femto backhaul coordination, and mandates allowing no modifications of existing macrocells. Carefully examining the existing distributed information acquisition mechanisms, cognitive radio is the most promising solution for two-tier heterogeneous networks. We therefore study possible interference mitigation approaches, including orthogonal radio resource assignment in the time-frequency and antenna spatial domains, as well as interference cancellation via novel decoding techniques. According to the information acquired by cognitive radio technology, recent innovations such as game theory and the Gibbs sampler have been explored to mitigate both cross-tier and intratier interferences. Performance evaluations show that considerable performance improvement can be generally achieved, and thus demonstrate the potential of applying cognitive radio in mitigating interference.", "title": "" }, { "docid": "6bba3dc4f75d403f387f40174d085463", "text": "With the proliferation of wireless devices, wireless networks in various forms have become global information infrastructure and an important part of our daily life, which, at the same time, incur fast escalations of both data volumes and energy demand. In other words, energy-efficient wireless networking is a critical and challenging issue in the big data era. In this paper, we provide a comprehensive survey of recent developments on energy-efficient wireless networking technologies that are effective or promisingly effective in addressing the challenges raised by big data. We categorize existing research into two main parts depending on the roles of big data. The first part focuses on energy-efficient wireless networking techniques in dealing with big data and covers studies in big data acquisition, communication, storage, and computation; while the second part investigates recent approaches based on big data analytics that are promising to enhance energy efficiency of wireless networks. In addition, we identify a number of open issues and discuss future research directions for enhancing energy efficiency of wireless networks in the big data era.", "title": "" }, { "docid": "f3ec01232e9ce081d5684df997d3db54", "text": "The present study used a behavioral version of an anti-saccade task, called the 'faces task', developed by [Bialystok, E., Craik, F. I. M., & Ryan, J. (2006). Executive control in a modified anti-saccade task: Effects of aging and bilingualism. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 1341-1354] to isolate the components of executive functioning responsible for previously reported differences between monolingual and bilingual children and to determine the generality of these differences by comparing bilinguals in two cultures. Three components of executive control were investigated: response suppression, inhibitory control, and cognitive flexibility. 
Ninety children, 8-years old, belonged to one of three groups: monolinguals in Canada, bilinguals in Canada, and bilinguals in India. The bilingual children in both settings were faster than monolinguals in conditions based on inhibitory control and cognitive flexibility but there was no significant difference between groups in response suppression or on a control condition that did not involve executive control. The children in the two bilingual groups performed equivalently to each other and differently from the monolinguals on all measures in which there were group differences, consistent with the interpretation that bilingualism is responsible for the enhanced executive control. These results contribute to understanding the mechanism responsible for the reported bilingual advantages by identifying the processes that are modified by bilingualism and establishing the generality of these findings across bilingual experiences. They also contribute to theoretical conceptions of the components of executive control and their development.", "title": "" }, { "docid": "fbe8c71c588e0865b82dd36385ec5bc2", "text": "OBJECTIVE\nTo evaluate the frequency and the nature of genital trauma in female children in Jordan, and to stress the role of forensics.\n\n\nMETHODS\nThis is a cross-sectional study conducted between March 2008 and December 2011 in Jordan University Hospital, Amman, Jordan. Sixty-three female children were examined for genital trauma after immediate admission. The mechanism of injury was categorized and reported by the examiners as either straddle, non-straddle blunt, or penetrating.\n\n\nRESULTS\nStraddle injury was the cause of injuries in 90.5% of patients, and contusions were the significant type of injury in 34% of patients, followed by abrasions in both labia majora and labia minora. Only one case suffered from non-intact hymen and 2 had hematuria. These 3 cases (4.7%) required surgical intervention and follow-up after 2 weeks.\n\n\nCONCLUSION\nStraddle injuries were the main cause of genital trauma and rarely affect the hymen; however, due to the sensitivity of the subject and the severity of the traumas, forensic physicians should provide consultation and cooperate with gynecologists to exclude or confirm hymenal injuries, where empathy is necessary to mitigate tension associated with such injuries for the sake of the child and the parents as well, along with good management of the injury type.", "title": "" }, { "docid": "435c6eb000618ef63a0f0f9f919bc0b4", "text": "Selective sampling is an active variant of online learning in which the learner is allowed to adaptively query the label of an observed example. The goal of selective sampling is to achieve a good trade-off between prediction performance and the number of queried labels. Existing selective sampling algorithms are designed for vector-based data. In this paper, motivated by the ubiquity of graph representations in real-world applications, we propose to study selective sampling on graphs. We first present an online version of the well-known Learning with Local and Global Consistency method (OLLGC). It is essentially a second-order online learning algorithm, and can be seen as an online ridge regression in the Hilbert space of functions defined on graphs. We prove its regret bound in terms of the structural property (cut size) of a graph. 
Based on OLLGC, we present a selective sampling algorithm, namely Selective Sampling with Local and Global Consistency (SSLGC), which queries the label of each node based on the confidence of the linear function on graphs. Its bound on the label complexity is also derived. We analyze the low-rank approximation of graph kernels, which enables the online algorithms scale to large graphs. Experiments on benchmark graph datasets show that OLLGC outperforms the state-of-the-art first-order algorithm significantly, and SSLGC achieves comparable or even better results than OLLGC while querying substantially fewer nodes. Moreover, SSLGC is overwhelmingly better than random sampling.", "title": "" }, { "docid": "b8a3e056fe80783b51190c378d5ddcb2", "text": "We investigate the capability of GPS signals of opportunity to detect and localize targets on the sea surface. The proposed approach to target detection is new, and stems from the advantages offered by GPS-Reflectometry (GPS-R) in terms of spatial and temporal sampling, and low cost/low power technology, extending the range of applications of GPS-R beyond remote sensing. Here the exploitation of GPS signals backscattered from a target is proposed, to enhance the target return with respect to the sea clutter. A link budget is presented, showing that the target return is stronger than the background sea clutter when certain conditions are verified. The findings agree with the only empirical measurement found in literature, where a strong return from a target was fortuitously registered during an airborne campaign. This study provides a first proof-of-concept of GPS-based target detection, highlighting all the potentials of this innovative approach.", "title": "" }, { "docid": "c2891abf8297b5dcf0e21dfa9779a017", "text": "The success of knowledge-sharing communities like Wikipedia and the advances in automatic information extraction from textual and Web sources have made it possible to build large \"knowledge repositories\" such as DBpedia, Freebase, and YAGO. These collections can be viewed as graphs of entities and relationships (ER graphs) and can be represented as a set of subject-property-object (SPO) triples in the Semantic-Web data model RDF. Queries can be expressed in the W3C-endorsed SPARQL language or by similarly designed graph-pattern search. However, exact-match query semantics often fall short of satisfying the users' needs by returning too many or too few results. Therefore, IR-style ranking models are crucially needed.\n In this paper, we propose a language-model-based approach to ranking the results of exact, relaxed and keyword-augmented graph pattern queries over RDF graphs such as ER graphs. Our method estimates a query model and a set of result-graph models and ranks results based on their Kullback-Leibler divergence with respect to the query model. We demonstrate the effectiveness of our ranking model by a comprehensive user study.", "title": "" }, { "docid": "9291f57ba050037907cf92eb34370ad1", "text": "Different from robotics-based food manufacturing, three-dimensional (3D) food printing integrates 3D printing and digital gastronomy to revolutionize food manufacturing with customized shape, color, flavor, texture, and even nutrition. Hence, food products can be designed and fabricated to meet individual needs through controlling the amount of printing material and nutrition content. 
The objectives of this study are to collate, analyze, categorize, and summarize published articles and papers pertaining to 3D food printing and its impact on food processing, as well as to provide a critical insight into the direction of its future development. From the available references, both universal platforms and self-developed platforms are utilized for food printing. These platforms could be reconstructed in terms of process reformulation, material processing, and user interface in the near future. Three types of printing materials (i.e., natively printable materials, nonprintable traditional food materials, and alternative ingredients) and two types of recipes (i.e., element-based recipe and traditional recipe) have been used for customized food fabrication. The available 3D food printing technologies and food processing technologies potentially applicable to food printing are presented. Essentially, 3D food printing provides an engineering solution for customized food design and personalized nutrition control, a prototyping tool to facilitate new food product development, and a potential machine to reconfigure a customized food supply chain.", "title": "" }, { "docid": "96010bf04c08ace7932fb5c48b2f8798", "text": "Spatio-temporal databases aim to support extensions to existing models of Spatial Information Systems (SIS) to include time in order to better describe our dynamic environment. Although interest into this area has increased in the past decade, a number of important issues remain to be investigated. With the advances made in temporal database research, we can expect a more uni®ed approach towards aspatial temporal data in SIS and a wider discussion on spatio-temporal data models. This paper provides an overview of previous achievements within the ®eld and highlights areas currently receiving or requiring further investigation.", "title": "" }, { "docid": "92e61ad424b421a5621d490bf664b28f", "text": "Papers and patents that deal with polymorphism (crystal systems for which a substance can exist in structures defined by different unit cells and where each of the forms has the same elemental composition) and solvatomorphism (systems where the crystal structures of the substance are defined by different unit cells but where these unit cells differ in their elemental composition through the inclusion of one or molecules of solvent) have been summarized in an annual review. The works cited in this review were published during 2010 and were drawn from the major physical, crystallographic, and pharmaceutical journals. The review is divided into sections that cover articles of general interest, computational and theoretical studies, preparative and isolation methods, structural characterization and properties of polymorphic and solvatomorphic systems, studies of phase transformations, effects associated with secondary processing, and US patents issued during 2010.", "title": "" }, { "docid": "6998297aeba2e02133a6d62aa94508be", "text": "License Plate Detection and Recognition System is an image processing technique used to identify a vehicle by its license plate. Here we propose an accurate and robust method of license plate detection and recognition from an image using contour analysis. The system is composed of two phases: the detection of the license plate, and the character recognition. The license plate detection is performed for obtaining the candidate region of the vehicle license plate and determined using the edge based text detection technique. 
In the recognition phase, the contour analysis is used to recognize the characters after segmenting each character. The performance of the proposed system has been tested on various images and provides better results.", "title": "" }, { "docid": "92e186ba05566110020ed92df960f3d5", "text": "From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.", "title": "" }, { "docid": "f23ce789f76fe15e78a734caa5d2bc53", "text": "The importance of location based services (LBS) is steadily increasing with progressive automation and interconnectedness of systems and processes. However, a comprehensive localization and navigation solution is still part of research. Especially for dynamic and harsh indoor environments, accurate and affordable localization and navigation remains a challenge. In this paper, we present a hybrid localization system providing position information and navigation aid to pedestrian in dynamic indoor environments, like construction sites, by combining an IMU and a spatial non-uniform UWB-network. The key contribution of this paper is a hybrid localization concept and experimental results, demonstrating in an application near scenario the enhancements introduced by the combination of an inertial navigation system (INS) and a spatial non-uniform UWB-network.", "title": "" }, { "docid": "2a244146b1cf3433b2e506bdf966e134", "text": "The rate of detection of thyroid nodules and carcinomas has increased with the widespread use of ultrasonography (US), which is the mainstay for the detection and risk stratification of thyroid nodules as well as for providing guidance for their biopsy and nonsurgical treatment. The Korean Society of Thyroid Radiology (KSThR) published their first recommendations for the US-based diagnosis and management of thyroid nodules in 2011. These recommendations have been used as the standard guidelines for the past several years in Korea. 
Lately, the application of US has been further emphasized for the personalized management of patients with thyroid nodules. The Task Force on Thyroid Nodules of the KSThR has revised the recommendations for the ultrasound diagnosis and imaging-based management of thyroid nodules. The review and recommendations in this report have been based on a comprehensive analysis of the current literature and the consensus of experts.", "title": "" }, { "docid": "15fddcfa5a9cbf80fe6640c815ca89ea", "text": "Relation extraction is one of the core challenges in automated knowledge base construction. One line of approach for relation extraction is to perform multi-hop reasoning on the paths connecting an entity pair to infer new relations. While these methods have been successfully applied for knowledge base completion, they do not utilize the entity or the entity type information to make predictions. In this work, we incorporate selectional preferences, i.e., relations enforce constraints on the allowed entity types for the candidate entities, to multi-hop relation extraction by including entity type information. We achieve a 17.67% (relative) improvement in MAP score in a relation extraction task when compared to a method that does not use entity type information.", "title": "" }, { "docid": "7974d3e3e9c431256ee35c3032288bd1", "text": "Nowadays, the usage of mobile device among the community worldwide has been tremendously increased. With this proliferation of mobile devices, more users are able to access the internet for variety of online application and services. As the use of mobile devices and applications grows, the rate of vulnerabilities exploitation and sophistication of attack towards the mobile user are increasing as well. To date, Google's Android Operating System (OS) are among the widely used OS for the mobile devices, the openness design and ease of use have made them popular among developer and user. Despite the advantages the android-based mobile devices have, it also invited the malware author to exploit the mobile application on the market. Prior to this matter, this research focused on investigating the behaviour of mobile malware through hybrid approach. The hybrid approach correlates and reconstructs the result from the static and dynamic malware analysis in producing a trace of malicious event. Based on the finding, this research proposed a general mobile malware behaviour model that can contribute in identifying the key features in detecting mobile malware on an Android Platform device.", "title": "" }, { "docid": "520faa53674eb384e8e892afc84c7ef4", "text": "Cyber-Physical Systems (CPS), which integrate controls, computing and physical processes are critical infrastructures of any country. They are becoming more vulnerable to cyber attacks due to an increase in computing and network facilities. The increase of monitoring network protocols increases the chances of being attacked. Once an attacker is able to cross the network intrusion detection mechanisms, he can affect the physical operations of the system which may lead to physical damages of components and/or a disaster. Some researchers used constraints of physical processes known as invariants to monitor the system in order to detect cyber attacks or failures. However, invariants generation is lacking in automation. This paper presents a novel method to identify invariants automatically using association rules mining. 
Through this technique, we show that it is possible to generate a number of invariants that are sometimes hidden from the design layout. Our preliminary study on a secure water treatment plant suggests that this approach is promising.", "title": "" }, { "docid": "3ba2ba9e2fc55476d86bcd8c857c9401", "text": "While model queries are important components in modeldriven tool chains, they are still frequently implemented using traditional programming languages, despite the availability of model query languages due to performance and expressiveness issues. In the current paper, we propose EMF-IncQuery as a novel, graph-based query language for EMF models by adapting the query language of the Viatra2 model transformation framework to inherit its concise, declarative nature, but to properly tailor the new query language to the modeling specificities of EMF. The EMF-IncQuery language includes (i) structural restrictions for queries imposed by EMF models, (ii) syntactic sugar and notational shorthand in queries, (iii) true semantic extensions which introduce new query features, and (iv) a constraint-based static type checking method to detect violations of EMF-specific type inference rules.", "title": "" }, { "docid": "46cc17628bd4caa7d617a5047be0157f", "text": "Many universities and institutes experience difficulty in training people to work with expensive equipments. A common problem faced by educational institutions concerns the limited availability of expensive robotics equipments, with which students in the didactic program can work, in order to acquire valuable “ha nds on” experience. Therefore, the Robot Simulation Software (RSS) nowadays is paramount important.", "title": "" }, { "docid": "d0b16a75fb7b81c030ab5ce1b08d8236", "text": "It is unquestionable that successive hardware generations have significantly improved GPU computing workload performance over the last several years. Moore's law and DRAM scaling have respectively increased single-chip peak instruction throughput by 3X and off-chip bandwidth by 2.2X from NVIDIA's GeForce 8800 GTX in November 2006 to its GeForce GTX 580 in November 2010. However, raw capability numbers typically underestimate the improvements in real application performance over the same time period, due to significant architectural feature improvements. To demonstrate the effects of architecture features and optimizations over time, we conducted experiments on a set of benchmarks from diverse application domains for multiple GPU architecture generations to understand how much performance has truly been improving for those workloads. First, we demonstrate that certain architectural features make a huge difference in the performance of unoptimized code, such as the inclusion of a general cache which can improve performance by 2-4× in some situations. Second, we describe what optimization patterns have been most essential and widely applicable for improving performance for GPU computing workloads across all architecture generations. Some important optimization patterns included data layout transformation, converting scatter accesses to gather accesses, GPU workload regularization, and granularity coarsening, each of which improved performance on some benchmark by over 20%, sometimes by a factor of more than 5×. While hardware improvements to baseline unoptimized code can reduce the speedup magnitude, these patterns remain important for even the most recent GPUs. 
Finally, we identify which added architectural features created significant new optimization opportunities, such as increased register file capacity or reduced bandwidth penalties for misaligned accesses, which increase performance by 2× or more in the optimized versions of relevant benchmarks.", "title": "" } ]
scidocsrr
1a256da57449613b6d3aeeb2b3915ee2
Low-Profile Planar Filtering Dipole Antenna With Omnidirectional Radiation Pattern
[ { "docid": "e95bef9aac5bb118109d82dec750da26", "text": "A novel microstrip circular disc monopole antenna with a reconfigurable 10-dB impedance bandwidth is proposed in this communication for cognitive radios (CRs). The antenna is fed by a microstrip line integrated with a bandpass filter based on a three-line coupled resonator (TLCR). The reconfiguration of the filter enables the monopole antenna to operate at either a wideband state or a narrowband state by using a PIN diode. For the narrowband state, two varactor diodes are employed to change the antenna operating frequency from 3.9 to 4.82 GHz continuously, which is different from previous work using PIN diodes to realize a discrete tuning. Similar radiation patterns with low cross-polarization levels are achieved for the two operating states. Measured results on tuning range, radiation patterns, and realized gains are provided, which show good agreement with numerical simulations.", "title": "" }, { "docid": "93d80e2015de513a689a41f33d74c45d", "text": "A horizontally polarized omnidirectional antenna with enhanced impedance bandwidth is presented in this letter. The proposed antenna consists of a feeding network, four printed dipole elements with etched slots, parasitic strips, and director elements. Four identically curved and printed dipole elements are placed in a square array and fed by a feeding network with uniform magnitude and phase; thus, the proposed antenna can achieve an omnidirectional radiation. To enhance the impedance bandwidth, parasitic strips and etched slots are introduced to produce additional lower and upper resonant frequencies, respectively. By utilizing four director elements, the gain variation in the horizontal plane can be improved, especially for the upper frequency band. With the structure, a reduced size of <inline-formula> <tex-math notation=\"LaTeX\">$0.63\\,\\lambda _{L} \\times 0.63\\,\\lambda _{L} \\times 0.01\\,\\lambda _{L}$</tex-math> </inline-formula> (<inline-formula><tex-math notation=\"LaTeX\">$\\lambda _{L}$</tex-math></inline-formula> is the free-space wavelength at the lowest frequency) is obtained. The proposed antenna is designed and fabricated. Measurement results reveal that the proposed antenna can provide an impedance bandwidth of 84.2% (1.58–3.88 GHz). Additionally, the gain variation in the horizontal plane is less than 1.5 dB over the frequency band 1.58–3.50 GHz, and increased to 2.2 dB at 3.80 GHz. Within the impedance bandwidth, the cross-polarization level is less than –23 dB in the horizontal plane.", "title": "" } ]
[ { "docid": "92583a036066d87f857ae1be2a9ed109", "text": "The OpenCog software development framework, for advancement of the development and testing of powerful and responsible integrative AGI, is described. The OpenCog Framework (OCF) 1.0, to be released in 2008 under the GPLv2, is comprised of a collection of portable libraries for OpenCog applications, plus an initial collection of cognitive algorithms that operate within the OpenCog framework. The OCF libraries include a flexible knowledge representation embodied in a scalable knowledge store, a cognitive process scheduler, and a plug-in architecture for allowing interaction between cognitive, perceptual, and control algorithms.", "title": "" }, { "docid": "eb0b22f209c47b47eacb2c4edc5453f4", "text": "Current road safety initiatives are approaching the limit of their effectiveness in developed countries. A paradigm shift is needed to address the preventable deaths of thousands on our roads. Previous systems have focused on one or two aspects of driving: environmental sensing, vehicle dynamics or driver monitoring. Our approach is to consider the driver and the vehicle as part of a combined system, operating within the road environment. A driver assistance system is implemented that is not only responsive to the road environment and the driver’s actions but also designed to correlate the driver’s eye gaze with road events to determine the driver’s observations. Driver observation monitoring enables an immediate in-vehicle system able to detect and act on driver inattentiveness, providing the precious seconds for an inattentive human driver to react. We present a prototype system capable of estimating the driver’s observations and detecting driver inattentiveness. Due to the “look but not see” case it is not possible to prove that a road event has been observed by the driver. We show, however, that it is possible to detect missed road events and warn the driver appropriately.", "title": "" }, { "docid": "bd6f23972644f6239ab1a40e9b20aa1e", "text": "This paper presents a machine-learning software solution that performs a multi-dimensional prediction of QoE (Quality of Experience) based on network-related SIFs (System Influence Factors) as input data. The proposed solution is verified through experimental study based on video streaming emulation over LTE (Long Term Evolution) which allows the measurement of network-related SIF (i.e., delay, jitter, loss), and subjective assessment of MOS (Mean Opinion Score). Obtained results show good performance of proposed MOS predictor in terms of mean prediction error and thereby can serve as an encouragement to implement such solution in all-IP (Internet Protocol) real environment.", "title": "" }, { "docid": "9dbb1b0b6a35bd78b35982a4957cdec4", "text": "Many modern Web-services ignore existing Web-standards and develop their own interfaces to publish their services. This reduces interoperability and increases network latency, which in turn reduces scalability of the service. The Web grew from a few thousand requests per day to million requests per hour without significant loss of performance. Applying the same architecture underlying the modern Web to Web-services could improve existing and forthcoming applications. 
REST is the idealized model of the interactions within a Web-application and became the foundation of the modern Web-architecture; it has been designed to meet the needs of Internet-scale distributed hypermedia systems by emphasizing scalability, generality of interfaces, independent deployment and allowing intermediary components to reduce network latency.", "title": "" }, { "docid": "676cdee75f9bb167d61017c22cf48496", "text": "Since the introduction of passive commercial capsule endoscopes, researchers have been pursuing methods to control and localize these devices, many utilizing magnetic fields [1, 2]. An advantage of magnetics is the ability to both actuate and localize using the same technology. Prior work from our group [3] developed a method to actuate screw-type magnetic capsule endoscopes in the intestines using a single rotating magnetic dipole located at any position with respect to the capsule. This paper presents a companion localization method that uses the same rotating dipole field for full 6-D pose estimation of a capsule endoscope embedded with a small permanent magnet and an array of magnetic-field sensors. Although several magnetic localization algorithms have been previously published, many are not compatible with magnetic actuation [4, 5]. Those that are require the addition of an accelerometer [6, 7], need a priori knowledge of the capsule’s orientation [7], provide only 3-D information [6], or must manipulate the position of the external magnetic source during localization [8, 9]. Kim et al. presented an iterative method for use with rotating magnetic fields, but the method contains errors [10]. Our proposed algorithm is less sensitive to data synchronization issues and sensor noise than our previous non-iterative method [11] because the data from the magnetic sensors is incorporated independently (rather than first using sensor data to estimate the field at the center of the capsule’s magnet), and the full pose is solved simultaneously (instead of position and orientation sequentially).", "title": "" }, { "docid": "4b1c1194a9292adf76452eda03f7f67f", "text": "Fin-type field-effect transistors (FinFETs) are promising substitutes for bulk CMOS at the nanoscale. FinFETs are double-gate devices. The two gates of a FinFET can either be shorted for higher performance or independently controlled for lower leakage or reduced transistor count. This gives rise to a rich design space. This chapter provides an introduction to various interesting FinFET logic design styles, novel circuit designs, and layout considerations.", "title": "" }, { "docid": "fd5d8b07b5798d162334cc1f8db10679", "text": "Relational databases have been the leading model for data storage, retrieval and management for over forty years. Majority of the data comes in semi-structured or unstructured format from social media, video and emails. RDBMS are not designed to accommodate unstructured data. Also the data size has increased tremendously to the range of petabytes and RDBMS finds it challenging to handle such huge data volumes. NoSQL database with their less constrained structure and scalable schema design has been developed to cover the limitations of relational database. With the increasing maturity of NoSQL databases as well as the situation of reading data more than writing on large volumes of data, many applications turn to NoSQL. This paper discusses various methodologies for migrating the existing data in relational database to NoSQL database. 
Also different NoSQL databases are compared based on their structure, performance, scalability, consistency, transactional features and read/write operational characteristics.", "title": "" }, { "docid": "0d40f7ddda91227fab3cc62a4ca2847c", "text": "Coherent texts are not just simple sequences of clauses and sentences, but rather complex artifacts that have highly elaborate rhetorical structure. This paper explores the extent to which well-formed rhetorical structures can be automatically derived by means of surface-form-based algorithms. These algorithms identify discourse usages of cue phrases and break sentences into clauses, hypothesize rhetorical relations that hold among textual units, and produce valid rhetorical structure trees for unrestricted natural language texts. The algorithms are empirically grounded in a corpus analysis of cue phrases and rely on a first-order formalization of rhetorical structure trees. The algorithms are evaluated both intrinsically and extrinsically. The intrinsic evaluation assesses the resemblance between automatically and manually constructed rhetorical structure trees. The extrinsic evaluation shows that automatically derived rhetorical structures can be successfully exploited in the context of text summarization.", "title": "" }, { "docid": "f82eb2d4cc45577f08c7e867bf012816", "text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.", "title": "" }, { "docid": "6380b60d47e49c9237208d48de9907e4", "text": "To date, conversations about cloud computing have been dominated by vendors who focus more on technology and less on business value. While it is still not fully agreed as to what components constitute cloud computing technology, some examples of its potential uses are emerging. We identify seven cloud capabilities that executives can use to formulate cloud-based strategies. Firms can change the mix of these capabilities to develop cloud strategies for unique competitive benefits. 
We predict that cloud strategies will lead to more intense ecosystem-based competition; it is therefore imperative that companies prepare for such a future now.", "title": "" }, { "docid": "4a989671768dee7428612adfc6c3f8cc", "text": "We developed computational models to predict the emergence of depression and Post-Traumatic Stress Disorder in Twitter users. Twitter data and details of depression history were collected from 204 individuals (105 depressed, 99 healthy). We extracted predictive features measuring affect, linguistic style, and context from participant tweets (N = 279,951) and built models using these features with supervised learning algorithms. Resulting models successfully discriminated between depressed and healthy content, and compared favorably to general practitioners’ average success rates in diagnosing depression, albeit in a separate population. Results held even when the analysis was restricted to content posted before first depression diagnosis. State-space temporal analysis suggests that onset of depression may be detectable from Twitter data several months prior to diagnosis. Predictive results were replicated with a separate sample of individuals diagnosed with PTSD (Nusers = 174, Ntweets = 243,775). A state-space time series model revealed indicators of PTSD almost immediately post-trauma, often many months prior to clinical diagnosis. These methods suggest a data-driven, predictive approach for early screening and detection of mental illness.", "title": "" }, { "docid": "192f8528ca2416f9a49ce152def2fbe6", "text": "We study the extent to which we can infer users’ geographical locations from social media. Location inference from social media can bene€t many applications, such as disaster management, targeted advertising, and news content tailoring. In recent years, a number of algorithms have been proposed for identifying user locations on social media platforms such as TwiŠer and Facebook from message contents, friend networks, and interactions between users. In this paper, we propose a novel probabilistic model based on factor graphs for location inference that o‚ers several unique advantages for this task. First, the model generalizes previous methods by incorporating content, network, and deep features learned from social context. Œe model is also ƒexible enough to support both supervised learning and semi-supervised learning. Second, we explore several learning algorithms for the proposed model, and present a Two-chain Metropolis-Hastings (MH+) algorithm, which improves the inference accuracy. Œird, we validate the proposed model on three di‚erent genres of data – TwiŠer, Weibo, and Facebook – and demonstrate that the proposed model can substantially improve the inference accuracy (+3.3-18.5% by F1-score) over that of several state-of-the-art methods.", "title": "" }, { "docid": "a35bdf118e84d71b161fea1b9e798a1a", "text": "Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation is applied to full field-of-view (FOV) images. 
The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper.", "title": "" }, { "docid": "e241fd603565e7277b4d109c90857e9c", "text": "Most time series comparison algorithms attempt to discover what the members of a set of time series have in common. We investigate a di erent problem, determining what distinguishes time series in that set from other time series obtained from the same source. In both cases the goal is to identify shared patterns, though in the latter case those patterns must be distinctive as well. An e cient incremental algorithm for identifying distinctive subsequences in multivariate, real-valued time series is described and evaluated with data from two very di erent sources: the response of a set of bandpass lters to human speech and the sensors of a mobile robot. Reference Number: 357", "title": "" }, { "docid": "1159d85ed21049f3fb70db58307eafff", "text": "Cannabis sativa L. is an annual dioecious plant from Central Asia. Cannabinoids, flavonoids, stilbenoids, terpenoids, alkaloids and lignans are some of the secondary metabolites present in C. sativa. Earlier reviews were focused on isolation and identification of more than 480 chemical compounds; this review deals with the biosynthesis of the secondary metabolites present in this plant. Cannabinoid biosynthesis and some closely related pathways that involve the same precursors are disscused.", "title": "" }, { "docid": "5de3d16ff4ed5592c992a9d0a928372c", "text": "Received: 24 November 2004 Revised: 6 March 2005 2nd Revision: 22 April 2005 Accepted: 5 May 2005 Abstract Enterprise systems are gaining interest from both practitioners and researchers because of their potential linkages to organizational and individual user’s productivity. Information systems (IS) researchers have been investigating the implementation and adoption issues of enterprise systems based on the organizational IS management perspectives. 
However, there are few papers that investigate enterprise systems management and implementation issues based on the informal control mechanisms, although the enterprise systems are control tools in the organization. Specifically, this paper applies Enterprise Resource Planning (ERP) adoption and implementation to the informal controls, such as cultural control and self-control, which can be viewed as a tacit perspective in knowledge management. Uncertainty avoidance and perceived enjoyment are used as informal controls in the ERP implementation in this paper, and are linked to the technology acceptance variables to investigate the relationships among them. Sociotechnical design, organizational control mechanism, knowledge management, and individual motivation are reviewed to support this potential linkage in the model. Field data via the online survey of ERP systems user group (n = 101) are analyzed with partial least squares and supported our hypotheses. Uncertainty avoidance cultural control and intrinsic motivation as self-control are the important antecedents of ERP systems adoption. Furthermore, the result helps the systems manager understand that informal controls should be applied to the ERP systems implementation to enhance tacit and social aspects of IS management. European Journal of Information Systems (2005) 14, 150–161. doi:10.1057/palgrave.ejis.3000532", "title": "" }, { "docid": "f7b8956748e8c19468490f35ed764e4e", "text": "We show how the database community’s notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data-reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and", "title": "" }, { "docid": "135158b230016bb80a08b4c7e2c4f3f2", "text": "Quite recently, two smart-card-based passwords authenticated key exchange protocols were proposed by Lee et al. and Hwang et al. respectively. However, neither of them achieves two-factor authentication fully since they would become completely insecure once one factor is broken. To overcome these congenital defects, this study proposes such a secure authenticated key exchange protocol that achieves fully two-factor authentication and provides forward security of session keys. And yet our scheme is simple and reasonably efficient. Furthermore, we can provide the rigorous proof of the security for it.", "title": "" }, { "docid": "6806ff9626d68336dce539a8f2c440af", "text": "Obesity and hypertension, major risk factors for the metabolic syndrome, render individuals susceptible to an increased risk of cardiovascular complications, such as adverse cardiac remodeling and heart failure. There has been much investigation into the role that an increase in the renin-angiotensin-aldosterone system (RAAS) plays in the pathogenesis of metabolic syndrome and in particular, how aldosterone mediates left ventricular hypertrophy and increased cardiac fibrosis via its interaction with the mineralocorticoid receptor (MR). 
Here, we review the pertinent findings that link obesity with elevated aldosterone and the development of cardiac hypertrophy and fibrosis associated with the metabolic syndrome. These studies illustrate a complex cross-talk between adipose tissue, the heart, and the adrenal cortex. Furthermore, we discuss findings from our laboratory that suggest that cardiac hypertrophy and fibrosis in the metabolic syndrome may involve cross-talk between aldosterone and adipokines (such as adiponectin).", "title": "" }, { "docid": "5ad95902784fdd9e1ebd3205aa06ddf8", "text": "Thirty-two male Holstein calves were used to investigate the effects of nutritional conditions around weaning and aging on carbonic anhydrase (CA) activity in the parotid gland and epithelium from the rumen and abomasum. We fed calf starter and lucerne hay as well as milk replacer (group N) or fed milk replacer either with (group S) or without (group M) administration of short-chain fatty acids (SCFA) through polypropylene tubing into the forestomach until 13 weeks of age. The diets were fed at 1000 hours and 1600 hours, and SCFA were administrated after milk replacer feeding at 1600 hours. Slaughter and tissue sampling were carried out between 1300 hours and 1430 hours at 1, 3, 7, 13, and 18 weeks of age. Tissue samples from five adult (1.5–2.0 years-old) Holstein steers were obtained from a local abattoir. In group N, CA activity in the parotid gland gradually and significantly increased toward the adult value, whilst in the epithelium from the rumen and abomasum, adult values were reached at 3 and 7 weeks of age, respectively. At 13 weeks, the activity for group N was significantly higher than that for the other two groups in the parotid gland, but there was no significant difference in the epithelium from the rumen and abomasum. The concentration of the carbonic isozyme VI in the parotid gland also changed with age but, in contrast to CA activity, had not reached adult levels by 13 weeks of age. In groups M and S, parotid saliva did not show any change toward an alkaline pH or toward a reciprocal change in the concentrations between Cl– and HCO3 –, even at 13 weeks of age. From these results we conclude that a concentrate-hay based diet around weaning has a crucial role in CA development in the parotid gland, but not in the epithelium of the rumen and abomasum.", "title": "" } ]
scidocsrr
ae6f98f2cd841ea8bb2d09a15590e493
The "Horse" Inside: Seeking Causes Behind the Behaviors of Music Content Analysis Systems
[ { "docid": "10d53a05fcfb93231ab100be7eeb6482", "text": "We present a computer audition system that can both annotate novel audio tracks with semantically meaningful words and retrieve relevant tracks from a database of unlabeled audio content given a text-based query. We consider the related tasks of content-based audio annotation and retrieval as one supervised multiclass, multilabel problem in which we model the joint probability of acoustic features and words. We collect a data set of 1700 human-generated annotations that describe 500 Western popular music tracks. For each word in a vocabulary, we use this data to train a Gaussian mixture model (GMM) over an audio feature space. We estimate the parameters of the model using the weighted mixture hierarchies expectation maximization algorithm. This algorithm is more scalable to large data sets and produces better density estimates than standard parameter estimation techniques. The quality of the music annotations produced by our system is comparable with the performance of humans on the same task. Our ldquoquery-by-textrdquo system can retrieve appropriate songs for a large number of musically relevant words. We also show that our audition system is general by learning a model that can annotate and retrieve sound effects.", "title": "" }, { "docid": "5c598998ffcf3d6008e8e5eed94fc396", "text": "Music information retrieval (MIR) is an emerging research area that receives growing attention from both the research community and music industry. It addresses the problem of querying and retrieving certain types of music from large music data set. Classification is a fundamental problem in MIR. Many tasks in MIR can be naturally cast in a classification setting, such as genre classification, mood classification, artist recognition, instrument recognition, etc. Music annotation, a new research area in MIR that has attracted much attention in recent years, is also a classification problem in the general sense. Due to the importance of music classification in MIR research, rapid development of new methods, and lack of review papers on recent progress of the field, we provide a comprehensive review on audio-based classification in this paper and systematically summarize the state-of-the-art techniques for music classification. Specifically, we have stressed the difference in the features and the types of classifiers used for different classification tasks. This survey emphasizes on recent development of the techniques and discusses several open issues for future research.", "title": "" }, { "docid": "90dececdeb4747ccfd87f75da6d53692", "text": "Much of the work on perception and understanding of music by computers has focused on low-level perceptual features such as pitch and tempo. Our work demonstrates that machine learning can be used to build e ective style classi ers for interactive performance systems. We also present an analysis explaining why these techniques work so well when hand-coded approaches have consistently failed. We also describe a reliable real-time performance style classi er.", "title": "" } ]
[ { "docid": "39d1271ce88b840b8d75806faf9463ad", "text": "Dynamically Reconfigurable Systems (DRS), implemented using Field-Programmable Gate Arrays (FPGAs), allow hardware logic to be partially reconfigured while the rest of a design continues to operate. By mapping multiple reconfigurable hardware modules to the same physical region of an FPGA, such systems are able to time-multiplex their circuits at run time and can adapt to changing execution requirements. This architectural flexibility introduces challenges for verifying system functionality. New simulation approaches need to extend traditional simulation techniques to assist designers in testing and debugging the time-varying behavior of DRS. Another significant challenge is the effective use of tools so as to reduce the number of design iterations. This thesis focuses on simulation-based functional verification of modular reconfigurable DRS designs. We propose a methodology and provide tools to assist designers in verifying DRS designs while part of the design is undergoing reconfiguration. This thesis analyzes the challenges in verifying DRS designs with respect to the user design and the physical implementation of such systems. We propose using a simulationonly layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. The simulation-only layer maintains verification productivity by abstracting away the physical details of the FPGA fabric. Furthermore, since the design does not need to be modified for simulation purposes, the design as implemented instead of some variation of it is verified. We provide two possible implementations of the simulation-only layer. Extended ReChannel is a SystemC library that can be used to model DRS at a high level. ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, we demonstrate that with insignificant overheads, our approach seamlessly integrates with the existing, mainstream DRS design flow and with wellestablished verification methodologies such as top-down modeling and coverage-driven verification. The case studies also serve as a guide in the use of our libraries to identify bugs that are related to Dynamic Partial Reconfiguration. Our results demonstrate that using the simulation-only layer is an effective approach to the simulation-based functional verification of DRS designs.", "title": "" }, { "docid": "f829820706687c186e998bfed5be9c42", "text": "As deep learning systems are widely adopted in safetyand securitycritical applications, such as autonomous vehicles, banking systems, etc., malicious faults and attacks become a tremendous concern, which potentially could lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), by using laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functions in DNNs development, that are the general main building block of DNNs that creates non-linear behaviors – ReLu, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. 
Such a result can have practical implications for real-world applications, where faults can be introduced by simpler means (such as altering the supply voltage).", "title": "" }, { "docid": "f472c2ebd6cf1f361fd8c572f8c516e4", "text": "This article discusses the creation of an educational game intended for UK GCSE-level content, called Elemental. Elemental, developed using Microsoft's XNA studio and deployed both on the PC and Xbox 360 platforms, addresses the periodic table of elements, a subject with extensions in chemistry, physics and engineering. Through the development process of the game but also the eventual pilot user study with 15 subjects (using a pre and post test method to measure learning using the medium and self-report questions), examples are given on how an educator can, without expert knowledge, utilize modern programming tools to create and test custom-made content for delivering part of a secondary education curriculum.", "title": "" }, { "docid": "228ddbe305dd32ad1c7c3986e5ece29d", "text": "We provide a tutorial on learning and inference in hidden Markov models in the context of the recent literature on Bayesian networks. This perspective makes it possible to consider novel generalizations of hidden Markov models with multiple hidden state variables, multiscale representations, and mixed discrete and continuous variables. Although exact inference in these generalizations is usually intractable, one can use approximate inference algorithms such as Markov chain sampling and variational methods. We describe how such methods are applied to these generalized hidden Markov models. We conclude this review with a discussion of Bayesian methods for model selection in generalized HMMs.", "title": "" }, { "docid": "49472bad6101fe7b40165a155b40bbab", "text": "Morphogenesis of the vascular system is strongly modulated by mechanical forces from blood flow. Hereditary hemorrhagic telangiectasia (HHT) is an inherited autosomal-dominant disease in which arteriovenous malformations and telangiectasias accumulate with age. Most cases are linked to heterozygous mutations in Alk1 or Endoglin, receptors for bone morphogenetic proteins (BMPs) 9 and 10. Evidence suggests that a second hit results in clonal expansion of endothelial cells to form lesions with poor mural cell coverage that spontaneously rupture and bleed. We now report that fluid shear stress potentiates BMPs to activate Alk1 signaling, which correlates with enhanced association of Alk1 and endoglin. Alk1 is required for BMP9 and flow responses, whereas endoglin is only required for enhancement by flow. This pathway mediates both inhibition of endothelial proliferation and recruitment of mural cells; thus, its loss blocks flow-induced vascular stabilization. Identification of Alk1 signaling as a convergence point for flow and soluble ligands provides a molecular mechanism for development of HHT lesions.", "title": "" }, { "docid": "6e8e1888658262163d7384d5e94155f5", "text": "Employing case studies taken from work experience in the UK, USA, Canada and Japan this paper observes the evolution of Human Factors (HF) and ergonomics in the railroad from a practitioner’s point of view. Practical areas for application of HF at specific points in railroad signaling and control systems are described. HF considerations in advanced train control systems and the movement towards automation are discussed as well as the impact of these new technologies on the context of operation itself. 
There is now a greater reliance on the operator to remain vigilant and react efficiently when intervention on automation is required both within the control room and driver cab environments. This paper illustrates some of the human performance concerns for novel transportation control systems that are faced today and discusses how this area of cognitive attention, human error and workload is difficult to assess and predict.", "title": "" }, { "docid": "4b051e3908eabb5f550094ebabf6583d", "text": "This paper presents a review of modern cooling system employed for the thermal management of power traction machines. Various solutions for heat extractions are described: high thermal conductivity insulation materials, spray cooling, high thermal conductivity fluids, combined liquid and air forced convection, and loss mitigation techniques.", "title": "" }, { "docid": "5d1881ec6df3ab0cf423285ccb060871", "text": "Currently Virtual Machines (VMs) have many applications and their use is growing constantly as the hardware gets more powerful and usage more regulated allowing for scaling, monitoring, portability, security applications and many other uses. There are many types of virtualization techniques that can be employed on many levels from simple sandbox to full fledged streamlined managed access. While scaling, software lifecycles and diversity are just some of security challenges faced by VM developers the failure to properly implement those mechanisms may lead to VM escape, host access, denial of service and more. There are many exploits found in the last couple of years which were fixed on latest versions but some systems are still running them and vulnerable as presented, mostly to host based attacks and some have dramatic consequences.", "title": "" }, { "docid": "7aa8eb86ef6cedb4a5786b25f88c67e9", "text": "We present ASP Modulo ‘Space-Time’, a declarative representational and computational framework to perform commonsense reasoning about regions with both spatial and temporal components. Supported are capabilities for mixed qualitative-quantitative reasoning, consistency checking, and inferring compositions of space-time relations; these capabilities combine and synergise for applications in a range of AI application areas where the processing and interpretation of spatio-temporal data is crucial. The framework and resulting system is the only general KR-based method for declaratively reasoning about the dynamics of ‘space-time’ regions as first-class objects. We present an empirical evaluation (with scalability and robustness results), and include diverse application examples involving interpretation and control tasks.", "title": "" }, { "docid": "9516d06751aa51edb0b0a3e2b75e0bde", "text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. 
The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.", "title": "" }, { "docid": "23efdc538a2b0847a51bf28fa74ff8c9", "text": "Realityflythrough is a telepresence/tele-reality system that works in the dynamic, uncalibrated environments typically associated with ubiquitous computing. By harnessing networked mobile video cameras, it allows a user to remotely and immersively explore a physical space. RealityFlythrough creates the illusion of complete live camera coverage in a physical environment. This paper describes the architecture of RealityFlythrough, and evaluates it along three dimensions: (1) its support of the abstractions for infinite camera coverage, (2) its scalability, and (3) its robustness to changing user requirements.", "title": "" }, { "docid": "4e2fbac1742c7afe9136e274150d6ee9", "text": "Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel generative model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation-specific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.", "title": "" }, { "docid": "3741dc61c49af3ae6f1be6a69d44c8ad", "text": "Interaction sites on protein surfaces mediate virtually all biological activities, and their identification holds promise for disease treatment and drug design. Novel algorithmic approaches for the prediction of these sites have been produced at a rapid rate, and the field has seen significant advancement over the past decade. However, the most current methods have not yet been reviewed in a systematic and comprehensive fashion. Herein, we describe the intricacies of the biological theory, datasets, and features required for modern protein-protein interaction site (PPIS) prediction, and present an integrative analysis of the state-of-the-art algorithms and their performance. First, the major sources of data used by predictors are reviewed, including training sets, evaluation sets, and methods for their procurement. Then, the features employed and their importance in the biological characterization of PPISs are explored. This is followed by a discussion of the methodologies adopted in contemporary prediction programs, as well as their relative performance on the datasets most recently used for evaluation. In addition, the potential utility that PPIS identification holds for rational drug design, hotspot prediction, and computational molecular docking is described. Finally, an analysis of the most promising areas for future development of the field is presented.", "title": "" }, { "docid": "994922edc3eb0527bba2f70e9b31870c", "text": "A large body of literature explains the inferior position of unskilled workers by imposing a structural shift in the labor force skill composition. 
This paper takes a different approach by emphasizing the connection between cyclical variations in skilled and unskilled labor markets. Using a stylized business cycle model with search frictions in the respective sub-markets, I find that imperfect substitution between skilled and unskilled labor creates a channel for the variations in the sub-markets. Together with a general labor augmenting technology shock, it can generate downward sloping Beveridge curves. Calibrating the model to US data yields higher volatilities in the unskilled labor markets and reproduces stylized business cycle facts.", "title": "" }, { "docid": "d04042c81f2c2f7f762025e6b2bd9ab8", "text": "AIMS AND OBJECTIVES\nTo examine the association between trait emotional intelligence and learning strategies and their influence on academic performance among first-year accelerated nursing students.\n\n\nDESIGN\nThe study used a prospective survey design.\n\n\nMETHODS\nA sample size of 81 students (100% response rate) who undertook the accelerated nursing course at a large university in Sydney participated in the study. Emotional intelligence was measured using the adapted version of the 144-item Trait Emotional Intelligence Questionnaire. Four subscales of the Motivated Strategies for Learning Questionnaire were used to measure extrinsic goal motivation, peer learning, help seeking and critical thinking among the students. The grade point average score obtained at the end of six months was used to measure academic achievement.\n\n\nRESULTS\nThe results demonstrated a statistically significant correlation between emotional intelligence scores and critical thinking (r = 0.41; p < 0.001), help seeking (r = 0.33; p < 0.003) and peer learning (r = 0.32; p < 0.004) but not with extrinsic goal orientation (r = -0.05; p < 0.677). Emotional intelligence emerged as a significant predictor of academic achievement (β = 0.25; p = 0.023).\n\n\nCONCLUSION\nIn addition to their learning styles, higher levels of awareness and understanding of their own emotions have a positive impact on students' academic achievement. Higher emotional intelligence may lead students to pursue their interests more vigorously and think more expansively about subjects of interest, which could be an explanatory factor for higher academic performance in this group of nursing students.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe concepts of emotional intelligence are central to clinical practice as nurses need to know how to deal with their own emotions as well as provide emotional support to patients and their families. It is therefore essential that these skills are developed among student nurses to enhance the quality of their clinical practice.", "title": "" }, { "docid": "8d26fc4b31ca7bd2c461483852e70626", "text": "The pili from pathogenic Escherichia coli isolates 566, 1794 and TK3 of chicken and turkey origin were purified. After mechanic detachment from the bacterial cells, the pili were concentrated by precipitation with ammonium sulfate, dialyzed, and solubilized in buffer containing deoxycholate. The fraction containing the pilus was purified further by ultracentrifugation in a sucrose gradient. After ultracentrifugation, the pili at the density of 1.10 to 1.15 g.cm-3 (between 10%-20% of sucrose gradients) were collected, and the purified pili from strain 566, 1794 and TK3 had an apparent molecular weight of 17,500, 17,000 and 17,000 respectively, which retained their ability to bind the erythrocyte in a mannose-inhibitable fashion. 
Hyperimmunesera raised in BALB/C mice against the purified pili from strain 1794 reacted positively with type 1 pili from both isolates 566 and TK3 by immuno blot. These results revealed that the three strains either Chinese or north american isolates expressed type 1 pili which had molecular weights from 17,000 to 17,500, and they have common antigenic epitopes.", "title": "" }, { "docid": "5fb87b6fe032f4d8ec4026f4994b179c", "text": "Building general-purpose conversation agents is a very challenging task, but necessary on the road toward intelligent agents that can interact with humans in natural language. Neural conversation models – purely data-driven systems trained end-to-end on dialogue corpora – have shown great promise recently, yet they often produce short and generic responses. This work presents new training and decoding methods that improve the quality, coherence, and diversity of long responses generated using sequence-to-sequence models. Our approach adds selfattention to the decoder to maintain coherence in longer responses, and we propose a practical approach, called the glimpse-model, for scaling to large datasets. We introduce a stochastic beam-search algorithm with segment-by-segment reranking which lets us inject diversity earlier in the generation process. We trained on a combined data set of over 2.3B conversation messages mined from the web. In human evaluation studies, our method produces longer responses overall, with a higher proportion rated as acceptable and excellent as length increases, compared to baseline sequence-to-sequence models with explicit length-promotion. A backoff strategy produces better responses overall, in the full spectrum of lengths.", "title": "" }, { "docid": "fa9a2112a687063c2fb3733af7b1ea61", "text": "Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through large scale human evaluation, we report that our resulting word embedddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.", "title": "" }, { "docid": "74de5693ada4c4ce9ba327deda8d67a2", "text": "As a result of globalization and climate change, Dirofilaria immitis and Dirofilaria repens, the causative agents of dirofilariosis in Europe, continue to spread from endemic areas in the Mediterranean to northern and northeastern regions of Europe where autochthonous cases of dirofilarial infections have increasingly been observed in dogs and humans. Whilst D. repens was recently reported from mosquitoes in putatively non-endemic areas, D. immitis has never been demonstrated in mosquitoes from Europe outside the Mediterranean. From 2011 to 2013, mosquitoes collected within the framework of a German national mosquito monitoring programme were screened for filarial nematodes using a newly designed filarioid-specific real-time PCR assay. 
Positive samples were further processed by conventional PCR amplification of the cytochrome c oxidase subunit I (COI) gene, amplicons were sequenced and sequences blasted against GenBank. Approximately 17,000 female mosquitoes were subjected to filarial screening. Out of 955 pools examined, nine tested positive for filariae. Two of the COI sequences indicated D. immitis, one D. repens and four Setaria tundra. Two sequences could not be assigned to a known species due to a lack of similar GenBank entries. Whilst D. immitis and the unknown parasites were detected in Culex pipiens/torrentium, D. repens was found in a single Anopheles daciae and all S. tundra were demonstrated in Aedes vexans. All positive mosquitoes were collected between mid-June and early September. The finding of dirofilariae in German mosquitoes implies the possibility of a local natural transmission cycle. While the routes of introduction to Germany and the origin of the filariae cannot be determined retrospectively, potential culicid vectors and reservoir hosts must prospectively be identified and awareness among physicians, veterinarians and public health personnel be created. The health impact of S. tundra on the indigenous cervid fauna needs further investigation.", "title": "" }, { "docid": "c102e00d44d335b344b56423bd16e7c5", "text": "PURPOSE\nTo evaluate the association between social networking site (SNS) use and depression in older adolescents using an experience sample method (ESM) approach.\n\n\nMETHODS\nOlder adolescent university students completed an online survey containing the Patient Health Questionnaire-9 depression screen (PHQ) and a week-long ESM data collection period to assess SNS use.\n\n\nRESULTS\nParticipants (N = 190) included in the study were 58% female and 91% Caucasian. The mean age was 18.9 years (standard deviation = .8). Most used SNSs for either <30 minutes (n = 100, 53%) or between 30 minutes and 2 hours (n = 74, 39%); a minority of participants reported daily use of SNS >2 hours (n = 16, 8%). The mean PHQ score was 5.4 (standard deviation = 4.2). No associations were seen between SNS use and either any depression (p = .519) or moderate to severe depression (p = .470).\n\n\nCONCLUSIONS\nWe did not find evidence supporting a relationship between SNS use and clinical depression. Counseling patients or parents regarding the risk of \"Facebook Depression\" may be premature.", "title": "" } ]
scidocsrr
4c227cc475823c8b828b566dfe71ff7f
Human Action Recognition Using Factorized Spatio-Temporal Convolutional Networks
[ { "docid": "9f24cf3e8fde24d4622d9f71a2c7998f", "text": "Most of the previous work on video action recognition use complex hand-designed local features, such as SIFT, HOG and SURF, but these approaches are implemented sophisticatedly and difficult to be extended to other sensor modalities. Recent studies discover that there are no universally best hand-engineered features for all datasets, and learning features directly from the data may be more advantageous. One such endeavor is Slow Feature Analysis (SFA) proposed by Wiskott and Sejnowski [33]. SFA can learn the invariant and slowly varying features from input signals and has been proved to be valuable in human action recognition [34]. It is also observed that the multi-layer feature representation has succeeded remarkably in widespread machine learning applications. In this paper, we propose to combine SFA with deep learning techniques to learn hierarchical representations from the video data itself. Specifically, we use a two-layered SFA learning structure with 3D convolution and max pooling operations to scale up the method to large inputs and capture abstract and structural features from the video. Thus, the proposed method is suitable for action recognition. At the same time, sharing the same merits of deep learning, the proposed method is generic and fully automated. Our classification results on Hollywood2, KTH and UCF Sports are competitive with previously published results. To highlight some, on the KTH dataset, our recognition rate shows approximately 1% improvement in comparison to state-of-the-art methods even without supervision or dense sampling.", "title": "" }, { "docid": "812abd8ee942c352bd2b141e3c88ba21", "text": "Video based action recognition is one of the important and challenging problems in computer vision research. Bag of visual words model (BoVW) with local features has been very popular for a long time and obtained the state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from local features, which is mainly composed of five steps; (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Although many effort s have been made in each step independently in different scenarios, their effects on action recognition are still unknown. Meanwhile, video data exhibits different views of visual patterns , such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Fusing these descriptors is crucial for boosting the final performance of an action recognition system. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practices to produce a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate and improper choice in one of the steps may counteract the performance improvement of other steps. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid supervector , by exploring the complementarity of different BoVW frameworks with improved dense trajectories. 
Using this representation, we obtain impressive results on the three challenging datasets; HMDB51 (61.9%), UCF50 (92.3%), and UCF101 (87.9%). © 2016 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "b64b2a82cec34a76a84b96c42a09fa0f", "text": "Control of compliant mechanical systems is increasingly being researched for several applications including flexible link robots and ultra-precision positioning systems. The control problem in these systems is challenging, especially with gravity coupling and large deformations, because of inherent underactuation and the combination of lumped and distributed parameters of a nonlinear system. In this paper we consider an ultra-flexible inverted pendulum on a cart and propose a new nonlinear energy shaping controller to keep the pendulum at the upward position with the cart stopped at a desired location. The design is based on a model, obtained via the constrained Lagrange formulation, which previously has been validated experimentally. The controller design consists of a partial feedback linearization step followed by a standard PID controller acting on two passive outputs. Boundedness of all signals and (local) asymptotic stability of the desired equilibrium is theoretically established. Simulations and experimental evidence assess the performance of the proposed controller.", "title": "" }, { "docid": "934539b00ee9131e7ed2cb3bf7d1417e", "text": "Modern GPUs supporting compressed textures allow interactive application developers to save scarce GPU resources such as VRAM and bandwidth. Compressed textures use fixed compression ratios whose lossy representations are significantly poorer quality than traditional image compression formats such as JPEG. We present a new method in the class of supercompressed textures that provides an additional layer of compression to already compressed textures. Our texture representation is designed for endpoint compressed formats such as DXT and PVRTC and decoding on commodity GPUs. We apply our algorithm to commonly used formats by separating their representation into two parts that are processed independently and then entropy encoded. Our method preserves the CPU-GPU bandwidth during the decoding phase and exploits the parallelism of GPUs to provide up to 3X faster decode compared to prior texture supercompression algorithms. Along with the gains in decoding speed, our method maintains both the compression size and quality of current state of the art texture representations.", "title": "" }, { "docid": "7f23e4b069d6c76a3858c3255269edfd", "text": "This study examines the case of a sophomore high school history class where Making History, a video game designed with educational purposes in mind, is used in the classroom to teach about World War II. Data was gathered using observation, focus group and individual interviews, and document analysis. The high school was a rural school located in a small town in the Midwestern United States. The teacher had been teaching with the game for several years and spent one school week teaching World War II, with students playing the game in class for three days of that week. The purpose of this study was to understand teacher and student experiences with and perspectives on the in-class use of an educational video game. Results showed that the use of the video game resulted in a shift from a traditional teachercentered learning environment to a student-centered environment where the students were much more active and engaged. Also, the teacher had evolved implementation strategies based on his past experiences using the game to maximize the focus on learning. 2010 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "1cbf280d68e83b219e60ba6e34f3e144", "text": "A stroke occurs when the blood supply to a person's brain is interrupted or reduced. The stroke deprives the person's brain of oxygen and nutrients, which can cause brain cells to die. Numerous works have been carried out for predicting various diseases by comparing the performance of predictive data mining technologies. In this work, we compare different methods with our approach for stroke prediction on the Cardiovascular Health Study (CHS) dataset. Here, a decision tree algorithm is used for the feature selection process, a principal component analysis algorithm is used for dimensionality reduction, and a back-propagation neural network classification algorithm is adopted to construct the classification model. After analyzing and comparing the classification efficiency and accuracy of the different methods and model variations, our work yields the optimum predictive model for stroke disease, with 97.7% accuracy.", "title": "" }, { "docid": "e4574b1e8241599b5c3ef740b461efba", "text": "Increasing awareness of ICS security issues has brought about a growing body of work in this area, including pioneering contributions based on realistic control system logs and network traces. This paper surveys the state of the art in ICS security research, including efforts of industrial researchers, highlighting the most interesting works. Research efforts are grouped into divergent areas, where we add “secure control” as a new category to capture security goals specific to control systems that differ from security goals in traditional IT systems.", "title": "" }, { "docid": "1e3136f97585c985153b3ed43ac8db6c", "text": "In this report, we organize and reflect on recent advances and challenges in the field of sports data visualization. The exponentially-growing body of visualization research based on sports data is a prime indication of the importance and timeliness of this report. Sports data visualization research encompasses the breadth of visualization tasks and goals: exploring the design of new visualization techniques; adapting existing visualizations to a novel domain; and conducting design studies and evaluations in close collaboration with experts, including practitioners, enthusiasts, and journalists. Frequently this research has impact beyond sports in both academia and industry because it is i) grounded in realistic, highly heterogeneous data, ii) applied to real-world problems, and iii) designed in close collaboration with domain experts. In this report, we analyze current research contributions through the lens of three categories of sports data: box score data (data containing statistical summaries of a sport event such as a game), tracking data (data about in-game actions and trajectories), and meta-data (data about the sport and its participants but not necessarily a given game). We conclude this report with a high-level discussion of sports visualization research informed by our analysis—identifying critical research gaps and valuable opportunities for the visualization community. More information is available at the STAR's website: https://sportsdataviz.github.io/.", "title": "" }, { "docid": "780fd139195695dce5eda3ab92de6179", "text": "An omnidirectional platform with an Active Offset Split Caster (ASOC) is described and its ability to operate on non-ideal floors is studied.
It is shown that all of its driven wheels of the platform will remain in contact with an uneven floor at all times, a condition necessary to maintain good traction and dead-reckoning capabilities. It is shown that planning algorithms developed for an ideally flat floor perform adequately for a realistic uneven floor. Furthermore, it is shown that the ASOC design consumes less power than other conventional wheel omnidirectional designs and is more suitable to heavier loads. Analytical and experimental results are presented.", "title": "" }, { "docid": "0bcb2fdf59b88fca5760bfe456d74116", "text": "A good distance metric is crucial for unsupervised learning from high-dimensional data. To learn a metric without any constraint or class label information, most unsupervised metric learning algorithms appeal to projecting observed data onto a low-dimensional manifold, where geometric relationships such as local or global pairwise distances are preserved. However, the projection may not necessarily improve the separability of the data, which is the desirable outcome of clustering. In this paper, we propose a novel unsupervised adaptive metric learning algorithm, called AML, which performs clustering and distance metric learning simultaneously. AML projects the data onto a low-dimensional manifold, where the separability of the data is maximized. We show that the joint clustering and distance metric learning can be formulated as a trace maximization problem, which can be solved via an iterative procedure in the EM framework. Experimental results on a collection of benchmark data sets demonstrated the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "10b7ce647229f3c9fe5aeced5be85e38", "text": "The proliferation of deep learning methods in natural language processing (NLP) and the large amounts of data they often require stands in stark contrast to the relatively data-poor clinical NLP domain. In particular, large text corpora are necessary to build high-quality word embeddings, yet often large corpora that are suitably representative of the target clinical data are unavailable. This forces a choice between building embeddings from small clinical corpora and less representative, larger corpora. This paper explores this trade-off, as well as intermediate compromise solutions. Two standard clinical NLP tasks (the i2b2 2010 concept and assertion tasks) are evaluated with commonly used deep learning models (recurrent neural networks and convolutional neural networks) using a set of six corpora ranging from the target i2b2 data to large open-domain datasets. While combinations of corpora are generally found to work best, the single-best corpus is generally task-dependent.", "title": "" }, { "docid": "d43f2fc0a8542b94375661a4ca15e5aa", "text": "This paper describes a new algorithm that generates a cartoon-style bas-relief surface from photographs of general scenes. Most previous methods for bas-relief generation have focused on accurate restoration of input 3D models on a background plane. The generation of bas-reliefs with artistic effects has rarely been studied. Considering that non-photorealistic rendering (NPR) techniques are currently very popular and 3D printing technology is developing rapidly, extending NPR techniques to the generation of a bas-relief surface with artistic effects is natural and valuable. Furthermore, cartoon is a basic non-realistic and artistic style familiar to general users. 
From this motivation, our method focuses on generating a cartoon-style bas-relief surface. We use the lens blur function of Google Camera, which is a smartphone application, to obtain a photograph and its depth map as inputs. Using coherent line drawing and histogram-based quantization methods, we construct a depth map that contains the salient features of given input scenes in abstract form. Displacement mapping from the depth map onto a thin plane generates a cartoon-style bas-relief. Experimental results show that our method generates bas-relief surfaces that contain the characteristics of cartoons, such as coherent border lines and quantized layers.", "title": "" }, { "docid": "3b72f2d158aad8b21746f59212698c4f", "text": "22 23 24 25 26", "title": "" }, { "docid": "9d3a5067956e2eeb3e9f0f188f07ab1e", "text": "Recently, neural machine translation (NMT) has been extended to multilinguality, that is to handle more than one translation direction with a single system. Multilingual NMT showed competitive performance against pure bilingual systems. Notably, in low-resource settings, it proved to work effectively and efficiently, thanks to shared representation space that is forced across languages and induces a sort of transfer-learning. Furthermore, multilingual NMT enables so-called zero-shot inference across language pairs never seen at training time. Despite the increasing interest in this framework, an in-depth analysis of what a multilingual NMT model is capable of and what it is not is still missing. Motivated by this, our work (i) provides a quantitative and comparative analysis of the translations produced by bilingual, multilingual and zero-shot systems; (ii) investigates the translation quality of two of the currently dominant neural architectures in MT, which are the Recurrent and the Transformer ones; and (iii) quantitatively explores how the closeness between languages influences the zero-shot translation. Our analysis leverages multiple professional post-edits of automatic translations by several different systems and focuses both on automatic standard metrics (BLEU and TER) and on widely used error categories, which are lexical, morphology, and word order errors.", "title": "" }, { "docid": "69f36a0f043d8966dbcd7fc2607d61f8", "text": "This paper presents a method for modeling and estimation of the state of charge (SOC) of lithium-ion (Li-Ion) batteries using neural networks (NNs) and the extended Kalman filter (EKF). The NN is trained offline using the data collected from the battery-charging process. This network finds the model needed in the state-space equations of the EKF, where the state variables are the battery terminal voltage at the previous sample and the SOC at the present sample. Furthermore, the covariance matrix for the process noise in the EKF is estimated adaptively. The proposed method is implemented on a Li-Ion battery to estimate online the actual SOC of the battery. Experimental results show a good estimation of the SOC and fast convergence of the EKF state variables.", "title": "" }, { "docid": "f1f7f8eb67488defd524800c12bd10ad", "text": "As a serious concern in data publishing and analysis, privacy preserving data processing has received a lot of attention. Privacy preservation often leads to information loss. Consequently, we want to minimize utility loss as long as the privacy is preserved. In this chapter, we survey the utility-based privacy preservation methods systematically. 
We first briefly discuss the privacy models and utility measures, and then review four recently proposed methods for utilitybased privacy preservation. We first introduce the utility-based anonymization method for maximizing the quality of the anonymized data in query answering and discernability. Then we introduce the top-down specialization (TDS) method and the progressive disclosure algorithm (PDA) for privacy preservation in classification problems. Last, we introduce the anonymized marginal method, which publishes the anonymized projection of a table to increase the utility and satisfy the privacy requirement.", "title": "" }, { "docid": "c28b48557a4eda0d29200170435f2935", "text": "An important role is reserved for nuclear imaging techniques in the imaging of neuroendocrine tumors (NETs). Somatostatin receptor scintigraphy (SRS) with (111)In-DTPA-octreotide is currently the most important tracer in the diagnosis, staging and selection for peptide receptor radionuclide therapy (PRRT). In the past decade, different positron-emitting tomography (PET) tracers have been developed. The largest group is the (68)Gallium-labeled somatostatin analogs ((68)Ga-SSA). Several studies have demonstrated their superiority compared to SRS in sensitivity and specificity. Furthermore, patient comfort and effective dose are favorable for (68)Ga-SSA. Other PET targets like β-[(11)C]-5-hydroxy-L-tryptophan ((11)C-5-HTP) and 6-(18)F-L-3,4-dihydroxyphenylalanine ((18)F-DOPA) were developed recently. For insulinomas, glucagon-like peptide-1 receptor imaging is a promising new technique. The evaluation of response after PRRT and other therapies is a challenge. Currently, the official follow-up is performed with radiological imaging techniques. The role of nuclear medicine may increase with the newest tracers for PET. In this review, the different nuclear imaging techniques and tracers for the imaging of NETs will be discussed.", "title": "" }, { "docid": "3cf60753c37f2520188b26e67e243b6c", "text": "The growing dependence of critical infrastructures and industrial automation on interconnected physical and cyber-based control systems has resulted in a growing and previously unforeseen cyber security threat to supervisory control and data acquisition (SCADA) and distributed control systems (DCSs). It is critical that engineers and managers understand these issues and know how to locate the information they need. This paper provides a broad overview of cyber security and risk assessment for SCADA and DCS, introduces the main industry organizations and government groups working in this area, and gives a comprehensive review of the literature to date. Major concepts related to the risk assessment methods are introduced with references cited for more detail. Included are risk assessment methods such as HHM, IIM, and RFRM which have been applied successfully to SCADA systems with many interdependencies and have highlighted the need for quantifiable metrics. Presented in broad terms is probability risk analysis (PRA) which includes methods such as FTA, ETA, and FEMA. 
The paper concludes with a general discussion of two recent methods (one based on compromise graphs and one on augmented vulnerability trees) that quantitatively determine the probability of an attack, the impact of the attack, and the reduction in risk associated with a particular countermeasure.", "title": "" }, { "docid": "b7f15089db3f5d04c1ce1d5f09b0b1f0", "text": "Despite the flourishing research on the relationships between affect and language, the characteristics of pain-related words, a specific type of negative words, have never been systematically investigated from a psycholinguistic and emotional perspective, despite their psychological relevance. This study offers psycholinguistic, affective, and pain-related norms for words expressing physical and social pain. This may provide a useful tool for the selection of stimulus materials in future studies on negative emotions and/or pain. We explored the relationships between psycholinguistic, affective, and pain-related properties of 512 Italian words (nouns, adjectives, and verbs) conveying physical and social pain by asking 1020 Italian participants to provide ratings of Familiarity, Age of Acquisition, Imageability, Concreteness, Context Availability, Valence, Arousal, Pain-Relatedness, Intensity, and Unpleasantness. We also collected data concerning Length, Written Frequency (Subtlex-IT), N-Size, Orthographic Levenshtein Distance 20, Neighbor Mean Frequency, and Neighbor Maximum Frequency of each word. Interestingly, the words expressing social pain were rated as more negative, arousing, pain-related, and conveying more intense and unpleasant experiences than the words conveying physical pain.", "title": "" }, { "docid": "6a91c45e0cfac9dd472f68aec15889eb", "text": "UNLABELLED\nThe Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or by slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos.\n\n\nAVAILABILITY AND IMPLEMENTATION\nXPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/.\n\n\nCONTACT\njohannes.stegmaier@kit.edu\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "509075d64990cf7258c13dd0dfd5e282", "text": "In recent years we have seen a tremendous growth in applications of passive sensor-enabled RFID technology by researchers; however, their usability in applications such as activity recognition is limited by a key issue associated with their incapability to handle unintentional brownout events leading to missing significant sensed events such as a fall from a chair. 
Furthermore, due to the need to power and sample a sensor the practical operating range of passive-sensor enabled RFID tags are also limited with respect to passive RFID tags. Although using active or semi-passive tags can provide alternative solutions, they are not without the often undesirable maintenance and limited lifespan issues due to the need for batteries. In this article we propose a new hybrid powered sensor-enabled RFID tag concept which can sustain the supply voltage to the tag circuitry during brownouts and increase the operating range of the tag by combining the concepts from passive RFID tags and semipassive RFID tags, while potentially eliminating shortcomings of electric batteries. We have designed and built our concept, evaluated its desirable properties through extensive experiments and demonstrate its significance in the context of a human activity recognition application.", "title": "" }, { "docid": "f79258e52f34f29ae28099af08e349e4", "text": "The enormous scale of unlabeled text available today necessitates scalable schemes for representation learning in natural language processing. For instance, in this paper we are interested in classifying the intent of a user query. While our labeled data is quite limited, we have access to virtually an unlimited amount of unlabeled queries, which could be used to induce useful representations: for instance by principal component analysis (PCA). However, it is prohibitive to even store the data in memory due to its sheer size, let alone apply conventional batch algorithms. In this work, we apply the recently proposed matrix sketching algorithm to entirely obviate the problem with scalability (Liberty, 2013). This algorithm approximates the data within a specified memory bound while preserving the covariance structure necessary for PCA. Using matrix sketching, we significantly improve the user intent classification accuracy by leveraging large amounts of unlabeled queries.", "title": "" } ]
scidocsrr
c868b0652c450231b70d78c1104c9c52
Comparison of two modes of stress measurement: Daily hassles and uplifts versus major life events
[ { "docid": "f84f279b6ef3b112a0411f5cba82e1b0", "text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed", "title": "" } ]
[ { "docid": "cdd3dd7a367027ebfe4b3f59eca99267", "text": "3 Computation of the shearlet transform 13 3.1 Finite discrete shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.2 A discrete shearlet frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.3 Inversion of the shearlet transform . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.4 Smooth shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.5 Implementation details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.1 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.2 Computation of spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.6 Short documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3.7 Download & Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 3.8 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.9 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32", "title": "" }, { "docid": "a62ee8c670c1dd34a440f7b69a7b5846", "text": "The main purpose of this special issue is to present an overview of the progress of a modeling technique which is known as total least squares (TLS) in computational mathematics and engineering, and as errors-in-variables (EIV) modeling or orthogonal regression in the statistical community. The TLS method is one of several linear parameter estimation techniques that has been devised to compensate for data errors. The basic motivation is the following: let a set of multidimensional data points (vectors) be given. How can one obtain a linear model that explains these data? The idea is to modify all data points in such a way that some norm of the modification is minimized subject to the constraint that the modified vectors satisfy a linear relation. Although the name “TLS” appeared in the literature only 27 years (Golub and Van Loan, 1980) ago, this method of fitting is certainly not new and has a long history in the statistical literature, where the method is known as “orthogonal regression”, “EIV regression” or “measurement error (ME) modeling”. The univariate line fitting problem was already discussed since 1877 (Adcock, 1877). More recently, the TLS approach to fitting has also stimulated interests outside statistics. One of the main reasons for its popularity is the availability of efficient and numerically robust algorithms in which the singular value decomposition (SVD) plays a prominent role (Golub and Van Loan, 1980). Another reason is the fact that TLS is an application oriented procedure. It is suited for situations in which all data are corrupted by noise, which is almost always the case in engineering applications ( Van Huffel et al., 2007). In this sense, TLS and EIV modeling are a powerful extension of classical least squares and ordinary regression, which corresponds only to a partial modification of the data. The problem of linear parameter estimation arises in a broad class of scientific disciplines such as signal processing, automatic control, system theory and in general engineering, statistics, physics, economics, biology, medicine, etc. 
It starts from a model described by a linear equation:", "title": "" }, { "docid": "71aa6e0b75f29abd7b51406218077ca7", "text": "The score function estimator is widely used for estimating gradients of stochastic objectives in stochastic computation graphs (SCG), e.g., in reinforcement learning and meta-learning. While deriving the first order gradient estimators by differentiating a surrogate loss (SL) objective is computationally and conceptually simple, using the same approach for higher order derivatives is more challenging. Firstly, analytically deriving and implementing such estimators is laborious and not compliant with automatic differentiation. Secondly, repeatedly applying SL to construct new objectives for each order derivative involves increasingly cumbersome graph manipulations. Lastly, to match the first order gradient under differentiation, SL treats part of the cost as a fixed sample, which we show leads to missing and wrong terms for estimators of higher order derivatives. To address all these shortcomings in a unified way, we introduce DICE, which provides a single objective that can be differentiated repeatedly, generating correct estimators of derivatives of any order in SCGs. Unlike SL, DICE relies on automatic differentiation for performing the requisite graph manipulations. We verify the correctness of DICE both through a proof and numerical evaluation of the DICE derivative estimates. We also use DICE to propose and evaluate a novel approach for multi-agent learning. Our code is available at github.com/alshedivat/lola.", "title": "" }, { "docid": "c6283ee48fd5115d28e4ea0812150f25", "text": "Stochastic regular bi-languages has been recently proposed to model the joint probability distributions appearing in some statistical approaches of Spoken Dialog Systems. To this end a deterministic and probabilistic finite state biautomaton was defined to model the distribution probabilities for the dialog model. In this work we propose and evaluate decision strategies over the defined probabilistic finite state bi-automaton to select the best system action at each step of the interaction. To this end the paper proposes some heuristic decision functions that consider both action probabilities learn from a corpus and number of known attributes at running time. We compare either heuristics based on a single next turn or based on entire paths over the automaton. Experimental evaluation was carried out to test the model and the strategies over the Let’s Go Bus Information system. The results obtained show good system performances. They also show that local decisions can lead to better system performances than best path-based decisions due to the unpredictability of the user behaviors.", "title": "" }, { "docid": "4aecf3efd5de0ab468fc1f47d7662357", "text": "AIM\nThis article presents a discussion of generational differences and their impact on the nursing workforce and how this impact affects the work environment.\n\n\nBACKGROUND\nThe global nursing workforce represents four generations of nurses. This generational diversity frames attitudes, beliefs, work habits and expectations associated with the role of the nurse in the provision of care and in the way the nurse manages their day-to-day activities.\n\n\nDATA SOURCES\nAn electronic search of MEDLINE, PubMed and Cinahl databases was performed using the words generational diversity, nurse managers and workforce. 
The search was limited to 2000-2012.\n\n\nDISCUSSION\nGenerational differences present challenges to contemporary nurse managers working in a healthcare environment which is complex and dynamic, in terms of managing nurses who think and behave in a different way because of disparate core personal and generational values, namely, the three Cs of communication, commitment and compensation.\n\n\nIMPLICATIONS FOR NURSING\nAn acceptance of generational diversity in the workplace allows a richer scope for practice, as the experiences and knowledge of each generation in the nursing environment create an environment of acceptance and harmony, facilitating retention of nurses.\n\n\nCONCLUSION\nAcknowledgement of generational characteristics provides the nurse manager with strategies which focus on mentoring and motivation, communication, the increased use of technology and the ethics of nursing, to bridge the gap between generations of nurses and to increase nursing workforce cohesion.", "title": "" }, { "docid": "82af5212b43e8dfe6d54582de621d96c", "text": "Cluster analysis is an important tool in many scientific disciplines, and many clustering methods are available (see e.g. Everitt (1974) or Jain and Dubes (1988)). A single clustering method or algorithm cannot solve all the possible clustering problems, hence the proliferation of many techniques. Most clustering methods are plagued with the problem of noisy data, i.e., characterization of good clusters amongst noisy data. In some cases, even a few noisy points or outliers affect the outcome of the method by severely biasing the algorithm. The noise that is just due to the statistical distribution of the measuring instrument is usually of no concern. On the other hand, the completely arbitrary noise points that just do not belong to the pattern or class being searched for are of real concern. A good example of that is in image processing, where one is searching for certain shapes, for instance, amongst all the edge elements detected. An approach that is frequently recommended (for example, Jain and Dubes (1988)) is where one tries to identify such data and remove it before application of the clustering algorithms. In many cases, however, that may not be possible or it may be extremely difficult. In this paper, a class of algorithms based on square-error clustering (a sub-class of partitional clustering) is considered. The performance of the algorithms of this kind is highly susceptible to outliers or noisy points.
The K-means type algorithm is one example, where each point in the data set must be assigned to one of the clusters. Because of this requirement, even the noise points have to be allotted to one of the good clusters, and that would deteriorate the performance of the algorithm. One approach to solve this problem is that proposed by Jolion and Rosenfeld (1989), where each data point is given a weight proportional to the density of data points in its vicinity, thus assigning higher weights to the points belonging to the clusters, while assigning lower weights to the noise or background points. Thus the approach results in preprocessing of the data in order to reduce the bias due to the noise background. The", "title": "" }, { "docid": "b454900556cc392edd39b888de746298", "text": "As developers of a highly multilingual named entity recognition (NER) system, we face an evaluation resource bottleneck problem: we need evaluation data in many languages, the annotation should not be too time-consuming, and the evaluation results across languages should be comparable. We solve the problem by automatically annotating the English version of a multi-parallel corpus and by projecting the annotations into all the other language versions. For the translation of English entities, we use a phrase-based statistical machine translation system as well as a lookup of known names from a multilingual name database. For the projection, we incrementally apply different methods: perfect string matching, perfect consonant signature matching and edit distance similarity. The resulting annotated parallel corpus will be made available for reuse.", "title": "" }, { "docid": "7b959708a44209df1772c7caa6860f3d", "text": "In response to the revival of virtualized technology by Rosenblum and Garfinkel [2005], NIST defined cloud computing, a new paradigm in service computing infrastructures. In cloud environments, the basic security mechanism is ingrained in virtualization—that is, the execution of instructions at different privilege levels. Despite its obvious benefits, the caveat is that a crashed virtual machine (VM) is much harder to recover than a crashed workstation. When crashed, a VM is nothing but a giant corrupt binary file and quite unrecoverable by standard disk-based forensics. Therefore, VM crashes should be avoided at all costs. Security is one of the major contributors to such VM crashes. This includes compromising the hypervisor, cloud storage, images of VMs used infrequently, and the remote cloud client used by the customer, as well as threats from malicious insiders. Although using secure infrastructures such as private clouds alleviates several of these security problems, most cloud users end up using cheaper options such as third-party infrastructures (i.e., public clouds), thus a thorough discussion of all known security issues is pertinent. Hence, in this article, we discuss ongoing research in cloud security in order of the attack scenarios exploited most often in the cloud environment. We explore attack scenarios that call for securing the hypervisor, exploiting co-residency of VMs, VM image management, mitigating insider threats, securing storage in clouds, abusing lightweight software-as-a-service clients, and protecting data propagation in clouds. Wearing a practitioner's glasses, we explore the relevance of each attack scenario to a service company like Infosys.
At the same time, we draw parallels between cloud security research and implementation of security solutions in the form of enterprise security suites for the cloud. We discuss the state of practice in the form of enterprise security suites that include cryptographic solutions, access control policies in the cloud, new techniques for attack detection, and security quality assurance in clouds.", "title": "" }, { "docid": "8a8b33eabebb6d53d74ae97f8081bf7b", "text": "Social networks are inevitable part of modern life. A class of social networks is those with both positive (friendship or trust) and negative (enmity or distrust) links. Ranking nodes in signed networks remains a hot topic in computer science. In this manuscript, we review different ranking algorithms to rank the nodes in signed networks, and apply them to the sign prediction problem. Ranking scores are used to obtain reputation and optimism, which are used as features in the sign prediction problem. Reputation of a node shows patterns of voting towards the node and its optimism demonstrates how optimistic a node thinks about others. To assess the performance of different ranking algorithms, we apply them on three signed networks including Epinions, Slashdot and Wikipedia. In this paper, we introduce three novel ranking algorithms for signed networks and compare their ability in predicting signs of edges with already existing ones. We use logistic regression as the predictor and the reputation and optimism values for the trustee and trustor as features (that are obtained based on different ranking algorithms). We find that ranking algorithms resulting in correlated ranking scores, leads to almost the same prediction accuracy. Furthermore, our analysis identifies a number of ranking algorithms that result in higher prediction accuracy compared to others.", "title": "" }, { "docid": "b759b2b5ad04bbc22604d042d0b2d37e", "text": "Distributed Ledgers (DLs), also known as blockchains, provide decentralised, tamper-free registries of transactions among partners that distrust each other. For the scientific community, DLs have been proposed to decentralise and make more transparent each step of the scientific workflow. For the particular case of dissemination and peerreviewing, DLs can provide the cornerstone to realise open decentralised publishing systems where social interactions between peers are tamperfree, enabling trustworthy computation of bibliometrics. In this paper, we propose the use of DL-backed Smart Contracts to track a subset of social interactions for scholarly publications in a decentralised and reliable way, yielding Smart Papers. We show how our Smart Papers approach complements current models for decentralised publishing, and analyse cost implications.", "title": "" }, { "docid": "90709f620b27196fdc7fc380e3757518", "text": "The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions (“dual dictionaries” of visual words). 
Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastasis, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.", "title": "" }, { "docid": "094bb78ae482f2ad4877e53a446236f0", "text": "While the amount of available information on the Web is increasing rapidly, the problem of managing it becomes more difficult. We present two applications, Thinkbase and Thinkpedia, which aim to make Web content more accessible and usable by utilizing visualizations of the semantic graph as a means to navigate and explore large knowledge repositories. Both of our applications implement a similar concept: They extract semantically enriched contents from a large knowledge spaces (Freebase and Wikipedia respectively), create an interactive graph-based representation out of it, and combine them into one interface together with the original text based content. We describe the design and implementation of our applications, and provide a discussion based on an informal evaluation. Author", "title": "" }, { "docid": "19b16abf5ec7efe971008291f38de4d4", "text": "Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods just focus on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, the ℓ2-norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data,which preserves the inter-modality and intra-modality similarity relationships.An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art subspace approaches.", "title": "" }, { "docid": "26f2b200bf22006ab54051c9288420e8", "text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. 
Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "a7b5bfb508b577fe98ececadf0820e3f", "text": "Endoscopic polypectomy is currently one of the most effective interventions for the prevention of colorectal cancer (CRC). Although the most common carcinoma precursor is the tubular adenoma, the detection, diagnosis, and follow-up of serrated precursors are also of clinical importance since the serrated pathway is implicated in about 30% of CRC [1, 2]. Serrated polyps can be categorized into three groups: the frequently encountered hyperplastic polyps (HPs) which are flat and distal; sessile serrated adenomas/polyps (SSA/Ps) which are flat, proximal, and account for about 10% of all serrated polyps; and traditional serrated adenomas (TSAs) which are distal, protruding, and account for a small percentage of serrated polyps [3]. HPs are further divided into microvesicular (MVHP) and goblet cell-rich (GCHP) types and are believed to be the precursors of SSA/Ps and TSAs, respectively. Although by definition SSA/Ps are non-dysplastic, they acquire cytologic dysplasia as they progress to CRC. Conversely, all TSAs harbor cytologic dysplasia. Thus, both SSA/P and TSA are established precursors of CRC. Detection and diagnosis of TSAs are straightforward given their unique endoscopic and histologic features. In contrast, HPs and SSA/Ps have similar endoscopic appearances and overlapping histologic features, complicating endoscopic and pathologic differentiation between the two lesions [4, 5]. Furthermore, based on recent data, distinguishing SSA/Ps from HPs may also have some significance for metachronous risk assessment [6], and therefore, differentiating HPs from SSA/Ps is a common concern for endoscopists. 
A simple, accurate, and reproducible way to endoscopically distinguish HPs from SSA/Ps would aid endoscopists in their efforts to identify and remove all serrated lesions with malignant potential. Furthermore, these methods would be helpful in implementation of new paradigms in which diminutive polyps are optically diagnosed and either not removed or resected and not recovered for subsequent pathologic examination [7]. The American Society for Gastrointestinal Endoscopy (ASGE) Technology Committee’s “Preservation and Incorporation of Valuable endoscopic Innovations” (PIVI) paper provides recommendations for adoption of new technologies or strategies into clinical practice, which can be used to optically diagnose diminutive (≤ 5 mm) polyps [8]. Although developed for adenomas, strategies for optical diagnosis can also be applied by endoscopists for serrated polyps. For example, use of the “diagnose and leave” strategy for serrated polyps could decrease the risk and cost of colonoscopy by obviating the need for polypectomy in polyps that were endoscopically diagnosed as HPs [9]. Alternatively, the “resect and discard” could decrease cost by eliminating the need for pathologic interpretation of serrated polyps that endoscopists were confident were HPs and not SSA/Ps. In this month’s issue of Digestive Diseases and Sciences [10], Aoki et al. in Sapporo, Japan, characterized serrated polyps, conventional adenomas, and CRC using endoscopic, pathologic, and molecular features. In addition to size and location, trained endoscopists used the Paris classification [11] to characterize the shape of the lesions. Magnification chromoendoscopy that has gained popularity in the Far East enables visualization of the “pit pattern” which reflects the colonic pit structure and enables the differentiation of the many types of polyps. The pit patterns are currently categorized by the Kudo classification [12, 13] where Type I indicates normal mucosa, Type II is consistent with HP, and Types III, IV, and V are consistent with dysplastic changes. Disclaimer The contents of this work do not represent the views of the Department of Veterans Affairs or the United States Government.", "title": "" }, { "docid": "483578f69e60298f5afba28eff514120", "text": "This paper proposes a multiport power electronic transformer (PET) topology with multiwinding medium-frequency transformer (MW-MFT) isolation along with the associated modeling analysis and control scheme. The power balance at different ports can be controlled using the multiwinding transformer's common flux linkage. The potential applications of the proposed multiport PET are high-power traction systems for locomotives and electric multiple units, marine propulsion, wind power generation, and utility grid distribution applications. The complementary polygon equivalent circuit modeling of an MW-MFT is presented. The current and power characteristics of the virtual circuit branches and the multiports with general-phase-shift control are described. The general current and power analysis for the multiple active bridge (MAB) isolation units is investigated. Power decoupling methods, including nonlinear solution for power balancing are proposed. The zero-voltage-switching conditions for the MAB are discussed. Control strategies including soft-switching-phase-shift control and voltage balancing control based on the power decoupling calculations are described. 
Simulations and experiments are presented to verify the performance of the proposed topology and control algorithms.", "title": "" }, { "docid": "35e377e94b9b23283eabf141bde029a2", "text": "We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks.", "title": "" }, { "docid": "b6d6da15fd000be1a01d4b0f1bb0d087", "text": "Purpose – The purpose of the paper is to distinguish features of m-commerce from those of e-commerce and identify factors to influence customer satisfaction (m-satisfaction) and loyalty (m-loyalty) in m-commerce by empirically-based case study. Design/methodology/approach – First, based on previous literature, the paper builds sets of customer satisfaction factors for both e-commerce and m-commerce. Second, features of m-commerce are identified by comparing it with current e-commerce through decision tree (DT). Third, with the derived factors from DT, significant factors and relationships among the factors, m-satisfaction and m-loyalty are examined by m-satisfaction model employing structural equation model. Findings – The paper finds that m-commerce is partially similar in factors like “transaction process” and “customization” which lead customer satisfaction after connecting an m-commerce site, but it has unique aspects of “content reliability”, “availability”, and “perceived price level of mobile Internet (m-Internet)” which build customer’s intention to the m-commerce site. Through the m-satisfaction model, “content reliability”, and “transaction process” are proven to be significantly influential factors to m-satisfaction and m-loyalty. Research implications/limitations – The paper can be a meaningful step to provide empirical analysis and evaluation based on questionnaire survey targeting actual users. The research is based on a case study on digital music transaction, which is indicative, rather than general. Practical implications – The paper meets the needs to focus on customer under the fiercer competition in Korean m-commerce market. It can guide those who want to initiate, move or broaden their business to m-commerce from e-commerce. Originality/value – The paper develops a revised ACSI model to identify individual critical factors and the degree of effect.", "title": "" } ]
scidocsrr
1fd18a4802f5dadb4377769e98e08355
Exploiting N-Best Hypotheses to Improve an SMT Approach to Grammatical Error Correction
[ { "docid": "effbe5c9cd150b01e0659707e72650a9", "text": "Research on grammatical error correction has received considerable attention. For dealing with all types of errors, grammatical error correction methods that employ statistical machine translation (SMT) have been proposed in recent years. An SMT system generates candidates with scores for all candidates and selects the sentence with the highest score as the correction result. However, the 1-best result of an SMT system is not always the best result. Thus, we propose a reranking approach for grammatical error correction. The reranking approach is used to re-score N-best results of the SMT and reorder the results. Our experiments show that our reranking system using parts of speech and syntactic features improves performance and achieves state-of-theart quality, with an F0.5 score of 40.0.", "title": "" } ]
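Aside (illustration only, not part of the corpus record above): the reranking idea described in the passage, re-scoring an SMT system's N-best hypotheses with a linear combination of extra features such as part-of-speech or syntax scores, can be sketched roughly as follows; the feature names, weights and example hypotheses are hypothetical placeholders, not taken from the paper.

    # Rough sketch of N-best reranking with a linear feature model.
    # Each hypothesis carries its SMT score plus additional features
    # (e.g. a POS-based language model score); a weighted sum re-scores
    # the list and the highest-scoring hypothesis is returned first.
    def rerank(nbest, weights):
        # nbest: list of dicts {"text": str, "features": {name: value}}
        def score(hyp):
            return sum(weights.get(name, 0.0) * value
                       for name, value in hyp["features"].items())
        return sorted(nbest, key=score, reverse=True)

    nbest = [
        {"text": "He go to school .",   "features": {"smt_score": -2.1, "pos_lm": -5.0}},
        {"text": "He goes to school .", "features": {"smt_score": -2.3, "pos_lm": -3.2}},
    ]
    weights = {"smt_score": 1.0, "pos_lm": 0.5}
    best = rerank(nbest, weights)[0]["text"]  # -> "He goes to school ."

In a real system the feature weights would be tuned on held-out data rather than set by hand.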
[ { "docid": "1580e188796e4e7b6c5930e346629849", "text": "This paper describes the development process of FarsNet; a lexical ontology for the Persian language. FarsNet is designed to contain a Persian WordNet with about 10000 synsets in its first phase and grow to cover verbs' argument structures and their selectional restrictions in its second phase. In this paper we discuss the semi-automatic approach to create the first phase: the Persian WordNet.", "title": "" }, { "docid": "de64aaa37e53beacb832d3686b293a9b", "text": "By using a population-based cohort of the general Dutch population, the authors studied whether an excessively negative orientation toward pain (pain catastrophizing) and fear of movement/(re)injury (kinesiophobia) are important in the etiology of chronic low back pain and associated disability, as clinical studies have suggested. A total of 1,845 of the 2,338 inhabitants (without severe disease) aged 25-64 years who participated in a 1998 population-based questionnaire survey on musculoskeletal pain were sent a second questionnaire after 6 months; 1,571 (85 percent) participated. For subjects with low back pain at baseline, a high level of pain catastrophizing predicted low back pain at follow-up (odds ratio (OR) = 1.7, 95% confidence interval (CI): 1.0, 2.8) and chronic low back pain (OR = 1.7, 95% CI: 1.0, 2.3), in particular severe low back pain (OR = 3.0, 95% CI: 1.7, 5.2) and low back pain with disability (OR = 3.0, 95% CI: 1.7, 5.4). A high level of kinesiophobia showed similar associations. The significant associations remained after adjustment for pain duration, pain severity, or disability at baseline. For those without low back pain at baseline, a high level of pain catastrophizing or kinesiophobia predicted low back pain with disability during follow-up. These cognitive and emotional factors should be considered when prevention programs are developed for chronic low back pain and related disability.", "title": "" }, { "docid": "c20733b414a1b39122ef54d161885d81", "text": "This paper discusses the role of clusters and focal firms in the economic performance of small firms in Italy. Using the example of the packaging industry of northern Italy, it shows how clusters of small firms have emerged around a few focal or leading companies. These companies have helped the clusters grow and diversify through technological and managerial spillover effects, through the provision of purchase orders, and sometimes through financial links. The role of common local training institutes, whose graduates often start up small firms within the local cluster, is also discussed.", "title": "" }, { "docid": "af69cdae1b331c012dab38c47e2c786c", "text": "A 44 μW self-powered power line monitoring sensor node is implemented in 65 nm CMOS. A 450 kHz 30 kbps BPSK-modulated transceiver allows for 1.5-meter node-to-node powerline communication at 10E-6 BER. The node has a 3.354 ENOB 50 kSps SAR ADC for current measurement and a 440 Sps time-to-digital converter capable of measuring temperature from 0-100 °C in 1.12 °C steps. All components operate at a nominal supply voltage of 0.5 V, and are powered by dedicated regulators enabling fine-grained power management.", "title": "" }, { "docid": "bb5ce42707f086d4ca2c6a5d23587070", "text": "Supervoxel methods such as Simple Linear Iterative Clustering (SLIC) are an effective technique for partitioning an image or volume into locally similar regions, and are a common building block for the development of detection, segmentation and analysis methods. 
We introduce maskSLIC an extension of SLIC to create supervoxels within regions-of-interest, and demonstrate, on examples from 2-dimensions to 4-dimensions, that maskSLIC overcomes issues that affect SLIC within an irregular mask. We highlight the benefits of this method through examples, and show that it is able to better represent underlying tumour subregions and achieves significantly better results than SLIC on the BRATS 2013 brain tumour challenge data (p=0.001) – outperforming SLIC on 18/20 scans. Finally, we show an application of this method for the analysis of functional tumour subregions and demonstrate that it is more effective than voxel clustering.", "title": "" }, { "docid": "741dbabfa94b787f31bccf12471724a4", "text": "In this paper is proposed a Takagi-Sugeno Fuzzy controller (TSF) applied to the direct torque control scheme with space vector modulation. In conventional DTC-SVM scheme, two PI controllers are used to generate the reference stator voltage vector. To improve the drawback of this conventional DTC-SVM scheme is proposed the TSF controller to substitute both PI controllers. The proposed controller calculates the reference quadrature components of the stator voltage vector. The rule base for the proposed controller is defined in function of the stator flux error and the electromagnetic torque error using trapezoidal and triangular membership functions. Constant switching frequency and low torque ripple are obtained using space vector modulation technique. Performance of the proposed DTC-SVM with TSF controller is analyzed in terms of several performance measures such as rise time, settling time and torque ripple considering different operating conditions. The simulation results shown that the proposed scheme ensure fast torque response and low torque ripple validating the proposed scheme.", "title": "" }, { "docid": "7526ae65780945b311f24e212f6a3d4b", "text": "We present Prophet, a novel patch generation system that works with a set of successful human patches obtained from open- source software repositories to learn a probabilistic, application-independent model of correct code. It generates a space of candidate patches, uses the model to rank the candidate patches in order of likely correctness, and validates the ranked patches against a suite of test cases to find correct patches. Experimental results show that, on a benchmark set of 69 real-world defects drawn from eight open-source projects, Prophet significantly outperforms the previous state-of-the-art patch generation system.", "title": "" }, { "docid": "e6662ebd9842e43bd31926ac171807ca", "text": "INTRODUCTION\nDisruptions in sleep and circadian rhythms are observed in individuals with bipolar disorders (BD), both during acute mood episodes and remission. Such abnormalities may relate to dysfunction of the molecular circadian clock and could offer a target for new drugs.\n\n\nAREAS COVERED\nThis review focuses on clinical, actigraphic, biochemical and genetic biomarkers of BDs, as well as animal and cellular models, and highlights that sleep and circadian rhythm disturbances are closely linked to the susceptibility to BDs and vulnerability to mood relapses. As lithium is likely to act as a synchronizer and stabilizer of circadian rhythms, we will review pharmacogenetic studies testing circadian gene polymorphisms and prophylactic response to lithium. 
Interventions such as sleep deprivation, light therapy and psychological therapies may also target sleep and circadian disruptions in BDs efficiently for treatment and prevention of bipolar depression.\n\n\nEXPERT OPINION\nWe suggest that future research should clarify the associations between sleep and circadian rhythm disturbances and alterations of the molecular clock in order to identify critical targets within the circadian pathway. The investigation of such targets using human cellular models or animal models combined with 'omics' approaches are crucial steps for new drug development.", "title": "" }, { "docid": "13f7b5a92e830bff44c14c77056f9743", "text": "Many pneumatic energy sources are available for use in autonomous and wearable soft robotics, but it is often not obvious which options are most desirable or even how to compare them. To address this, we compare pneumatic energy sources and review their relative merits. We evaluate commercially available battery-based microcompressors (singly, in parallel, and in series) and cylinders of high-pressure fluid (air and carbon dioxide). We identify energy density (joules/gram) and flow capacity (liters/gram) normalized by the mass of the entire fuel system (versus net fuel mass) as key metrics for soft robotic power systems. We also review research projects using combustion (methane and butane) and monopropellant decomposition (hydrogen peroxide), citing theoretical and experimental values. Comparison factors including heat, effective energy density, and working pressure/flow rate are covered. We conclude by comparing the key metrics behind each technology. Battery-powered microcompressors provide relatively high capacity, but maximum pressure and flow rates are low. Cylinders of compressed fluid provide high pressures and flow rates, but their limited capacity leads to short operating times. While methane and butane possess the highest net fuel energy densities, they typically react at speeds and pressures too high for many soft robots and require extensive system-level development. Hydrogen peroxide decomposition requires not only few additional parts (no pump or ignition system) but also considerable system-level development. We anticipate that this study will provide a framework for configuring fuel systems in soft robotics.", "title": "" }, { "docid": "c5154cbd6721c18f844228c9dc711dc6", "text": "The finite difference time domain (FDTD) method is widely used as a computational tool for development, validation, and optimization of emerging microwave breast cancer detection and treatment techniques. When expressed in terms of Debye parameters, dispersive breast tissue dielectric properties can be efficiently incorporated into FDTD codes. Previously, we experimentally characterized the dielectric properties of a large number of excised normal and malignant breast tissue samples from 0.5 to 20 GHz. We subdivided the large database of normal tissue data into three groups based on the percent adipose tissue present in a particular sample. In addition, we formed a group of all cancer samples that contained at least 30% malignant tissue. We summarized the data using one-pole Cole-Cole models that were rigorously fit to the median dielectric properties of the three normal tissue groups and one malignant tissue group. In this letter, we present computationally simpler one- and two-pole Debye models that retain the high accuracy of the Cole-Cole models. 
Model parameters are derived for two sets of frequency ranges: the entire measurement frequency range from 0.5 to 20 GHz, and the 3.1-10.6 GHz FCC band allocated for ultrawideband medical applications. The proposed Debye models provide a means for creating computationally efficient FDTD breast models with realistic wideband dielectric properties derived from the largest and most comprehensive experimental study conducted to date on human breast tissue.", "title": "" }, { "docid": "a93833a6ad41bdc5011a992509e77c9a", "text": "We present the implementation of a largevocabulary continuous speech recognition (LVCSR) system on NVIDIA’s Tegra K1 hyprid GPU-CPU embedded platform. The system is trained on a standard 1000hour corpus, LibriSpeech, features a trigram WFST-based language model, and achieves state-of-the-art recognition accuracy. The fact that the system is realtime-able and consumes less than 7.5 watts peak makes the system perfectly suitable for fast, but precise, offline spoken dialog applications, such as in robotics, portable gaming devices, or in-car systems.", "title": "" }, { "docid": "556f33d199e6516a4aa8ebca998facf2", "text": "R ecommender systems have become important tools in ecommerce. They combine one user’s ratings of products or services with ratings from other users to answer queries such as “Would I like X?” with predictions and suggestions. Users thus receive anonymous recommendations from people with similar tastes. While this process seems innocuous, it aggregates user preferences in ways analogous to statistical database queries, which can be exploited to identify information about a particular user. This is especially true for users with eclectic tastes who rate products across different types or domains in the systems. These straddlers highlight the conflict between personalization and privacy in recommender systems. While straddlers enable serendipitous recommendations, information about their existence could be used in conjunction with other data sources to uncover identities and reveal personal details. We use a graph-theoretic model to study the benefit from and risk to straddlers.", "title": "" }, { "docid": "49f2f870496d34fe379c0b077197bde3", "text": "Ultra wideband components have been developed using SIW technology. The various components including a GCPW transition with less than 0.4dB insertion loss are developed. In addition to, T and Y-junctions are optimized with relatively wide bandwidth of greater than 63% and 40% respectively that have less than 0.6 dB insertion loss. The developed transition was utilized to design an X-band 8 way power divider that demonstrated excellent performance over a 5 GHz bandwidth with less than ±4º and ±0.9 dB phase and amplitude imbalance, respectively. The developed SIW power divider has a low profile and is particularly suitable for circuits' integration.", "title": "" }, { "docid": "6f1669cf7fe464c42b5cb0d68efb042e", "text": "BACKGROUND\nLevine and Drennan described the tibial metaphyseal-diaphyseal angle (MDA) in an attempt to identify patients with infantile Blount's disease. Pediatric orthopaedic surgeons have debated not only the use, but also the reliability of this measure. Two techniques have been described to measure the MDA. These techniques involved using both the lateral border of the tibial cortex and the center of the tibial shaft as the longitudinal axis for radiographic measurements. 
The use of digital images poses another variable in the reliability of the MDA as digital images are used more commonly.\n\n\nMETHODS\nThe radiographs of 21 children (42 limbs) were retrospectively reviewed by 27 staff pediatric orthopaedic surgeons. Interobserver reliability was determined using the intraclass correlation coefficients (ICCs). Nine duplicate radiographs (18 duplicate limbs) that appeared in the data set were used to calculate ICCs representing the intraobserver reliability. A scatter plot was created comparing the mean MDA determined by the 2 methods. The strength of a linear relationship between the 2 methods was measured with the Pearson correlation coefficient. Finally, we tested for a difference in variability between the 2 measures at angles of 11 degrees or less and greater than 11 degrees by comparing the variance ratios using the F test.\n\n\nRESULTS\nThe interobserver reliability was calculated using the ICC as 0.821 for the single-measure method and 0.992 for the average-measure method. The intraobserver reliability was similarly calculated using the ICC as 0.886 for the single-measure method and 0.940 for the average-measure method. Pearson correlation coefficient (0.9848) revealed a highly linear relationship between the 2 methods (P = 0.00001). We also found that there was no statistically significant variability between the 2 methods of calculating the MDA at angles of 11 degrees or less compared with angles greater than 11 degrees (P = 0.596688).\n\n\nCONCLUSIONS\nThere was excellent interobserver reliability and intraobserver reliability among reviewers. Using either the lateral diaphyseal line or center diaphyseal line produces reasonable reliability with no significant variability at angles of 11 degrees or less or greater than 11 degrees.\n\n\nLEVEL OF EVIDENCE\nLevel IV.", "title": "" }, { "docid": "a377ebca5f4918f9c774d56d5d86e42a", "text": "Cloud computing has emerged as an extremely successful paradigm for deploying web applications. Scalability, elasticity, pay-per-use pricing, and economies of scale from large scale operations are the major reasons for the successful and widespread adoption of cloud infrastructures. Since a majority of cloud applications are data driven, database management systems (DBMSs) powering these applications form a critical component in the cloud software stack. In this article, we present an overview of our work on instilling these above mentioned “cloud features” in a database system designed to support a variety of applications deployed in the cloud: designing scalable database management architectures using the concepts of data fission and data fusion, enabling lightweight elasticity using low cost live database migration, and designing intelligent and autonomic controllers for system management without human intervention.", "title": "" }, { "docid": "035d329a90c3b7ad2562b5914baa571c", "text": "Microblogging platforms such as Twitter provide active communication channels during mass convergence and emergency events such as earthquakes, typhoons. During the sudden onset of a crisis situation, affected people post useful information on Twitter that can be used for situational awareness and other humanitarian disaster response efforts, if processed timely and effectively. Processing social media information pose multiple challenges such as parsing noisy, brief and informal messages, learning information categories from the incoming stream of messages and classifying them into different classes among others. 
One of the basic necessities of many of these tasks is the availability of data, in particular human-annotated data. In this paper, we present human-annotated Twitter corpora collected during 19 different crises that took place between 2013 and 2015. To demonstrate the utility of the annotations, we train machine learning classifiers. Moreover, we publish first largest word2vec word embeddings trained on 52 million crisis-related tweets. To deal with tweets language issues, we present human-annotated normalized lexical resources for different lexical variations.", "title": "" }, { "docid": "1d12470ab31735721a1f50ac48ac65bd", "text": "In this work, we investigate the role of relational bonds in keeping students engaged in online courses. Specifically, we quantify the manner in which students who demonstrate similar behavior patterns influence each other’s commitment to the course through their interaction with them either explicitly or implicitly. To this end, we design five alternative operationalizations of relationship bonds, which together allow us to infer a scaled measure of relationship between pairs of students. Using this, we construct three variables, namely number of significant bonds, number of significant bonds with people who have dropped out in the previous week, and number of such bonds with people who have dropped in the current week. Using a survival analysis, we are able to measure the prediction strength of these variables with respect to dropout at each time point. Results indicate that higher numbers of significant bonds predicts lower rates of dropout; while loss of significant bonds is associated with higher rates of dropout.", "title": "" }, { "docid": "32f0cc62e05f18e60f39d0c0595129e2", "text": "Learning from multi-view data is important in many applications. In this paper, we propose a novel convex subspace representation learning method for unsupervised multi-view clustering. We first formulate the subspace learning with multiple views as a joint optimization problem with a common subspace representation matrix and a group sparsity inducing norm. By exploiting the properties of dual norms, we then show a convex min-max dual formulation with a sparsity inducing trace norm can be obtained. We develop a proximal bundle optimization algorithm to globally solve the minmax optimization problem. Our empirical study shows the proposed subspace representation learning method can effectively facilitate multi-view clustering and induce superior clustering results than alternative multiview clustering methods.", "title": "" }, { "docid": "35e377e94b9b23283eabf141bde029a2", "text": "We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. 
We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks.", "title": "" }, { "docid": "66a6e9bbdd461fa85a0a09ec1ceb2031", "text": "BACKGROUND\nConverging evidence indicates a functional disruption in the neural systems for reading in adults with dyslexia. We examined brain activation patterns in dyslexic and nonimpaired children during pseudoword and real-word reading tasks that required phonologic analysis (i.e., tapped the problems experienced by dyslexic children in sounding out words).\n\n\nMETHODS\nWe used functional magnetic resonance imaging (fMRI) to study 144 right-handed children, 70 dyslexic readers, and 74 nonimpaired readers as they read pseudowords and real words.\n\n\nRESULTS\nChildren with dyslexia demonstrated a disruption in neural systems for reading involving posterior brain regions, including parietotemporal sites and sites in the occipitotemporal area. Reading skill was positively correlated with the magnitude of activation in the left occipitotemporal region. Activation in the left and right inferior frontal gyri was greater in older compared with younger dyslexic children.\n\n\nCONCLUSIONS\nThese findings provide neurobiological evidence of an underlying disruption in the neural systems for reading in children with dyslexia and indicate that it is evident at a young age. The locus of the disruption places childhood dyslexia within the same neurobiological framework as dyslexia, and acquired alexia, occurring in adults.", "title": "" } ]
scidocsrr
6859c8733c8775f2c9cd59d314f1e9ad
How Do Humans Teach: On Curriculum Learning and Teaching Dimension
[ { "docid": "05f36ee9c051f8f9ea6e48d4fdd28dae", "text": "While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching . In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class. A preliminary version of this paper appeared in the Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 303{314. August 1991. Most of this research was carried out while both authors were at MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108.", "title": "" }, { "docid": "5a537d2454ee09199444e319ac045b57", "text": "Objects vary in their visual complexity, yet existing discovery methods perform “batch” clustering, paying equal attention to all instances simultaneously — regardless of the strength of their appearance or context cues. We propose a self-paced approach that instead focuses on the easiest instances first, and progressively expands its repertoire to include more complex objects. Easier regions are defined as those with both high likelihood of generic objectness and high familiarity of surrounding objects. At each cycle of the discovery process, we re-estimate the easiness of each subwindow in the pool of unlabeled images, and then retrieve a single prominent cluster from among the easiest instances. Critically, as the system gradually accumulates models, each new (more difficult) discovery benefits from the context provided by earlier discoveries. Our experiments demonstrate the clear advantages of self-paced discovery relative to conventional batch approaches, including both more accurate summarization as well as stronger predictive models for novel data.", "title": "" }, { "docid": "a8164a657a247761147c6012fd5442c9", "text": "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that typically we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. 
We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.", "title": "" } ]
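Aside (illustration only, not part of the corpus record above): the self-paced scheme described in the passage, training only on samples whose current loss marks them as easy and annealing the threshold until the entire training set is included, can be sketched as follows; the least-squares model, threshold and annealing schedule are placeholder assumptions, not the paper's latent structural SVM.

    import numpy as np

    # Toy self-paced learning loop: pick "easy" samples (loss below K),
    # refit the model on them, then relax K so harder samples join in
    # until the whole training set participates.
    def self_paced_fit(X, y, K=0.5, growth=1.5, iters=20, lam=1e-3):
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            losses = (X @ w - y) ** 2          # per-sample loss under current w
            easy = losses <= K                 # binary easiness indicator
            if easy.any():                     # ridge-regularized refit on the easy subset
                Xe, ye = X[easy], y[easy]
                w = np.linalg.solve(Xe.T @ Xe + lam * np.eye(X.shape[1]), Xe.T @ ye)
            K *= growth                        # anneal: admit harder samples next round
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
    w_hat = self_paced_fit(X, y)               # approaches [1.0, -2.0, 0.5]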
[ { "docid": "f82ce890d66c746a169a38fdad702749", "text": "The following review paper presents an overview of the current crop yield forecasting methods and early warning systems for the global strategy to improve agricultural and rural statistics across the globe. Different sections describing simulation models, remote sensing, yield gap analysis, and methods to yield forecasting compose the manuscript. 1. Rationale Sustainable land management for crop production is a hierarchy of systems operating in— and interacting with—economic, ecological, social, and political components of the Earth. This hierarchy ranges from a field managed by a single farmer to regional, national, and global scales where policies and decisions influence crop production, resource use, economics, and ecosystems at other levels. Because sustainability concepts must integrate these diverse issues, agricultural researchers who wish to develop sustainable productive systems and policy makers who attempt to influence agricultural production are confronted with many challenges. A multiplicity of problems can prevent production systems from being sustainable; on the other hand, with sufficient attention to indicators of sustainability, a number of practices and policies could be implemented to accelerate progress. Indicators to quantify changes in crop production systems over time at different hierarchical levels are needed for evaluating the sustainability of different land management strategies. To develop and test sustainability concepts and yield forecast methods globally, it requires the implementation of long-term crop and soil management experiments that include measurements of crop yields, soil properties, biogeochemical fluxes, and relevant socioeconomic indicators. Long-term field experiments cannot be conducted with sufficient detail in space and time to find the best land management practices suitable for sustainable crop production. Crop and soil simulation models, when suitably tested in reasonably diverse space and time, provide a critical tool for finding combinations of management strategies to reach multiple goals required for sustainable crop production. The models can help provide land managers and policy makers with a tool to extrapolate experimental results from one location to others where there is a lack of response information. Agricultural production is significantly affected by environmental factors. Weather influences crop growth and development, causing large intra-seasonal yield variability. In addition, spatial variability of soil properties, interacting with the weather, cause spatial yield variability. Crop agronomic management (e.g. planting, fertilizer application, irrigation, tillage, and so on) can be used to offset the loss in yield due to effects of weather. As a result, yield forecasting represents an important tool for optimizing crop yield and to evaluate the crop-area insurance …", "title": "" }, { "docid": "512a298da64b87cc6acab66ecfdbaf11", "text": "Organizations will always strive to be more effective in everything they do even when it comes to business intelligence. In order to streamline the processes for decision support, organizations have begun switching to self-service business intelligence where people at an operational level create their own reports and make their own analyzes. When this happens staff members in the organization need to acquire new skills. But what skills do the staff members really need? That is what ́s being investigated in this work. 
The investigation will be conducted by means of a case study. Literature will be reviewed and interviews will be conducted with people working with decision support. The work investigates the skills required by an end user, but excludes how to acquire these skills. The results show that there are three main categories of competencies, these are technical abilities, business skills and analytical abilities. Under these categories there are six skills; Business data, BI tool, Data habit, Own operations, Industry knowledge and Analytical thinking. [Innehållsförteckning (table of contents) of the Swedish-language thesis: 1 Inledning; 2 Bakgrund: Business Intelligence, Self-Service Business Intelligence, Kompetens; 3 Metod; 4 Genomförande; 5 Analys; 6 Resultat; 7 Slutsats; 8 Diskussion.]", "title": "" }, { "docid": "476c102cd8942d54751cfb7f403099f2", "text": "Cognitive radio (CR) represents the proper technological solution in case of radio resources scarcity and availability of shared channels. For the deployment of CR solutions, it is important to implement proper sensing procedures, which are aimed at continuously surveying the status of the channels. However, accurate views of the resources status can be achieved only through the cooperation of many sensing devices. For these reasons, in this paper, we propose the utilization of the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way, with respect to the rules set by their owners. The resulting social network enables faster and trustworthy information/service discovery exploiting the social network of “friend” objects. We first describe the general approach according to which members of the SIoT collaborate to exchange channel status information. 
Then, we discuss the main features, i.e., the possibility to implement a distributed approach for a low-complexity cooperation and the scalability feature in heterogeneous networks. Simulations have also been run to show the advantages in terms of increased capacity and decreased interference probability.", "title": "" }, { "docid": "13aef8ba225dd15dd013e155c319310e", "text": "Abstractness and Approximations. This rather absurd attack goes as follows: 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Going by the same argument: since Turing computers can’t be realized fully, Turing computation is now another “myth.” The problem is that Davis fails to recognize that a lot of the hypercomputational models are abstract models that no one hopes to build in the near future. Necessity of Noncomputable Reals. Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories: Infinite time Turing Machines, Zeus Machines, Kieu-type Quantum Computation. Science-based Arguments: A Meta Analysis of Davis and friends. The Main Case, Science of Sciences, Part 1: Chain Store Paradox, Part 2: Turing-level Actors, Part 3: MDL Computational Learning Theory, CLT-based Model of Science", "title": "" }, { "docid": "ca4150a5346b13825f1a0f199eddfc45", "text": "Fine-grained classification involves distinguishing between similar sub-categories based on subtle differences in highly localized regions, therefore, accurate localization of discriminative regions remains a major challenge. We describe a patch-based framework to address this problem. We introduce triplets of patches with geometric constraints to improve the accuracy of patch localization, and automatically mine discriminative geometrically-constrained triplets for classification. The resulting approach only requires object bounding boxes. Its effectiveness is demonstrated using four publicly available fine-grained datasets, on which it outperforms or achieves comparable performance to the state-of-the-art in classification.", "title": "" }, { "docid": "be3721ebf2c55972146c3e87aee475ba", "text": "Advances in computation and communication are taking shape in the form of the Internet of Things, Machine-to-Machine technology, Industry 4.0, and Cyber-Physical Systems (CPS). The impact on engineering such systems is a new technical systems paradigm based on ensembles of collaborating embedded software systems. To successfully facilitate this paradigm, multiple needs can be identified along three axes: (i) online configuring an ensemble of systems, (ii) achieving a concerted function of collaborating systems, and (iii) providing the enabling infrastructure. This work focuses on the collaborative function dimension and presents a set of concrete examples of CPS challenges. 
Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines • Kieu-type Quantum Computation Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends The Main Case Science of Sciences Part 1: Chain Store Paradox Part 2: Turing-level Actors Part 3:MDL Computational Learning Theory CLT-based Model of Science", "title": "" }, { "docid": "ca4150a5346b13825f1a0f199eddfc45", "text": "Fine-grained classification involves distinguishing between similar sub-categories based on subtle differences in highly localized regions, therefore, accurate localization of discriminative regions remains a major challenge. We describe a patch-based framework to address this problem. We introduce triplets of patches with geometric constraints to improve the accuracy of patch localization, and automatically mine discriminative geometrically-constrained triplets for classification. The resulting approach only requires object bounding boxes. Its effectiveness is demonstrated using four publicly available fine-grained datasets, on which it outperforms or achieves comparable performance to the state-of-the-art in classification.", "title": "" }, { "docid": "be3721ebf2c55972146c3e87aee475ba", "text": "Advances in computation and communication are taking shape in the form of the Internet of Things, Machine-to-Machine technology, Industry 4.0, and Cyber-Physical Systems (CPS). The impact on engineering such systems is a new technical systems paradigm based on ensembles of collaborating embedded software systems. To successfully facilitate this paradigm, multiple needs can be identified along three axes: (i) online configuring an ensemble of systems, (ii) achieving a concerted function of collaborating systems, and (iii) providing the enabling infrastructure. This work focuses on the collaborative function dimension and presents a set of concrete examples of CPS challenges. 
The examples are illustrated based on a pick and place machine that solves a distributed version of the Towers of Hanoi puzzle. The system includes a physical environment, a wireless network, concurrent computing resources, and computational functionality such as, service arbitration, various forms of control, and processing of streaming video. The pick and place machine is of medium-size complexity. It is representative of issues occurring in industrial systems that are coming online. The entire study is provided at a computational model level, with the intent to contribute to the model-based research agenda in terms of design methods and implementation technologies necessary to make the next generation systems a reality.", "title": "" }, { "docid": "458e4b5196805b608e15ee9c566123c9", "text": "For the first half century of animal virology, the major problem was lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK", "title": "" }, { "docid": "4cad0f6f9fde2d6dc5020153c4edae45", "text": "Rotation estimation is a fundamental step for various robotic applications such as automatic control of ground/aerial vehicles, motion estimation and 3D reconstruction. However it is now well established that traditional navigation equipments, such as global positioning systems (GPSs) or inertial measurement units (IMUs), suffer from several disadvantages. Hence, some vision-based works have been proposed recently. Whereas interesting results can be obtained, the existing methods have non-negligible limitations such as a difficult feature matching (e.g. repeated textures, blur or illumination changes) and a high computational cost (e.g. analyze in the frequency domain). Moreover, most of them utilize conventional perspective cameras and thus have a limited field of view. In order to overcome these limitations, in this paper we present a novel rotation estimation approach based on the extraction of vanishing points in omnidirectional images. The first advantage is that our rotation estimation is decoupled from the translation computation, which accelerates the execution time and results in a better control solution. This is made possible by our complete framework dedicated to omnidirectional vision, whereas conventional vision has a rotation/translation ambiguity. 
Second, we propose a top-down approach which maintains the important constraint of vanishing point orthogonality by inverting the problem: instead of performing a difficult line clustering preliminary step, we directly search for the orthogonal vanishing points. Finally, experimental results on various data sets for diverse robotic applications have demonstrated that our novel framework is accurate, robust, maintains the orthogonality of the vanishing points and can run in real-time.", "title": "" }, { "docid": "0321ef8aeb0458770cd2efc35615e11c", "text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.", "title": "" }, { "docid": "2642188d1f62f49450b9034f9180baa5", "text": "A graphical abstract (GA) provides a concise visual summary of a scientific contribution. GAs are increasingly required by journals to help make scientific publications more accessible to readers. We characterize the design space of GAs through a qualitative analysis of 54 GAs from a range of disciplines, and descriptions of GA design principles from scientific publishers. We present a set of design dimensions, visual structures, and design templates that describe how GAs communicate via pictorial and symbolic elements. By reflecting on how GAs employ visual metaphors, representational genres, and text relative to prior characterizations of how diagrams communicate, our work sheds light on how and why GAs may be distinct. We outline steps for future work at the intersection of HCI, AI, and scientific communication aimed at the creation of GAs.", "title": "" }, { "docid": "8589ec481e78d14fbeb3e6e4205eee50", "text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. 
Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5f01e9cd6dc2f9bd051e172b3108f06d", "text": "Head pose estimation is recently a more and more popular area of research. For the last three decades new approaches have constantly been developed, and steadily better accuracy was achieved. Unsurprisingly, a very broad range of methods was explored statistical, geometrical and tracking-based to name a few. This paper presents a brief summary of the evolution of head pose estimation and a glimpse at the current state-of-the-art in this eld.", "title": "" }, { "docid": "5213ed67780b194a609220677b9c1dd4", "text": "Cardiovascular diseases (CVD) are initiated by endothelial dysfunction and resultant expression of adhesion molecules for inflammatory cells. Inflammatory cells secrete cytokines/chemokines and growth factors and promote CVD. Additionally, vascular cells themselves produce and secrete several factors, some of which can be useful for the early diagnosis and evaluation of disease severity of CVD. Among vascular cells, abundant vascular smooth muscle cells (VSMCs) secrete a variety of humoral factors that affect vascular functions in an autocrine/paracrine manner. Among these factors, we reported that CyPA (cyclophilin A) is secreted mainly from VSMCs in response to Rho-kinase activation and excessive reactive oxygen species (ROS). Additionally, extracellular CyPA augments ROS production, damages vascular functions, and promotes CVD. Importantly, a recent study in ATVB demonstrated that ambient air pollution increases serum levels of inflammatory cytokines. Moreover, Bell et al reported an association of air pollution exposure with high-density lipoprotein (HDL) cholesterol and particle number. In a large, multiethnic cohort study of men and women free of prevalent clinical CVD, they found that higher concentrations of PM2.5 over a 3-month time period was associated with lower HDL particle number, and higher annual concentrations of black carbon were associated with lower HDL cholesterol. Together with the authors’ previous work on biomarkers of oxidative stress, they provided evidence for potential pathways that may explain the link between air pollution exposure and acute cardiovascular events. The objective of this review is to highlight the novel research in the field of biomarkers for CVD.", "title": "" }, { "docid": "d86ed46cf03298129055a7a734c0ef3c", "text": "Photosynthetic CO2 uptake rate and early growth parameters of radish Raphanus sativus L. seedlings exposed to an extremely low frequency magnetic field (ELF MF) were investigated. Radish seedlings were exposed to a 60 Hz, 50 microT(rms) (root mean square) sinusoidal magnetic field (MF) and a parallel 48 microT static MF for 6 or 15 d immediately after germination. Control seedlings were exposed to the ambient MF but not the ELF MF. The CO2 uptake rate of ELF MF exposed seedlings on day 5 and later was lower than that of the control seedlings. 
The dry weight and the cotyledon area of ELF MF exposed seedlings on day 6 and the fresh weight, the dry weight and the leaf area of ELF MF exposed seedlings on day 15 were significantly lower than those of the control seedlings, respectively. In another experiment, radish seedlings were grown without ELF MF exposure for 14 d immediately after germination, and then exposed to the ELF MF for about 2 h, and the photosynthetic CO2 uptake rate was measured during the short-term ELF MF exposure. The CO2 uptake rate of the same seedlings was subsequently measured in the ambient MF (control) without the ELF MF. There was no difference in the CO2 uptake rate of seedlings exposed to the ELF MF or the ambient MF. These results indicate that continuous exposure to 60 Hz, 50 microT(rms) sinusoidal MF with a parallel 48 microT static MF affects the early growth of radish seedlings, but the effect is not so severe that modification of photosynthetic CO2 uptake can observed during short-term MF exposure.", "title": "" }, { "docid": "7e61652a45c490c230d368d653ef63e8", "text": "Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.", "title": "" }, { "docid": "7267e5082c890dfa56a745d3b28425cc", "text": "Natural Orifice Translumenal Endoscopic Surgery (NOTES) has recently attracted lots of attention, promising surgical procedures with fewer complications, better cosmesis, lower pains and faster recovery. Several robotic systems were developed aiming to enable abdominal surgeries in a NOTES manner. Although these robotic systems demonstrated the surgical concept, characteristics which could fully enable NOTES procedures remain unclear. This paper presents the development of an endoscopic continuum testbed for finalizing system characteristics of a surgical robot for NOTES procedures, which include i) deployability (the testbed can be deployed in a folded endoscope configuration and then be unfolded into a working configuration), ii) adequate workspace, iii) sufficient distal dexterity (e.g. suturing capability), and iv) desired mechanics properties (e.g. enough load carrying capability). Continuum mechanisms were implemented in the design and a diameter of 12mm of this testbed in its endoscope configuration was achieved. Results of this paper could be used to form design references for future development of NOTES robots.", "title": "" }, { "docid": "78b8da26d1ca148b8c261c6cfdc9b2b6", "text": "Collaborative filtering (CF) aims to build a model from users' past behaviors and/or similar decisions made by other users, and use the model to recommend items for users. 
Despite of the success of previous collaborative filtering approaches, they are all based on the assumption that there are sufficient rating scores available for building high-quality recommendation models. In real world applications, however, it is often difficult to collect sufficient rating scores, especially when new items are introduced into the system, which makes the recommendation task challenging. We find that there are often \" short \" texts describing features of items, based on which we can approximate the similarity of items and make recommendation together with rating scores. In this paper we \" borrow \" the idea of vector representation of words to capture the information of short texts and embed it into a matrix factorization framework. We empirically show that our approach is effective by comparing it with state-of-the-art approaches.", "title": "" }, { "docid": "72ec56d2f4ee2fc257a8c8dd5484aee1", "text": "Attack graphs model possible paths that a potential attacker can use to intrude into a target network. They can be used in determining both proactive and reactive security measures. Attack graph generation is a process that includes vulnerability information processing, collecting network topology and application information, determining reachability conditions among network hosts, and applying the core graph building algorithm. This article introduces a classification scheme for a systematical study of the methods applied in each phase of the attack graph generation process, including the usage of attack graphs for network security. The related works in the literature are stated based on the proposed classification scheme and contributive ideas about potential challenges and open issues for attack graph generation and usage are provided. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e28dc048119c6272290cea29e598d74d", "text": "Chronic testicular pain (orchialgia) after inguinal hernia repair is a recognized and complex entity [1, 2]. In the postherniorrhaphy pain cohort, a subset of patients present with orchialgia as a primary complaint or a coexisting problem. This disabling post-operative complication is often incorrectly attributed to neuropathy of the genitofemoral nerve (GFN), which is commonly mistaken as the sensory nerve of the testicle. The spectrum of operations ranging from resection of the GFN to orchiectomy has been suggested for its surgical management often with limited efficacy. This is exemplified by the cohort of patients who did not improve after orchiectomy in the recently published Finnish Audit. Complicating matters is the significant overlap of inguinal, scrotal, and testicular pain as well as nociceptive versus neuropathic causes [2]. Delineating the etiology and appropriate management is a diagnostic and therapeutic challenge for hernia surgeons and urologists alike. The exact pathophysiology of testicular pain is poorly understood but, in the post-inguinal herniorrhaphy population, causes include ischemic orchitis, epididymitis, edema, infection, cord fibrosis and scarring, varicocele or hydrocele formation, torsion, referred pain from radiculitis or ureteral pathology, and entrapment or disruption of the paravasal nerve fibers or autonomic plexus within the cord [1–8]. Careful delineation of testicular pain from groin and scrotal pain is necessary to address the overlap of orchialgia with inguinodynia and scrotal pain. 
Additionally, the difference between nociceptive versus neuropathic testicular pain must be understood as their treatments differ significantly. Scrotal pain may be elicited by palpation of scrotum, pinching the scrotal skin, and dermatomal mapping. It is neuropathic and somatic in nature with symptoms of burning, hyper or hypoesthesia, allodynia, and radiation. Orchialgia may be elicited by compressing the testicle or epididymis and is typically a visceral sensation that is dull, aching, and constant. It may be accompanied by anatomic changes such as testicular enlargement or atrophy, epididymal swelling, and hydrocele or varicocele formation. The anterior wall of the scrotum is primarily affected in post-inguinal herniorrhaphy inguinodynia and innervated by somatic fibers of the ilioinguinal nerve (IIN), the iliohypogastric nerve (IHN), and particularly the genital branch of the genitofemoral nerve (GFN) [1–3]. The posterior surface of the scrotum is innervated by scrotal branches of the superficial perineal nerves via the perineal branch of the pudendal nerve (S1–S3). Injury to the inguinal segment of genital branch of the GFN with anterior-based approaches may arise during removal of the cremasteric layer (protects the cord structures including the genital branch and vas deferens), cord mobilization, dissection or ligation of the hernia sac, suturing of the inguinal floor, or placement of plug. Injury to the preperitoneal segment of GFN may arise from preperitoneal mesh or plug placement. With open or laparoscopic posterior-based repairs, injury to the preperitoneal segment of genital branch, femoral branch, or GFN trunk can arise during dissection of the preperitoneal plane, mesh fixation, or placement of mesh within the parietal compartment of the preperitoneal space in direct contact with an unprotected GFN. Mirilas and associates have meticulously delineated This comment refers to the article available at doi:10.1007/s10029-013-1150-3.", "title": "" }, { "docid": "9415adaa3ec2f7873a23cc2017a2f1ee", "text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.", "title": "" } ]
scidocsrr
b09165bd68c9009d4950c1ce8b27ca20
Joint Feature Selection and Structure Preservation for Domain Adaptation
[ { "docid": "3df5ef9cda0812a5112c59c2e40125ab", "text": "Recent work has demonstrated the effectiveness of domain adaptation methods for computer vision applications. In this work, we propose a new multiple source domain adaptation method called Domain Selection Machine (DSM) for event recognition in consumer videos by leveraging a large number of loosely labeled web images from different sources (e.g., Flickr.com and Photosig.com), in which there are no labeled consumer videos. Specifically, we first train a set of SVM classifiers (referred to as source classifiers) by using the SIFT features of web images from different source domains. We propose a new parametric target decision function to effectively integrate the static SIFT features from web images/video keyframes and the spacetime (ST) features from consumer videos. In order to select the most relevant source domains, we further introduce a new data-dependent regularizer into the objective of Support Vector Regression (SVR) using the ϵ-insensitive loss, which enforces the target classifier shares similar decision values on the unlabeled consumer videos with the selected source classifiers. Moreover, we develop an alternating optimization algorithm to iteratively solve the target decision function and a domain selection vector which indicates the most relevant source domains. Extensive experiments on three real-world datasets demonstrate the effectiveness of our proposed method DSM over the state-of-the-art by a performance gain up to 46.41%.", "title": "" }, { "docid": "18b3328725661770be1f408f37c7eb64", "text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.", "title": "" }, { "docid": "0cec4473828bf542d97b20b64071a890", "text": "The effectiveness of knowledge transfer using classification algorithms depends on the difference between the distribution that generates the training examples and the one from which test examples are to be drawn. The task can be especially difficult when the training examples are from one or several domains different from the test domain. In this paper, we propose a locally weighted ensemble framework to combine multiple models for transfer learning, where the weights are dynamically assigned according to a model's predictive power on each test example. It can integrate the advantages of various learning algorithms and the labeled information from multiple training domains into one unified classification model, which can then be applied on a different domain. 
Importantly, different from many previously proposed methods, none of the base learning method is required to be specifically designed for transfer learning. We show the optimality of a locally weighted ensemble framework as a general approach to combine multiple models for domain transfer. We then propose an implementation of the local weight assignments by mapping the structures of a model onto the structures of the test domain, and then weighting each model locally according to its consistency with the neighborhood structure around the test example. Experimental results on text classification, spam filtering and intrusion detection data sets demonstrate significant improvements in classification accuracy gained by the framework. On a transfer learning task of newsgroup message categorization, the proposed locally weighted ensemble framework achieves 97% accuracy when the best single model predicts correctly only on 73% of the test examples. In summary, the improvement in accuracy is over 10% and up to 30% across different problems.", "title": "" }, { "docid": "6148a8847c01d46931250b959087b1b1", "text": "Recognizing visual content in unconstrained videos has become a very important problem for many applications. Existing corpora for video analysis lack scale and/or content diversity, and thus limited the needed progress in this critical area. In this paper, we describe and release a new database called CCV, containing 9,317 web videos over 20 semantic categories, including events like \"baseball\" and \"parade\", scenes like \"beach\", and objects like \"cat\". The database was collected with extra care to ensure relevance to consumer interest and originality of video content without post-editing. Such videos typically have very little textual annotation and thus can benefit from the development of automatic content analysis techniques.\n We used Amazon MTurk platform to perform manual annotation, and studied the behaviors and performance of human annotators on MTurk. We also compared the abilities in understanding consumer video content by humans and machines. For the latter, we implemented automatic classifiers using state-of-the-art multi-modal approach that achieved top performance in recent TRECVID multimedia event detection task. Results confirmed classifiers fusing audio and video features significantly outperform single-modality solutions. We also found that humans are much better at understanding categories of nonrigid objects such as \"cat\", while current automatic techniques are relatively close to humans in recognizing categories that have distinctive background scenes or audio patterns.", "title": "" } ]
[ { "docid": "af4fb49257f949ade17aa08f6696afcf", "text": "Point Pair Features is a widely used method to detect 3D objects in point clouds, however they are prone to fail in presence of sensor noise and background clutter. We introduce novel sampling and voting schemes that significantly reduces the influence of clutter and sensor noise. Our experiments show that with our improvements, PPFs become competitive against state-of-the-art methods as it outperforms them on several objects from challenging benchmarks, at a low computational cost.", "title": "" }, { "docid": "c8ba8d59bb92778921eea146181fa2b8", "text": "MOTIVATION\nProtein interaction networks provide an important system-level view of biological processes. One of the fundamental problems in biological network analysis is the global alignment of a pair of networks, which puts the proteins of one network into correspondence with the proteins of another network in a manner that conserves their interactions while respecting other evidence of their homology. By providing a mapping between the networks of different species, alignments can be used to inform hypotheses about the functions of unannotated proteins, the existence of unobserved interactions, the evolutionary divergence between the two species and the evolution of complexes and pathways.\n\n\nRESULTS\nWe introduce GHOST, a global pairwise network aligner that uses a novel spectral signature to measure topological similarity between subnetworks. It combines a seed-and-extend global alignment phase with a local search procedure and exceeds state-of-the-art performance on several network alignment tasks. We show that the spectral signature used by GHOST is highly discriminative, whereas the alignments it produces are also robust to experimental noise. When compared with other recent approaches, we find that GHOST is able to recover larger and more biologically significant, shared subnetworks between species.\n\n\nAVAILABILITY\nAn efficient and parallelized implementation of GHOST, released under the Apache 2.0 license, is available at http://cbcb.umd.edu/kingsford_group/ghost\n\n\nCONTACT\nrob@cs.umd.edu.", "title": "" }, { "docid": "5e9fabc2dbe3c5b95602c6c9e86fd15c", "text": "The question of the self has intrigued philosophers and psychologists for a long time. More recently, distinct concepts of self have also been suggested in neuroscience. However, the exact relationship between these concepts and neural processing across different brain regions remains unclear. This article reviews neuroimaging studies comparing neural correlates during processing of stimuli related to the self with those of non-self-referential stimuli. All studies revealed activation in the medial regions of our brains' cortex during self-related stimuli. The activation in these so-called cortical midline structures (CMS) occurred across all functional domains (e.g., verbal, spatial, emotional, and facial). Cluster and factor analyses indicate functional specialization into ventral, dorsal, and posterior CMS remaining independent of domains. Taken together, our results suggest that self-referential processing is mediated by cortical midline structures. Since the CMS are densely and reciprocally connected to subcortical midline regions, we advocate an integrated cortical-subcortical midline system underlying human self. 
We conclude that self-referential processing in CMS constitutes the core of our self and is critical for elaborating experiential feelings of self, uniting several distinct concepts evident in current neuroscience.", "title": "" }, { "docid": "36b2ce9d30b2fc98d7c3f98b94cc0b4e", "text": "Efficient energy management in residential areas is a key issue in modern energy systems. In this scenario, induction heating (IH) becomes an alternative to classical heating technologies because of its advantages such as efficiency, quickness, safety, and accurate power control. In this article, the design of modern flexible cooking surfaces featuring IH technology is presented. The main advantages and technical challenges are given, and the design of the inductor system and the power electronic converter is detailed. The feasibility of the proposed system is verified through a laboratory prototype.", "title": "" }, { "docid": "7512d936d3d170774ad34bac9b8adef3", "text": "Recently, the concept of Internet of Things (IoT) is attracting much attention due to the huge potential. IoT uses the Internet as a key infrastructure to interconnect numerous geographically diversified IoT nodes which usually have scare resources, and therefore cloud is used as a key back-end supporting infrastructure. In the literature, the collection of the IoT nodes and the cloud is collectively called as an IoT cloud. Unfortunately, the IoT cloud suffers from various drawbacks such as huge network latency as the volume of data which is being processed within the system increases. To alleviate this issue, the concept of fog computing is introduced, in which foglike intermediate computing buffers are located between the IoT nodes and the cloud infrastructure to locally process a significant amount of regional data. Compared to the original IoT cloud, the communication latency as well as the overhead at the backend cloud infrastructure could be significantly reduced in the fog computing supported IoT cloud, which we will refer as IoT fog. Consequently, several valuable services, which were difficult to be delivered by the traditional IoT cloud, can be effectively offered by the IoT fog. In this paper, however, we argue that the adoption of IoT fog introduces several unique security threats. We first discuss the concept of the IoT fog as well as the existing security measures, which might be useful to secure IoT fog. Then, we explore potential threats to IoT fog.", "title": "" }, { "docid": "176dc8d5d0ed24cc9822924ae2b8ca9b", "text": "Detection of image forgery is an important part of digital forensics and has attracted a lot of attention in the past few years. Previous research has examined residual pattern noise, wavelet transform and statistics, image pixel value histogram and other features of images to authenticate the primordial nature. With the development of neural network technologies, some effort has recently applied convolutional neural networks to detecting image forgery to achieve high-level image representation. This paper proposes to build a convolutional neural network different from the related work in which we try to understand extracted features from each convolutional layer and detect different types of image tampering through automatic feature learning. The proposed network involves five convolutional layers, two full-connected layers and a Softmax classifier. 
Our experiment has utilized CASIA v1.0, a public image set that contains authentic images and splicing images, and its further reformed versions containing retouching images and re-compressing images as the training data. Experimental results can clearly demonstrate the effectiveness and adaptability of the proposed network.", "title": "" }, { "docid": "bc272e837f1071fabcc7056134bae784", "text": "Parental vaccine hesitancy is a growing problem affecting the health of children and the larger population. This article describes the evolution of the vaccine hesitancy movement and the individual, vaccine-specific and societal factors contributing to this phenomenon. In addition, potential strategies to mitigate the rising tide of parent vaccine reluctance and refusal are discussed.", "title": "" }, { "docid": "301fc0a18bec8128165ec73e15e66eb1", "text": "data structure queries (A). Some queries check properties of abstract data struct [11][131] such as stacks, hash tables, trees, and so on. These queries are not domain because the data structures can hold data of any domain. These queries are also differ the programming construct queries, because they check the constraints of well-defined a data structures. For example, a query about a binary tree may find the number of its nod have only one child. On the other hand, programming construct queries usually span di data structures. Abstract data structure queries can usually be expressed as class invar could be packaged with the class that implements an ADT. However, the queries that p information rather than detect violations are best answered by dynamic queries. For ex monitoring B+ trees using queries may indicate whether this data structure is efficient f underlying problem. Program construct queries (P). Program construct queries verify object relationships that related to the program implementation and not directly to the problem domain. Such q verify and visualize groups of objects that have to conform to some constraints because lower level of program design and implementation. For example, in a graphical user int implementation, every window object has a parent window, and this window referenc children widgets through the widget_collection collection (section 5.2.2). Such construct is n", "title": "" }, { "docid": "24a6a976899de474d6a9e1cbc3b3bfb0", "text": "The authors describe a 50 kHz 5 kVA voltage-source inverter using insulated gate bipolar transistors (IGBTs), a series-resonant circuit including a step-up transformer of turn ratio 1:10, and a corona surface treater. The series-resonant circuit is used as a matching circuit between the inverter of output voltage 250 V and the corona surface treater of input voltage 10 kV. Experimental results obtained from the prototype inverter system are shown to verify the stable inverter operation and proper corona discharge irrespective of load conditions. The estimated inverter efficiency is 95%, and the measured overall efficiency of the system is 74%.<<ETX>>", "title": "" }, { "docid": "2949191659d01de73abdc749d5e51ca7", "text": "BACKGROUND\nIsolated infraspinatus muscle atrophy is common in overhead athletes, who place significant and repetitive stresses across their dominant shoulders. 
Studies on volleyball and baseball players report infraspinatus atrophy in 4% to 34% of players; however, the prevalence of infraspinatus atrophy in professional tennis players has not been reported.\n\n\nPURPOSE\nTo investigate the incidence of isolated infraspinatus atrophy in professional tennis players and to identify any correlations with other physical examination findings, ranking performance, and concurrent shoulder injuries.\n\n\nSTUDY DESIGN\nCross-sectional study; Level of evidence, 3.\n\n\nMETHODS\nA total of 125 professional female tennis players underwent a comprehensive preparticipation physical health status examination. Two orthopaedic surgeons examined the shoulders of all players and obtained digital goniometric measurements of range of motion (ROM). Infraspinatus atrophy was defined as loss of soft tissue bulk in the infraspinatus scapula fossa (and increased prominence of dorsal scapular bony anatomy) of the dominant shoulder with clear asymmetry when compared with the contralateral side. Correlations were examined between infraspinatus atrophy and concurrent shoulder disorders, clinical examination findings, ROM, glenohumeral internal rotation deficit, singles tennis ranking, and age.\n\n\nRESULTS\nThere were 65 players (52%) with evidence of infraspinatus atrophy in their dominant shoulders. No wasting was noted in the nondominant shoulder of any player. No statistically significant differences were seen in mean age, left- or right-hand dominance, height, weight, or body mass index for players with or without atrophy. Of the 77 players ranked in the top 100, 58% had clinical infraspinatus atrophy, compared with 40% of players ranked outside the top 100. No associations were found with static physical examination findings (scapular dyskinesis, ROM glenohumeral internal rotation deficit, postural abnormalities), concurrent shoulder disorders, or compromised performance when measured by singles ranking.\n\n\nCONCLUSION\nThis study reports a high level of clinical infraspinatus atrophy in the dominant shoulder of elite female tennis players. Infraspinatus atrophy was associated with a higher performance ranking, and no functional deficits or associations with concurrent shoulder disorders were found. Team physicians can be reassured that infraspinatus atrophy is a common finding in high-performing tennis players and, if asymptomatic, does not appear to significantly compromise performance.", "title": "" }, { "docid": "1edf460bcfc83ebc8bd66f2cb51e4a61", "text": "A distributed system with interchangeable constraints for studying skillful human movements via haptic displays is presented. A unified interface provides easy linking of various physical models with spatial constraints, and the graphical contents related to the models as well. Theoretical and experimental kinematic profiles are compared for several cases of basic reaching rest-to-rest tasks: curve-constrained motions, flexible object control, and cooperative two-hand movements. The experimental patterns exhibit the best agreement with the optimal control models based on force-change minimization criteria.", "title": "" }, { "docid": "af43017e25de9eebc44cb20430c1d9d5", "text": "A coplanar waveguide (CPW) center-fed four-arm slot sinuous antenna is introduced in this letter. The antenna demonstrates broadband characteristics in terms of its split-beam radiation pattern and angle of maximum gain as well as multiband characteristics in terms of axial ratio (AR), omnidirectionality, polarization, and return loss. 
It is observed experimentally and computationally that regions of low AR with alternating polarization handedness, good omnidirectionality, low return loss, and high antenna gain/efficiency appear in narrow frequency bands. Measured and simulated results are presented to discuss the principles behind the antenna operation and venues for future performance optimization.", "title": "" }, { "docid": "e25388e14ab9a60e1019b2bcb7071090", "text": "Switched Reluctance Motor (SRM) has become a competitive selection for many applications of electric machine drive systems recently due to its relative simple construction and its robustness. . This paper describes the design of new converter, consisting of “halfbridge” IGBT modules and SCRs, for closed loop control of switched reluctance motor drives are proposed. The proposed converter topology is a variation of the conventional asymmetric bridge converter for switched reluctance motor drives. However, utilization of switch modules is enhanced considerably. The requirements of converters for switched reluctance motor drives and the operation of the proposed new converter are analyzed and discussed. In this paper a new converter topology for speed control of a switched reluctance motor (SRM) is proposed. The topology is verified through MATLAB simulation. Keywords–Switched reluctance motor drives, Converter topology.", "title": "" }, { "docid": "de44dca36f4a93d6722cd23ce4f0d139", "text": "Pairwise meta-analysis is an established statistical tool for synthesizing evidence from multiple trials, but it is informative only about the relative efficacy of two specific interventions. The usefulness of pairwise meta-analysis is thus limited in real-life medical practice, where many competing interventions may be available for a certain condition and studies informing some of the pairwise comparisons may be lacking. This commonly encountered scenario has led to the development of network meta-analysis (NMA). In the last decade, several applications, methodological developments, and empirical studies in NMA have been published, and the area is thriving as its relevance to public health is increasingly recognized. This article presents a review of the relevant literature on NMA methodology aiming to pinpoint the developments that have appeared in the field. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "265bf26646113a56101c594f563cb6dc", "text": "A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. 
With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.", "title": "" }, { "docid": "9fdf625f46c227c819cec1e4c00160b1", "text": "Employment of ground-based positioning systems has been consistently growing over the past decades due to the growing number of applications that require location information where the conventional satellite-based systems have limitations. Such systems have been successfully adopted in the context of wireless emergency services, tactical military operations, and various other applications offering location-based services. In current and previous generation of cellular systems, i.e., 3G, 4G, and LTE, the base stations, which have known locations, have been assumed to be stationary and fixed. However, with the possibility of having mobile relays in 5G networks, there is a demand for novel algorithms that address the challenges that did not exist in the previous generations of localization systems. This paper includes a review of various fundamental techniques, current trends, and state-of-the-art systems and algorithms employed in wireless position estimation using moving receivers. Subsequently, performance criteria comparisons are given for the aforementioned techniques and systems. Moreover, a discussion addressing potential research directions when dealing with moving receivers, e.g., receiver's movement pattern for efficient and accurate localization, non-line-of-sight problem, sensor fusion, and cooperative localization, is briefly given.", "title": "" }, { "docid": "ba200e034a08a3317ea066fddaf7c4c9", "text": "We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces with and without white makeup. Subjects were filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup is able to produce smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can register both the surfaces and relax the homogenous-color restriction of the single-hue subject. Quantitative and qualitative experiments explore both the practicality and limitations of this simple multispectral capture system.", "title": "" }, { "docid": "42fa2e99d0c17cf706e6674dafb898a7", "text": "To improve software productivity, when constructing new software systems, developers often reuse existing class libraries or frameworks by invoking their APIs. Those APIs, however, are often complex and not well documented, posing barriers for developers to use them in new client code. To get familiar with how those APIs are used, developers may search the Web using a general search engine to find relevant documents or code examples. 
Developers can also use a source code search engine to search open source repositories for source files that use the same APIs. Nevertheless, the number of returned source files is often large. It is difficult for developers to learn API usages from a large number of returned results. In order to help developers understand API usages and write API client code more effectively, we have developed an API usage mining framework and its supporting tool called MAPO (for <u>M</u>ining <u>AP</u>I usages from <u>O</u>pen source repositories). Given a query that describes a method, class, or package for an API, MAPO leverages the existing source code search engines to gather relevant source files and conducts data mining. The mining leads to a short list of frequent API usages for developers to inspect. MAPO currently consists of five components: a code search engine, a source code analyzer, a sequence preprocessor, a frequent sequence miner, and a frequent sequence post processor. We have examined the effectiveness of MAPO using a set of various queries. The preliminary results show that the framework is practical for providing informative and succinct API usage patterns.", "title": "" }, { "docid": "a38e20a392e7f03509e29839196628d5", "text": "We investigate the hypothesis that the combination of three related innovations—1) information technology (IT), 2) complementary workplace reorganization, and 3) new products and services—constitute a significant skill-biased technical change affecting labor demand in the United States. Using detailed firm-level data, we find evidence of complementarities among all three of these innovations in factor demand and productivity regressions. In addition, firms that adopt these innovations tend to use more skilled labor. The effects of IT on labor demand are greater when IT is combined with the particular organizational investments we identify, highlighting the importance of IT-enabled organizational change. Disciplines Business Administration, Management, and Operations | Economics | Labor Economics | Other Business | Technology and Innovation This journal article is available at ScholarlyCommons: http://repository.upenn.edu/oid_papers/108 For more information, ebusiness@mit.edu or 617-253-7054 please visit our website at http://ebusiness.mit.edu or contact the Center directly at A research and education initiative at the MIT Sloan School of Management Information Technology, Workplace Organization, and the Demand for Skilled Labor: Firm-level Evidence", "title": "" }, { "docid": "8b0278400c9576c4a3a77a4ec742809c", "text": "Storyline detection aims to connect seemly irrelevant single documents into meaningful chains, which provides opportunities for understanding how events evolve over time and what triggers such evolutions. Most previous work generated the storylines through unsupervised methods that can hardly reveal underlying factors driving the evolution process. This paper introduces a Bayesian model to generate storylines from massive documents and infer the corresponding hidden relations and topics. In addition, our model is the first attempt that utilizes Twitter data as human input to ``supervise'' the generation of storylines. Through extensive experiments, we demonstrate our proposed model can achieve significant improvement over baseline methods and can be used to discover interesting patterns for real world cases.", "title": "" } ]
scidocsrr
22d597624034744e1413b1a3e323757d
Role of Text Pre-processing in Twitter Sentiment Analysis
[ { "docid": "cb8a21bf8d0642ee9410419ecf472b21", "text": "Sentiment analysis or opinion mining is one of the major tasks of NLP (Natural Language Processing). Sentiment analysis has gain much attention in recent years. In this paper, we aim to tackle the problem of sentiment polarity categorization, which is one of the fundamental problems of sentiment analysis. A general process for sentiment polarity categorization is proposed with detailed process descriptions. Data used in this study are online product reviews collected from Amazon.com. Experiments for both sentence-level categorization and review-level categorization are performed with promising outcomes. At last, we also give insight into our future work on sentiment analysis.", "title": "" }, { "docid": "57666e9d9b7e69c38d7530633d556589", "text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.", "title": "" }, { "docid": "eabd54407a2f1de0126795e98cdcb194", "text": "This paper reports our submissions to the four subtasks of Aspect Based Sentiment Analysis (ABSA) task (i.e., task 4) in SemEval 2014 including aspect term extraction and aspect sentiment polarity classification (Aspect-level tasks), aspect category detection and aspect category sentiment polarity classification (Categorylevel tasks). For aspect term extraction, we present three methods, i.e., noun phrase (NP) extraction, Named Entity Recognition (NER) and a combination of NP and NER method. For aspect sentiment classification, we extracted several features, i.e., topic features, sentiment lexicon features, and adopted a Maximum Entropy classifier. Our submissions rank above average.", "title": "" } ]
[ { "docid": "a880d38d37862b46dc638b9a7e45b6ee", "text": "This paper presents the modeling, simulation, and analysis of the dynamic behavior of a fictitious 2 × 320 MW variable-speed pump-turbine power plant, including a hydraulic system, electrical equipment, rotating inertias, and control systems. The modeling of the hydraulic and electrical components of the power plant is presented. The dynamic performances of a control strategy in generating mode and one in pumping mode are investigated by the simulation of the complete models in the case of change of active power set points. Then, a pseudocontinuous model of the converters feeding the rotor circuits is described. Due to this simplification, the simulation time can be reduced drastically (approximately factor 60). A first validation of the simplified model of the converters is obtained by comparison of the simulated results coming from the simplified and complete models for different modes of operation of the power plant. Experimental results performed on a 2.2-kW low-power test bench are also compared with the simulated results coming from both complete and simplified models related to this case and confirm the validity of the proposed simplified approach for the converters.", "title": "" }, { "docid": "ae83e004c2b8f4f85f31b03ad2c596f6", "text": "Approximating a probability density in a tractable manner is a central task in Bayesian statistics. Variational Inference (VI) is a popular technique that achieves tractability by choosing a relatively simple variational approximation. Borrowing ideas from the classic boosting framework, recent approaches attempt to boost VI by replacing the selection of a single density with an iteratively constructed mixture of densities. In order to guarantee convergence, previous works impose stringent assumptions that require significant effort for practitioners. Specifically, they require a custom implementation of the greedy step (called the LMO) for every probabilistic model with respect to an unnatural variational family of truncated distributions. Our work fixes these issues with novel theoretical and algorithmic insights. On the theoretical side, we show that boosting VI satisfies a relaxed smoothness assumption which is sufficient for the convergence of the functional Frank-Wolfe (FW) algorithm. Furthermore, we rephrase the LMO problem and propose to maximize the Residual ELBO (RELBO) which replaces the standard ELBO optimization in VI. These theoretical enhancements allow for black box implementation of the boosting subroutine. Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.", "title": "" }, { "docid": "f3945811395f9a1903caef1e9d4e8860", "text": "The purpose of this study was to compare the in vitro effectiveness of Morinda citrifolia juice (MCJ) with sodium hypochlorite (NaOCl) and chlorhexidine gluconate (CHX) to remove the smear layer from the canal walls of endodontically instrumented teeth. Sixty extracted, single-rooted, mature, permanent, human premolar teeth with a single canal were inoculated with Enterococcus faecalis at 37 degrees C in a CO2 atmosphere for 30 days. The teeth were randomly allocated to 6 treatment groups; the pulp chamber was accessed, cleaned, and shaped by using ProTaper and ProFile rotary instrumentation to a size 35. 
During instrumentation the irrigation was provided by MCJ, NaOCl, CHX, MCJ/CHX, followed by a final flush of 17% ethylenediaminetetraacetic acid (EDTA). MCJ irrigation was also followed by a final flush of saline, and saline irrigation was also used as a negative control. The teeth were then processed for scanning electron microscopy, and the removal of smear layer was examined. Data were analyzed by chi2 statistical tests (P values) at a significance of 95%. The most effective removal of smear layer occurred with MCJ and NaOCl, both with a rinse of 17% EDTA. Both MCJ and NaOCl treatments were similarly effective with a rinse of 17% EDTA (P < .2471) to completely remove up to 80% of the smear layer from some aspects of the root canal. MCJ was more effective than CHX for removing smear layer (P < .0085) and saline as the negative control (P < .0001). The efficacy of MJC was similar to NaOCl in conjunction with EDTA as an intracanal irrigant. MJC appears to be the first fruit juice to be identified as a possible alternative to the use of NaOCl as an intracanal irrigant.", "title": "" }, { "docid": "89596e6eedbc1f13f63ea144b79fdc64", "text": "This paper describes our work in integrating three different lexical resources: FrameNet, VerbNet, and WordNet, into a unified, richer knowledge-base, to the end of enabling more robust semantic parsing. The construction of each of these lexical resources has required many years of laborious human effort, and they all have their strengths and shortcomings. By linking them together, we build an improved resource in which (1) the coverage of FrameNet is extended, (2) the VerbNet lexicon is augmented with frame semantics, and (3) selectional restrictions are implemented using WordNet semantic classes. The synergistic exploitation of various lexical resources is crucial for many complex language processing applications, and we prove it once again effective in building a robust semantic parser.", "title": "" }, { "docid": "53afae9502234d778015f172fc1c3a68", "text": "Polynomial chaos expansions (PCE) are an attractive technique for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. When tailoring the orthogonal polynomial bases to match the forms of the input uncertainties in a Wiener-Askey scheme, excellent convergence properties can be achieved for general probabilistic analysis problems. Non-intrusive PCE methods allow the use of simulations as black boxes within UQ studies, and involve the calculation of chaos expansion coefficients based on a set of response function evaluations. These methods may be characterized as being either Galerkin projection methods, using sampling or numerical integration, or regression approaches (also known as point collocation or stochastic response surfaces), using linear least squares. Numerical integration methods may be further categorized as either tensor product quadrature or sparse grid Smolyak cubature and as either isotropic or anisotropic. 
Experience with these approaches is presented for algebraic and PDE-based benchmark test problems, demonstrating the need for accurate, efficient coefficient estimation approaches that scale for problems with significant numbers of random variables.", "title": "" }, { "docid": "717605f0fb1a17825b3e851187b85299", "text": "We present a new method for measuring photoplethysmogram signals remotely using ambient light and a digital camera that allows for accurate recovery of the waveform morphology (from a distance of 3 m). In particular, we show that the peak-to-peak time between the systolic peak and diastolic peak/inflection can be automatically recovered using the second-order derivative of the remotely measured waveform. We compare measurements from the face with those captured using a contact fingertip sensor and show high agreement in peak and interval timings. Furthermore, we show that results can be significantly improved using orange, green, and cyan color channels compared to the tradition red, green, and blue channel combination. The absolute error in interbeat intervals was 26 ms and the absolute error in mean systolic-diastolic peak-to-peak times was 12 ms. The mean systolic-diastolic peak-to-peak times measured using the contact sensor and the camera were highly correlated, ρ = 0.94 (p <; 0.001). The results were obtained with a camera frame-rate of only 30 Hz. This technology has significant potential for advancing healthcare.", "title": "" }, { "docid": "e8eaeb8a2bb6fa71997aa97306bf1bb0", "text": "Article history: Available online 18 February 2016", "title": "" }, { "docid": "c9c202dc1138e8cd330e6dde9e08fcc4", "text": "Background: Diagnosing breast cancer at an early stage can have a great impact on cancer mortality. One of the fundamental problems in cancer treatment is the lack of a proper method for early detection, which may lead to diagnostic errors. Using data analysis techniques can significantly help in early diagnosis of the disease. The purpose of this study was to evaluate and compare the efficacy of two data mining techniques, i.e., multilayer neural network and C4.5, in early diagnosis of breast cancer. Methods: A data set from Motamed Cancer Institute's breast cancer research clinic, Tehran, containing 2860 records related to breast cancer risk factors were used. Of the records, 1141 (40%) were related to malignant changes and breast cancer and 1719 (60%) to benign tumors. The data set was analyzed using perceptron neural network and decision tree algorithms, and was split into two a training data set (70%) and a testing data set (30%) using Rapid Miner 5.2. Results: For neural networks, accuracy was 80.52%, precision 88.91%, and sensitivity 90.88%; and for decision tree, accuracy was 80.98%, precision 80.97%, and sensitivity 89.32%. Results indicated that both algorithms have acceptable capabilities for analyzing breast cancer data. Conclusion: Although both models provided good results, neural network showed more reliable diagnosis for positive cases. Data set type and analysis method affect results. On the other hand, information about more powerful risk factors of breast cancer, such as genetic mutations, can provide models with high coverage. Received: 13 October 2017 Revised: 19 January 2018 Accepted: 26 January 2018", "title": "" }, { "docid": "78db8b57c3221378847092e5283ad754", "text": "This paper analyzes correlations and causalities between Bitcoin market indicators and Twitter posts containing emotional signals on Bitcoin. 
Within a timeframe of 104 days (November 23, 2013 to March 7, 2014), about 160,000 Twitter posts containing “bitcoin” and a positive, negative or uncertainty related term were collected and further analyzed. For instance, the terms “happy”, “love”, “fun”, “good”, “bad”, “sad” and “unhappy” represent positive and negative emotional signals, while “hope”, “fear” and “worry” are considered as indicators of uncertainty. The static (daily) Pearson correlation results show a significant positive correlation between emotional tweets and the close price, trading volume and intraday price spread of Bitcoin. However, a dynamic Granger causality analysis does not confirm a causal effect of emotional Tweets on Bitcoin market values. To the contrary, the analyzed data shows that a higher Bitcoin trading volume Granger causes more signals of uncertainty within a 24 to 72-hour timeframe. This result leads to the interpretation that emotional sentiments rather mirror the market than that they make it predictable. Finally, the conclusion of this paper is that the microblogging platform Twitter is Bitcoin's virtual trading floor, emotionally reflecting its trading dynamics.", "title": "" }, { "docid": "d3c4c641f46800c15c0995ce9e1943f7", "text": "We present a computationally efficient architecture for image super-resolution that achieves state-of-the-art results on images with large spatial extent. Apart from utilizing Convolutional Neural Networks, our approach leverages recent advances in fast approximate inference for sparse coding. We empirically show that upsampling methods work much better on latent representations than in the original spatial domain. Our experiments indicate that the proposed architecture can serve as a basis for additional future improvements in image super-resolution.", "title": "" }, { "docid": "db54908608579efd067853fed5d3e4e8", "text": "The detection of moving objects from stationary cameras is usually approached by background subtraction, i.e. by constructing and maintaining an up-to-date model of the background and detecting moving objects as those that deviate from such a model. We adopt a previously proposed approach to background subtraction based on self-organization through artificial neural networks, that has been shown to well cope with several of the well known issues for background maintenance. Here, we propose a spatial coherence variant to such approach to enhance robustness against false detections and formulate a fuzzy model to deal with decision problems typically arising when crisp settings are involved. We show through experimental results and comparisons that higher accuracy values can be reached for color video sequences that represent typical situations critical for moving object detection.", "title": "" }, { "docid": "69ca1ebc519ed772e0d7444c98547060", "text": "The direct position determination (DPD) approach is a single-step method, which uses the maximum likelihood estimator to localize sources emitting electromagnetic energy using combined data from all available sensors. The DPD is known to outperform the traditional two-step methods under low signal-to-noise ratio conditions. We propose an improvement to the DPD approach, using the well-known minimum-variance-distortionless-response (MVDR) approach. Unlike maximum likelihood, the number of sources need not be known before applying the method. The combination of both the direct approach and MVDR yields unprecedented localization accuracy and resolution for weak sources.
We demonstrate this approach on the problem of multistatic radar, but the method can easily be extended to general localization problems.", "title": "" }, { "docid": "0d2efad6cab1543bc487510873803cff", "text": "Vehicle to grid (V2G) network is a crucial part of smart grid. An electric vehicle (EV) in a V2G network uses electricity instead of gasoline, and this benefits the environment and helps mitigate the energy crisis. By using its battery capacity, the vehicle can serve temporarily as a distributed energy storage system to mitigate peak load of the power grid. However, the two-way communication and power flows not only facilitate the functionality of V2G network, but they also facilitate attackers as well. Privacy is now a big obstacle in the way of the development of V2G networks. The privacy preservation problem in V2G networks could be more severe than in other parts of Smart Grid due to its e-mobility. In this paper, we will analyze and summarize privacy preservation approaches which achieve various privacy preservation goals. We will survey research works, based on existing privacy preservation techniques, which address various privacy preservation problems in V2G networks, including anonymous authentication, location privacy, identification privacy, concealed data aggregation, privacy-preserving billing and payment, and privacy-preserving data publication. These techniques include homomorphic encryption, blind signature, group signature, ring signature, third party anonymity, and anonymity networks. We will summarize solved problems and issues of these techniques, and introduce possible solutions for unsolved problems. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a8a4bad208ee585ae4b4a0b3c5afe97a", "text": "English-speaking children with specific language impairment (SLI) are known to have particular difficulty with the acquisition of grammatical morphemes that carry tense and agreement features, such as the past tense -ed and third-person singular present -s. In this study, an Extended Optional Infinitive (EOI) account of SLI is evaluated. In this account, -ed, -s, BE, and DO are regarded as finiteness markers. This model predicts that finiteness markers are omitted for an extended period of time for nonimpaired children, and that this period will be extended for a longer time in children with SLI. At the same time, it predicts that if finiteness markers are present, they will be used correctly. These predictions are tested in this study. Subjects were 18 5-year-old children with SLI with expressive and receptive language deficits and two comparison groups of children developing language normally: 22 CA-equivalent (5N) and 20 younger, MLU-equivalent children (3N). It was found that the children with SLI used nonfinite forms of lexical verbs, or omitted BE and DO, more frequently than children in the 5N and 3N groups. At the same time, like the normally developing children, when the children with SLI marked finiteness, they did so appropriately. Most strikingly, the SLI group was highly accurate in marking agreement on BE and DO forms. The findings are discussed in terms of the predictions of the EOI model, in comparison to other models of the grammatical limitations of children with SLI.", "title": "" }, { "docid": "fd29fe16c84434138fb7cd17b72c94e1", "text": "To help activists call new volunteers to action, we present Botivist: a platform that uses Twitter bots to find potential volunteers and request contributions. 
By leveraging different Twitter accounts, Botivist employs different strategies to encourage participation. We explore how people respond to bots calling them to action using a test case about corruption in Latin America. Our results show that the majority of volunteers (80%) who responded to Botivist's calls to action contributed relevant proposals to address the assigned social problem. Different strategies produced differences in the quantity and relevance of contributions. Some strategies that work well offline and face-to-face appeared to hinder people's participation when used by an online bot. We analyze user behavior in response to being approached by bots with an activist purpose. We also provide strong evidence for the value of this type of civic media, and derive design implications.", "title": "" }, { "docid": "5e0d5cf53369cc1065bdf0dedb74c557", "text": "The automatic detection of diseases in images acquired through chest X-rays can be useful in clinical diagnosis because of a shortage of experienced doctors. Compared with natural images, those acquired through chest X-rays are obtained by using penetrating imaging technology, such that there are multiple levels of features in an image. It is thus difficult to extract the features of a disease for further diagnosis. In practice, healthy people are in a majority and the morbidities of different diseases vary, because of which the obtained labels are imbalanced. The two main challenges of diagnosis through chest X-ray images are to extract discriminative features from X-ray images and handle the problem of imbalanced data distribution. In this paper, we propose a deep neural network called DeepCXray that simultaneously solves these two problems. An InceptionV3 model is trained to extract features from raw images, and a new objective function is designed to address the problem of imbalanced data distribution. The proposed objective function is a performance index based on cross entropy loss that automatically weights the ratio of positive to negative samples. In other words, the proposed loss function can automatically reduce the influence of an overwhelming number of negative samples by shrinking each cross entropy term by a different extent. Extensive experiments highlight the promising performance of DeepCXray on the ChestXray14 dataset of the National Institutes of Health in terms of the area under the receiver operating characteristic curve.", "title": "" }, { "docid": "fead6ca9612b29697f73cb5e57c0a1cc", "text": "This research examines the effect of online social capital and Internet use on the normally negative effects of technology addiction, especially for individuals prone to self-concealment. Self-concealment is a personality trait that describes individuals who are more likely to withhold personal and private information, inhibiting catharsis and wellbeing. Addiction, in any context, is also typically associated with negative outcomes. However, we investigate the hypothesis that communication technology addiction may positively affect wellbeing for self-concealing individuals when online interaction is positive, builds relationships, or fosters a sense of community. Within these parameters, increased communication through mediated channels (and even addiction) may reverse the otherwise negative effects of self-concealment on wellbeing. Overall, the proposed model offers qualified support for the continued analysis of mediated communication as a potential source for improving the wellbeing for particular individuals.
This study is important because we know that healthy communication in relationships, including disclosure, is important to wellbeing. This study recognizes that not all people are comfortable communicating in face-to-face settings. Our findings offer evidence that the presence of computers in human behaviors (e.g., mediated channels of communication and NCTs) enables some individuals to communicate and foster beneficial interpersonal relationships, and improve their wellbeing.", "title": "" }, { "docid": "a4073ab337c0d4ef73dceb1a32e1f878", "text": "Conditional belief networks introduce stochastic binary variables in neural networks. Contrary to a classical neural network, a belief network can predict more than the expected value of the output Y given the input X. It can predict a distribution of outputs Y which is useful when an input can admit multiple outputs whose average is not necessarily a valid answer. Such networks are particularly relevant to inverse problems such as image prediction for denoising, or text to speech. However, traditional sigmoid belief networks are hard to train and are not suited to continuous problems. This work introduces a new family of networks called linearizing belief nets or LBNs. A LBN decomposes into a deep linear network where each linear unit can be turned on or off by non-deterministic binary latent units. It is a universal approximator of real-valued conditional distributions and can be trained using gradient descent. Moreover, the linear pathways efficiently propagate continuous information and they act as multiplicative skip-connections that help optimization by removing gradient diffusion. This yields a model which trains efficiently and improves the state-of-the-art on image denoising and facial expression generation with the Toronto faces dataset.", "title": "" }, { "docid": "f281b48aba953acc8778aecf35ab310d", "text": "This paper presents a new deep learning architecture for Natural Language Inference (NLI). Firstly, we introduce a new architecture where alignment pairs are compared, compressed and then propagated to upper layers for enhanced representation learning. Secondly, we adopt factorization layers for efficient and expressive compression of alignment vectors into scalar features, which are then used to augment the base word representations. The design of our approach is aimed to be conceptually simple, compact and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving competitive performance on all. A lightweight parameterization of our model also enjoys a ≈3 times reduction in parameter size compared to the existing state-of-the-art models, e.g., ESIM and DIIN, while maintaining competitive performance. Additionally, visual analysis shows that our propagated features are highly interpretable.", "title": "" }, { "docid": "e5b2857bfe745468453ef9dabbf5c527", "text": "We assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties. This paper proposes a factorial mixture prior for capturing latent properties, thereby adding structured compositionality to deep generative models. The prior treats a latent vector as belonging to Cartesian product of subspaces, each of which is quantized separately with a Gaussian mixture model. Some mixture components can be set to represent properties as observed random variables whenever labeled properties are present.
Through a combination of stochastic variational inference and gradient descent, a method for learning how to infer discrete properties in an unsupervised or semi-supervised way is outlined and empirically evaluated.", "title": "" } ]
scidocsrr
b4a633d2165e090203ab8f1282932921
Expression invariant 3D face recognition with a Morphable Model
[ { "docid": "3171893b6863e777141160c65f1b9616", "text": "This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.", "title": "" } ]
[ { "docid": "7e848e98909c69378f624ce7db31dbfa", "text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.", "title": "" }, { "docid": "73cfe07d02651eee42773824d03dcfa1", "text": "Discovery of usage patterns from Web data is one of the primary purposes for Web Usage Mining. In this paper, a technique to generate Significant Usage Patterns (SUP) is proposed and used to acquire significant “user preferred navigational trails”. The technique uses pipelined processing phases including sub-abstraction of sessionized Web clickstreams, clustering of the abstracted Web sessions, concept-based abstraction of the clustered sessions, and SUP generation. Using this technique, valuable customer behavior information can be extracted by Web site practitioners. Experiments conducted using Web log data provided by J.C.Penney demonstrate that SUPs of different types of customers are distinguishable and interpretable. This technique is particularly suited for analysis of dynamic websites.", "title": "" }, { "docid": "2e0b2bc23117bbe8d41f400761410638", "text": "Free radicals and other reactive species (RS) are thought to play an important role in many human diseases. Establishing their precise role requires the ability to measure them and the oxidative damage that they cause. This article first reviews what is meant by the terms free radical, RS, antioxidant, oxidative damage and oxidative stress. It then critically examines methods used to trap RS, including spin trapping and aromatic hydroxylation, with a particular emphasis on those methods applicable to human studies. Methods used to measure oxidative damage to DNA, lipids and proteins and methods used to detect RS in cell culture, especially the various fluorescent \"probes\" of RS, are also critically reviewed. The emphasis throughout is on the caution that is needed in applying these methods in view of possible errors and artifacts in interpreting the results.", "title": "" }, { "docid": "7ea9a21bdbbda91c4cfa3e75e4fbed6f", "text": "We present algorithms for fast quantile and frequency estimation in large data streams using graphics processors (GPUs). We exploit the high computation power and memory bandwidth of graphics processors and present a new sorting algorithm that performs rasterization operations on the GPUs. We use sorting as the main computational component for histogram approximation and construction of ε-approximate quantile and frequency summaries. Our algorithms for numerical statistics computation on data streams are deterministic, applicable to fixed or variable-sized sliding windows and use a limited memory footprint. 
We use GPU as a co-processor and minimize the data transmission between the CPU and GPU by taking into account the low bus bandwidth. We implemented our algorithms on a PC with a NVIDIA GeForce FX 6800 Ultra GPU and a 3.4 GHz Pentium IV CPU and applied them to large data streams consisting of more than 100 million values. We also compared the performance of our GPU-based algorithms with optimized implementations of prior CPU-based algorithms. Overall, our results demonstrate that the graphics processors available on a commodity computer system are efficient stream-processor and useful co-processors for mining data streams.", "title": "" }, { "docid": "d13ecf582ac820cdb8ea6353c44c535f", "text": "We have previously shown that, while the intrinsic quality of the oocyte is the main factor affecting blastocyst yield during bovine embryo development in vitro, the main factor affecting the quality of the blastocyst is the postfertilization culture conditions. Therefore, any improvement in the quality of blastocysts produced in vitro is likely to derive from the modification of the postfertilization culture conditions. The objective of this study was to examine the effect of the presence or absence of serum and the concentration of BSA during the period of embryo culture in vitro on 1) cleavage rate, 2) the kinetics of embryo development, 3) blastocyst yield, and 4) blastocyst quality, as assessed by cryotolerance and gene expression patterns. The quantification of all gene transcripts was carried out by real-time quantitative reverse transcription-polymerase chain reaction. Bovine blastocysts from four sources were used: 1) in vitro culture in synthetic oviduct fluid (SOF) supplemented with 3 mg/ml BSA and 10% fetal calf serum (FCS), 2) in vitro culture in SOF + 3 mg/ml BSA in the absence of serum, 3) in vitro culture in SOF + 16 mg/ml BSA in the absence of serum, and 4) in vivo blastocysts. There was no difference in overall blastocyst yield at Day 9 between the groups. However, significantly more blastocysts were present by Day 6 in the presence of 10% serum (20.0%) compared with 3 mg/ml BSA (4.6%, P < 0.001) or 16 mg/ml BSA (11.6%, P < 0.01). By Day 7, however, this difference had disappeared. Following vitrification, there was no difference in survival between blastocysts produced in the presence of 16 mg/ml BSA or those produced in the presence of 10% FCS; the survival of both groups was significantly lower than the in vivo controls at all time points and in terms of hatching rate. In contrast, survival of blastocysts produced in SOF + 3 mg/ml BSA in the absence of serum was intermediate, with no difference remaining at 72 h when compared with in vivo embryos. Differences in relative mRNA abundance among the two groups of blastocysts analyzed were found for genes related to apoptosis (Bax), oxidative stress (MnSOD, CuZnSOD, and SOX), communication through gap junctions (Cx31 and Cx43), maternal recognition of pregnancy (IFN-tau), and differentiation and implantation (LIF and LR-beta). The presence of serum during the culture period resulted in a significant increase in the level of expression of MnSOD, SOX, Bax, LIF, and LR-beta. The level of expression of Cx31 and Cu/ZnSOD also tended to be increased, although the difference was not significant. In contrast, the level of expression of Cx43 and IFN-tau was decreased in the presence of serum. 
In conclusion, using a combination of measures of developmental competence (cleavage and blastocyst rates) and qualitative measures such as cryotolerance and relative mRNA abundance to give a more complete picture of the consequences of modifying medium composition on the embryo, we have shown that conditions of postfertilization culture, in particular, the presence of serum in the medium, can affect the speed of embryo development and the quality of the resulting blastocysts. The reduced cryotolerance of blastocysts generated in the presence of serum is accompanied by deviations in the relative abundance of developmentally important gene transcripts. Omission of serum during the postfertilization culture period can significantly improve the cryotolerance of the blastocysts to a level intermediate between serum-generated blastocysts and those derived in vivo. The challenge now is to try and bridge this gap.", "title": "" }, { "docid": "78437d8aafd3bf09522993447b0a4d50", "text": "Over the past 30 years, policy makers and professionals who provide services to older adults with chronic conditions and impairments have placed greater emphasis on conceptualizing aging in place as an attainable and worthwhile goal. Little is known, however, of the changes in how this concept has evolved in aging research. To track trends in aging in place, we examined scholarly articles published from 1980 to 2010 that included the concept in eleven academic gerontology journals. We report an increase in the absolute number and proportion of aging-in-place manuscripts published during this period, with marked growth in the 2000s. Topics related to the environment and services were the most commonly examined during 2000-2010 (35% and 31%, resp.), with a substantial increase in manuscripts pertaining to technology and health/functioning. This underscores the increase in diversity of topics that surround the concept of aging-in-place literature in gerontological research.", "title": "" }, { "docid": "ec3f0abd53fa730574a2f23958edf95d", "text": "Does distraction or rumination work better to diffuse anger? Catharsis theory predicts that rumination works best, but empirical evidence is lacking. In this study, angered participants hit a punching bag and thought about the person who had angered them (rumination group) or thought about becoming physically fit (distraction group). After hitting the punching bag, they reported how angry they felt. Next, they were given the chance to administer loud blasts of noise to the person who had angered them. There also was a no punching bag control group. People in the rumination group felt angrier than did people in the distraction or control groups. People in the rumination group were also most aggressive, followed respectively by people in the distraction and control groups. Rumination increased rather than decreased anger and aggression. Doing nothing at all was more effective than venting anger. These results directly contradict catharsis theory.", "title": "" }, { "docid": "890758b7ed5c5c879fba957bf3f13527", "text": "Existing approaches to identify the tie strength between users involve typically only one type of network. To date, no studies exist that investigate the intensity of social relations and in particular partnership between users across social networks. 
To fill this gap in the literature, we studied over 50 social proximity features to detect the tie strength of users defined as partnership in two different types of networks: location-based and online social networks. We compared user pairs in terms of partners and non-partners and found significant differences between those users. Following these observations, we evaluated the social proximity of users via supervised and unsupervised learning approaches and establish that location-based social networks have a great potential for the identification of a partner relationship. In particular, we established that location-based social networks and correspondingly induced features based on events attended by users could identify partnership with 0.922 AUC, while online social network data had a classification power of 0.892 AUC. When utilizing data from both types of networks, a partnership could be identified to a great extent with 0.946 AUC. This article is relevant for engineers, researchers and teachers who are interested in social network analysis and mining.", "title": "" }, { "docid": "4c1b42e12fd4f19870b5fc9e2f9a5f07", "text": "Similar to face-to-face communication in daily life, more and more evidence suggests that human emotions also spread in online social media through virtual interactions. However, the mechanism underlying the emotion contagion, like whether different feelings spread unlikely or how the spread is coupled with the social network, is rarely investigated. Indeed, due to the costly expense and spatio-temporal limitations, it is challenging for conventional questionnaires or controlled experiments. While given the instinct of collecting natural affective responses of massive connected individuals, online social media offer an ideal proxy to tackle this issue from the perspective of computational social science. In this paper, based on the analysis of millions of tweets in Weibo, a Twitter-like service in China, we surprisingly find that anger is more contagious than joy, indicating that it can sparkle more angry follow-up tweets; and anger prefers weaker ties than joy for the dissemination in social network, indicating that it can penetrate different communities and break local traps by more sharings between strangers. Through a simple diffusion model, it is unraveled that easier contagion and weaker ties function cooperatively in speeding up anger’s spread, which is further testified by the diffusion of realistic bursty events with different dominant emotions. To our best knowledge, for the first time we quantificationally provide the long-term evidence to disclose the difference between joy and anger in dissemination mechanism and our findings would shed lights on personal anger management in human communication and collective outrage control in cyber space.", "title": "" }, { "docid": "8618b407f851f0806920f6e28fdefe3f", "text": "The explosive growth of Internet applications and content, during the last decade, has revealed an increasing need for information filtering and recommendation. Most research in the area of recommendation systems has focused on designing and implementing efficient algorithms that provide accurate recommendations. However, the selection of appropriate recommendation content and the presentation of information are equally important in creating successful recommender applications. This paper addresses issues related to the presentation of recommendations in the movies domain. 
The current work reviews previous research approaches and popular recommender systems, and focuses on user persuasion and satisfaction. In our experiments, we compare different presentation methods in terms of recommendations’ organization in a list (i.e. top N-items list and structured overview) and recommendation modality (i.e. simple text, combination of text and image, and combination of text and video). The most efficient presentation methods, regarding user persuasion and satisfaction, proved to be the “structured overview” and the “text and video” interfaces, while a strong positive correlation was also found between user satisfaction and persuasion in all experimental conditions.", "title": "" }, { "docid": "43100f1c6563b4af125c1c6040daa437", "text": "Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, to achieve fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of knowledge graph and employ a Gated Graph Neural Network to propagate node message through the graph for generating the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into the discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating distinguishing the subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration that the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. Extensive experiments on the widely used CaltechUCSD bird dataset demonstrate the superiority of ∗Corresponding author is Liang Lin (Email: linliang@ieee.org). This work was supported by the National Natural Science Foundation of China under Grant 61622214, the Science and Technology Planning Project of Guangdong Province under Grant 2017B010116001, and Guangdong Natural Science Foundation Project for Research Teams under Grant 2017A030312006. head-pattern: masked Bohemian", "title": "" }, { "docid": "a95f77c59a06b2d101584babc74896fb", "text": "Magnetic wall and ceiling climbing robots have been proposed in many industrial applications where robots must move over ferromagnetic material surfaces. The magnetic circuit design with magnetic attractive force calculation of permanent magnetic wheel plays an important role which significantly affects the system reliability, payload ability and power consumption of the robot. In this paper, a flexible wall and ceiling climbing robot with six permanent magnetic wheels is proposed to climb along the vertical wall and overhead ceiling of steel cargo containers as part of an illegal contraband inspection system. 
The permanent magnetic wheels are designed to apply to the wall and ceiling climbing robot, whilst finite element method is employed to estimate the permanent magnetic wheels with various wheel rims. The distributions of magnetic flux lines and magnetic attractive forces are compared on both plane and corner scenarios so that the robot can adaptively travel through the convex and concave surfaces of the cargo container. Optimisation of wheel rims is presented to achieve the equivalent magnetic adhesive forces along with the estimation of magnetic ring dimensions in the axial and radial directions. Finally, the practical issues correlated with the applications of the techniques are discussed and the conclusions are drawn with further improvement and prototyping.", "title": "" }, { "docid": "c7f38e2284ad6f1258fdfda3417a6e14", "text": "Millimeter wave (mmWave) systems must overcome heavy signal attenuation to support high-throughput wireless communication links. The small wavelength in mmWave systems enables beamforming using large antenna arrays to combat path loss with directional transmission. Beamforming with multiple data streams, known as precoding, can be used to achieve even higher performance. Both beamforming and precoding are done at baseband in traditional microwave systems. In mmWave systems, however, the high cost of mixed-signal and radio frequency chains (RF) makes operating in the passband and analog domains attractive. This hardware limitation places additional constraints on precoder design. In this paper, we consider single user beamforming and precoding in mmWave systems with large arrays. We exploit the structure of mmWave channels to formulate the precoder design problem as a sparsity constrained least squares problem. Using the principle of basis pursuit, we develop a precoding algorithm that approximates the optimal unconstrained precoder using a low dimensional basis representation that can be efficiently implemented in RF hardware. We present numerical results on the performance of the proposed algorithm and show that it allows mmWave systems to approach waterfilling capacity.", "title": "" }, { "docid": "77579a39108209535de1af9494f205cc", "text": "Sentiment analysis aims to extract the sentiment polarity of given segment of text. Polarity resources that indicate the sentiment polarity of words are commonly used in different approaches. While English is the richest language in regard to having such resources, the majority of other languages, including Turkish, lack polarity resources. In this work we present the first comprehensive Turkish polarity resource, SentiTurkNet, where three polarity scores are assigned to each synset in the Turkish WordNet, indicating its positivity, negativity, and objectivity (neutrality) levels. Our method is general and applicable to other languages. Evaluation results for Turkish show that the polarity scores obtained through this method are more accurate compared to those obtained through direct translation (mapping) from SentiWordNet.", "title": "" }, { "docid": "6bea1d7242fc23ec8f462b1c8478f2c1", "text": "Determining a consensus opinion on a product sold online is no longer easy, because assessments have become more and more numerous on the Internet. To address this problem, researchers have used various approaches, such as looking for feelings expressed in the documents and exploring the appearance and syntax of reviews. 
Aspect-based evaluation is the most important aspect of opinion mining, and researchers are becoming more interested in product aspect extraction; however, more complex algorithms are needed to address this issue precisely with large data sets. This paper introduces a method to extract and summarize product aspects and corresponding opinions from a large number of product reviews in a specific domain. We maximize the accuracy and usefulness of the review summaries by leveraging knowledge about product aspect extraction and providing both an appropriate level of detail and rich representation capabilities. The results show that the proposed system achieves F1-scores of 0.714 for camera reviews and 0.774 for laptop reviews.", "title": "" }, { "docid": "4c5cc3a99a02b6400b6c425acaf6284e", "text": "Matrix factorizations and their extensions to tensor factorizations and decompositions have become prominent techniques for linear and multilinear blind source separation (BSS), especially multiway Independent Component Analysis (ICA), Nonnegative Matrix and Tensor Factorization (NMF/NTF), Smooth Component Analysis (SmoCA) and Sparse Component Analysis (SCA). Moreover, tensor decompositions have many other potential applications beyond multilinear BSS, especially feature extraction, classification, dimensionality reduction and multiway clustering. In this paper, we briefly overview new and emerging models and approaches for tensor decompositions in applications to group and linked multiway BSS/ICA, feature extraction, classification and Multiway Partial Least Squares (MPLS) regression problems. Key words: Multilinear BSS, linked multiway BSS/ICA, tensor factorizations and decompositions, constrained Tucker and CP models, Penalized Tensor Decompositions (PTD), feature extraction, classification, multiway PLS and CCA.", "title": "" }, { "docid": "a89fe7e741003b873ecab38bf7c7c3fb", "text": "Commercially available glucose measurement device for diabetes monitoring require extracting of blood and this means there will be a physical contact with human body. Demand on non-invasive measurement has invites research and development of new detection methods to measure blood glucose level. In this work, a very sensitive optical polarimetry measurement technique using ratio-metric photon counting detection has been introduced and tested for a range of known glucose concentrations that mimic the level of glucose in human blood. The setup utilizes 785nm diode laser that emits weak coherent optical signal onto glucose concentration samples in aqueous. The result shows a linear proportional of different glucose concentration and successfully detected 10260 mg/dl to 260 mg/dl glucose samples. This indicates a potential improvement method for non-invasive glucose measurement by a sensitive polarimetry based optical sensor in single photon level for biomedical applications.", "title": "" }, { "docid": "c26e9f486621e37d66bf0925d8ff2a3e", "text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9.
Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.", "title": "" }, { "docid": "e35194cb3fdd3edee6eac35c45b2da83", "text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.", "title": "" } ]
scidocsrr
2157feb408d7ccaeed79ac16c8887a15
Real-Time Swing-up of Double Inverted Pendulum by Nonlinear Model Predictive Control
[ { "docid": "9637537d6aeb6545d59eefaaaf2bdafa", "text": "The swing-up maneuver of the double pendulum on a cart serves to demonstrate a new approach of inversion-based feedforward control design introduced recently. The concept treats the transition task as a nonlinear two-point boundary value problem of the internal dynamics by providing free parameters in the desired output trajectory for the cart position. A feedback control is designed with linear methods to stabilize the swing-up maneuver. The emphasis of the paper is on the experimental realization of the double pendulum swing-up, which reveals the accuracy of the feedforward/feedback control scheme.", "title": "" }, { "docid": "189d370fc5c12157b1fffa6196195798", "text": "In this report a number of algorithms for optimal control of a double inverted pendulum on a cart (DIPC) are investigated and compared. Modeling is based on Euler-Lagrange equations derived by specifying a Lagrangian, difference between kinetic and potential energy of the DIPC system. This results in a system of nonlinear differential equations consisting of three 2-nd order equations. This system of equations is then transformed into a usual form of six 1-st order ordinary differential equations (ODE) for control design purposes. Control of a DIPC poses a certain challenge, since unlike a robot, the system is underactuated: one controlling force per three degrees of freedom (DOF). In this report, problem of optimal control minimizing a quadratic cost functional is addressed. Several approaches are tested: linear quadratic regulator (LQR), state-dependent Riccati equation (SDRE), optimal neural network (NN) control, and combinations of the NN with the LQR and the SDRE. Simulations reveal superior performance of the SDRE over the LQR and improvements provided by the NN, which compensates for model inadequacies in the LQR. Limited capabilities of the NN to approximate functions over the wide range of arguments prevent it from significantly improving the SDRE performance, providing only marginal benefits at larger pendulum deflections.", "title": "" } ]
[ { "docid": "0dc3c4e628053e8f7c32c0074a2d1a59", "text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.", "title": "" }, { "docid": "59626458b4f250a59bb5c47586afe023", "text": "Previous work for relation extraction from free text is mainly based on intra-sentence information. As relations might be mentioned across sentences, inter-sentence information can be leveraged to improve distantly supervised relation extraction. To effectively exploit inter-sentence information, we propose a ranking-based approach, which first learns a scoring function based on a listwise learning-to-rank model and then uses it for multi-label relation extraction. Experimental results verify the effectiveness of our method for aggregating information across sentences. Additionally, to further improve the ranking of high-quality extractions, we propose an effective method to rank relations from different entity pairs. This method can be easily integrated into our overall relation extraction framework, and boosts the precision significantly.", "title": "" }, { "docid": "da694b74b3eaae46d15f589e1abef4b8", "text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model), in simulating runoff and soil erosion in a 48 km² watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R = 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R = 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R = 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0–1 t ha⁻¹ y⁻¹), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha⁻¹ y⁻¹.
Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify ‘‘hot spots’’ on the landscape. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d197875ea8637bf36d2746a2a1861c23", "text": "There are billions of Internet of things (IoT) devices connecting to the Internet and the number is increasing. As a still ongoing technology, IoT can be used in different fields, such as agriculture, healthcare, manufacturing, energy, retailing and logistics. IoT has been changing our world and the way we live and think. However, IoT has no uniform architecture and there are different kinds of attacks on the different layers of IoT, such as unauthorized access to tags, tag cloning, sybil attack, sinkhole attack, denial of service attack, malicious code injection, and man in middle attack. IoT devices are more vulnerable to attacks because it is simple and some security measures can not be implemented. We analyze the privacy and security challenges in the IoT and survey on the corresponding solutions to enhance the security of IoT architecture and protocol. We should focus more on the security and privacy on IoT and help to promote the development of IoT.", "title": "" }, { "docid": "e3537eb7ab5da891aea70306c548f8c6", "text": "In recent era of ubiquitous computing the internet of things and sensor networks are researched widely. The deployment of the wireless sensor networks in the harsh environments ascends issues associated with delay clustering approaches, packet drop, delay, energy, link quality, mobility and coverage. Various research studies are proposing routing protocols clustering algorithm with research goal for reduction in terms of energy and delay. This paper focuses on delay and energy by introducing threshold based scheme. Furthermore energy and delay efficient routing protocol is proposed for cluster head selection in the heterogeneous wireless sensor networks. We have introduced delay and energy based adaptive threshold scheme in this paper to solve this problem. Furthermore this study presents new routing algorithm which contains energy and delay and velocity threshold based cluster-head election scheme. The cluster head is selected according to distance, velocity and energy where probability is set for the residual energy. The nodes are classified into normal, advanced and herculean levels. This paper presents new routing protocol named as energy and delay efficient routing protocol (EDERP). The MATLAB is used for simulation and comparison of the routing protocol with other protocols. The simulations results indicate that this protocol is effective regarding delay and energy.", "title": "" }, { "docid": "65031bb814a4812e499a8906d3a67fc4", "text": "The training process in industries is assisted with computer solutions to reduce costs. Normally, computer systems created to simulate assembly or machine manipulation are implemented with traditional Human-Computer interfaces (keyboard, mouse, etc). But, this usually leads to systems that are far from the real procedures, and thus not efficient in term of training. Two techniques could improve this procedure: mixed-reality and haptic feedback. We propose in this paper to investigate the integration of both of them inside a single framework. We present the hardware used to design our training system. A feasibility study allows one to establish testing protocol. 
The results of these tests convince us that such system should not try to simulate realistically the interaction between real and virtual objects as if it was only real objects.", "title": "" }, { "docid": "3202cd03c9af446bd6bc2ca0b384c2ac", "text": "ABSTRACT\nSurgical correction for nonsyndromic craniosynostosis has continued to evolve over the last century. The criterion standard has remained open correction of the cranial deformities, and many techniques have been described that yield satisfactory results. However, technology has allowed for minimally invasive techniques to be developed with the aid of endoscopic visualization. With proper patient selection and the aid of postoperative helmet therapy, there is increasing evidence that supports these techniques' safety and efficacy. In this article, our purpose was to describe our algorithm for treating nonsyndromic craniosynostosis at Rady Children's Hospital.", "title": "" }, { "docid": "c1a8e30586aad77395e429556545675c", "text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.", "title": "" }, { "docid": "c3b2372aea5faf4c2816b295d290095f", "text": "This paper presents a physically based model for the metal–oxide–semiconductor (MOS) transistor suitable for analysis and design of analog integrated circuits. Static and dynamic characteristics of the MOS field-effect transistor are accurately described by single-piece functions of two saturation currents in all regions of operation. Simple expressions for the transconductance-to-current ratio, the drain-to-source saturation voltage, and the cutoff frequency in terms of the inversion level are given. The design of a common-source amplifier illustrates the application of the proposed model.", "title": "" }, { "docid": "07cd406cead1a086f61f363269de1aac", "text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. 
While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.", "title": "" }, { "docid": "47505c95f8a3cf136b3b5a76847990fc", "text": "We present a hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces. Our formulation uses a GPU-based interior point filter to cull away many of the points that do not lie on the boundary. The convex hull of remaining points is computed on a CPU. The GPU-based filter proceeds in an incremental manner and computes a pseudo-hull that is contained inside the convex hull of the original points. The pseudo-hull computation involves only localized operations and maps well to GPU architectures. Furthermore, the underlying approach extends to high dimensional point sets and deforming points. In practice, our culling filter can reduce the number of candidate points by two orders of magnitude. We have implemented the hybrid algorithm on commodity GPUs, and evaluated its performance on several large point sets. In practice, the GPU-based filtering algorithm can cull up to 85M interior points per second on an NVIDIA GeForce GTX 580 and the hybrid algorithm improves the overall performance of convex hull computation by 10 − 27 times (for static point sets) and 22 − 46 times (for deforming point sets).", "title": "" }, { "docid": "e56b2242eb08ec8b02f8a0353c19761c", "text": "Five experiments examined the effects of environmental context on recall and recognition. In Experiment 1, variability of input environments produced higher free recall performance than unchanged input environments. Experiment 2 showed improvements in cued recall when storage and test contexts matched, using a paradigm that unconfounded the variables of context mismatching and context change. In Experiment 3, recall of categories and recall of words within a category were better for same-context than different-context recall. In Experiment 4, subjects given identical input conditions showed strong effects of environmental context when given a free recall test, yet showed no main effects of context on a recognition test. The absence of an environmental context effect on recognition was replicated in Experiment 5, using a cued recognition task to control the semantic encodings of test words. In the discussion of these experiments, environmental context is compared with other types of context, and an attempt is made to identify the memory processes influenced by environmental context.", "title": "" }, { "docid": "4519e039416fe4548e08a15b30b8a14f", "text": "The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. 
By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree, which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants: Guttman's linear and quadratic R-trees and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view, the R*-tree is very attractive for the following two reasons: (1) it efficiently supports point and spatial data at the same time, and (2) its implementation cost is only slightly higher than that of other R-trees.", "title": "" }, { "docid": "25be81188b38af7ec939b881706fdc2f", "text": "OBJECTIVES\nTo outline the prevalence and disparities of inattention and hyperactivity among school-aged urban minority youth, causal pathways through which inattention and hyperactivity adversely affect academic achievement, and proven or promising approaches for schools to address these problems.\n\n\nMETHODS\nLiterature review.\n\n\nRESULTS\nApproximately 4.6 million (8.4%) of American youth aged 6-17 have received a diagnosis of attention deficit/hyperactivity disorder (ADHD), and almost two thirds of these youth are reportedly under treatment with prescription medications. Urban minority youth are not only more likely to be affected but also less likely to receive accurate diagnosis and treatment. Causal pathways through which ADHD may affect academic achievement include sensory perceptions, cognition, school connectedness, absenteeism, and dropping out. In one study, youth with diagnosed ADHD were 2.7 times as likely to drop out (10.0% vs. 22.9%). A similar odds ratio for not graduating from high school was found in another prospective study, with an 8-year follow-up period (odds ratio = 2.4). There are many children who are below the clinical diagnostic threshold for ADHD but who exhibit signs and symptoms that interfere with learning. Evidence-based programs emphasizing functional academic and social outcomes are available.\n\n\nCONCLUSIONS\nInattention and hyperactivity are highly and disproportionately prevalent among school-aged urban minority youth, have a negative impact on academic achievement through their effects on sensory perceptions, cognition, school connectedness, absenteeism, and dropping out, and effective practices are available for schools to address these problems. This prevalent and complex syndrome has very powerful effects on academic achievement and educational attainment, and should be a high priority in efforts to help close the achievement gap.", "title": "" }, { "docid": "5bb040a8b1efdf69edda2cb6461c28d3", "text": "Health surveillance systems based on online user-generated content often rely on the identification of textual markers that are related to a target disease. Given the high volume of available data, these systems benefit from an automatic feature selection process. This is accomplished either by applying statistical learning techniques, which do not consider the semantic relationship between the selected features and the inference task, or by developing labour-intensive text classifiers.
In this paper, we use neural word embeddings, trained on social media content from Twitter, to determine, in an unsupervised manner, how strongly textual features are semantically linked to an underlying health concept. We then refine conventional feature selection methods by a priori operating on textual variables that are sufficiently close to a target concept. Our experiments focus on the supervised learning problem of estimating influenza-like illness rates from Google search queries. A “flu infection” concept is formulated and used to reduce spurious —and potentially confounding— features that were selected by previously applied approaches. In this way, we also address forms of scepticism regarding the appropriateness of the feature space, alleviating potential cases of overfitting. Ultimately, the proposed hybrid feature selection method creates a more reliable model that, according to our empirical analysis, improves the inference performance (Mean Absolute Error) of linear and nonlinear regressors by 12% and 28.7%, respectively.", "title": "" }, { "docid": "18f8d1fef840c1a4441b5949d6b97d9e", "text": "Geospatial web service of agricultural information has a wide variety of consumers. An operational agricultural service will receive considerable requests and process a huge amount of datasets each day. To ensure the service quality, many strategies have to be taken during developing and deploying agricultural information services. This paper presents a set of methods to build robust geospatial web service for agricultural information extraction and sharing. The service is designed to serve the public and handle heavy-load requests for a long-lasting term with least maintenance. We have developed a web service to validate our approach. The service is used to serve more than 10 TB data product of agricultural drought. The performance is tested. The result shows that the service has an excellent response time and the use of system resources is stable. We have plugged the service into an operational system for global drought monitoring. The statistics and feedbacks show our approach is feasible and efficient in operational web systems.", "title": "" }, { "docid": "668252a8b0bb419198c03aa96d113655", "text": "This study aims at revealing how commercial hotness of urban commercial districts (UCDs) is shaped by social contexts of surrounding areas so as to render predictive business planning. We define social contexts for a given region as the number of visitors, the region functions, the population and buying power of local residents, the average price of services, and the rating scores of customers, which are computed from heterogeneous data including taxi GPS trajectories, point of interests, geographical data, and user-generated comments. Then, we apply sparse representation to discover the impactor factor of each variable of the social contexts in terms of predicting commercial activeness of UCDs under a linear predictive model. The experiments show that a linear correlation between social contexts and commercial activeness exists for Beijing and Shanghai based on an average prediction accuracy of 77.69% but the impact factors of social contexts vary from city to city, where the key factors are rich life services, diversity of restaurants, good shopping experience, large number of local residents with relatively high purchasing power, and convenient transportation. 
This study reveals the underlying mechanism of urban business ecosystems, and promise social context-aware business planning over heterogeneous urban big data.", "title": "" }, { "docid": "79d044e9d88a510d9ae547bb1048edc0", "text": "TimeStream is a distributed system designed specifically for low-latency continuous processing of big streaming data on a large cluster of commodity machines. The unique characteristics of this emerging application domain have led to a significantly different design from the popular MapReduce-style batch data processing. In particular, we advocate a powerful new abstraction called resilient substitution that caters to the specific needs in this new computation model to handle failure recovery and dynamic reconfiguration in response to load changes. Several real-world applications running on our prototype have been shown to scale robustly with low latency while at the same time maintaining the simple and concise declarative programming model. TimeStream handles an on-line advertising aggregation pipeline at a rate of 700,000 URLs per second with a 2-second delay, while performing sentiment analysis of Twitter data at a peak rate close to 10,000 tweets per second, with approximately 2-second delay.", "title": "" }, { "docid": "b9879e6bdcc08250bde4a59c357062a8", "text": "Constructing datasets to analyse the progression of conflicts has been a longstanding objective of peace and conflict studies research. In essence, the problem is to reliably extract relevant text snippets and code (annotate) them using an ontology that is meaningful to social scientists. Such an ontology usually characterizes either types of violent events (killing, bombing, etc.), and/or the underlying drivers of conflict, themselves hierarchically structured, for example security, governance and economics, subdivided into conflict-specific indicators. Numerous coding approaches have been proposed in the social science literature, ranging from fully automated “machine” coding to human coding. Machine coding is highly error prone, especially for labelling complex drivers, and suffers from extraction of duplicated events, but human coding is expensive, and suffers from inconsistency between annotators; thus hybrid approaches are required. In this paper, we analyse experimentally how human input can most effectively be used in a hybrid system to complement machine coding. Using two newly created real-world datasets, we show that machine learning methods improve on rule-based automated coding for filtering large volumes of input, while human verification of relevant/irrelevant text leads to improved performance of machine learning for predicting multiple labels in the ontology.", "title": "" }, { "docid": "72f9d32f241992d02990a7a2e9aad9bb", "text": "— Improved methods are proposed for disk drive failure prediction. The SMART (Self Monitoring and Reporting Technology) failure prediction system is currently implemented in disk drives. Its purpose is to predict the near-term failure of an individual hard disk drive, and issue a backup warning to prevent data loss. Two experimentally tests of SMART showed only moderate accuracy at low false alarm rates. (A rate of 0.2% of total drives per year implies that 20% of drive returns would be good drives, relative to ≈1% annual failure rate of drives). This requirement for very low false alarm rates is well known in medical diagnostic tests for rare diseases, and methodology used there suggests ways to improve SMART. 
ACRONYMS ATA Standard drive interface, desktop computers FA Failure analysis of apparently failed drive FAR False alarm rate, 100 times probability value MVRS Multivariate rank sum statistical test NPF Drive failed, but “No problem found” in FA RS Rank sum statistical hypothesis test R Sum of ranks of warning set data Rc Predict fail if R> Rc critical value SCSI Standard drive interface, high-end computers SMART “Self monitoring and reporting technology” WA Failure warning accuracy (probability) Two improved SMART algorithms are proposed here. They use the SMART internal drive attribute measurements in present drives. The present warning algorithm based on maximum error thresholds is replaced by distribution-free statistical hypothesis tests. These improved algorithms are computationally simple enough to be implemented in drive microprocessor firmware code. They require only integer sort operations to put several hundred attribute values in rank order. Some tens of these ranks are added up and the SMART warning is issued if the sum exceeds a prestored limit. NOTATION: n Number of reference (old) measurements m Number of warning (new) measurements N Total ranked measurements (n+m) p Number of different attributes measured Q(X) Normal probability Pr(x>X) RS Rank sum statistical hypothesis test R Sum of ranks of warning set data Rc Predict fail if R> Rc critical value", "title": "" } ]
scidocsrr
7d49e51e9a295e5793bf0478b112fb7f
Stylometry with R: A Package for Computational Text Analysis
[ { "docid": "02ac566cb1b11c3a3fe0edfde7181c32", "text": "During the last decade text mining has become a widely used discipline utilizing statistical and machine learning methods. We present the tm package which provides a framework for text mining applications within R. We give a survey on text mining facilities in R and explain how typical application tasks can be carried out using our framework. We present techniques for count-based analysis methods, text clustering, text classification and string kernels.", "title": "" } ]
[ { "docid": "4071b0a0f3887a5ad210509e6ad5498a", "text": "Nowadays, the IoT is largely dependent on sensors. The IoT devices are embedded with sensors and have the ability to communicate. A variety of sensors play a key role in networked devices in IoT. In order to facilitate the management of such sensors, this paper investigates how to use SNMP protocol, which is widely used in network device management, to implement sensors information management of IoT system. The principles and implement details to setup the MIB file, agent and manager application are discussed. A prototype system is setup to validate our methods. The test results show that because of its easy use and strong expansibility, SNMP is suitable and a bright way for sensors information management of IoT system.", "title": "" }, { "docid": "28b2da27bf62b7989861390a82940d88", "text": "End users are said to be “the weakest link” in information systems (IS) security management in the workplace. they often knowingly engage in certain insecure uses of IS and violate security policies without malicious intentions. Few studies, however, have examined end user motivation to engage in such behavior. to fill this research gap, in the present study we propose and test empirically a nonmalicious security violation (NMSV) model with data from a survey of end users at work. the results suggest that utilitarian outcomes (relative advantage for job performance, perceived security risk), normative outcomes (workgroup norms), and self-identity outcomes (perceived identity match) are key determinants of end user intentions to engage in NMSVs. In contrast, the influences of attitudes toward security policy and perceived sanctions are not significant. this study makes several significant contributions to research on security-related behavior by (1) highlighting the importance of job performance goals and security risk perceptions on shaping user attitudes, (2) demonstrating the effect of workgroup norms on both user attitudes and behavioral intentions, (3) introducing and testing the effect of perceived identity match on user attitudes and behavioral intentions, and (4) identifying nonlinear relationships between constructs. this study also informs security management practices on the importance of linking security and business objectives, obtaining user buy-in of security measures, and cultivating a culture of secure behavior at local workgroup levels in organizations. KeY words and PHrases: information systems security, nonlinear construct relationships, nonmalicious security violation, perceived identity match, perceived security risk, relative advantage for job performance, workgroup norms. information sYstems (is) securitY Has become a major cHallenGe for organizations thanks to the increasing corporate use of the Internet and, more recently, wireless networks. In the 2010 computer Security Institute (cSI) survey of computer security practitioners in u.S. organizations, more than 41 percent of the respondents reported security incidents [68]. In the united Kingdom, a similar survey found that 45 percent of the participating companies had security incidents in 2008 [37]. While the causes for these security incidents may be difficult to fully identify, it is generally understood that insiders from within organizations pose a major threat to IS security [36, 55]. For example, peer-to-peer file-sharing software installed by employees may cause inadvertent disclosure of sensitive business information over the Internet [41]. 
Employees writing down passwords on a sticky note or choosing easy-to-guess passwords may risk having their system access privilege be abused by others [98]. The 2010 CSI survey found that nonmalicious insiders are a big issue [68]. According to the survey, more than 14 percent of the respondents reported that nearly all their losses were due to nonmalicious, careless behaviors of insiders. Indeed, end users are often viewed as “the weakest link” in the IS security chain [73], and fundamentally IS security has a “behavioral root” [94]. A frequently recommended organizational measure for dealing with internal threats posed by end user behavior is security policy [6]. For example, a security policy may specify what end users should (or should not) do with organizational IS assets, and it may also spell out the consequences of policy violations. Having a policy in place, however, does not necessarily guarantee security because end users may not always act as prescribed [7]. A practitioner survey found that even if end users were aware of potential security problems related to their actions, many of them did not follow security best practices and continued to engage in behaviors that could open their organizations' IS to serious security risks [62]. For example, the survey found that many employees allowed others to use their computing devices at work despite their awareness of possible security implications. It was also reported that many end users do not follow policies and some of them knowingly violate policies without worry of repercussions [22]. This phenomenon raises an important question: What factors motivate end users to engage in such behaviors? The role of motivation has not been considered seriously in the IS security literature [75] and our understanding of the factors that motivate those undesirable user behaviors is still very limited. To fill this gap, the current study aims to investigate factors that influence end user attitudes and behavior toward organizational IS security. The rest of the paper is organized as follows. In the next section, we review the literature on end user security-related behaviors. We then propose a theoretical model of nonmalicious security violation and develop related hypotheses. This is followed by discussions of our research methods and data analysis. In the final section, we discuss our findings, implications for research and practice, limitations, and further research directions.", "title": "" }, { "docid": "4ccea211a4b3b01361a4205990491764", "text": "Published by the Press Syndicate of the University of Cambridge. Vygotsky's Educational Theory in Cultural Context / edited by Alex Kozulin ... [et al.]. p. cm. – (Learning in Doing). Includes bibliographical references and index.", "title": "" }, { "docid": "a71911827603e753e9de542aff0521c4", "text": "Blood pressure (BP) is one of the most important indicators of human health. In this paper, we investigate the relationship between BP and health behavior (e.g. sleep and exercise). Using the data collected from off-the-shelf wearable devices and wireless home BP monitors, we propose a data-driven personalized model to predict daily BP level and provide actionable insight into health behavior and daily BP. In the proposed machine learning model using Random Forest (RF), trend and periodicity features of BP time-series are extracted to improve prediction.
To further enhance the performance of the prediction model, we propose RF with Feature Selection (RFFS), which performs RF-based feature selection to filter out unnecessary features. Our experimental results demonstrate that the proposed approach is robust to different individuals and has smaller prediction error than existing methods. We also validate the effectiveness of personalized recommendation of health behavior generated by RFFS model.", "title": "" }, { "docid": "631dc14ab0df1e5def0998bcf6ad016e", "text": "This study investigates the performance of two open source intrusion detection systems (IDSs) namely Snort and Suricata for accurately detecting the malicious traffic on computer networks. Snort and Suricata were installed on two different but identical computers and the performance was evaluated at 10 Gbps network speed. It was noted that Suricata could process a higher speed of network traffic than Snort with lower packet drop rate but it consumed higher computational resources. Snort had higher detection accuracy and was thus selected for further experiments. It was observed that Snort triggered a high rate of false positive alarms. To solve this problem a Snort adaptive plug-in was developed. To select the best performing algorithm for the Snort adaptive plug-in, an empirical study was carried out with different learning algorithms and Support Vector Machine (SVM) was selected. A hybrid version of SVM and Fuzzy logic produced a better detection accuracy. But the best result was achieved using an optimized SVM with the firefly algorithm with FPR (false positive rate) as 8.6% and FNR (false negative rate) as 2.2%, which is a good result. The novelty of this work is the performance comparison of two IDSs at 10 Gbps and the application of hybrid and optimized machine learning algorithms to Snort.", "title": "" }, { "docid": "ef48ea0c6b8ec11126924c2bb0b1bf30", "text": "A 13.5-mW 10-Gb/s four-level pulse-amplitude modulation (4-PAM) serial link transmitter is presented. To improve the power efficiency, a voltage-mode 4-PAM driver is proposed. It consists of voltage-scaled pull-up and pull-down networks, instead of conventional current switching networks. Not employing a tail current source, the proposed 4-PAM driver achieves the higher output voltage swing and lower power dissipation than conventional 4-PAM drivers. As a result, the proposed 4-PAM transmitter implemented in a 0.13-μm CMOS process achieved 10-Gb/s data rate with only 13.5-mW power dissipation.", "title": "" }, { "docid": "e39da2504ab3a17725db73fd9e5e45c3", "text": "Comparisons of 2D and 3D cell culture models in literature have indicated differences in cellular morphology and metabolism, commonly attributed the better representation of in vivo conditions of the latter cell culture environment. Thus, interest in the use of 3D collagen gels for in vitro analysis has been growing. Although comparative studies to date have indicated an enhanced resistance of cells on collagen matrices against different toxicants, in the present study it is demonstrated that non-adapted protocols can lead to misinterpretation of results obtained from classical colorimetric dye-based cytotoxic assays. Using the well established Alamar blue assay, the study demonstrates how the transfer from 2D substrates to 3D collagen matrices can affect the uptake of the resazurin itself, affecting the outcome of the assay. 
Using flow cytometry, it is demonstrated that the cell viability is unaffected when cells are grown on collagen matrices, thus the difference seen in the fluorescence is a result of a dilution of the resazurin dye in the collagen matrix, and an increased uptake rate due to the larger cell surface exposed to the surrounding environment, facilitating more effective diffusion through the cellular membrane. The results are supported by a rate equation based simulation, verifying that differing uptake kinetics can result in apparently different cell viability. Finally, this work highlights the feasibility to apply classical dye-based assays on collagen based 3D cell culture models. However, the diffusion and bioavailability of test substances in 3D matrices used in in vitro toxicological assays must be considered and adaption of the protocols is necessary for direct comparison with the traditional 2D models. Moreover, the observations made based on the resazurin dye can be applied to drugs or nanoparticles which freely diffuse through the collagen matrices, thus affecting the effective concentration exposed to the cells.", "title": "" }, { "docid": "8f177b79f0b89510bd84e1f503b5475f", "text": "We propose a distributed cooperative framework among base stations (BS) with load balancing (dubbed as inter-BS for simplicity) for improving energy efficiency of OFDMA-based cellular access networks. Proposed inter-BS cooperation is formulated following the principle of ecological self-organization. Based on the network traffic, BSs mutually cooperate for distributing traffic among themselves and thus, the number of active BSs is dynamically adjusted for energy savings. For reducing the number of inter-BS communications, a three-step measure is taken by using estimated load factor (LF), initializing the algorithm with only the active BSs and differentiating neighboring BSs according to their operating modes for distributing traffic. An exponentially weighted moving average (EWMA)-based technique is proposed for estimating the LF in advance based on the historical data. Various selection schemes for finding the best BSs to distribute traffic are also explored. Furthermore, we present an analytical formulation for modeling the dynamic switching of BSs. A thorough investigation under a wide range of network settings is carried out in the context of an LTE system. Results demonstrate a significant enhancement in network energy efficiency yielding a much higher savings than the compared schemes. Moreover, frequency of inter-BS correspondences can be reduced by over 80%.", "title": "" }, { "docid": "d3834e337ca661d3919674a8acc1fa0c", "text": "Relative (or receiver) operating characteristic (ROC) curves are a graphical representation of the relationship between sensitivity and specificity of a laboratory test over all possible diagnostic cutoff values. Laboratory medicine has been slow to adopt the use of ROC curves for the analysis of diagnostic test performance. In this tutorial, we discuss the advantages and limitations of the ROC curve for clinical decision making in laboratory medicine. 
We demonstrate the construction and statistical uses of ROC analysis, review its published applications in clinical pathology, and comment on its role in the decision analytic framework in laboratory medicine.", "title": "" }, { "docid": "fbddd20271cf134e15b33e7d6201c374", "text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.", "title": "" }, { "docid": "ba8289f0730dae415c4ff3af57a41d4e", "text": "This paper is Part 2 of a four-part series of our research on the development of a general framework for error analysis in measurement-based geographic information systems (MBGIS). In this paper, we discuss the problem of point-in-polygon analysis under randomness, i.e., with random measurement error (ME). It is well known that overlay is one of the most important operations in GIS, and point-in-polygon analysis is a basic class of overlay and query problems. Though it is a classic problem, it has, however, not been addressed appropriately. With ME in the location of the vertices of a polygon, the resulting random polygons may undergo complex changes, so that the point-in-polygon problem may become theoretically and practically ill-defined. That is, there is a possibility that we cannot answer whether a random point is inside a random polygon if the polygon is not simple and cannot form a region. For the point-in-triangle problem, however, such a case need not be considered since any triangle always forms an interior or region. To formulate the general point-in-polygon problem in a suitable way, a conditional probability mechanism is first introduced in order to accurately characterize the nature of the problem and establish the basis for further analysis. For the point-in-triangle problem, four quadratic forms in the joint coordinate vectors of a point and the vertices of the triangle are constructed. The probability model for the point-in-triangle problem is then established by the identification of signs of these quadratic form variables. Our basic idea for solving a general point-in-polygon (concave or convex) problem is to convert it into several point-in-triangle problems under a certain condition. By solving each point-in-triangle problem and summing the solutions up, the probability model for a general point-in-polygon analysis is constructed. The simplicity of the algebra-based approach is that, by using these quadratic forms, we can circumvent the complex geometrical relations between a random point and a random polygon (convex or concave) that one has to deal with in any geometric method when the probability is computed. The theoretical arguments are substantiated by simulation experiments.", "title": "" }, { "docid": "952735cb937248c837e0b0244cd9dbb1", "text": "Recently, the very high throughput desired for 5G wireless networks has driven millimeter-wave (mm-wave) communication into practical applications.
A phased array technique is required to increase the effective antenna aperture at mm-wave frequency. Integrated solutions of beamforming/beam steering are extremely attractive for practical implementations. After a discussion on the basic principles of radio beam steering, we review and explore the recent advanced integration techniques of silicon-based electronic integrated circuits (EICs), photonic integrated circuits (PICs), and antenna-on-chip (AoC). For EIC, the latest advanced designs of on-chip true time delay (TTD) are explored. Even with such advances, the fundamental loss of a silicon-based EIC still exists, which can be solved by advanced PIC solutions with ultra-broad bandwidth and low loss. Advanced PIC designs for mm-wave beam steering are then reviewed with emphasis on an optical TTD. Different from the mature silicon-based EIC, the photonic integration technology for PIC is still under development. In this paper, we review and explore the potential photonic integration platforms and discuss how a monolithic integration based on photonic membranes fits the photonic mm-wave beam steering application, especially for the ease of EIC and PIC integration on a single chip. To combine EIC, for its accurate and mature fabrication techniques, with PIC, for its ultra-broad bandwidth and low loss, a hierarchical mm-wave beam steering chip with large-array delays realized in PIC and sub-array delays realized in EIC can be a future-proof solution. Moreover, the antenna units can be further integrated on such a chip using AoC techniques. Among the mentioned techniques, the integration trends on device and system levels are discussed extensively.", "title": "" }, { "docid": "83c4fafaac2db4e3205dc3291556f058", "text": "Current research on traffic flow prediction mainly concentrates on generating accurate prediction results based on intelligent or combined algorithms but ignores the interpretability of the prediction model. In practice, however, the interpretability of the model is equally important for traffic managers to realize which road segment in the road network will affect the future traffic state of the target segment in a specific time interval and when such an influence is expected to happen. In this paper, an interpretable and adaptable spatiotemporal Bayesian multivariate adaptive-regression splines (ST-BMARS) model is developed to predict short-term freeway traffic flow accurately. The parameters in the model are estimated in the way of Bayesian inference, and the optimal models are obtained using a Markov chain Monte Carlo (MCMC) simulation. In order to investigate the spatial relationship of the freeway traffic flow, all of the road segments on the freeway are taken into account for the traffic prediction of the target road segment. In our experiments, actual traffic data collected from a series of observation stations along freeway Interstate 205 in Portland, OR, USA, are used to evaluate the performance of the model. 
Experimental results indicate that the proposed interpretable ST-BMARS model is robust and can generate superior prediction accuracy in contrast with the temporal MARS model, the parametric model autoregressive integrated moving averaging (ARIMA), the state-of-the-art seasonal ARIMA model, and the kernel method support vector regression.", "title": "" }, { "docid": "59a8fb8f04e73be3bd56a146a700f15f", "text": "OBJECTIVE\nWe created a system using a triad of change management, electronic surveillance, and algorithms to detect sepsis and deliver highly sensitive and specific decision support to the point of care using a mobile application. The investigators hypothesized that this system would result in a reduction in sepsis mortality.\n\n\nMETHODS\nA before-and-after model was used to study the impact of the interventions on sepsis-related mortality. All patients admitted to the study units were screened per the Institute for Healthcare Improvement Surviving Sepsis Guidelines using real-time electronic surveillance. Sepsis surveillance algorithms that adjusted clinical parameters based on comorbid medical conditions were deployed for improved sensitivity and specificity. Nurses received mobile alerts for all positive sepsis screenings as well as severe sepsis and shock alerts over a period of 10 months. Advice was given for early goal-directed therapy. Sepsis mortality during a control period from January 1, 2011 to September 30, 2013 was used as baseline for comparison.\n\n\nRESULTS\nThe primary outcome, sepsis mortality, decreased by 53% (P = 0.03; 95% CI, 1.06-5.25). The 30-day readmission rate reduced from 19.08% during the control period to 13.21% during the study period (P = 0.05; 95% CI, 0.97-2.52). No significant change in length of hospital stay was noted. The system had observed sensitivity of 95% and specificity of 82% for detecting sepsis compared to gold-standard physician chart review.\n\n\nCONCLUSION\nA program consisting of change management and electronic surveillance with highly sensitive and specific decision support delivered to the point of care resulted in significant reduction in deaths from sepsis.", "title": "" }, { "docid": "4c1c72fde3bbe25f6ff3c873a87b86ba", "text": "The purpose of this study was to translate the Foot Function Index (FFI) into Italian, to perform a cross-cultural adaptation and to evaluate the psychometric properties of the Italian version of FFI. The Italian FFI was developed according to the recommended forward/backward translation protocol and evaluated in patients with foot and ankle diseases. Feasibility, reliability [intraclass correlation coefficient (ICC)], internal consistency [Cronbach’s alpha (CA)], construct validity (correlation with the SF-36 and a visual analogue scale (VAS) assessing for pain), responsiveness to surgery were assessed. The standardized effect size and standardized response mean were also evaluated. A total of 89 patients were recruited (mean age 51.8 ± 13.9 years, range 21–83). The Italian version of the FFI consisted in 18 items separated into a pain and disability subscales. CA value was 0.95 for both the subscales. The reproducibility was good with an ICC of 0.94 and 0.91 for pain and disability subscales, respectively. A strong correlation was found between the FFI and the scales of the SF-36 and the VAS with related content, particularly in the areas of physical function and pain was observed indicating good construct validity. 
After surgery, the mean FFI improved from 55.9 ± 24.8 to 32.4 ± 26.3 for the pain subscale and from 48.8 ± 28.8 to 24.9 ± 23.7 for the disability subscale (P < 0.01). The Italian version of the FFI showed satisfactory psychometric properties in Italian patients with foot and ankle diseases. Further testing in different and larger samples is required in order to ensure the validity and reliability of this score.", "title": "" }, { "docid": "1cdd88ea6899afc093102990040779e2", "text": "Available online xxxx", "title": "" }, { "docid": "a1d6ec19be444705fd6c339d501bce10", "text": "The transmission properties of a guide consisting of a dielectric rod of rectangular cross-section surrounded by dielectrics of smaller refractive indices are determined. This guide is the basic component in a new technology called integrated optical circuitry. The directional coupler, a particularly useful device, made of two of those guides closely spaced is also analyzed. [The SCI indicates that this paper has been cited over 145 times since 1969.]", "title": "" }, { "docid": "b52b27e83adf3c7466ab481092969f2e", "text": "Test suite maintenance tends to have the biggest impact on the overall cost of test automation. Frequently modifications applied on a web application lead to have one or more test cases broken and repairing the test suite is a time-consuming and expensive task. \n This paper reports on an industrial case study conducted in a small Italian company investigating on the analysis of the effort to repair web test suites implemented using different UI locators (e.g., Identifiers and XPath). \n The results of our case study indicate that ID locators used in conjunction with LinkText is the best solution among the considered ones in terms of time required (and LOCs to modify) to repair the test suite to the new release of the application.", "title": "" }, { "docid": "78179425b45a0aa0eba67fba802e5c6c", "text": "Internet Gaming Disorder (IGD) is a potential mental disorder currently included in the third section of the latest (fifth) edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-5) as a condition that requires additional research to be included in the main manual. Although research efforts in the area have increased, there is a continuing debate about the respective criteria to use as well as the status of the condition as mental health concern. Rather than using diagnostic criteria which are based on subjective symptom experience, the National Institute of Mental Health advocates the use of Research Domain Criteria (RDoC) which may support classifying mental disorders based on dimensions of observable behavior and neurobiological measures because mental disorders are viewed as biological disorders that involve brain circuits that implicate specific domains of cognition, emotion, and behavior. Consequently, IGD should be classified on its underlying neurobiology, as well as its subjective symptom experience. Therefore, the aim of this paper is to review the neurobiological correlates involved in IGD based on the current literature base. Altogether, 853 studies on the neurobiological correlates were identified on ProQuest (in the following scholarly databases: ProQuest Psychology Journals, PsycARTICLES, PsycINFO, Applied Social Sciences Index and Abstracts, and ERIC) and on MEDLINE, with the application of the exclusion criteria resulting in reviewing a total of 27 studies, using fMRI, rsfMRI, VBM, PET, and EEG methods. 
The results indicate there are significant neurobiological differences between healthy controls and individuals with IGD. The included studies suggest that compared to healthy controls, gaming addicts have poorer response-inhibition and emotion regulation, impaired prefrontal cortex (PFC) functioning and cognitive control, poorer working memory and decision-making capabilities, decreased visual and auditory functioning, and a deficiency in their neuronal reward system, similar to those found in individuals with substance-related addictions. This suggests both substance-related addictions and behavioral addictions share common predisposing factors and may be part of an addiction syndrome. Future research should focus on replicating the reported findings in different cultural contexts, in support of a neurobiological basis of classifying IGD and related disorders.", "title": "" }, { "docid": "1e7721225d84896a72f2ea790570ecbd", "text": "We have developed a Blumlein line pulse generator which utilizes the superposition of electrical pulses launched from two individually switched pulse forming lines. By using a fast power MOSFET as a switch on each end of the Blumlein line, we were able to generate pulses with amplitudes of 1 kV across a 100-Omega load. Pulse duration and polarity can be controlled by the temporal delay in the triggering of the two switches. In addition, the use of identical switches allows us to overcome pulse distortions arising from the use of non-ideal switches in the traditional Blumlein configuration. With this pulse generator, pulses with durations between 8 and 300 ns were applied to Jurkat cells (a leukemia cell line) to investigate the pulse dependent increase in calcium levels. The development of the calcium levels in individual cells was studied by spinning-disc confocal fluorescent microscopy with the calcium indicator, fluo-4. With this fast imaging system, fluorescence changes, representing calcium mobilization, could be resolved with an exposure of 5 ms every 18 ms. For a 60-ns pulse duration, each rise in intracellular calcium was greater as the electric field strength was increased from 25 kV/cm to 100 kV/cm. Only for the highest electric field strength is the response dependent on the presence of extracellular calcium. The results complement ion-exchange mechanisms previously observed during the charging of cellular membranes, which were suggested by observations of membrane potential changes during exposure.", "title": "" } ]
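One of the passages in the list above introduces ROC curves as the relationship between sensitivity and specificity over all possible diagnostic cutoffs. A bare-bones construction of such a curve from scores and binary labels is sketched below in Python; the scores and labels are made up purely for illustration.

```python
def roc_points(scores, labels):
    """Return (false-positive-rate, true-positive-rate) pairs, one per cutoff.

    A case is called positive when its score is >= the cutoff; sweeping the
    cutoff over every observed score traces the ROC curve described above.
    """
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for cutoff in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
        points.append((fp / negatives, tp / positives))
    return points

# Illustrative, made-up scores for 4 diseased (1) and 4 healthy (0) samples.
scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 1, 0, 0]
for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```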
scidocsrr
dc91bb57c3d5fd0edf1bf03cdc2ad6d1
Unsupervised Controllable Text Formalization
[ { "docid": "7288a312b26c6c3281cef7ecf7be8f44", "text": "This paper discusses an important issue in computational linguistics: classifying texts as formal or informal style. Our work describes a genreindependent methodology for building classifiers for formal and informal texts. We used machine learning techniques to do the automatic classification, and performed the classification experiments at both the document level and the sentence level. First, we studied the main characteristics of each style, in order to train a system that can distinguish between them. We then built two datasets: the first dataset represents general-domain documents of formal and informal style, and the second represents medical texts. We tested on the second dataset at the document level, to determine if our model is sufficiently general, and that it works on any type of text. The datasets are built by collecting documents for both styles from different sources. After collecting the data, we extracted features from each text. The features that we designed represent the main characteristics of both styles. Finally, we tested several classification algorithms, namely Decision Trees, Naïve Bayes, and Support Vector Machines, in order to choose the classifier that generates the best classification results. 1 LiLT Volume 8, Issue 1, March 2012. Learning to Classify Documents According to Formal and Informal Style. Copyright c © 2012, CSLI Publications. 2 / LiLT volume 8, issue 1 March 2012", "title": "" }, { "docid": "37a7de366210c2c56ec0f64992b71bef", "text": "In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We experiment with our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based and bi-directional LSTM models on BLEU, METEOR, TER and an embedding-based sentence similarity metric.", "title": "" }, { "docid": "9b9181c7efd28b3e407b5a50f999840a", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. 
The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction. Generating sequential synthetic data that mimics real data is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long short-term memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts the next token conditioned on its previously predicted ones, which may never have been observed in the training data. Such a discrepancy between training and inference can accumulate along the sequence and will become prominent as the length of the sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution to the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task-specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task-specific loss may not be directly available to score a generated sequence accurately. The generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and has mostly been applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic.
If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such a slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance its current quality and the future score it will receive as an entire sequence. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feed the evaluation back to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three real-world tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work. Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently train deep belief nets (DBNs). (Bengio et al. 2013) proposed the denoising autoencoder (DAE), which learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, the variational autoencoder (VAE), which combines deep learning with statistical inference, was introduced to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) the training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology for generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model.
This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to discrete sequence data generation problems, e.g. natural language generation (Huszár 2015). This is because the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of effort has been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data, whereas (Bengio et al. 2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed the scheduled sampling strategy (SS). Later, (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, GANs have great potential but are not yet practically feasible for discrete probabilistic models. As pointed out by (Bachman and Precup 2015), sequence data generation can be formulated as a sequential decision making process, which can potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence; for instance, in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In
[ { "docid": "0745e61d3c62c569821382aa3d3dae9e", "text": "Air pollutants can be either gases or aerosols with particles or liquid droplets suspended in the air. They change the natural composition of the atmosphere, can be harmful to humans and other living species and can cause damage to natural water bodies and the land. Anthropogenic specifically due to the human causes that in this study, it has been identified that Population, Gross Domestic Product (GDP) and Manufacturing Industry adaptive from IPAT Model is the major contributors to the emission of carbon dioxide. The time series data gained of carbon emission from the years 1970 to 2011 to explain the trend. The Command and Control (CAC) and Economic Incentive (EI) approaches being recommended to assist the government monitoring the air pollution trend in Malaysia", "title": "" }, { "docid": "6677149025a415e44778d1011b617c36", "text": "In this paper controller synthesis based on standard and dynamic sliding modes for an uncertain nonlinear MIMO Three tank System is presented. Two types of sliding mode controllers are synthesized; first controller is based on standard first order sliding modes while second controller uses dynamic sliding modes. Sliding manifolds for both controllers are designed in-order to ensure finite time convergence of sliding variable for tracking the desired system trajectories. Simulation results are presented showing the performance analysis of both sliding mode controllers. Simulations are also carried out to assess the performance of dynamic sliding mode controller against parametric uncertainties / disturbances. A comparison of designed sliding mode controllers with LMI based robust H∞ controller is also discussed. The performance of dynamic sliding mode control in terms of response time, control effort and robustness of dynamic sliding mode controller is shown to be better than standard sliding mode controller and H∞ controllers.", "title": "" }, { "docid": "b2e71f9d11f29980ba1ac47fabc8b423", "text": "As security incidents continue to impact organisations, there is a growing demand for systems to be ‘forensic-ready’ - to maximise the potential use of evidence whilst minimising the costs of an investigation. Researchers have supported organisational forensic readiness efforts by proposing the use of policies and processes, aligning systems with forensics objectives and training employees. However, recent work has also proposed an alternative strategy for implementing forensic readiness called forensic-by-design. This is an approach that involves integrating requirements for forensics into relevant phases of the systems development lifecycle with the aim of engineering forensic-ready systems. While this alternative forensic readiness strategy has been discussed in the literature, no previous research has examined the extent to which organisations actually use this approach for implementing forensic readiness. Hence, we investigate the extent to which organisations consider requirements for forensics during systems development. We first assessed existing research to identify the various perspectives of implementing forensic readiness, and then undertook an online survey to investigate the consideration of requirements for forensics during systems development lifecycles. Our findings provide an initial assessment of the extent to which requirements for forensics are considered within organisations. 
We then use our findings, coupled with the literature, to identify a number of research challenges regarding the engineering of forensic-ready systems.", "title": "" }, { "docid": "6a32d9e43d7f4558fa6dbbc596ce4496", "text": "Automatically mapping natural language into programming language semantics has always been a major and interesting challenge. In this paper, we approach such problem by carrying out mapping at syntactic level and then applying machine learning algorithms to derive an automatic translator of natural language questions into their associated SQL queries. For this purpose, we design a dataset of relational pairs containing syntactic trees of questions and queries and we encode them in Support Vector Machines by means of kernel functions. Pair classification experiments suggest that our approach is promising in deriving shared semantics between the languages above.", "title": "" }, { "docid": "f071a3d699ba4b3452043b6efb14b508", "text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. 
Portable classifiers may also be used across datasets from different institutions.", "title": "" }, { "docid": "367406644a29b4894df011b95add5985", "text": "Graphs have long been proposed as a tool to browse and navigate in a collection of documents in order to support exploratory search. Many techniques to automatically extract different types of graphs, showing for example entities or concepts and different relationships between them, have been suggested. While experimental evidence that they are indeed helpful exists for some of them, it is largely unknown which type of graph is most helpful for a specific exploratory task. However, carrying out experimental comparisons with human subjects is challenging and time-consuming. Towards this end, we present the GraphDocExplore framework. It provides an intuitive web interface for graph-based document exploration that is optimized for experimental user studies. Through a generic graph interface, different methods to extract graphs from text can be plugged into the system. Hence, they can be compared at minimal implementation effort in an environment that ensures controlled comparisons. The system is publicly available under an open-source license.1", "title": "" }, { "docid": "ad268322fcc88c82ed1f3b7f86a1c43a", "text": "Cannabis as a medicine was used before the Christian era in Asia, mainly in India. The introduction of cannabis in the Western medicine occurred in the midst of the 19th century, reaching the climax in the last decade of that century, with the availability and usage of cannabis extracts or tinctures. In the first decades of the 20th century, the Western medical use of cannabis significantly decreased largely due to difficulties to obtain consistent results from batches of plant material of different potencies. The identification of the chemical structure of cannabis components and the possibility of obtaining its pure constituents were related to a significant increase in scientific interest in such plant, since 1965. This interest was renewed in the 1990's with the description of cannabinoid receptors and the identification of an endogenous cannabinoid system in the brain. A new and more consistent cycle of the use of cannabis derivatives as medication begins, since treatment effectiveness and safety started to be scientifically proven.", "title": "" }, { "docid": "bf5535b2208be9f1cd204e1a77dec02e", "text": "iii This work is dedicated to my beloved parents, for all the sacrifices they have made to ensure that I obtain the best education possible. Their unconditional love and words of encouragement has really been a tonic to me. Looking back to the dark days and tough times I have been through, my parents has always given me the strength to persevere. Then I dedicated to my brother and sisters. May Allah be with them every step of the way, and richly bless them in everything they do iv ACKNOWLEDGEMENTS First and foremost, I would like to give thanks to the Almighty Allah for He made my dream comes true by giving me strength and good health to complete this study. Without Him, all my efforts would have been fruitless but because He is the only one who knows our fate, He made it possible for me to pursue my studies at UTM. Special thanks go to my supervisor Dr. Jafri Bin Din, for allowing me to carry out this study under his supervision, and for his constructive criticism and support, which has enabled me to complete this study on time. During the past one year of my research under his supervision, I have known Dr. 
Jafri Bin Din as a sympathetic and principle-centered person. He thought me how to be a challenger, how to set my benchmark ever higher and how to look for solutions to problems rather than focus on the problems. I learned to believe in myself, my work and my future. Thank you Dr. Jafri Bin Din, for your love, emotional and intellectual support as well as your never-ending faith in me. Last but not least, I am forever indebted to all my family members for their constant support throughout the entire duration of this project. Their words of encouragement never failed to keep me going even through the hardest of times and it is here that I express my sincerest gratitude to them. ABSTRACT In parallel with terrestrial and satellite wireless networks, a new alternative based on platforms located in the stratosphere has recently introduced, known as High Altitude Platforms (HAPS). HAPS are either airships or aircraft positioned between 17 and 22.5 km above the earth surface. It has capability to deliver a wide spectrum of applications to both mobile and fixed users over a broad coverage area. Wideband code division multiple access (WCDMA) has …", "title": "" }, { "docid": "383e88fd5dc669aff5f602f35b319380", "text": "Automatic Turret Gun (ATG) is a weapon system used in numerous combat platforms and vehicles such as in tanks, aircrafts, or stationary ground platforms. ATG plays a big role in both defensive and offensive scenario. It allows combat engagement while the operator of ATG (soldier) covers himself inside a protected control station. On the other hand, ATGs have significant mass and dimension, therefore susceptible to inertial disturbances that need to be compensated to enable the ATG to reach the targeted position quickly and accurately while undergoing disturbances from weapon fire or platform movement. The paper discusses various conventional control method applied in ATG, namely PID controller, RAC, and RACAFC. A number of experiments have been carried out for various range of angle both in azimuth and elevation axis of turret gun. The results show that for an ATG system working under disturbance, RACAFC exhibits greater performance than both RAC and PID, but in experiments without load, equally satisfactory results are obtained from RAC. The exception is for the PID controller, which cannot reach the entire angle given.", "title": "" }, { "docid": "6ec83bd04d6af27355d5906ca81c9d8f", "text": "Perhaps a few words might be inserted here to avoid In parametric curve interpolation, the choice of the any possible confusion. In the usual function interpolation interpolating nodes makes a great deal of difference in the resulting curve. Uniform parametrization is generally setting, the problem is of the form P~ = (x, y~) where the x~ are increasing, and one seeks a real-valued unsatisfactory. It is often suggested that a good choice polynomial y = y(x) so that y(x~)= y~. This is identical of nodes is the cumulative chord length parametrization. to the vector-valued polynomial Examples presented here, however, show that this is not so. Heuristic reasoning based on a physical analogy leads P(x) = (x, y(x)) to a third parametrization, (the \"centripetal model'), which almost invariably results in better shapes than with x as the parameter, except with the important either the chord length or the uniform parametrization. 
distinction that here the interpolating conditions As with the previous two methods, this method is \"global'and is 'invariant\" under similarity transformations, y(x~) = y~ are (It turns out that, in some sense, the method has been anticipated in a paper by Hosaka and Kimura.) P(x~) = P~, 0 <~ i <~ n", "title": "" }, { "docid": "396c9da61a3f7c21544278e0396eb689", "text": "There are several challenges in down-sizing robots for transportation deployment, diversification of locomotion capabilities tuned for various terrains, and rapid and on-demand manufacturing. In this paper we propose an origami-inspired method of addressing these key issues by designing and manufacturing a foldable, deployable, and self-righting version of the origami robot Tribot. Our latest Tribot prototype can jump as high as 215 mm, five times its height, and roll consecutively on any of its edges with an average step size of 55 mm. The 4 g robot self-deploys nine times of its size when released. A compliant roll cage ensures that the robot self-rights onto two legs after jumping or being deployed and also protects the robot from impacts. A description of our prototype and its design, locomotion modes, and fabrication is followed by demonstrations of its key features.", "title": "" }, { "docid": "2f60e3d89966d4680796c1e4355de4bc", "text": "This letter addresses the problem of energy detection of an unknown signal over a multipath channel. It starts with the no-diversity case, and presents some alternative closed-form expressions for the probability of detection to those recently reported in the literature. Detection capability is boosted by implementing both square-law combining and square-law selection diversity schemes", "title": "" }, { "docid": "dcf231b887d7caeec341850507561197", "text": "Convolutional neural networks (CNNs) have attracted increasing attention in the remote sensing community. Most CNNs only take the last fully-connected layers as features for the classification of remotely sensed images, discarding the other convolutional layer features which may also be helpful for classification purposes. In this paper, we propose a new adaptive deep pyramid matching (ADPM) model that takes advantage of the features from all of the convolutional layers for remote sensing image classification. To this end, the optimal fusing weights for different convolutional layers are learned from the data itself. In remotely sensed scenes, the objects of interest exhibit different scales in distinct scenes, and even a single scene may contain objects with different sizes. To address this issue, we select the CNN with spatial pyramid pooling (SPP-net) as the basic deep network, and further construct a multi-scale ADPM model to learn complementary information from multi-scale images. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods. Keywords—Convolutional neural network (CNN), adaptive deep pyramid matching (ADPM), convolutional features, multi-scale ensemble, remote-sensing scene classification.", "title": "" }, { "docid": "bd7f676d56c70ec3a64218feaf7399cb", "text": "Because of an aging population and increased occurrence of sports-related injuries, musculoskeletal disorders have become one of the major health concerns in the United States. Current treatments, although fairly successful, do not provide the optimum therapy. 
These treatments typically rely on donor tissues obtained either from the patient or from another source. The former raises the issue of supply, whereas the latter poses the risk of rejection and disease transfer. This has prompted orthopedic surgeons and scientists to look for viable alternatives. In recent years, tissue engineering has gained increasing support as a method to treat orthopedic disorders. Because it uses principles of engineering, biology, and chemistry, tissue engineering may provide a more effective approach to the treatment of musculoskeletal disorders than traditional methods. This chapter presents a review of current methods and new tissue-engineering techniques for the treatment of disorders affecting bone, ligament, and cartilage.", "title": "" }, { "docid": "411d3048bd13f48f0c31259c41ff2903", "text": "In computer vision, object detection is addressed as one of the most challenging problems as it is prone to localization and classification error. The current best-performing detectors are based on the technique of finding region proposals in order to localize objects. Despite having very good performance, these techniques are computationally expensive due to having large number of proposed regions. In this paper, we develop a high-confidence region-based object detection framework that boosts up the classification performance with less computational burden. In order to formulate our framework, we consider a deep network that activates the semantically meaningful regions in order to localize objects. These activated regions are used as input to a convolutional neural network (CNN) to extract deep features. With these features, we train a set of class-specific binary classifiers to predict the object labels. Our new region-based detection technique significantly reduces the computational complexity and improves the performance in object detection. We perform rigorous experiments on PASCAL, SUN, MIT-67 Indoor and MSRC datasets to demonstrate that our proposed framework outperforms other state-of-the-art methods in recognizing objects.", "title": "" }, { "docid": "ee6bcb714c118361a51db8f1f8f0e985", "text": "BACKGROUND\nWe propose the use of serious games to screen for abnormal cognitive status in situations where it may be too costly or impractical to use standard cognitive assessments (eg, emergency departments). If validated, serious games in health care could enable broader availability of efficient and engaging cognitive screening.\n\n\nOBJECTIVE\nThe objective of this work is to demonstrate the feasibility of a game-based cognitive assessment delivered on tablet technology to a clinical sample and to conduct preliminary validation against standard mental status tools commonly used in elderly populations.\n\n\nMETHODS\nWe carried out a feasibility study in a hospital emergency department to evaluate the use of a serious game by elderly adults (N=146; age: mean 80.59, SD 6.00, range 70-94 years). We correlated game performance against a number of standard assessments, including the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and the Confusion Assessment Method (CAM).\n\n\nRESULTS\nAfter a series of modifications, the game could be used by a wide range of elderly patients in the emergency department demonstrating its feasibility for use with these users. Of 146 patients, 141 (96.6%) consented to participate and played our serious game. 
Refusals to play the game were typically due to concerns of family members rather than unwillingness of the patient to play the game. Performance on the serious game correlated significantly with the MoCA (r=-.339, P <.001) and MMSE (r=-.558, P <.001), and correlated (point-biserial correlation) with the CAM (r=.565, P <.001) and with other cognitive assessments.\n\n\nCONCLUSIONS\nThis research demonstrates the feasibility of using serious games in a clinical setting. Further research is required to demonstrate the validity and reliability of game-based assessments for clinical decision making.", "title": "" }, { "docid": "d2a0ff28b7163203a03be27977b9b425", "text": "The various types of shadows are characterized. Most existing shadow algorithms are described, and their complexities, advantages, and shortcomings are discussed. Hard shadows, soft shadows, shadows of transparent objects, and shadows for complex modeling primitives are considered. For each type, shadow algorithms within various rendering techniques are examined. The aim is to provide readers with enough background and insight on the various methods to allow them to choose the algorithm best suited to their needs and to help identify the areas that need more research and point to possible solutions.<<ETX>>", "title": "" }, { "docid": "e26d52cdc3636e3034d76bc684b9dc95", "text": "The problem of cross-modal retrieval from multimedia repositories is considered. This problem addresses the design of retrieval systems that support queries across content modalities, for example, using an image to search for texts. A mathematical formulation is proposed, equating the design of cross-modal retrieval systems to that of isomorphic feature spaces for different content modalities. Two hypotheses are then investigated regarding the fundamental attributes of these spaces. The first is that low-level cross-modal correlations should be accounted for. The second is that the space should enable semantic abstraction. Three new solutions to the cross-modal retrieval problem are then derived from these hypotheses: correlation matching (CM), an unsupervised method which models cross-modal correlations, semantic matching (SM), a supervised technique that relies on semantic representation, and semantic correlation matching (SCM), which combines both. An extensive evaluation of retrieval performance is conducted to test the validity of the hypotheses. All approaches are shown successful for text retrieval in response to image queries and vice versa. It is concluded that both hypotheses hold, in a complementary form, although evidence in favor of the abstraction hypothesis is stronger than that for correlation.", "title": "" } ]
scidocsrr
705bcb2951ab66072e0f84293bf21aef
Inferring relations in knowledge graphs with tensor decompositions
[ { "docid": "0a170051e72b58081ad27e71a3545bcf", "text": "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "title": "" }, { "docid": "fa19d51396156e0ede5a02eb243a9fc8", "text": "Non-negative data is generated by a broad selection of applications today, e.g in gene expression analysis or imaging. Many factorization techniques have been extended to account for this natural constraint and have become very popular due to their decomposition into interpretable latent factors. Generally relational data like protein interaction networks or social network data can also be seen as naturally non-negative. In this work, we extend the RESCAL tensor factorization, which has shown state-of-the-art results for multi-relational learning, to account for non-negativity by employing multiplicative update rules. We study the performance via these approaches on various benchmark datasets and show that a non-negativity constraint can be introduced by losing only little in terms of predictive quality in most of the cases but simultaneously increasing the sparsity of the factors significantly compared to the original RESCAL algorithm.", "title": "" } ]
[ { "docid": "f4cf5ac351005975bc8244497a45bc70", "text": "This paper demonstrates the co-optimization of all critical device parameters of perpendicular magnetic tunnel junctions (pMTJ) in 1 Gbit arrays with an equivalent bitcell size of 22 F2 at the 28 nm logic node for embedded STT-MRAM. Through thin-film tuning and advanced etching of sub-50 nm (diameter) pMTJ, high device performance and reliability were achieved simultaneously, including TMR = 150 %, Hc > 1350 Oe, Heff <; 100 Oe, Δ = 85, Ic (35 ns) = 94 μA, Vbreakdown = 1.5 V, and high endurance (> 1012 write cycles). Reliable switching with small temporal variations (<; 5 %) was obtained down to 10 ns. In addition, tunnel barrier integrity and high temperature device characteristics were investigated in order to ensure reliable STT-MRAM operation.", "title": "" }, { "docid": "c06c13af6d89c66e2fa065534bfc2975", "text": "Complex foldings of the vaginal wall are unique to some cetaceans and artiodactyls and are of unknown function(s). The patterns of vaginal length and cumulative vaginal fold length were assessed in relation to body length and to each other in a phylogenetic context to derive insights into functionality. The reproductive tracts of 59 female cetaceans (20 species, 6 families) were dissected. Phylogenetically-controlled reduced major axis regressions were used to establish a scaling trend for the female genitalia of cetaceans. An unparalleled level of vaginal diversity within a mammalian order was found. Vaginal folds varied in number and size across species, and vaginal fold length was positively allometric with body length. Vaginal length was not a significant predictor of vaginal fold length. Functional hypotheses regarding the role of vaginal folds and the potential selection pressures that could lead to evolution of these structures are discussed. Vaginal folds may present physical barriers, which obscure the pathway of seawater and/or sperm travelling through the vagina. This study contributes broad insights to the evolution of reproductive morphology and aquatic adaptations and lays the foundation for future functional morphology analyses.", "title": "" }, { "docid": "45bd2380526aeec8ef3ed537d8fd700c", "text": "Numerous studies in recent months have proposed the use of linguistic instruments to support requirements analysis. There are two main reasons for this: (i) the progress made in natural language processing and (ii) the need to provide the developers of software systems with support in the early phases of requirements definition and conceptual modelling. This paper presents the results of an online market research intended (a) to assess the economic advantages of developing a CASE (computer-aided software engineering) tool that integrates linguistic analysis techniques for documents written in natural language, and (b) to verify the existence of the potential demand for such a tool. The research included a study of the language – ranging from completely natural to highly restricted – used in documents available for requirements analysis, an important factor given that on a technological level there is a trade-off between the language used and the performance of the linguistic instruments. To determine the potential demand for such tool, some of the survey questions dealt with the adoption of development methodologies and consequently with models and support tools; other questions referred to activities deemed critical by the companies involved. 
Through statistical correspondence analysis of the responses, we were able to outline two “profiles” of companies that correspond to two potential market niches, which are characterised by their very different approach to software development.", "title": "" }, { "docid": "c35e8480aec22e77519024d4bae688ac", "text": "We introduce a novel approach that reconstructs 3D urban scenes in the form of levels of detail (LODs). Starting from raw datasets such as surface meshes generated by multiview stereo systems, our algorithm proceeds in three main steps: classification, abstraction, and reconstruction. From geometric attributes and a set of semantic rules combined with a Markov random field, we classify the scene into four meaningful classes. The abstraction step detects and regularizes planar structures on buildings, fits icons on trees, roofs, and facades, and performs filtering and simplification for LOD generation. The abstracted data are then provided as input to the reconstruction step which generates watertight buildings through a min-cut formulation on a set of 3D arrangements. Our experiments on complex buildings and large-scale urban scenes show that our approach generates meaningful LODs while being robust and scalable. By combining semantic segmentation and abstraction, it also outperforms general mesh approximation approaches at preserving urban structures.", "title": "" }, { "docid": "110742230132649f178d2fa99c8ffade", "text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.", "title": "" }, { "docid": "c18b167f00acaf94a965b4da9f7c1c14", "text": "Artificial Bee Colony (ABC) algorithm, which was initially proposed for numerical function optimization, has been increasingly used for clustering. However, when it is directly applied to clustering, the performance of ABC is lower than expected. This paper proposes an improved ABC algorithm for clustering, denoted as EABC. EABC uses a key initialization method to accommodate the special solution space of clustering. Experimental results show that the evaluation of clustering is significantly improved and the latency of clustering is sharply reduced. Furthermore, EABC outperforms two ABC variants in clustering benchmark data sets.", "title": "" }, { "docid": "8310851d5115ec570953a8c4a1757332", "text": "We present a global optimization approach for mapping color images onto geometric reconstructions. Range and color videos produced by consumer-grade RGB-D cameras suffer from noise and optical distortions, which impede accurate mapping of the acquired color data to the reconstructed geometry. Our approach addresses these sources of error by optimizing camera poses in tandem with non-rigid correction functions for all images. All parameters are optimized jointly to maximize the photometric consistency of the reconstructed mapping. 
We show that this optimization can be performed efficiently by an alternating optimization algorithm that interleaves analytical updates of the color map with decoupled parameter updates for all images. Experimental results demonstrate that our approach substantially improves color mapping fidelity.", "title": "" }, { "docid": "7e07a37cce30e8a2835331da1fdfe70a", "text": "The superregenerative receiver has been used for many decades as a low-cost and low-power receiver in short-range narrow-band communications. In this paper, we present two new architectures that make use of the superregeneration principle to achieve noncoherent detection of direct-sequence spread-spectrum signals. The local pseudorandom code generator is clocked by the quench oscillator, making the quench frequency equal to the chip rate. Under this condition, it is possible to take advantage of the characteristic broad reception bandwidth and the pulsating nature of the receiver to filter and despread the signal. The two superregenerative architectures, operating under periodic and pseudorandom quench, respectively, are analyzed and compared. Theoretical predictions are confirmed by experimental results in the ISM band of 2.4 GHz.", "title": "" }, { "docid": "d026b12bedce1782a17654f19c7dcdf7", "text": "The millions of movies produced in the human history are valuable resources for computer vision research. However, learning a vision model from movie data would meet with serious difficulties. A major obstacle is the computational cost – the length of a movie is often over one hour, which is substantially longer than the short video clips that previous study mostly focuses on. In this paper, we explore an alternative approach to learning vision models from movies. Specifically, we consider a framework comprised of a visual module and a temporal analysis module. Unlike conventional learning methods, the proposed approach learns these modules from different sets of data – the former from trailers while the latter from movies. This allows distinctive visual features to be learned within a reasonable budget while still preserving long-term temporal structures across an entire movie. We construct a large-scale dataset for this study and define a series of tasks on top. Experiments on this dataset showed that the proposed method can substantially reduce the training time while obtaining highly effective features and coherent temporal structures.", "title": "" }, { "docid": "0ec7a27ed4d89909887b08c5ea823756", "text": "Brain responses to pain, assessed through positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) are reviewed. Functional activation of brain regions are thought to be reflected by increases in the regional cerebral blood flow (rCBF) in PET studies, and in the blood oxygen level dependent (BOLD) signal in fMRI. rCBF increases to noxious stimuli are almost constantly observed in second somatic (SII) and insular regions, and in the anterior cingulate cortex (ACC), and with slightly less consistency in the contralateral thalamus and the primary somatic area (SI). Activation of the lateral thalamus, SI, SII and insula are thought to be related to the sensory-discriminative aspects of pain processing. SI is activated in roughly half of the studies, and the probability of obtaining SI activation appears related to the total amount of body surface stimulated (spatial summation) and probably also by temporal summation and attention to the stimulus. 
In a number of studies, the thalamic response was bilateral, probably reflecting generalised arousal in reaction to pain. ACC does not seem to be involved in coding stimulus intensity or location but appears to participate in both the affective and attentional concomitants of pain sensation, as well as in response selection. ACC subdivisions activated by painful stimuli partially overlap those activated in orienting and target detection tasks, but are distinct from those activated in tests involving sustained attention (Stroop, etc.). In addition to ACC, increased blood flow in the posterior parietal and prefrontal cortices is thought to reflect attentional and memory networks activated by noxious stimulation. Less noted but frequent activation concerns motor-related areas such as the striatum, cerebellum and supplementary motor area, as well as regions involved in pain control such as the periaqueductal grey. In patients, chronic spontaneous pain is associated with decreased resting rCBF in contralateral thalamus, which may be reverted by analgesic procedures. Abnormal pain evoked by innocuous stimuli (allodynia) has been associated with amplification of the thalamic, insular and SII responses, concomitant to a paradoxical CBF decrease in ACC. It is argued that imaging studies of allodynia should be encouraged in order to understand central reorganisations leading to abnormal cortical pain processing. A number of brain areas activated by acute pain, particularly the thalamus and anterior cingulate, also show increases in rCBF during analgesic procedures. Taken together, these data suggest that hemodynamic responses to pain reflect simultaneously the sensory, cognitive and affective dimensions of pain, and that the same structure may both respond to pain and participate in pain control. The precise biochemical nature of these mechanisms remains to be investigated.", "title": "" }, { "docid": "ddc3241c09a33bde1346623cf74e6866", "text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.", "title": "" }, { "docid": "f1325dd1350acf612dc1817db693a3d6", "text": "Software for the measurement of genetic diversity (SMOGD) is a web-based application for the calculation of the recently proposed genetic diversity indices G'(ST) and D(est) . SMOGD includes bootstrapping functionality for estimating the variance, standard error and confidence intervals of estimated parameters, and SMOGD also generates genetic distance matrices from pairwise comparisons between populations. SMOGD accepts standard, multilocus Genepop and Arlequin formatted input files and produces HTML and tab-delimited output. 
This allows easy data submission, quick visualization, and rapid import of results into spreadsheet or database programs.", "title": "" }, { "docid": "48ba8ea879ba854e5b38ab187602721e", "text": "With the advent of video-on-demand services and digital video recorders, the way in which we consume media is undergoing a fundamental change. People today are less likely to watch shows at the same time, let alone the same place. As a result, television viewing, which was once a social activity, has been reduced to a passive and isolated experience. To study this issue, we developed a system called CollaboraTV and demonstrated its ability to support the communal viewing experience through a month-long field study. Our study shows that users understand and appreciate the utility of asynchronous interaction, are enthusiastic about CollaboraTV's engaging social communication primitives and value implicit show recommendations from friends. Our results both provide a compelling demonstration of a social television system and raise new challenges for social television communication modalities.", "title": "" }, { "docid": "1c576cf604526b448f0264f2c39f705a", "text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.", "title": "" }, { "docid": "2b03868a73808a0135547427112dcaf8", "text": "In this article we focus attention on ethnography’s place in CSCW by reflecting on how ethnography in the context of CSCW has contributed to our understanding of the sociality and materiality of work and by exploring how the notion of the ‘field site’ as a construct in ethnography provides new ways of conceptualizing ‘work’ that extends beyond the workplace. We argue that the well known challenges of drawing design implications from ethnographic research have led to useful strategies for tightly coupling ethnography and design. We also offer some thoughts on recent controversies over what constitutes useful and proper ethnographic research in the context of CSCW. Finally, we argue that as the temporal and spatial horizons of inquiry have expanded, along with new domains of collaborative activity, ethnography continues to provide invaluable perspectives.", "title": "" }, { "docid": "3ced47ece49eeec3edc5d720df9bb864", "text": "Complex space systems typically provide the operator a means to understand the current state of system components. The operator often has to manually determine whether the system is able to perform a given set of high level objectives based on this information. The operations team needs a way for the system to quantify its capability to successfully complete a mission objective and convey that information in a clear, concise way. A mission-level space cyber situational awareness tool suite integrates the data into a complete picture to display the current state of the mission. The Johns Hopkins University Applied Physics Laboratory developed the Spyder tool suite for such a purpose. 
The Spyder space cyber situation awareness tool suite allows operators to understand the current state of their systems, allows them to determine whether their mission objectives can be completed given the current state, and provides insight into any anomalies in the system. Spacecraft telemetry, spacecraft position, ground system data, ground computer hardware, ground computer software processes, network connections, and network data flows are all combined into a system model service that serves the data to various display tools. Spyder monitors network connections, port scanning, and data exfiltration to determine if there is a cyber attack. The Spyder Tool Suite provides multiple ways of understanding what is going on in a system. Operators can see the logical and physical relationships between system components to better understand interdependencies and drill down to see exactly where problems are occurring. They can quickly determine the state of mission-level capabilities. The space system network can be analyzed to find unexpected traffic. Spyder bridges the gap between infrastructure and mission and provides situational awareness at the mission level.", "title": "" }, { "docid": "3c4aaea63fd829828c75b85509cceac8", "text": "When maintaining equilibrium in upright stance, humans use sensory feedback control to cope with unforeseen external disturbances such as support surface motion, this despite biological 'complications' such as noisy and inaccurate sensor signals and considerable neural, motor, and processing time delays. The control method they use apparently differs from established methods one normally finds in technical fields. System identification recently led us design a control model that we currently test in our laboratory. The tests include hardware-in-the-loop simulations after the model's embodiment into a robot. The model is called disturbance estimation and compensation (DEC) model. Disturbance estimation is performed by on-line multisensory interactions using joint angle, joint torque, and vestibular cues. For disturbance compensation, the method of direct disturbance rejection is used (\" Störgrös-senaufschaltung \"). So far, biomechanics of a single inverted pendulum (SIP) were applied. Here we extend the DEC concept to the control of a double inverted pendulum (DIP; moving links: trunk on hip joint and legs on ankle joints). The aim is that the model copes in addition with inter-link torques and still describes human experimental data. As concerns the inter-link torque arising during leg motion in the hip joint (support base of the outer link, the trunk), it is already covered by the DEC concept we so far used for the SIP. The inter-link torque arising from trunk motion in the ankle joint is largely neutralized by the concept's whole-body COM control through the ankle joint (due to the fact that body geometry and thus COM location changes with the inter-link motion). Experimentally, we applied pseudorandom support surface tilt stimuli in the sagittal plane to healthy human subjects who were standing with eyes closed on a motion platform (frequency range, 0.16 – 2.2 Hz). Angular excursions of trunk, leg, and whole-body COM (center of mass) with respect to the space vertical as well as COP (center of pressure) shifts were recorded and analyzed. The human data was compared to corresponding model and robot simulation data. The human findings were well described by the model and robot simulations. 
This shows that the DIP biomechanics of human reactive stance can be controlled using a purely sensor-based control.", "title": "" }, { "docid": "d90b6c61369ff0458843241cd30437ba", "text": "The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics, led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review accomplishments and challenges, citing some of the key research findings and publications that have resulted from the experiments in Biosphere 2. Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of atmospheric leak detection and sealing, while achieving new standards of closure, with an annual atmospheric leakrate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained with good health with lowered atmospheric oxygen levels could lead to major economies on the design of space stations and planetary/lunar settlements. The improved health resulting from the calorie-restricted but nutrient dense Biosphere 2 diet was the first such scientifically controlled experiment with humans. The success of Biosphere 2 in creating a diversity of terrestrial and marine environments, from rainforest to coral reef, allowed detailed studies with comprehensive measurements such that the dynamics of these complex biomic systems are now better understood. The coral reef ecosystem, the largest artificial reef ever built, catalyzed methods of study now being applied to planetary coral reef systems. Restoration ecology advanced through the creation and study of the dynamics of adaptation and self-organization of the biomes in Biosphere 2. The international interest that Biosphere 2 generated has given new impetus to the public recognition of the sciences of biospheres (biospherics), biomes and closed ecological life systems. The facility, although no longer a materially-closed ecological system, is being used as an educational facility by Columbia University as an introduction to the study of the biosphere and complex system ecology and for carbon dioxide impacts utilizing the complex ecosystems created in Biosphere '. 
The many lessons learned from Biosphere 2 are being used by its key team of creators in their design and operation of a laboratory-sized closed ecological system, the Laboratory Biosphere, in operation as of March 2002, and for the design of a Mars on Earth(TM) prototype life support system for manned missions to Mars and Mars surface habitats. Biosphere 2 is an important foundation for future advances in biospherics and closed ecological system research.", "title": "" }, { "docid": "246faf136fc925f151f7006a08fcee2d", "text": "A key goal of smart grid initiatives is significantly increasing the fraction of grid energy contributed by renewables. One challenge with integrating renewables into the grid is that their power generation is intermittent and uncontrollable. Thus, predicting future renewable generation is important, since the grid must dispatch generators to satisfy demand as generation varies. While manually developing sophisticated prediction models may be feasible for large-scale solar farms, developing them for distributed generation at millions of homes throughout the grid is a challenging problem. To address the problem, in this paper, we explore automatically creating site-specific prediction models for solar power generation from National Weather Service (NWS) weather forecasts using machine learning techniques. We compare multiple regression techniques for generating prediction models, including linear least squares and support vector machines using multiple kernel functions. We evaluate the accuracy of each model using historical NWS forecasts and solar intensity readings from a weather station deployment for nearly a year. Our results show that SVM-based prediction models built using seven distinct weather forecast metrics are 27% more accurate for our site than existing forecast-based models.", "title": "" }, { "docid": "ff2322cee61da0ca6013037dce09bb27", "text": "In this paper, we propose to train a network with both binary weights and binary activations, designed specifically for mobile devices with limited computation capacity and power consumption. Previous works on quantizing CNNs uncritically assume the same architecture with fullprecision networks, which we term value approximation. Their objective is to preserve the floating-point information using a set of discrete values. However, we take a novel view—for best performance it is very likely that a different architecture may be better suited to deal with binary weights as well as binary activations. Thus we directly design such a highly accurate binary network structure, which is termed structure approximation. In particular, we propose a “network decomposition” strategy in which we divide the networks into groups and aggregate a set of homogeneous binary branches to implicitly reconstruct the full-precision intermediate feature maps. In addition, we also learn the connections between each group. We further provide a comprehensive comparison among all quantization categories. Experiments on ImageNet classification tasks demonstrate the superior performance of the proposed model, named Group-Net, over various popular architectures. In particular, we outperform the previous best binary neural network in terms of accuracy as well as saving huge computational complexity. Furthermore, the proposed Group-Net can effectively utilize task specific properties for strong generalization. In particular, we propose to extend Group-Net for lossless semantic segmentation. 
This is the first work proposed on solving dense pixels prediction based on BNNs in the literature. Actually, we claim that considering both value and structure approximation should be the future development direction of BNNs.", "title": "" } ]
scidocsrr
fd826af58995ca1bceeefb7d216639e8
CELLULAR AND NETWORK ARCHITECTURE FOR 5G WIRELESS COMMUNICATION NETWORKS IN MOBILE TECHNOLOGY
[ { "docid": "4412bca4e9165545e4179d261828c85c", "text": "Today 3G mobile systems are on the ground providing IP connectivity for real-time and non-real-time services. On the other side, there are many wireless technologies that have proven to be important, with the most important ones being 802.11 Wireless Local Area Networks (WLAN) and 802.16 Wireless Metropolitan Area Networks (WMAN), as well as ad-hoc Wireless Personal Area Network (WPAN) and wireless networks for digital TV and radio broadcast. Then, the concepts of 4G is already much discussed and it is almost certain that 4G will include several standards under a common umbrella, similarly to 3G, but with IEEE 802.xx wireless mobile networks included from the beginning. The main contribution of this paper is definition of 5G (Fifth Generation) mobile network concept, which is seen as user-centric concept instead of operator-centric as in 3G or service-centric concept as seen for 4G. In the proposed concept the mobile user is on the top of all. The 5G terminals will have software defined radios and modulation scheme as well as new error-control schemes can be downloaded from the Internet on the run. The development is seen towards the user terminals as a focus of the 5G mobile networks. The terminals will have access to different wireless technologies at the same time and the terminal should be able to combine different flows from different technologies. Each network will be responsible for handling user-mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. The paper also proposes intelligent Internet phone concept where the mobile phone can choose the best connections by selected constraints and dynamically change them during a single end-to-end connection. The proposal in this paper is fundamental shift in the mobile networking philosophy compared to existing 3G and near-soon 4G mobile technologies, and this concept is called here the 5G.", "title": "" } ]
[ { "docid": "33db7ac45c020d2a9e56227721b0be70", "text": "This thesis proposes an extended version of the Combinatory Categorial Grammar (CCG) formalism, with the following features: 1. grammars incorporate inheritance hierarchies of lexical types, defined over a simple, feature-based constraint language 2. CCG lexicons are, or at least can be, functions from forms to these lexical types This formalism, which I refer to as ‘inheritance-driven’ CCG (I-CCG), is conceptualised as a partially model-theoretic system, involving a distinction between category descriptions and their underlying category models, with these two notions being related by logical satisfaction. I argue that the I-CCG formalism retains all the advantages of both the core CCG framework and proposed generalisations involving such things as multiset categories, unary modalities or typed feature structures. In addition, I-CCG: 1. provides non-redundant lexicons for human languages 2. captures a range of well-known implicational word order universals in terms of an acquisition-based preference for shorter grammars This thesis proceeds as follows: Chapter 2 introduces the ‘baseline’ CCG formalism, which incorporates just the essential elements of category notation, without any of the proposed extensions. Chapter 3 reviews parts of the CCG literature dealing with linguistic competence in its most general sense, showing how the formalism predicts a number of language universals in terms of either its restricted generative capacity or the prioritisation of simpler lexicons. Chapter 4 analyses the first motivation for generalising the baseline category notation, demonstrating how certain fairly simple implicational word order universals are not formally predicted by baseline CCG, although they intuitively do involve considerations of grammatical economy. Chapter 5 examines the second motivation underlying many of the customised CCG category notations — to reduce lexical redundancy, thus allowing for the construction of lexicons which assign (each sense of) open class words and morphemes to no more than one lexical category, itself denoted by a non-composite lexical type.", "title": "" }, { "docid": "770805557428fc2c9f705cb5f4a6fe62", "text": "Captcha is a security mechanism designed to differentiate between computers and humans, and is used to defend against malicious bot programs. Text-based Captchas are the most widely deployed differentiation mechanism, and almost all text-based Captchas are single layered. Numerous successful attacks on the single-layer text-based Captchas deployed by Google, Yahoo!, and Amazon have been reported. In 2015, Microsoft deployed a new two-layer Captcha scheme. This appears to be the first application of two-layer Captchas. It is, therefore, natural to ask a fundamental question: is the two-layer Captcha as secure as its designers expected? Intrigued by this question, we have for the first time systematically analyzed the security of the two-layer Captcha in this paper. We propose a simple but an effective method to attack the two-layer Captcha deployed by Microsoft, and achieve a success rate of 44.6% with an average speed of 9.05 s on a standard desktop computer (with a 3.3-GHz Intel Core i3 CPU and 2-GB RAM), thus demonstrating clear security issues. 
We also discuss the originality and applicability of our attack, and offer guidelines for designing Captchas with better security and usability.", "title": "" }, { "docid": "a8ff130dcb899214da73f66e12a5a1b1", "text": "We designed and evaluated an assumption-free, deep learning-based methodology for animal health monitoring, specifically for the early detection of respiratory disease in growing pigs based on environmental sensor data. Two recurrent neural networks (RNNs), each comprising gated recurrent units (GRUs), were used to create an autoencoder (GRU-AE) into which environmental data, collected from a variety of sensors, was processed to detect anomalies. An autoencoder is a type of network trained to reconstruct the patterns it is fed as input. By training the GRU-AE using environmental data that did not lead to an occurrence of respiratory disease, data that did not fit the pattern of \"healthy environmental data\" had a greater reconstruction error. All reconstruction errors were labelled as either normal or anomalous using threshold-based anomaly detection optimised with particle swarm optimisation (PSO), from which alerts are raised. The results from the GRU-AE method outperformed state-of-the-art techniques, raising alerts when such predictions deviated from the actual observations. The results show that a change in the environment can result in occurrences of pigs showing symptoms of respiratory disease within 1⁻7 days, meaning that there is a period of time during which their keepers can act to mitigate the negative effect of respiratory diseases, such as porcine reproductive and respiratory syndrome (PRRS), a common and destructive disease endemic in pigs.", "title": "" }, { "docid": "07ce1301392e18c1426fd90507dc763f", "text": "The fluorescent lamp lifetime is very dependent of the start-up lamp conditions. The lamp filament current and temperature during warm-up and at steady-state operation are important to extend the life of a hot-cathode fluorescent lamp, and the preheating circuit is responsible for attending to the start-up lamp requirements. The usual solution for the preheating circuit used in self-oscillating electronic ballasts is simple and presents a low cost. However, the performance to extend the lamp lifetime is not the most effective. This paper presents an effective preheating circuit for self-oscillating electronic ballasts as an alternative to the usual solution.", "title": "" }, { "docid": "53f28f66d99f5e706218447e226cf7cc", "text": "The Connectionist Inductive Learning and Logic Programming System, C-IL2P, integrates the symbolic and connectionist paradigms of Artificial Intelligence through neural networks that perform massively parallel Logic Programming and inductive learning from examples and background knowledge. This work presents an extension of C-IL2P that allows the implementation of Extended Logic Programs in Neural Networks. This extension makes C-IL2P applicable to problems where the background knowledge is represented in a Default Logic. As a case example, we have applied the system for fault diagnosis of a simplified power system generation plant, obtaining good preliminary results.", "title": "" }, { "docid": "268e8d3b755d7579c2cbdee466622270", "text": "This research is an attempt to illustrate the variables that are mentioned in the literature to deal with the unexpected future risks that are increasingly threatening the success of the large program. 
The research is a qualitative conceptualization using secondary data collection from the literature review and by criticizing it reaching a structural validation of the system dynamic simple model of how to increase the level of the stock of the unknown unknowns or the complexity chaotic knowledge for better risk management and creativity in achieving a competitive edge. The unknow-unknowns are still representing a black box and are under the control of the god act. This is a try only to concurrent and foreword adaptation with the unknown future. The manager can use this model to conceptualize the internal and external variables that can be linked to the business being objectives. By using this model the manager can minimized the side effects of the productivity and efficiency", "title": "" }, { "docid": "36b46a2bf4b46850f560c9586e91d27b", "text": "Promoting pro-environmental behaviour amongst urban dwellers is one of today's greatest sustainability challenges. The aim of this study is to test whether an information intervention, designed based on theories from environmental psychology and behavioural economics, can be effective in promoting recycling of food waste in an urban area. To this end we developed and evaluated an information leaflet, mainly guided by insights from nudging and community-based social marketing. The effect of the intervention was estimated through a natural field experiment in Hökarängen, a suburb of Stockholm city, Sweden, and was evaluated using a difference-in-difference analysis. The results indicate a statistically significant increase in food waste recycled compared to a control group in the research area. The data analysed was on the weight of food waste collected from sorting stations in the research area, and the collection period stretched for almost 2 years, allowing us to study the short- and long term effects of the intervention. Although the immediate positive effect of the leaflet seems to have attenuated over time, results show that there was a significant difference between the control and the treatment group, even 8 months after the leaflet was distributed. Insights from this study can be used to guide development of similar pro-environmental behaviour interventions for other urban areas in Sweden and abroad, improving chances of reaching environmental policy goals.", "title": "" }, { "docid": "05a9e70a73cac0a30b2c952c861b4e2d", "text": "We introduce the notion of query substitution, that is, generating a new query to replace a user's original search query. Our technique uses modifications based on typical substitutions web searchers make to their queries. In this way the new query is strongly related to the original query, containing terms closely related to all of the original terms. This contrasts with query expansion through pseudo-relevance feedback, which is costly and can lead to query drift. This also contrasts with query relaxation through boolean or TFIDF retrieval, which reduces the specificity of the query. We define a scale for evaluating query substitution, and show that our method performs well at generating new queries related to the original queries. We build a model for selecting between candidates, by using a number of features relating the query-candidate pair, and by fitting the model to human judgments of relevance of query suggestions. This further improves the quality of the candidates generated. 
Experiments show that our techniques significantly increase coverage and effectiveness in the setting of sponsored search.", "title": "" }, { "docid": "c0b40058d003cdaa80d54aa190e48bc2", "text": "Visual tracking plays an important role in many computer vision tasks. A common assumption in previous methods is that the video frames are blur free. In reality, motion blurs are pervasive in the real videos. In this paper we present a novel BLUr-driven Tracker (BLUT) framework for tracking motion-blurred targets. BLUT actively uses the information from blurs without performing debluring. Specifically, we integrate the tracking problem with the motion-from-blur problem under a unified sparse approximation framework. We further use the motion information inferred by blurs to guide the sampling process in the particle filter based tracking. To evaluate our method, we have collected a large number of video sequences with significatcant motion blurs and compared BLUT with state-of-the-art trackers. Experimental results show that, while many previous methods are sensitive to motion blurs, BLUT can robustly and reliably track severely blurred targets.", "title": "" }, { "docid": "ac86e950866646a0b86d76bb3c087d0a", "text": "In this paper, an SVM-based approach is proposed for stock market trend prediction. The proposed approach consists of two parts: feature selection and prediction model. In the feature selection part, a correlation-based SVM filter is applied to rank and select a good subset of financial indexes. And the stock indicators are evaluated based on the ranking. In the prediction model part, a so called quasi-linear SVM is applied to predict stock market movement direction in term of historical data series by using the selected subset of financial indexes as the weighted inputs. The quasi-linear SVM is an SVM with a composite quasi-linear kernel function, which approximates a nonlinear separating boundary by multi-local linear classifiers with interpolation. Experimental results on Taiwan stock market datasets demonstrate that the proposed SVM-based stock market trend prediction method produces better generalization performance over the conventional methods in terms of the hit ratio. Moreover, the experimental results also show that the proposed SVM-based stock market trend prediction system can find out a good subset and evaluate stock indicators which provide useful information for investors.", "title": "" }, { "docid": "60f94e4336d8e406097dd880f8054089", "text": "In order to improve the retrieval accuracy of content-based image retrieval systems, research focus has been shifted from designing sophisticated low-level feature extraction algorithms to reducing the ‘semantic gap’ between the visual features and the richness of human semantics. This paper attempts to provide a comprehensive survey of the recent technical achievements in high-level semantic-based image retrieval. Major recent publications are included in this survey covering different aspects of the research in this area, including low-level image feature extraction, similarity measurement, and deriving high-level semantic features. 
We identify five major categories of the state-of-the-art techniques in narrowing down the ‘semantic gap’: (1) using object ontology to define high-level concepts; (2) using machine learning methods to associate low-level features with query concepts; (3) using relevance feedback to learn users’ intention; (4) generating semantic template to support high-level image retrieval; (5) fusing the evidences from HTML text and the visual content of images for WWW image retrieval. In addition, some other related issues such as image test bed and retrieval performance evaluation are also discussed. Finally, based on existing technology and the demand from real-world applications, a few promising future research directions are suggested. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9d6f128fbb7c76ff1734b22bfc796811", "text": "In this paper, inkjet-printed UHF and microwave circuits fabricated on paper substrates are investigated for the first time as an approach that aims for a system-level solution for fast and ultra-low-cost mass production. First, the RF characteristics of the paper substrate are studied by using the microstrip ring resonator in order to characterize the relative permittivity (epsivr) and loss tangent (tan delta) of the substrate at the UHF band for the first time reported. A UHF RFID tag module is then developed with the inkjet-printing technology, proving this approach could function as an enabling technology for much simpler and faster fabrication on/in paper. Simulation and well-agreed measurement results, which show very good agreement, verify a good performance of the tag module. In addition, the possibility of multilayer RF structures on a paper substrate is explored, and a multilayer patch resonator bandpass filter demonstrates the feasibility of ultra-low-cost 3-D paper-on-paper RF/wireless structures.", "title": "" }, { "docid": "2621f13dd04e94923b96541c743d67c6", "text": "Motivation\nBiclustering algorithms are commonly used for gene expression data analysis. However, accurate identification of meaningful structures is very challenging and state-of-the-art methods are incapable of discovering with high accuracy different patterns of high biological relevance.\n\n\nResults\nIn this paper, a novel biclustering algorithm based on evolutionary computation, a sub-field of artificial intelligence, is introduced. The method called EBIC aims to detect order-preserving patterns in complex data. EBIC is capable of discovering multiple complex patterns with unprecedented accuracy in real gene expression datasets. It is also one of the very few biclustering methods designed for parallel environments with multiple graphics processing units. We demonstrate that EBIC greatly outperforms state-of-the-art biclustering methods, in terms of recovery and relevance, on both synthetic and genetic datasets. EBIC also yields results over 12 times faster than the most accurate reference algorithms.\n\n\nAvailability and implementation\nEBIC source code is available on GitHub at https://github.com/EpistasisLab/ebic.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "f289b58d16bf0b3a017a9b1c173cbeb6", "text": "All hospitalisations for pulmonary arterial hypertension (PAH) in the Scottish population were examined to determine the epidemiological features of PAH. These data were compared with expert data from the Scottish Pulmonary Vascular Unit (SPVU). 
Using the linked Scottish Morbidity Record scheme, data from all adults aged 16-65 yrs admitted with PAH (idiopathic PAH, pulmonary hypertension associated with congenital heart abnormalities and pulmonary hypertension associated with connective tissue disorders) during the period 1986-2001 were identified. These data were compared with the most recent data in the SPVU database (2005). Overall, 374 Scottish males and females aged 16-65 yrs were hospitalised with incident PAH during 1986-2001. The annual incidence of PAH was 7.1 cases per million population. On December 31, 2002, there were 165 surviving cases, giving a prevalence of PAH of 52 cases per million population. Data from the SPVU were available for 1997-2006. In 2005, the last year with a complete data set, the incidence of PAH was 7.6 cases per million population and the corresponding prevalence was 26 cases per million population. Hospitalisation data from the Scottish Morbidity Record scheme gave higher prevalences of pulmonary arterial hypertension than data from the expert centres (Scotland and France). The hospitalisation data may overestimate the true frequency of pulmonary arterial hypertension in the population, but it is also possible that the expert centres underestimate the true frequency.", "title": "" }, { "docid": "1ad65bf27c4c4037d85a97c0cead8c41", "text": "This study explores the issue of effectiveness within virtual teams — groups of people who work together although they are often dispersed across space, time, and/or organizational boundaries. Due to the recent trend towards corporate restructuring, which can, in part, be attributed to an increase in corporate layoffs, mergers and acquisitions, competition, and globalization, virtual teams have become critical for companies to survive. Globalization of the marketplace alone, for that matter, makes such distributed work groups the primary operating units needed to achieve a competitive advantage in this ever-changing business environment. In an effort to determine the factors that contribute to/inhibit the success of a virtual team, a survey was distributed to a total of eight companies in the high technology, agriculture, and professional services industries. Data was then collected from 67 individuals who comprised a total of 12 virtual teams from these companies. Results indicated that several factors were positively correlated to the effectiveness of the participating teams. The teams’ processes and team members’ relations presented the strongest relationships to team performance and team member satisfaction, while the selection procedures and executive leadership styles also exhibited moderate associations to these measures of effectiveness. Analysis of predictor variables such as the design process, other internal group dynamics, and additional external support mechanisms, however, depicted weaker relations. Although the connections between the teams’ tools and technologies and communication patterns and the teams’ effectiveness measures did not prove significant, content analysis of the participants’ narrative responses to questions regarding the greatest challenges to virtual teams suggested otherwise. Beyond the traditional strategies used to enhance a team’s effectiveness, further efforts directed towards the specific technology and communication-related issues that concern dispersed team members are needed to supplement the set of best practices identified in the current study. # 2001 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "7ef2f4a771aa0d1724127c97aa21e1ea", "text": "This paper demonstrates the efficient use of Internet of Things for the traditional agriculture. It shows the use of Arduino and ESP8266 based monitored and controlled smart irrigation systems, which is also cost-effective and simple. It is beneficial for farmers to irrigate there land conveniently by the application of automatic irrigation system. This smart irrigation system has pH sensor, water flow sensor, temperature sensor and soil moisture sensor that measure respectively and based on these sensors arduino microcontroller drives the servo motor and pump. Arduino received the information and transmitted with ESP8266 Wi-Fi module wirelessly to the website through internet. This transmitted information is monitor and control by using IOT. This enables the remote control mechanism through a secure internet web connection to the user. A website has been prepared which present the actual time values and reference values of various factors needed by crops. Users can control water pumps and sprinklers through the website and keep an eye on the reference values which will help the farmer increase production with quality crops.", "title": "" }, { "docid": "4077a2baa30054132170bcf07a3263b1", "text": "Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining the low-level image appearance features, implicit shape information, and high-level context together for further improving the segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved the first place in the challenge out of 37 competitors including several state-of-the-art brain segmentation methods. Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical.", "title": "" }, { "docid": "6cb2e41787378eca0dbbc892f46274e5", "text": "Both reviews and user-item interactions (i.e., rating scores) have been widely adopted for user rating prediction. 
However, these existing techniques mainly extract the latent representations for users and items in an independent and static manner. That is, a single static feature vector is derived to encode user preference without considering the particular characteristics of each candidate item. We argue that this static encoding scheme is incapable of fully capturing users’ preferences, because users usually exhibit different preferences when interacting with different items. In this article, we propose a novel context-aware user-item representation learning model for rating prediction, named CARL. CARL derives a joint representation for a given user-item pair based on their individual latent features and latent feature interactions. Then, CARL adopts Factorization Machines to further model higher order feature interactions on the basis of the user-item pair for rating prediction. Specifically, two separate learning components are devised in CARL to exploit review data and interaction data, respectively: review-based feature learning and interaction-based feature learning. In the review-based learning component, with convolution operations and attention mechanism, the pair-based relevant features for the given user-item pair are extracted by jointly considering their corresponding reviews. However, these features are only reivew-driven and may not be comprehensive. Hence, an interaction-based learning component further extracts complementary features from interaction data alone, also on the basis of user-item pairs. The final rating score is then derived with a dynamic linear fusion mechanism. Experiments on seven real-world datasets show that CARL achieves significantly better rating prediction accuracy than existing state-of-the-art alternatives. Also, with the attention mechanism, we show that the pair-based relevant information (i.e., context-aware information) in reviews can be highlighted to interpret the rating prediction for different user-item pairs.", "title": "" }, { "docid": "7b23f205197cedb6ce40fdc8c41c9fb8", "text": "This paper presents the correlating synthetic aperture radar (CoSAR) technique, a novel radar imaging concept to observe statistical properties of fast decorrelating surfaces. A CoSAR system consists of two radars with a relative motion in the along-track (cross-range) dimension. The spatial autocorrelation function of the scattered signal can be estimated by combining quasi-simultaneously received radar echoes. By virtue of the Van Cittert-Zernike theorem, estimates of this autocorrelation function for different relative positions can be processed by generating images of several properties of the scene, including the normalized radar cross section, Doppler velocities, and surface topography. Aside from the geometric performance, a central aspect of this paper is a theoretical derivation of the radiometric performance of CoSAR. The radiometric quality is proportional to the number of independent samples available for the estimation of the spatial correlation, and to the ratio between the CoSAR azimuth resolution and the real-aperture resolution. A CoSAR mission concept is provided where two geosynchronous radar satellites fly at opposing sides of a quasi-circular trajectory. 
Such a mission could provide bidaily images of the ocean backscatter, mean Doppler, and surface topography at resolutions on the order of 500 m over wide areas.", "title": "" }, { "docid": "9fd247bb0f45d09e11c05fca48372ee8", "text": "Based on the CSMC 0.6um 40V BCD process and the bandgap principle a reference circuit used in high voltage chip is designed. The simulation results show that a temperature coefficient of 26.5ppm/°C in the range of 3.5∼40V supply, the output voltage is insensitive to the power supply, when the supply voltage rages from 3.5∼40V, the output voltage is equal to 1.2558V to 1.2573V at room temperature. The circuit we designed has high precision and stability, thus it can be used as stability reference voltage in power management IC.", "title": "" } ]
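Among the passages above, the animal-health one (docid a8ff130d...) trains a GRU autoencoder on "healthy" environmental sensor data and raises alerts when the reconstruction error of new data crosses a threshold. Below is a minimal PyTorch sketch of that idea; the network sizes, toy data and the 3-sigma threshold are assumptions for illustration — the cited work tunes its threshold with particle swarm optimisation rather than this rule of thumb.

```python
import torch
import torch.nn as nn

class GRUAutoencoder(nn.Module):
    """Sequence-to-sequence GRU autoencoder; anomalies give large reconstruction error."""
    def __init__(self, n_features, hidden_size=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        _, h = self.encoder(x)                 # h: (1, batch, hidden)
        seq_len = x.size(1)
        # Repeat the summary vector at every step and decode it back into a sequence.
        repeated = h.transpose(0, 1).repeat(1, seq_len, 1)
        decoded, _ = self.decoder(repeated)
        return self.output(decoded)

def train(model, healthy_sequences, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(healthy_sequences), healthy_sequences)
        loss.backward()
        opt.step()
    return model

def reconstruction_errors(model, sequences):
    with torch.no_grad():
        return ((model(sequences) - sequences) ** 2).mean(dim=(1, 2))

# Toy data: 64 "healthy" daily sequences of 24 readings x 4 environmental channels.
healthy = torch.randn(64, 24, 4) * 0.1
model = train(GRUAutoencoder(n_features=4), healthy)

errors = reconstruction_errors(model, healthy)
threshold = errors.mean() + 3 * errors.std()   # the paper optimises this with PSO instead
suspect = torch.randn(1, 24, 4)                # an unseen sequence to score
print(bool(reconstruction_errors(model, suspect) > threshold))
```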
scidocsrr
d2530e95b9287d6be86c85067a0b0de2
Development and evaluation of interactive humanoid robots
[ { "docid": "aa4de4dce2a7d7b0630e91ab4cf6f692", "text": "This paper presents part of an on-going project to integrate perception, attention, drives, emotions, behavior arbitration, and expressive acts for a robot designed to interact socially with humans. We present the design of a visual attention system based on a model of human visual search behavior from Wolfe (1994). The attention system integrates perceptions (motion detection, color saliency, and face popouts) with habituation effects and influences from the robot’s motivational and behavioral state to create a context-dependent attention activation map. This activation map is used to direct eye movements and to satiate the drives of the motivational system.", "title": "" } ]
[ { "docid": "3e5fd66795e92999aacf6e39cc668aed", "text": "A couple of popular methods are presented with their benefits and drawbacks. Commonly used methods are using wrapped phase and impulse response. With real time FFT analysis, magnitude and time domain can be analyzed simultaneously. Filtered impulse response and Cepstrum analysis are helpful tools when the spectral content differs and make it hard to analyse the impulse response. To make a successful time alignment the measurements must be anechoic. Methods such as multiple time windowing and averaging in frequency domain are presented. Group-delay and wavelets analysis are used to evaluate the measurements.", "title": "" }, { "docid": "5cb9ed0ffb8045f2bd297700991f8a33", "text": "In this paper, we propose spatial propagation networks for learning the affinity matrix for vision tasks. We show that by constructing a row/column linear propagation model, the spatially varying transformation matrix exactly constitutes an affinity matrix that models dense, global pairwise relationships of an image. Specifically, we develop a three-way connection for the linear propagation model, which (a) formulates a sparse transformation matrix, where all elements can be outputs from a deep CNN, but (b) results in a dense affinity matrix that effectively models any task-specific pairwise similarity matrix. Instead of designing the similarity kernels according to image features of two points, we can directly output all the similarities in a purely data-driven manner. The spatial propagation network is a generic framework that can be applied to many affinity-related tasks, such as image matting, segmentation and colorization, to name a few. Essentially, the model can learn semantically-aware affinity values for high-level vision tasks due to the powerful learning capability of deep CNNs. We validate the framework on the task of refinement of image segmentation boundaries. Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic segmentation tasks show that the spatial propagation network provides a general, effective and efficient solution for generating high-quality segmentation results.", "title": "" }, { "docid": "b3ac28a94719a21abf6ebb719c2933cd", "text": "0957-4174/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.eswa.2010.09.110 ⇑ Corresponding author. Tel.: +86 (0)21 2023 1668; E-mail address: liulong@tongji.edu.cn (L. Liu). Failure mode and effects analysis (FMEA) is a methodology to evaluate a system, design, process or service for possible ways in which failures (problems, errors, etc.) can occur. The two most important issues of FMEA are the acquirement of FMEA team members’ diversity opinions and the determination of risk priorities of the failure modes that have been identified. First, the FMEA team often demonstrates different opinions and knowledge from one team member to another and produces different types of assessment information because of its cross-functional and multidisciplinary nature. These different types of information are very hard to incorporate into the FMEA by the traditional model and fuzzy logic approach. Second, the traditional FMEA determines the risk priorities of failure modes using the risk priority numbers (RPNs) by multiplying the scores of the risk factors like the occurrence (O), severity (S) and detection (D) of each failure mode. The method has been criticized to have several shortcomings. 
In this paper, we present an FMEA using the fuzzy evidential reasoning (FER) approach and grey theory to solve the two problems and improve the effectiveness of the traditional FMEA. As is illustrated by the numerical example, the proposed FMEA can well capture FMEA team members’ diversity opinions and prioritize failure modes under different types of uncertainties. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "48415fdf5bf425969da57f95134e4412", "text": "Distributed graph analytics frameworks must offer low and balanced communication and computation, low preprocessing overhead, low memory footprint, and scalability. We present LFGraph, a fast, scalable, distributed, in-memory graph analytics engine intended primarily for directed graphs. LFGraph is the first system to satisfy all of the above requirements. It does so by relying on cheap hash-based graph partitioning, while making iterations faster by using publish-subscribe information flow along directed edges, fetch-once communication, single-pass computation, and in-neighbor storage. Our analytical and experimental results show that when applied to real-life graphs, LFGraph is faster than the best graph analytics frameworks by factors of 1x--5x when ignoring partitioning time and by 1x--560x when including partitioning time.", "title": "" }, { "docid": "447d46cb861541c0b6e542018a05b9d0", "text": "Acupuncture is currently gaining popularity as an important modality of alternative and complementary medicine in the western world. Modern neuroimaging techniques such as functional magnetic resonance imaging, positron emission tomography, and magnetoencephalography open a window into the neurobiological foundations of acupuncture. In this review, we have summarized evidence derived from neuroimaging studies and tried to elucidate both neurophysiological correlates and key experimental factors involving acupuncture. Converging evidence focusing on acute effects of acupuncture has revealed significant modulatory activities at widespread cerebrocerebellar brain regions. Given the delayed effect of acupuncture, block-designed analysis may produce bias, and acupuncture shared a common feature that identified voxels that coded the temporal dimension for which multiple levels of their dynamic activities in concert cause the processing of acupuncture. Expectation in acupuncture treatment has a physiological effect on the brain network, which may be heterogeneous from acupuncture mechanism. \"Deqi\" response, bearing clinical relevance and association with distinct nerve fibers, has the specific neurophysiology foundation reflected by neural responses to acupuncture stimuli. The type of sham treatment chosen is dependent on the research question asked and the type of acupuncture treatment to be tested. Due to the complexities of the therapeutic mechanisms of acupuncture, using multiple controls is an optimal choice.", "title": "" }, { "docid": "7918167cbceddcc24b4d22f094b167dd", "text": "This paper is presented the study of the social influence by using social features in fitness mobile applications and habit that persuades the working-aged people, in the context of continuous fitness mobile application usage to promote the physical activity. Our conceptual model consisted of Habit and Social Influence. The social features based on the Persuasive Technology (1) Normative Influence, (2) Social Comparison, (3) Competition, (4) Co-operation, and (5) Social Recognition were embedded in the Social Influence construct of UTAUT2 model. 
The questionnaires were an instrument for this study. The target group was 443 working-aged people who live in Thailand's central region. The results reveal that the factors significantly affecting Behavioral Intention toward Use Behavior are Normative Influence, Social Comparison, Competition, and Co-operation. Only the Social Recognition is insignificantly affecting Behavioral Intention to use fitness mobile applications. The Behavioral Intention and Habit also significantly support the Use Behavior. The social features in fitness mobile application should be developed to promote the physical activity.", "title": "" }, { "docid": "cb815a01960490760e2ac581e26f4486", "text": "To solve the weakly-singular Volterra integro-differential equations, the combined method of the Laplace Transform Method and the Adomian Decomposition Method is used. As a result, series solutions of the equations are constructed. In order to explore the rapid decay of the equations, the pade approximation is used. The results present validity and great potential of the method as a powerful algorithm in order to present series solutions for singular kind of differential equations.", "title": "" }, { "docid": "ba4637dd5033fa39d1cb09edb42481ec", "text": "In this paper we introduce a framework for best first search of minimax trees. Existing best first algorithms like SSS* and DUAL* are formulated as instances of this framework. The framework is built around the Alpha-Beta procedure. Its instances are highly practical, and readily implementable. Our reformulations of SSS* and DUAL* solve the perceived drawbacks of these algorithms. We prove their suitability for practical use by presenting test results with a tournament level chess program. In addition to reformulating old best first algorithms, we introduce an improved instance of the framework: MTD(ƒ). This new algorithm outperforms NegaScout, the current algorithm of choice of most chess programs. Again, these are not simulation results, but results of tests with an actual chess program, Phoenix.", "title": "" }, { "docid": "1d7d3a52e059a256434556c405c0e1fa", "text": "Page segmentation is still a challenging problem due to the large variety of document layouts. Methods examining both foreground and background regions are among the most effective to solve this problem. However, their performance is influenced by the implementation of two key steps: the extraction and selection of background regions, and the grouping of background regions into separators. This paper proposes an efficient hybrid method for page segmentation. The method extracts white space rectangles based on connected component analysis, and filters white space rectangles progressively incorporating foreground and background information such that the remaining rectangles are likely to form column separators. Experimental results on the ICDAR2009 page segmentation competition test set demonstrate the effectiveness and superiority of the proposed method.", "title": "" }, { "docid": "740d130948c25d5cd2027645bab151a9", "text": "Ahstract-The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation. This paper presents the design of our custom-built, cost-effective, Cartesian robot system Cartman, which won first place in the competition finals by stowing 14 (out of 16) and picking all 9 items in 27 minutes, scoring a total of 272 points. 
We highlight our experience-centred design methodology and key aspects of our system that contributed to our competitiveness. We believe these aspects are crucial to building robust and effective robotic systems.", "title": "" }, { "docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7", "text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.", "title": "" }, { "docid": "21408f0466b0e2885fd85689ee2087f3", "text": "The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors-the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.", "title": "" }, { "docid": "b123916f2795ab6810a773ac69bdf00b", "text": "The acceptance of open data practices by individuals and organizations lead to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various raisons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. 
We propose a possibilistic approach that treats the uncertainty in all these activities.", "title": "" }, { "docid": "2c442933c4729e56e5f4f46b5b8071d6", "text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forwarding data to the sink. Relay nodes' situation is determined such that the relay nodes' energy consumption merges the uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.", "title": "" }, { "docid": "66c57a94a5531b36199bd52521a56ccb", "text": "This project describes design and experimental analysis of composite leaf spring made of glass fiber reinforced polymer. The objective is to compare the load carrying capacity, stiffness and weight savings of composite leaf spring with that of steel leaf spring. The design constraints are stresses and deflections. The dimensions of an existing conventional steel leaf spring of a light commercial vehicle are taken. Same dimensions of conventional leaf spring are used to fabricate a composite multi leaf spring using E-Glass/Epoxy unidirectional laminates. Static analysis of 2-D model of conventional leaf spring is also performed using ANSYS 10 and compared with experimental results. Finite element analysis with full load on 3-D model of composite multi leaf spring is done using ANSYS 10 and the analytical results are compared with experimental results. Compared to steel spring, the composite leaf spring is found to have 67.35% lesser stress, 64.95% higher stiffness and 126.98% higher natural frequency than that of existing steel leaf spring. A weight reduction of 76.4% is achieved by using optimized composite leaf spring.", "title": "" }, { "docid": "48a8709ba0f40d6b174a1fdfc0663865", "text": "The resistive switching memory (RRAM) offers fast switching, low-voltage operation, and scalable device area. However, reliability and variability issues must be understood, particularly in the low-current operation regime. This letter addresses set-state variability and presents a new set failure phenomenon in RRAM, leading to a high-resistance tail in the set-state distribution. The set failure is due to complementary switching of the RRAM, causing an increase of resistance soon after the set transition. The dependence of set failure on the programing current is explained by the increasing voltage stress across the RRAM device causing filament disconnection.", "title": "" }, { "docid": "d5758c68110a604c7af4a68faba32d1d", "text": "Two experiments explore the validity of conceptualizing musical beats as auditory structural features and the potential for increases in tempo to lead to greater sympathetic arousal, measured using skin conductance. In the first experiment, fastand slow-paced rock and classical music excerpts were compared to silence. 
As expected, skin conductance response (SCR) frequency was greater during music processing than during silence. Skin conductance level (SCL) data showed that fast-paced music elicits greater activation than slow-paced music. Genre significantly interacted with tempo in SCR frequency, with faster tempo increasing activation for classical music while decreasing it for rock music. A second experiment was conducted to explore the possibility that the presumed familiarity of the genre led to this interaction. Although further evidence was found for conceptualizing musical beat onsets as auditory structure, the familiarity explanation was not supported. Music Effects on Arousal 2 Effects of Music Genre and Tempo on Physiological Arousal Music communicates many different types of messages through the combination of sound and lyric (Sellnow & Sellnow, 2001). For example, music can be used to exchange political information (e.g., Frith, 1981; Stewart, Smith, & Denton, 1989). Music can also establish and portray a selfor group-image (Arnett, 1991, 1992; Dehyle, 1998; Kendall & Carterette, 1990; Dillman Carpentier, Knobloch & Zillmann, 2003; Manuel, 1991; McLeod, 1999; see also Hansen & Hansen, 2000). Pertinent to this investigation, music can communicate emotional information (e.g., Juslin & Sloboda, 2001). In short, music is a form of “interhuman communication in which humanly organized, non-verbal sound is perceived as vehiculating primarily affective (emotional) and/or gestural (corporeal) patterns of cognition” (Tagg, 2002, p. 5). This idea of music as communication reaches the likes of audio production students, who are taught the concept of musical underscoring, or adding music to “enhance information or emotional content” in a wide variety of ways from establishing a specific locale to intensifying action (Alten, 2005, p. 360). In this realm, music becomes a key instrument in augmenting or punctuating a given message. Given the importance of arousal and/or activation in most theories of persuasion and information processing, an understanding of how music can be harnessed to instill arousal is arguably of benefit to media producers looking to utilize every possible tool when creating messages, whether the messages are commercial appeals, promotional announcements or disease-prevention messages. It is with the motivation of harnessing the psychological response to music for practical application that two experiments were conducted to test whether message creators can rely on musical tempo as a way to increase sympathetic nervous system Music Effects on Arousal 3 activation in a manner similar to other structural features of media (i.e., cuts, edits, sound effects, voice changes). Before explaining the original work, a brief description of the current state of the literature on music and emotion is offered. Different Approaches in Music Psychology Although there is little doubt that music ‘vehiculates’ emotion, several debates exist within the music psychology literature about exactly how that process is best conceptualized and empirically approached (e.g., Bever, 1988; Gaver & Mandler, 1987; Juslin & Sloboda, 2001; Lundin, 1985; Sloboda, 1991). The primary conceptual issue revolves around two different schools of thought (Scherer & Zentner, 2001). The first, the cognitivist approach, describes emotional response to music as resulting from the listener’s cognitive recognition of cues within the composition itself. 
Emotivists, on the other hand, eliminate the cognitive calculus required by cue recognition in the score, instead describing emotional response to music as a feeling of emotion. Although both approaches acknowledge a cultural or social influence in how the music is interpreted (e.g., Krumhansl, 1997; Peretz, 2001), the conceptual chasm between emotion as being either expressed or elicited by a piece of music is wide indeed. A second issue in the area of music psychology concerns a difference in the empirical approach present among emotion scholars writ large. Some focus their explorations on specific, discrete affective states (i.e., joy, fear, disgust, etc.), often labeled as the experience of basic emotions (Ortony et al., 1988; Thayer, 1989; Zajonc, 1980). Communication scholars such as Nabi (1999, 2003) and Newhagen (1998) have also found it fruitful to explore unique affective states resulting from mediated messages, driven by the understanding that “each emotion expresses a different relational meaning Music Effects on Arousal 4 that motivates the use of mental and/or physical resources in ways consistent with the emotion’s action tendency” (Nabi, 2003, p. 226; also see Wirth & Schramm, 2005 for review). This approach is also well represented by studies exploring human reactions to music (see Juslin & Laukka, 2003 for review). Other emotion scholars design studies where the focus is placed not on the discrete identifier assigned to a certain feeling-state by a listener, but rather the extent to which different feeling-states share common factors or dimensions. The two most commonly studied dimensions are valence—a term given to the relative positive/negative hedonic value, and arousal—the intensity or level to which that hedonic value is experienced. The centrality of these two dimensions in the published literature is due to the consistency with which they account for the largest amount of predictive variance across a wide variety of dependent variables (Osgood, Suci & Tannenbuam, 1957; Bradley, 1994; Reisenzein, 1994). This dimensional approach to emotional experience is well-represented by articles in the communication literature exploring the combined impact of valence and arousal on memory (Lang, Bolls, Potter & Kawahara, 1999; Sundar & Kalyanaraman, 2004), liking (Yoon, Bolls, & Lang, 1998), and persuasive appeal (Yoon et al., 1998; Potter, LaTour, Braun-LaTour & Reichert, 2006). When surveying the music psychology literature for studies utilizing the dimensional emotions approach, however, results show that the impact of music on hedonic valence are difficult to consistently predict—arguably due to contextual, experiential or mood-state influences of the listener combined with interpretational differences of the song composers and performers (Bigand, Filipic, & Lalitte, 2005; Cantor & Zillmann, 1973; Gabrielsson & Lindström, 2001; Kendall & Carterette, 1990; Leman, 2003; Lundin, 1985). Music Effects on Arousal 5 On the other hand, the measured effects of music on the arousal dimension, while not uniform, are more consistent across studies (see Scherer & Zentner, 2001). For example, numerous experiments have noted the relaxation potential of music—either using compositions pre-tested as relaxing or self-identified by research participants as such. In Bartlett’s (1996) review of music studies using physiological measures, a majority of studies measuring muscle tension found relaxing music to reduce it. 
Interestingly, slightly more than half of the studies that measured skin temperature found relaxing music to increase it. Pelletier (2004) went beyond reviewing studies individually, conducting a statistical meta-analysis of 22 experiments. Conclusions showed that music alone, as well as used in tandem with relaxation techniques, significantly decreased perceived arousal and physiological activation. However, the amount of decrease significantly varied by age, stressor, musical preference, and previous music experience of the participant. These caveats provide possible explanations for the few inconsistent findings across individual studies that show either little or no effects of relaxing music (e.g., Davis-Rollans & Cunningham, 1987; Robb, Nichols, Rutan, & Bishop, et al., 1995; Strauser, 1997; see Standley, 1991 for review) or that show listening to relaxing music yields higher perceived arousal compared to the absence of music (Davis & Thaut, 1989). Burns, Labbé, Williams, and McCall (1999) relied on both self-report and physiological responses to the musical selections to explore music’s ability to generate states of relaxation. The researchers used a predetermined classical excerpt, a predetermined rock excerpt, an excerpt from a “relaxing” selection chosen by each participant, and a condition of sitting in silence. Burns et al. (1999) found that, within Music Effects on Arousal 6 groups, both finger temperature and skin conductance decreased over time. Across emotional conditions, self-reported relaxation was lowest for rock listeners and highest for participants in the self-selection and silence conditions. However, no significant between-group physiological differences were found. Rickard (2004) also combined self-reports of emotional impact, enjoyment, and familiarity with psychophysiological measures in evaluating arousal effects of music. Psychophysiological measures included skin conductance responses, chills, skin temperature, and muscle tension. Stimuli included relaxing music, music predetermined to be arousing but not emotionally powerful, self-selected emotionally-powerful music, and an emotionally-powerful film scene. Rickard found that music participants had selfidentified as emotionally powerful led to the greatest increases in skin conductance and chills, in addition to higher ratings on the self-reported measures. No correlation was found between these effects and participant gender or musical training. Krumhansl (1997) explored how music affects the peripheral nervous system in eliciting emotions in college-aged music students. Classical music selections approximately 180-seconds long were chosen which expressed sadness, happiness or fear. While listening, ha", "title": "" }, { "docid": "18480c92c48df7318d0c7317bc63ff40", "text": "For digital rights management (drm) software implementations incorporating cryptography, white-box cryptography (cryptographic implementation designed to withstand the white-box attack context) is more appropriate than traditional black-box cryptography. In the whitebox context, the attacker has total visibility into software implementation and execution. Our objective is to prevent extraction of secret keys from the program. We present methods to make such key extraction difficult, with focus on symmetric block ciphers implemented by substitution boxes and linear transformations. 
A des implementation (useful also for triple-des) is presented as a concrete example.", "title": "" }, { "docid": "0cd9577750b6195c584e55aac28cc2ba", "text": "The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, incentives are becoming as important to dependability as technical design. The new field provides valuable insights not just into ‘security’ topics such as privacy, bugs, spam, and phishing, but into more general areas such as system dependability (the design of peer-to-peer systems and the optimal balance of effort by programmers and testers), and policy (particularly digital rights management). This research program has been starting to spill over into more general security questions (such as law-enforcement strategy), and into the interface between security and sociology. Most recently it has started to interact with psychology, both through the psychology-and-economics tradition and in response to phishing. The promise of this research program is a novel framework for analyzing information security problems – one that is both principled and effective.", "title": "" }, { "docid": "83863c6bb0da320b63eede2b5e783e83", "text": "BACKGROUND\nUnsafe behavior is closely related to occupational accidents. Work pressure is one the main factors affecting employees' behavior. The aim of the present study was to provide a path analysis model for explaining how work pressure affects safety behavior.\n\n\nMETHODS\nUsing a self-administered questionnaire, six variables supposed to affect safety employees' behavior were measured. The path analysis model was constructed based on several hypotheses. The goodness of fit of the model was assessed using both absolute and comparative fit indices.\n\n\nRESULTS\nWork pressure was determined not to influence safety behavior directly. However, it negatively influenced other variables. Group attitude and personal attitude toward safety were the main factors mediating the effect of work pressure on safety behavior. Among the variables investigated in the present study, group attitude, personal attitude and work pressure had the strongest effects on safety behavior.\n\n\nCONCLUSION\nManagers should consider that in order to improve employees' safety behavior, work pressure should be reduced to a reasonable level, and concurrently a supportive environment, which ensures a positive group attitude toward safety, should be provided. Replication of the study is recommended.", "title": "" } ]
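One of the passages above (docid ba4637dd...) introduces MTD(f), a best-first reformulation of minimax search built around repeated zero-window Alpha-Beta calls. The sketch below shows the driver loop on a toy game tree; for brevity it omits the memory (transposition table) that makes the re-searches cheap in the paper's formulation, so it should be read as a conceptual illustration only.

```python
import math

# A tiny explicit game tree: internal nodes list their children, leaves carry a
# score from the perspective of the player to move at the root.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"], "b": ["b1", "b2"],
    "a1": 3, "a2": 5, "b1": 2, "b2": 9,
}

def alphabeta(node, alpha, beta, maximizing):
    """Plain fail-soft alpha-beta over the toy tree (no transposition table here)."""
    children = TREE[node]
    if isinstance(children, (int, float)):
        return children
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break
        return value
    value = math.inf
    for child in children:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

def mtdf(root, first_guess=0):
    """Drive alpha-beta with zero-width windows until the two bounds meet."""
    g, lower, upper = first_guess, -math.inf, math.inf
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta, True)
        if g < beta:
            upper = g
        else:
            lower = g
    return g

print(mtdf("root"))   # 3: MAX picks branch "a", MIN then answers with a1 = 3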
scidocsrr
f11406f76ab9ed8865ca11f92629373f
Impact of the front-of-pack 5-colour nutrition label (5-CNL) on the nutritional quality of purchases: an experimental study
[ { "docid": "63f2caff9f598cf493d6c8a044000aa3", "text": "There are both public health and food industry initiatives aimed at increasing breakfast consumption among children, particularly the consumption of ready-to-eat cereals. The purpose of this study was to determine whether there were identifiable differences in nutritional quality between cereals that are primarily marketed to children and cereals that are not marketed to children. Of the 161 cereals identified between January and February 2006, 46% were classified as being marketed to children (eg, packaging contained a licensed character or contained an activity directed at children). Multivariate analyses of variance were used to compare children's cereals and nonchildren's cereals with respect to their nutritional content, focusing on nutrients required to be reported on the Nutrition Facts panel (including energy). Compared to nonchildren's cereals, children's cereals were denser in energy, sugar, and sodium, but were less dense in fiber and protein. The proportion of children's and nonchildren's cereals that did and did not meet national nutritional guidelines for foods served in schools were compared using chi2analysis. The majority of children's cereals (66%) failed to meet national nutrition standards, particularly with respect to sugar content. t tests were used to compare the nutritional quality of children's cereals with nutrient-content claims and health claims to those without such claims. Although the specific claims were generally justified by the nutritional content of the product, there were few differences with respect to the overall nutrition profile. Overall, there were important differences in nutritional quality between children's cereals and nonchildren's cereals. Dietary advice for children to increase consumption of ready-to-eat breakfast cereals should identify and recommend those cereals with the best nutrient profiles.", "title": "" } ]
[ { "docid": "4b250bd1c7bcca08f011f5ebc2808e4c", "text": "As a result of the rapid growth of available services provided via Internet, as well as multiple accounts a person owns, reliable user authentication schemes are mandatory for security purposes. OTP systems have prevailed as the best viable solution for security over sensitive information and pose an interesting field for research. Although, OTP schemes enhance authentication's security through various algorithmic customizations and extensions, certain compromises should be made; especially since excessively tolerable to vulnerability systems tend to have high computational and storage needs. In order to minimize the risk of a non-authenticated user having access to sensitive data, depending on the use, OTP system's architecture differs; as its tolerance towards already known attack methods. In this paper, the most widely accepted and promising OTP schemes are described and evaluated in terms of resistance against security attacks and in terms of computational intensity (performance efficiency). The results showed that there is a correlation between the security level, the computational efficiency and the storage needs of an OTP system.", "title": "" }, { "docid": "645faf32f40732d291e604d7240f0546", "text": "Fault Diagnostics and Prognostics has been an increasing interest in recent years, as a result of the increased degree of automation and the growing demand for higher performance, efficiency, reliability and safety in industrial systems. On-line fault detection and isolation methods have been developed for automated processes. These methods include data mining methodologies, artificial intelligence methodologies or combinations of the two. Data Mining is the statistical approach of extracting knowledge from data. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. Activities in AI include searching, recognizing patterns and making logical inferences. This paper focuses on the various techniques used for Fault Diagnostics and Prognostics in Industry application domains.", "title": "" }, { "docid": "1c7a844f1e9e4b38a52db9c518d1b094", "text": "BACKGROUND\nActive learning (AL) has shown the promising potential to minimize the annotation cost while maximizing the performance in building statistical natural language processing (NLP) models. However, very few studies have investigated AL in a real-life setting in medical domain.\n\n\nMETHODS\nIn this study, we developed the first AL-enabled annotation system for clinical named entity recognition (NER) with a novel AL algorithm. Besides the simulation study to evaluate the novel AL algorithm, we further conducted user studies with two nurses using this system to assess the performance of AL in real world annotation processes for building clinical NER models.\n\n\nRESULTS\nThe simulation results show that the novel AL algorithm outperformed traditional AL algorithm and random sampling. However, the user study tells a different story that AL methods did not always perform better than random sampling for different users.\n\n\nCONCLUSIONS\nWe found that the increased information content of actively selected sentences is strongly offset by the increased time required to annotate them. Moreover, the annotation time was not considered in the querying algorithms. 
Our future work includes developing better AL algorithms with the estimation of annotation time and evaluating the system with larger number of users.", "title": "" }, { "docid": "cd0c68845416f111307ae7e14bfb7491", "text": "Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals' activity space. First, a survey was conducted to collect individuals' daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment.", "title": "" }, { "docid": "6f734301a698a54177265815189a2bb9", "text": "Online image sharing in social media sites such as Facebook, Flickr, and Instagram can lead to unwanted disclosure and privacy violations, when privacy settings are used inappropriately. With the exponential increase in the number of images that are shared online every day, the development of effective and efficient prediction methods for image privacy settings are highly needed. The performance of models critically depends on the choice of the feature representation. In this paper, we present an approach to image privacy prediction that uses deep features and deep image tags as feature representations. Specifically, we explore deep features at various neural network layers and use the top layer (probability) as an auto-annotation mechanism. The results of our experiments show that models trained on the proposed deep features and deep image tags substantially outperform baselines such as those based on SIFT and GIST as well as those that use “bag of tags” as features.", "title": "" }, { "docid": "682803607ab7f72f27f5f145e1dabb0c", "text": "Theories of how initially satisfied marriages deteriorate or remain stable over time have been limited by a failure to distinguish between key facets of change. The present study defines the trajectory of marital satisfaction in terms of 2 separate parameters--(a) the initial level of satisfaction and (b) the rate of change in satisfaction over time--and seeks to estimate unique effects on each of these parameters with variables derived from intrapersonal and interpersonal models of marriage. 
Sixty newlywed couples completed measures of neuroticism, were observed during a marital interaction and provided reports of marital satisfaction every 6 months for 4 years. Neuroticism was associated with initial levels of marital satisfaction but had no additional effects on rates of change. Behavior during marital interaction predicted rates of change in marital satisfaction but was not associated with initial levels.", "title": "" }, { "docid": "d922dbcdd2fb86e7582a4fb78990990e", "text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.", "title": "" }, { "docid": "ea42c551841cc53c84c63f72ee9be0ae", "text": "Phishing is a prevalent issue of today’s Internet. Previous approaches to counter phishing do not draw on a crucial factor to combat the threat the users themselves. We believe user education about the dangers of the Internet is a further key strategy to combat phishing. For this reason, we developed an Android app, a game called –NoPhish–, which educates the user in the detection of phishing URLs. It is crucial to evaluate NoPhish with respect to its effectiveness and the users’ knowledge retention. Therefore, we conducted a lab study as well as a retention study (five months later). The outcomes of the studies show that NoPhish helps users make better decisions with regard to the legitimacy of URLs immediately after playing NoPhish as well as after some time has passed. The focus of this paper is on the description and the evaluation of both studies. This includes findings regarding those types of URLs that are most difficult to decide on as well as ideas to further improve NoPhish.", "title": "" }, { "docid": "f4a2e2cc920e28ae3d7539ba8b822fb7", "text": "Neurologic injuries, such as stroke, spinal cord injuries, and weaknesses of skeletal muscles with elderly people, may considerably limit the ability of this population to achieve the main daily living activities. Recently, there has been an increasing interest in the development of wearable devices, the so-called exoskeletons, to assist elderly as well as patients with limb pathologies, for movement assistance and rehabilitation. In this paper, we review and discuss the state of the art of the lower limb exoskeletons that are mainly used for physical movement assistance and rehabilitation. An overview of the commonly used actuation systems is presented. According to different case studies, a classification and comparison between different types of actuators is conducted, such as hydraulic actuators, electrical motors, series elastic actuators, and artificial pneumatic muscles. 
Additionally, the mainly used control strategies in lower limb exoskeletons are classified and reviewed, based on three types of human-robot interfaces: the signals collected from the human body, the interaction forces between the exoskeleton and the wearer, and the signals collected from exoskeletons. Furthermore, the performances of several typical lower limb exoskeletons are discussed, and some assessment methods and performance criteria are reviewed. Finally, a discussion of the major advances that have been made, some research directions, and future challenges are presented.", "title": "" }, { "docid": "f25afc147ceb24fb1aca320caa939f10", "text": "Third party intervention is a typical response to destructive and persistent social conflict and comes in a number of different forms attended by a variety of issues. Mediation is a common form of intervention designed to facilitate a negotiated settlement on substantive issues between conflicting parties. Mediators are usually external to the parties and carry an identity, motives and competencies required to play a useful role in addressing the dispute. While impartiality is generally seen as an important prerequisite for effective intervention, biased mediators also appear to have a role to play. This article lays out the different forms of third-party intervention in a taxonomy of six methods, and proposes a contingency model which matches each type of intervention to the appropriate stage of conflict escalation. Interventions are then sequenced, in order to assist the parties in de-escalating and resolving the conflict. It must be pointed out, however, that the mixing of interventions with different power bases raises a number of ethical and moral questions about the use of reward and coercive power by third parties. The article then discusses several issues around the practice of intervention. It is essential to give these issues careful consideration if third-party methods are to play their proper and useful role in the wider process of conflict transformation. Psychology from the University of Saskatchewan and a Ph.D. in Social Psychology from the University of Michigan. He has provided training and consulting services to various organizations and international institutes in conflict management. His current interests include third party intervention, interactive conflict resolution, and reconciliation in situations of ethnopolitical conflict. A b s t r a c t A b o u t t h e C o n t r i b u t o r", "title": "" }, { "docid": "38b93f50d4fc5a1029ebedb5a544987a", "text": "We present a novel graph-based framework for timeline summarization, the task of creating different summaries for different timestamps but for the same topic. Our work extends timeline summarization to a multimodal setting and creates timelines that are both textual and visual. Our approach exploits the fact that news documents are often accompanied by pictures and the two share some common content. Our model optimizes local summary creation and global timeline generation jointly following an iterative approach based on mutual reinforcement and co-ranking. In our algorithm, individual summaries are generated by taking into account the mutual dependencies between sentences and images, and are iteratively refined by considering how they contribute to the global timeline and its coherence. 
Experiments on real-world datasets show that the timelines produced by our model outperform several competitive baselines both in terms of ROUGE and when assessed by human evaluators.", "title": "" }, { "docid": "effd314d69f6775b80dbe5570e3f37d8", "text": "New paradigms in networking industry, such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV), require the hypervisors to enable the execution of Virtual Network Functions in virtual machines (VMs). In this context, the virtual switch function is critical to achieve carrier grade performance, hardware independence, advanced features and programmability. SnabbSwitch is a virtual switch designed to run in user space with carrier grade performance targets, based on an efficient architecture which has driven the development of vhost-user (now also adopted by OVS-DPDK, the user space implementation of OVS based on Intel DPDK), easy to deploy and to program through its Lua scripting layer. This paper presents the SnabbSwitch virtual switch implementation along with its novelties (the vhost-user implementation and the usage of a trace compiler) and code optimizations, which have been merged in the mainline project repository. Extensive benchmarking activities, whose results are included in this paper, have been carried on to compare SnabbSwitch with other virtual switching solutions (i.e., OVS, OVS-DPDK, Linux Bridge, VFIO and SR-IOV). These results show that SnabbSwitch performs as well as hardware based solutions, such as SR-IOV and VFIO, while allowing for additional functional and flexible operation; they show also that SnabbSwitch is faster than the vhost-user based version (user space) of OVS-DPDK.", "title": "" }, { "docid": "933e0e6855114b3f46e07394af23d3d7", "text": "Unconventional computing is about breaking boundaries in thinking, acting and computing. Typical topics of this non-typical field include, but are not limited to physics of computation, non-classical logics, new complexity measures, novel hardware, mechanical, chemical and quantum computing. Unconventional computing encourages a new style of thinking while practical applications are obtained from uncovering and exploiting principles and mechanisms of information processing in and functional properties of, physical, chemical and living systems; in particular, efficient algorithms are developed, (almost) optimal architectures are designed and working prototypes of future computing devices are manufactured. This article includes idiosyncratic accounts of 'unconventional computing' scientists reflecting on their personal experiences, what attracted them to the field, their inspirations and discoveries.", "title": "" }, { "docid": "5170cb57e50cfb1c1b4e9fbccd12127b", "text": "Mobile and wireless networks have made tremendous growth in the last fifteen years. Today 3G mobile systems are on the ground providing IP connectivity for real-time and non-realtime services. Then, the concepts of 4G is already much discussed and it is almost certain that 4G will include several standards under a common umbrella, similarly to 3G, but with IEEE 802.xx wireless mobile networks included from the beginning. The main contribution of this paper is definition of 5G (Fifth Generation) mobile network concept, which is seen as user-centric concept instead of operator-centric as in 3G or service-centric concept as seen for 4G. In the proposed concept the mobile user is on the top of all. 
The 5G terminals will have software defined radios and modulation scheme as well as new error-control schemes can be downloaded from the Internet on the run. The development is seen towards the user terminals as a focus of the5G mobile networks. The terminals will have access to different wireless technologies at the same time and the terminal should be able to combine different flows from different technologies. Each network will be responsible for handling user-mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. The proposal in this paper is fundamental shift in the mobile networking philosophy compared to existing 3G and near-soon 4G mobile technologies, and this concept is called here the 5G.", "title": "" }, { "docid": "5b0530f94f476754034c92292e02b390", "text": "Many seemingly simple questions that individual users face in their daily lives may actually require substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models as in this example are stochastic, which require the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Some other simulation models, such as the study of traffic conditions using simulations may need multiple instances to be executed for a number of different parameters. Cloud computing has opened up new avenues for individuals and organizations Shashank Shekhar shashank.shekhar@vanderbilt.edu Hamzah Abdel-Aziz hamzah.abdelaziz@vanderbilt.edu Michael Walker michael.a.walker.1@vanderbilt.edu Faruk Caglar faruk.caglar@vanderbilt.edu Aniruddha Gokhale a.gokhale@vanderbilt.edu Xenofon Koutsoukos xenonfon.koutsoukos@vanderbilt.edu 1 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA with limited resources to obtain answers to problems that hitherto required expensive and computationally-intensive resources. This paper presents SIMaaS, which is a cloudbased Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services instead of heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two", "title": "" }, { "docid": "0c177af9c2fffa6c4c667d1b4a4d3d79", "text": "In the last decade, a large number of different software component models have been developed, with different aims and using different principles and technologies. This has resulted in a number of models which have many similarities, but also principal differences, and in many cases unclear concepts. Component-based development has not succeeded in providing standard principles, as has, for example, object-oriented development. 
In order to increase the understanding of the concepts and to differentiate component models more easily, this paper identifies, discusses, and characterizes fundamental principles of component models and provides a Component Model Classification Framework based on these principles. Further, the paper classifies a large number of component models using this framework.\", \"title\": \"\" }, { \"docid\": \"5b1f814b7d8f1495733f0dc391449296\", \"text\": \"Abstract-A class of digital linear phase finite impulse response (FIR) filters for decimation (sampling rate decrease) and interpolation (sampling rate increase) are presented. They require no multipliers and use limited storage making them an economical alternative to conventional implementations for certain applications. A digital filter in this class consists of cascaded ideal integrator stages operating at a high sampling rate and an equal number of comb stages operating at a low sampling rate. Together, a single integrator-comb pair produces a uniform FIR. The number of cascaded integrator-comb pairs is chosen to meet design requirements for aliasing or imaging error. Design procedures and examples are given for both decimation and interpolation filters with the emphasis on frequency response and register width.\", \"title\": \"\" }, { \"docid\": \"2eb157031961417e69e8abe55cf2ac14\", \"text\": \"Research on human induced pluripotent stem cells (hiPSCs) is one of the fastest growing fields in biomedicine. Generated from patient's own somatic cells, hiPSCs can be differentiated towards all functional cell types and returned to the patient without immunological concerns. 3D printing of hiPSCs could enable the generation of functional organs for replacement therapies or realization of organ-on-chip systems for individualized medicine. Printing of living cells was demonstrated with immortalized cell lines, primary cells, and adult stem cells with different printing technologies and biomaterials. However, hiPSCs are more sensitive to handling procedures, in particular, when dissociated into single cells. Both pluripotency and directed differentiation are influenced by numerous environmental factors including culture media, biomaterials, and cell density. Notably, existing literature on the effect of applied biomaterials on pluripotency is rather ambiguous. In this study, laser bioprinting of undifferentiated hiPSCs in combination with different biomaterials was performed and the impact on cells' behavior, pluripotency, and differentiation was investigated. Our findings suggest that hiPSCs are indeed more sensitive to the applied biomaterials, but not to laser printing itself. With appropriate biomaterials, such as the hyaluronic acid based solutions applied in this study, hiPSCs can be successfully laser printed without losing their pluripotency.\", \"title\": \"\" }, { \"docid\": \"1d7bbd7aaa65f13dd72ffeecc8499cb6\", \"text\": \"Due to the 60Hz or higher LCD refresh operations, the display controller (DC) reads the pixels out from the frame buffer at a fixed rate. Accessing the frame buffer consumes not only memory bandwidth, but power as well. Thus frame buffer compression (FBC) can contribute to alleviating both bandwidth and power consumption. A conceptual frame buffer compression model is proposed, and to the best of our knowledge, an arithmetic expression concerning the compression ratio and the read/update ratio of the frame buffer is presented for the first time, which reveals the correlation between frame buffer compression and target applications.
Moreover, considering the linear access feature of the frame buffer, we investigate a frame buffer compression without color information loss, named LFBC (Lossless Frame-Buffer Compression). LFBC defines a new frame buffer compression data format and employs run-length encoding (RLE) to implement the compression. For the applications suitable for frame buffer compression, LFBC reduces 50%-90% of the bandwidth consumption and memory accesses caused by LCD refresh operations.\", \"title\": \"\" } ]
scidocsrr
ff3a6049a7e42f367ca341dd4667d82f
Point cloud databases
[ { "docid": "8840e9e1e304a07724dd6e6779cfc9c4", "text": "Clustering has become an increasingly important task in modern application domains such as marketing and purchasing assistance, multimedia, molecular biology as well as many others. In most of these areas, the data are originally collected at different sites. In order to extract information from these data, they are merged at a central site and then clustered. In this paper, we propose a different approach. We cluster the data locally and extract suitable representatives from these clusters. These representatives are sent to a global server site where we restore the complete clustering based on the local representatives. This approach is very efficient, because the local clustering can be carried out quickly and independently from each other. Furthermore, we have low transmission cost, as the number of transmitted representatives is much smaller than the cardinality of the complete data set. Based on this small number of representatives, the global clustering can be done very efficiently. For both the local and the global clustering, we use a density based clustering algorithm. The combination of both the local and the global clustering forms our new DBDC (Density Based Distributed Clustering) algorithm. Furthermore, we discuss the complex problem of finding a suitable quality measure for evaluating distributed clusterings. We introduce two quality criteria which are compared to each other and which allow us to evaluate the quality of our DBDC algorithm. In our experimental evaluation, we will show that we do not have to sacrifice clustering quality in order to gain an efficiency advantage when using our distributed clustering approach.", "title": "" } ]
[ { "docid": "4f5c37ec7c2e926126a100a10cccf40e", "text": "Prior work shows that setting limits on young children's screen time is conducive to healthy development but can be a challenge for families. We investigate children's (age 1 - 5) transitions to and from screen-based activities to understand the boundaries families have set and their experiences living within them. We report on interviews with 27 parents and a diary study with a separate 28 families examining these transitions. These families turn on screens primarily to facilitate parents' independent activities. Parents feel this is appropriate but self-audit and express hesitation, as they feel they are benefiting from an activity that can be detrimental to their child's well-being. We found that families turn off screens when parents are ready to give their child their full attention and technology presents a natural stopping point. Transitioning away from screens is often painful, and predictive factors determine the pain of a transition. Technology-mediated transitions are significantly more successful than parent-mediated transitions, suggesting that the design community has the power to make this experience better for parents and children by creating technologies that facilitate boundary-setting and respect families' self-defined limits.", "title": "" }, { "docid": "4eebd9eb516bf2fe0b89c5d684f1ff96", "text": "Psychological theories have suggested that creativity involves a twofold process characterized by a generative component facilitating the production of novel ideas and an evaluative component enabling the assessment of their usefulness. The present study employed a novel fMRI paradigm designed to distinguish between these two components at the neural level. Participants designed book cover illustrations while alternating between the generation and evaluation of ideas. The use of an fMRI-compatible drawing tablet allowed for a more natural drawing and creative environment. Creative generation was associated with preferential recruitment of medial temporal lobe regions, while creative evaluation was associated with joint recruitment of executive and default network regions and activation of the rostrolateral prefrontal cortex, insula, and temporopolar cortex. Executive and default regions showed positive functional connectivity throughout task performance. These findings suggest that the medial temporal lobe may be central to the generation of novel ideas and creative evaluation may extend beyond deliberate analytical processes supported by executive brain regions to include more spontaneous affective and visceroceptive evaluative processes supported by default and limbic regions. Thus, creative thinking appears to recruit a unique configuration of neural processes not typically used together during traditional problem solving tasks.", "title": "" }, { "docid": "755c4c452a535f30e53f0e9e77f71d20", "text": "Learning approaches have shown great success in the task of super-resolving an image given a low resolution input. Video superresolution aims for exploiting additionally the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video superresolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. 
We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video superresolution can benefit from optical flow and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy.", "title": "" }, { "docid": "c9bab6f494d8c01e47449141526daeab", "text": "In this letter, we propose a conceptually simple and intuitive learning objective function, i.e., additive margin softmax, for face verification. In general, face verification tasks can be viewed as metric learning problems, even though lots of face verification models are trained in classification schemes. It is possible when a large-margin strategy is introduced into the classification model to encourage intraclass variance minimization. As one alternative, angular softmax has been proposed to incorporate the margin. In this letter, we introduce another kind of margin to the softmax loss function, which is more intuitive and interpretable. Experiments on LFW and MegaFace show that our algorithm performs better when the evaluation criteria are designed for very low false alarm rate.", "title": "" }, { "docid": "97ed18e26a80a2ae078f78c70becfe8c", "text": "A fully-integrated 18.5 kHz RC time-constant-based oscillator is designed in 65 nm CMOS for sleep-mode timers in wireless sensors. A comparator offset cancellation scheme achieves 4× to 25× temperature stability improvement, leading to an accuracy of ±0.18% to ±0.55% over -40 to 90 °C. Sub-threshold operation and low-swing oscillations result in ultra-low power consumption of 130 nW. The architecture also provides timing noise suppression, leading to 10× reduction in long-term Allan deviation. It is measured to have a stability of 20 ppm or better for measurement intervals over 0.5 s. The oscillator also has a fast startup-time, with the period settling in 4 cycles.", "title": "" }, { "docid": "70d7c838e7b5c4318e8764edb5a70555", "text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees’ decisions to quit. In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.", "title": "" }, { "docid": "482fb0c3b5ead028180c57466f3a092e", "text": "Separating text lines in handwritten documents remains a challenge because the text lines are often ununiformly skewed and curved. In this paper, we propose a novel text line segmentation algorithm based on Minimal Spanning Tree (MST) clustering with distance metric learning. Given a distance metric, the connected components of document image are grouped into a tree structure. Text lines are extracted by dynamically cutting the edges of the tree using a new objective function. 
For avoiding artificial parameters and improving the segmentation accuracy, we design the distance metric by supervised learning. Experiments on handwritten Chinese documents demonstrate the superiority of the approach.", "title": "" }, { "docid": "ca7fc2fc0951a004101f330c506b800c", "text": "There is considerable interest in the use of statistical process control (SPC) in healthcare. Although SPC is part of an overall philosophy of continual improvement, the implementation of SPC usually requires the production of control charts. However, as SPC is relatively new to healthcare practitioners and is not routinely featured in medical statistics texts/courses, there is a need to explain the issues involved in the selection and construction of control charts in practice. Following a brief overview of SPC in healthcare and preliminary issues, we use a tutorial-based approach to illustrate the selection and construction of four commonly used control charts (xmr-chart, p-chart, u-chart, c-chart) using examples from healthcare. For each control chart, the raw data, the relevant formulae and their use and interpretation of the final SPC chart are provided together with a notes section highlighting important issues for the SPC practitioner. Some more advanced topics are also mentioned with suggestions for further reading.", "title": "" }, { "docid": "485cda7203863d2ff0b2070ca61b1126", "text": "Interestingly, understanding natural language that you really wait for now is coming. It's significant to wait for the representative and beneficial books to read. Every book that is provided in better way and utterance will be expected by many peoples. Even you are a good reader or not, feeling to read this book will always appear when you find it. But, when you feel hard to find it as yours, what to do? Borrow to your friends and don't know when to give back it to her or him.", "title": "" }, { "docid": "cc5815edf96596a1540fa1fca53da0d3", "text": "INTRODUCTION\nSevere motion sickness is easily identifiable with sufferers showing obvious behavioral signs, including emesis (vomiting). Mild motion sickness and sopite syndrome lack such clear and objective behavioral markers. We postulate that yawning may have the potential to be used in operational settings as such a marker. This study assesses the utility of yawning as a behavioral marker for the identification of soporific effects by investigating the association between yawning and mild motion sickness/sopite syndrome in a controlled environment.\n\n\nMETHODS\nUsing a randomized motion-counterbalanced design, we collected yawning and motion sickness data from 39 healthy individuals (34 men and 5 women, ages 27-59 yr) in static and motion conditions. Each individual participated in two 1-h sessions. Each session consisted of six 10-min blocks. Subjects performed a multitasking battery on a head mounted display while seated on the moving platform. The occurrence and severity of symptoms were assessed with the Motion Sickness Assessment Questionnaire (MSAQ).\n\n\nRESULTS\nYawning occurred predominantly in the motion condition. All yawners in motion (N = 5) were symptomatic. 
Compared to nonyawners (MSAQ indices: Total = 14.0, Sopite = 15.0), subjects who yawned in motion demonstrated increased severity of motion sickness and soporific symptoms (MSAQ indices: Total = 17.2, Sopite = 22.4), and reduced multitasking cognitive performance (Composite score: nonyawners = 1348; yawners = 1145).\n\n\nDISCUSSION\nThese results provide evidence that yawning may be a viable behavioral marker to recognize the onset of soporific effects and their concomitant reduction in cognitive performance.", "title": "" }, { "docid": "f48ee93659a25bee9a49e8be6c789987", "text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles", "title": "" }, { "docid": "1d8b6e3c415510329fb82ec0c58cb2e6", "text": "Functional antibody delivery in living cells would enable the labelling and manipulation of intracellular antigens, which constitutes a long-thought goal in cell biology and medicine. Here we present a modular strategy to create functional cell-permeable nanobodies capable of targeted labelling and manipulation of intracellular antigens in living cells. The cell-permeable nanobodies are formed by the site-specific attachment of intracellularly stable (or cleavable) cyclic arginine-rich cell-penetrating peptides to camelid-derived single-chain VHH antibody fragments. We used this strategy for the non-endocytic delivery of two recombinant nanobodies into living cells, which enabled the relocalization of the polymerase clamp PCNA (proliferating cell nuclear antigen) and tumour suppressor p53 to the nucleolus, and thereby allowed the detection of protein-protein interactions that involve these two proteins in living cells. Furthermore, cell-permeable nanobodies permitted the co-transport of therapeutically relevant proteins, such as Mecp2, into the cells. This technology constitutes a major step in the labelling, delivery and targeted manipulation of intracellular antigens. 
Ultimately, this approach opens the door towards immunostaining in living cells and the expansion of immunotherapies to intracellular antigen targets.", "title": "" }, { "docid": "ce8d70f73b3bf312dc0a88aa646eea55", "text": "1.1 Introduction Intelligent agents are a new paradigm for developing software applications. More than this, agent-based computing has been hailed as 'the next significant breakthrough in software development' (Sargent, 1992), and 'the new revolution in software' (Ovum, 1994). Currently, agents are the focus of intense interest on the part of many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications, ranging from comparatively small systems such as email filters to large, open, complex, mission critical systems such as air traffic control. At first sight, it may appear that such extremely different types of system can have little in common. And yet this is not the case: in both, the key abstraction used is that of an agent. Our aim in this article is to help the reader to understand why agent technology is seen as a fundamentally important new tool for building such a wide array of systems. More precisely, our aims are five-fold: • to introduce the reader to the concept of an agent and agent-based systems, • to help the reader to recognize the domain characteristics that indicate the appropriateness of an agent-based solution, • to introduce the main application areas in which agent technology has been successfully deployed to date, • to identify the main obstacles that lie in the way of the agent system developer, and finally • to provide a guide to the remainder of this book. We begin, in this section, by introducing some basic concepts (such as, perhaps most importantly, the notion of an agent). In Section 1.2, we give some general guidelines on the types of domain for which agent technology is appropriate. In Section 1.3, we survey the key application domains for intelligent agents. In Section 1.4, we discuss some issues in agent system development, and finally, in Section 1.5, we outline the structure of this book. Before we can discuss the development of agent-based systems in detail, we have to describe what we mean by such terms as 'agent' and 'agent-based system'. Unfortunately, we immediately run into difficulties, as some key concepts in agent-based computing lack universally accepted definitions. In particular, there is no real agreement even on the core question of exactly what an agent is (see Franklin and Graesser (1996) for a discussion). However, we believe that most researchers", "title": "" }, { "docid": "e702b39e13d308fa264cb6a421792f5c", "text": "Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. 
We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.", "title": "" }, { "docid": "ee0f9e4dd0265fa6f738ee0190df2786", "text": "The issue of leadership has been investigated from several perspectives; however, very less from ethical perspective. With the growing number of corporate scandals and unethical roles played by business leaders in several parts of the world, the need to examine leadership from ethical perspective cannot be over emphasized. The importance of leadership credibility has been discussed in the authentic model of leadership. Authentic leaders display high degree of integrity, have deep sense of purpose, and committed to their core values. As a result they promote a more trusting relationship in their work groups that translates into several positive outcomes. The present study examined how authentic leadership contribute to subordinates’ trust in leadership and how this trust, in turn, predicts subordinates’ work engagement. A sample of 395 employees was randomly selected from several local banks operating in Malaysia. Standardized tools such as ALQ, OTI, and EEQ were employed. Results indicated that authentic leadership promoted subordinates’ trust in leader, and contributed to work engagement. Also, interpersonal trust predicted employees’ work engagement as well as mediated the relationship between this style of leadership and employees’ work engagement. Keywords—Authentic Leadership, Interpersonal Trust, Work Engagement", "title": "" }, { "docid": "e8f15d3689f1047cd05676ebd72cc0fc", "text": "We argue that in fully-connected networks a phase transition delimits the overand under-parametrized regimes where fitting can or cannot be achieved. Under some general conditions, we show that this transition is sharp for the hinge loss. In the whole over-parametrized regime, poor minima of the loss are not encountered during training since the number of constraints to satisfy is too small to hamper minimization. Our findings support a link between this transition and the generalization properties of the network: as we increase the number of parameters of a given model, starting from an under-parametrized network, we observe that the generalization error displays three phases: (i) initial decay, (ii) increase until the transition point — where it displays a cusp — and (iii) slow decay toward a constant for the rest of the over-parametrized regime. Thereby we identify the region where the classical phenomenon of over-fitting takes place, and the region where the model keeps improving, in line with previous empirical observations for modern neural networks.", "title": "" }, { "docid": "c6f52d8333406bce50d72779f07d5ac2", "text": "Dimensionality reduction studies methods that effectively reduce data dimensionality for efficient data processing tasks such as pattern recognition, machine learning, text retrieval, and data mining. We introduce the field of dimensionality reduction by dividing it into two parts: feature extraction and feature selection. Feature extraction creates new features resulting from the combination of the original features; and feature selection produces a subset of the original features. Both attempt to reduce the dimensionality of a dataset in order to facilitate efficient data processing tasks. We introduce key concepts of feature extraction and feature selection, describe some basic methods, and illustrate their applications with some practical cases. 
Extensive research into dimensionality reduction is being carried out for the past many decades. Even today its demand is further increasing due to important high-dimensional applications such as gene expression data, text categorization, and document indexing.", "title": "" }, { "docid": "27487316cbda79a378b706d19d53178f", "text": "Pallister-Killian syndrome (PKS) is a congenital disorder attributed to supernumerary isochromosome 12p mosaicism. Craniofacial dysmorphism, learning impairment and seizures are considered cardinal features. However, little is known regarding the seizure and epilepsy patterns in PKS. To better define the prevalence and spectrum of seizures in PKS, we studied 51 patients (39 male, 12 female; median age 4 years and 9 months; age range 7 months to 31 years) with confirmed 12p tetrasomy. Using a parent-based structured questionnaire, we collected data regarding seizure onset, frequency, timing, semiology, and medication therapy. Patients were recruited through our practice, at PKS Kids family events, and via the PKS Kids website. Epilepsy occurred in 27 (53%) with 23 (85%) of those with seizures having seizure onset prior to 3.5 years of age. Mean age at seizure onset was 2 years and 4 months. The most common seizure types were myoclonic (15/27, 56%), generalized convulsions (13/27, 48%), and clustered tonic spasms (similar to infantile spasms; 8/27, 30%). Thirteen of 27 patients with seizures (48%) had more than one seizure type with 26 out of 27 (96%) ever having taken antiepileptic medications. Nineteen of 27 (70%) continued to have seizures and 17/27 (63%) remained on antiepileptic medication. The most commonly used medications were: levetiracetam (10/27, 37%), valproic acid (10/27, 37%), and topiramate (9/27, 33%) with levetiracetam felt to be \"most helpful\" by parents (6/27, 22%). Further exploration of seizure timing, in-depth analysis of EEG recordings, and collection of MRI data to rule out confounding factors is warranted.", "title": "" }, { "docid": "fe42cf28ff020c35d3a3013bb249c7d8", "text": "Sensors and actuators are the core components of all mechatronic systems used in a broad range of diverse applications. A relatively new and rapidly evolving area is the one of rehabilitation and assistive devices that comes to support and improve the quality of human life. Novel exoskeletons have to address many functional and cost-sensitive issues such as safety, adaptability, customization, modularity, scalability, and maintenance. Therefore, a smart variable stiffness actuator was developed. The described approach was to integrate in one modular unit a compliant actuator with all sensors and electronics required for real-time communications and control. This paper also introduces a new method to estimate and control the actuator's torques without using dedicated expensive torque sensors in conditions where the actuator's torsional stiffness can be adjusted by the user. A 6-degrees-of-freedom exoskeleton was assembled and tested using the technology described in this paper, and is introduced as a real-life case study for the mechatronic design, modularity, and integration of the proposed smart actuators, suitable for human–robot interaction. 
The advantages are discussed together with possible improvements and the possibility of extending the presented technology to other areas of mechatronics.", "title": "" }, { "docid": "bf239cb017be0b2137b0b4fd1f1d4247", "text": "Network function virtualization was recently proposed to improve the flexibility of network service provisioning and reduce the time to market of new services. By leveraging virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, NFV decouples the software implementation of network functions from the underlying hardware. As an emerging technology, NFV brings several challenges to network operators, such as the guarantee of network performance for virtual appliances, their dynamic instantiation and migration, and their efficient placement. In this article, we provide a brief overview of NFV, explain its requirements and architectural framework, present several use cases, and discuss the challenges and future directions in this burgeoning research area.", "title": "" } ]
scidocsrr
91d8890584cc6cf88bec603f9be40b7f
Fast and robust absolute camera pose estimation with known focal length
[ { "docid": "126b52ab2e2585eabf3345ef7fb39c51", "text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.", "title": "" } ]
[ { "docid": "1197bc22d825a53c2b9e6ff068e10353", "text": "CONTEXT\nPermanent evaluation of end-user satisfaction and continuance intention is a critical issue at each phase of a clinical information system (CIS) project, but most validation studies are concerned with the pre- or early post-adoption phases.\n\n\nOBJECTIVE\nThe purpose of this study was twofold: to validate at the Pompidou University Hospital (HEGP) an information technology late post-adoption model built from four validated models and to propose a unified metamodel of evaluation that could be adapted to each context or deployment phase of a CIS project.\n\n\nMETHODS\nFive dimensions, i.e., CIS quality (CISQ), perceived usefulness (PU), confirmation of expectations (CE), user satisfaction (SAT), and continuance intention (CI) were selected to constitute the CI evaluation model. The validity of the model was tested using the combined answers to four surveys performed between 2011 and 2015, i.e., more than ten years after the opening of HEGP in July 2000. Structural equation modeling was used to test the eight model-associated hypotheses.\n\n\nRESULTS\nThe multi-professional study group of 571 responders consisted of 158 doctors, 282 nurses, and 131 secretaries. The evaluation model accounted for 84% of variance of satisfaction and 53% of CI variance for the period 2011-2015 and for 92% and 69% for the period 2014-2015. In very late post adoption, CISQ appears to be the major determinant of satisfaction and CI. Combining the results obtained at various phases of CIS deployment, a Unified Model of Information System Continuance (UMISC) is proposed.\n\n\nCONCLUSION\nIn a meaningful CIS use situation at HEGP, this study confirms the importance of CISQ in explaining satisfaction and CI. The proposed UMISC model that can be adapted to each phase of CIS deployment could facilitate the necessary efforts of permanent CIS acceptance and continuance evaluation.", "title": "" }, { "docid": "66f46290a9194d4e982b8d1b59a73090", "text": "Sensor to body calibration is a key requirement for capturing accurate body movements in applications based on wearable systems. In this paper, we consider the specific problem of estimating the positions of multiple inertial measurement units (IMUs) relative to the adjacent body joints. To derive an efficient, robust and precise method based on a practical procedure is a crucial as well as challenging task when developing a wearable system with multiple embedded IMUs. In this work, first, we perform a theoretical analysis of an existing position calibration method, showing its limited applicability for the hip and knee joint. Based on this, we propose a method for simultaneously estimating the positions of three IMUs (mounted on pelvis, upper leg, lower leg) relative to these joints. The latter are here considered as an ensemble. Finally, we perform an experimental evaluation based on simulated and real data, showing the improvements of our calibration method as well as lines of future work.", "title": "" }, { "docid": "cfe92b50318c2df44ce169b3dc818211", "text": "As illegal and unhealthy content on the Internet has gradually increased in recent years, there have been constant calls for Internet content regulation. But any regulation comes at a cost. 
Based on the principles of the cost-benefit theory, this article conducts an in-depth discussion on China’s current Internet content regulation, so as to reveal its latent patterns.", "title": "" }, { "docid": "47de1604b6c8f5acc539e161dac6f637", "text": "Data-intensive applications are increasingly designed to execute on large computing clusters. Grouped aggregation is a core primitive of many distributed programming models, and it is often the most efficient available mechanism for computations such as matrix multiplication and graph traversal. Such algorithms typically require non-standard aggregations that are more sophisticated than traditional built-in database functions such as Sum and Max. As a result, the ease of programming user-defined aggregations, and the efficiency of their implementation, is of great current interest.\n This paper evaluates the interfaces and implementations for user-defined aggregation in several state of the art distributed computing systems: Hadoop, databases such as Oracle Parallel Server, and DryadLINQ. We show that: the degree of language integration between user-defined functions and the high-level query language has an impact on code legibility and simplicity; the choice of programming interface has a material effect on the performance of computations; some execution plans perform better than others on average; and that in order to get good performance on a variety of workloads a system must be able to select between execution plans depending on the computation. The interface and execution plan described in the MapReduce paper, and implemented by Hadoop, are found to be among the worst-performing choices.", "title": "" }, { "docid": "88a4ab49e7d3263d5d6470d123b6e74b", "text": "Graph databases have gained renewed interest in the last years, due to its applications in areas such as the Semantic Web and Social Networks Analysis. We study the problem of querying graph databases, and, in particular, the expressiveness and complexity of evaluation for several general-purpose query languages, such as the regular path queries and its extensions with conjunctions and inverses. We distinguish between two semantics for these languages. The first one, based on simple paths, easily leads to intractability, while the second one, based on arbitrary paths, allows tractable evaluation for an expressive family of languages.\n We also study two recent extensions of these languages that have been motivated by modern applications of graph databases. The first one allows to treat paths as first-class citizens, while the second one permits to express queries that combine the topology of the graph with its underlying data.", "title": "" }, { "docid": "b78c38c6ac9809f46e3d73f90e60afc6", "text": "The INTERSPEECH 2012 Speaker Trait Challenge provides for the first time a unified test-bed for ‘perceived’ speaker traits: Personality in the five OCEAN personality dimensions, likability of speakers, and intelligibility of pathologic speakers. In this paper, we describe these three Sub-Challenges, Challenge conditions, baselines, and a new feature set by the openSMILE toolkit, provided to the participants.", "title": "" }, { "docid": "91a3969506858fd7484d870505c6b800", "text": "Automatic grasp planning for robotic hands is a difficult problem because of the huge number of possible hand configurations. However, humans simplify the problem by choosing an appropriate prehensile posture appropriate for the object and task to be performed. 
By modeling an object as a set of shape primitives, such as spheres, cylinders, cones and boxes, we can use a set of rules to generate a set of grasp starting positions and pregrasp shapes that can then be tested on the object model. Each grasp is tested and evaluated within our grasping simulator “GraspIt!”, and the best grasps are presented to the user. The simulator can also plan grasps in a complex environment involving obstacles and the reachability constraints of a robot arm.", "title": "" }, { "docid": "3b4607a6b0135eba7c4bb0852b78dda9", "text": "Heart rate variability for the treatment of major depression is a novel, alternative approach that can offer symptom reduction with minimal-to-no noxious side effects. The following material will illustrate some of the work being conducted at our laboratory to demonstrate the efficacy of heart rate variability. Namely, results will be presented regarding our published work on an initial open-label study and subsequent results of a small, unfinished randomized controlled trial.", "title": "" }, { "docid": "94bc9736b80c129338fc490e58378504", "text": "Both reverberation and additive noises degrade the speech quality and intelligibility. the weighted prediction error (WPE) performs well on dereverberation but with limitations. First, The WPE doesn’t consider the influence of the additive noise which degrades the performance of dereverberation. Second, it relies on a time-consuming iterative process, and there is no guarantee or a widely accepted criterion on its convergence. In this paper, we integrate deep neural network (DNN) into WPE for dereverberation and denoising. DNN is used to suppress the background noise to meet the noise-free assumption of WPE. Meanwhile, DNN is applied to directly predict spectral variance of the target speech to make the WPE work without iteration. The experimental results show that the proposed method has a significant improvement in speech quality and runs fast.", "title": "" }, { "docid": "8c56b3b8e1185ee704faec04d8c438ec", "text": "The proliferation of mobile devices has facilitated the prevalence of participatory sensing applications in which participants collect and share information in their environments. The design of a participatory sensing application confronts two challenges: “privacy” and “incentive” which are two conflicting objectives and deserve deeper attention. Inspired by physical currency circulation system, this paper firstly proposes E-cent, a unit bearer currency. It is exchangeable, and participants can utilize it to participate in tasks anonymously. By employing E-cent, we further propose an E-cent-based privacy-preserving incentive mechanism, called EPPI, which exploits a pledge-based participating protocol to encourage participants to participate without revealing privacy and prohibit participants from sending false data. EPPI also takes advantage of a dynamic reward allocation scheme to maximize the value of the services under a budget constraint. To the best of our knowledge, EPPI is the first attempt to build an incentive mechanism while maintaining the desired privacy-preserving in participatory sensing systems. 
Extensive simulation and analysis results show that EPPI can achieve high anonymity level and remarkable incentive effects.", "title": "" }, { "docid": "d2928d8227544e8251818f06099b17fd", "text": "Driven by the dominance of the relational model, the requirements of modern applications, and the veracity of data, we revisit the fundamental notion of a key in relational databases with NULLs. In SQL database systems primary key columns are NOT NULL by default. NULL columns may occur in unique constraints which only guarantee uniqueness for tuples which do not feature null markers in any of the columns involved, and therefore serve a different function than primary keys. We investigate the notions of possible and certain keys, which are keys that hold in some or all possible worlds that can originate from an SQL table, respectively. Possible keys coincide with the unique constraint of SQL, and thus provide a semantics for their syntactic definition in the SQL standard. Certain keys extend primary keys to include NULL columns, and thus form a sufficient and necessary condition to identify tuples uniquely, while primary keys are only sufficient for that purpose. In addition to basic characterization, axiomatization, and simple discovery approaches for possible and certain keys, we investigate the existence and construction of Armstrong tables, and describe an indexing scheme for enforcing certain keys. Our experiments show that certain keys with NULLs do occur in real-world databases, and that related computational problems can be solved efficiently. Certain keys are therefore semantically well-founded and able to maintain data quality in the form of Codd’s entity integrity rule while handling the requirements of modern applications, that is, higher volumes of incomplete data from different formats.", "title": "" }, { "docid": "acba717edc26ae7ba64debc5f0d73ded", "text": "Previous phase I-II clinical trials have shown that recombinant human erythropoietin (rHuEpo) can ameliorate anemia in a portion of patients with multiple myeloma (MM) and non-Hodgkin's lymphoma (NHL). Therefore, we performed a randomized controlled multicenter study to define the optimal initial dosage and to identify predictors of response to rHuEpo. A total of 146 patients who had hemoglobin (Hb) levels < or = 11 g/dL and who had no need for transfusion at the time of enrollment entered this trial. Patients were randomized to receive 1,000 U (n = 31), 2,000 U (n = 29), 5,000 U (n = 31), or 10,000 U (n = 26) of rHuEpo daily subcutaneously for 8 weeks or to receive no therapy (n = 29). Of the patients, 84 suffered from MM and 62 from low- to intermediate-grade NHL, including chronic lymphocytic leukemia; 116 of 146 (79%) received chemotherapy during the study. The mean baseline Hb level was 9.4 +/- 1.0 g/dL. The median serum Epo level was 32 mU/mL, and endogenous Epo production was found to be defective in 77% of the patients, as judged by a value for the ratio of observed-to-predicted serum Epo levels (O/P ratio) of < or = 0.9. An intention-to-treat analysis was performed to evaluate treatment efficacy. The median average increase in Hb levels per week was 0.04 g/dL in the control group and -0.04 (P = .57), 0.22 (P = .05), 0.43 (P = .01), and 0.58 (P = .0001) g/dL in the 1,000 U, 2,000 U, 5,000 U, and 10,000 U groups, respectively (P values versus control). The probability of response (delta Hb > or = 2 g/dL) increased steadily and, after 8 weeks, reached 31% (2,000 U), 61% (5,000 U), and 62% (10,000 U), respectively. 
Regression analysis using Cox's proportional hazard model and classification and regression tree analysis showed that serum Epo levels and the O/P ratio were the most important factors predicting response in patients receiving 5,000 or 10,000 U. Approximately three quarters of patients presenting with Epo levels inappropriately low for the degree of anemia responded to rHuEpo, whereas only one quarter of those with adequate Epo levels did so. Classification and regression tree analysis also showed that doses of 2,000 U daily were effective in patients with an average platelet count greater than 150 x 10(9)/L. About 50% of these patients are expected to respond to rHuEpo. Thus, rHuEpo was safe and effective in ameliorating the anemia of MM and NHL patients who showed defective endogenous Epo production. From a practical point of view, we conclude that the decision to use rHuEpo in an individual anemic patient with MM or NHL should be based on serum Epo levels, whereas the choice of the initial dosage should be based on residual marrow function.", "title": "" }, { "docid": "3bd2571d38c57ecd336e6fc073f3c501", "text": "Software security, which has attracted the interest of the industrial and research community during the last years, aims at preventing security problems by building software without the so-called security holes. One way to achieve this goal is to apply specific patterns in software architecture. In the same way that the well-known design patterns for building well-structured software have been defined, a new kind of patterns called security patterns have emerged. These patterns enable us to incorporate a level of security already at the design phase of a software system. There exists no strict set of rules that can be followed in order to develop secure software. However, a number of guidelines have already appeared in the literature. Furthermore, the key problems in building secure software and major threat categories for a software system have been identified. An attempt to evaluate known security patterns based on how well they follow each principle, how well they encounter with possible problems in building secure software and for which of the threat categories they do take care of, is performed in this paper. Thirteen security patterns were evaluated based on these three sets of criteria. The ability of some of these patterns to enhance the security of the design of a software system is also examined by an illustrative example of fortifying a published design. a 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ab5cf1d4c03dea07a46587b73235387c", "text": "Image is usually taken for expressing some kinds of emotions or purposes, such as love, celebrating Christmas. There is another better way that combines the image and relevant song to amplify the expression, which has drawn much attention in the social network recently. Hence, the automatic selection of songs should be expected. In this paper, we propose to retrieve semantic relevant songs just by an image query, which is named as the image2song problem. Motivated by the requirements of establishing correlation in semantic/content, we build a semantic-based song retrieval framework, which learns the correlation between image content and lyric words. This model uses a convolutional neural network to generate rich tags from image regions, a recurrent neural network to model lyric, and then establishes correlation via a multi-layer perceptron. 
To reduce the content gap between image and lyric, we propose to make the lyric modeling focus on the main image content via a tag attention. We collect a dataset from the social-sharing multimodal data to study the proposed problem, which consists of (image, music clip, lyric) triplets. We demonstrate that our proposed model shows noticeable results in the image2song retrieval task and provides suitable songs. Besides, the song2image task is also performed.", "title": "" }, { "docid": "0c06c0e4fec9a2cc34c38161e142032d", "text": "We introduce a novel high-level security metrics objective taxonomization model for software-intensive systems. The model systematizes and organizes security metrics development activities. It focuses on the security level and security performance of technical systems while taking into account the alignment of metrics objectives with different business and other management goals. The model emphasizes the roles of security-enforcing mechanisms, the overall security quality of the system under investigation, and secure system lifecycle, project and business management. Security correctness, effectiveness and efficiency are seen as the fundamental measurement objectives, determining the directions for more detailed security metrics development. Integration of the proposed model with riskdriven security metrics development approaches is also discussed.", "title": "" }, { "docid": "430c4f8912557f4286d152608ce5eab8", "text": "The latex of the tropical species Carica papaya is well known for being a rich source of the four cysteine endopeptidases papain, chymopapain, glycyl endopeptidase and caricain. Altogether, these enzymes are present in the laticifers at a concentration higher than 1 mM. The proteinases are synthesized as inactive precursors that convert into mature enzymes within 2 min after wounding the plant when the latex is abruptly expelled. Papaya latex also contains other enzymes as minor constituents. Several of these enzymes namely a class-II and a class-III chitinase, an inhibitor of serine proteinases and a glutaminyl cyclotransferase have already been purified up to apparent homogeneity and characterized. The presence of a beta-1,3-glucanase and of a cystatin is also suspected but they have not yet been isolated. Purification of these papaya enzymes calls on the use of ion-exchange supports (such as SP-Sepharose Fast Flow) and hydrophobic supports [such as Fractogel TSK Butyl 650(M), Fractogel EMD Propyl 650(S) or Thiophilic gels]. The use of covalent or affinity gels is recommended to provide preparations of cysteine endopeptidases with a high free thiol content (ideally 1 mol of essential free thiol function per mol of enzyme). The selective grafting of activated methoxypoly(ethylene glycol) chains (with M(r) of 5000) on the free thiol functions of the proteinases provides an interesting alternative to the use of covalent and affinity chromatographies especially in the case of enzymes such as chymopapain that contains, in its native state, two thiol functions.", "title": "" }, { "docid": "9869f2a28b11a5f0a83127937408b0ac", "text": "With the advent of the Semantic Web, the field of domain ontology engineering has gained more and more importance. This innovative field may have a big impact on computer-based education and will certainly contribute to its development. This paper presents a survey on domain ontology engineering and especially domain ontology learning. The paper focuses particularly on automatic methods for ontology learning from texts. 
It summarizes the state of the art in natural language processing techniques and statistical and machine learning techniques for ontology extraction. It also explains how intelligent tutoring systems may benefit from this engineering and talks about the challenges that face the field.", "title": "" }, { "docid": "1cfdb3a9d6da2e421991b4e5d526a83c", "text": "Scenario-based training exemplifies the learning-by-doing approach to human performance improvement. In this paper, we enumerate the advantages of incorporating automated scenario generation technologies into the traditional scenario development pipeline. An automated scenario generator is a system that creates training scenarios from scratch, augmenting human authoring to rapidly develop new scenarios, providing a richer diversity of tailored training opportunities, and delivering training scenarios on demand. We introduce a combinatorial optimization approach to scenario generation to deliver the requisite diversity and quality of scenarios while tailoring the scenarios to a particular learner's needs and abilities. We propose a set of evaluation metrics appropriate to scenario generation technologies and present preliminary evidence for the suitability of our approach compared to other scenario generation approaches.", "title": "" }, { "docid": "9b69254f90c28e0256fdfbefc608c034", "text": "Multiple-station shared-use vehicle systems allow users to travel between different activity centers and are well suited for resort communities, recreational areas, as well as university and corporate campuses. In this type of shared-use vehicle system, trips are more likely to be oneway each time, differing from other shared-use vehicle system models such as neighborhood carsharing and station cars where round-trips are more prevalent. Although convenient to users, a multiple-station system can suffer from a vehicle distribution problem. As vehicles are used throughout the day, they may become disproportionally distributed among the stations. As a result, it is necessary on occasion to relocate vehicles from one station to another. Relocations can be performed by system staff, which can be cumbersome and costly. In order to alleviate the distribution problem and reduce the number or relocations, we introduce two user-based relocation mechanisms called trip joining (or ridesharing) or trip splitting. When the system realizes that it is becoming imbalanced, it urges users that have more than one passenger to take separate vehicles when more vehicles are needed at the destination station (trip splitting). Conversely, if two users are at the origin station at the same time traveling to the same destination, the system can urge them to rideshare (trip joining). We have implemented this concept both on a real-world university campus shared vehicle system and in a high-fidelity computer simulation model. The model results show that there can be as much as a 42% reduction in the number of relocations using these techniques.", "title": "" }, { "docid": "d1940d5db7b1c8b9c7b4ca6ac9463147", "text": "In the new interconnected world, we need to secure vehicular cyber-physical systems (VCPS) using sophisticated intrusion detection systems. In this article, we present a novel distributed intrusion detection system (DIDS) designed for a vehicular ad hoc network (VANET). 
By combining static and dynamic detection agents, that can be mounted on central vehicles, and a control center where the alarms about possible attacks on the system are communicated, the proposed DIDS can be used in both urban and highway environments for real time anomaly detection with good accuracy and response time.", "title": "" } ]
scidocsrr
711bf80c53c49e4d5dd46106d9be0842
Learning image representations equivariant to ego-motion
[ { "docid": "c2b1dd2d2dd1835ed77cf6d43044eed8", "text": "The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the handengineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.", "title": "" } ]
[ { "docid": "deeaa0f2243baded6b3198655b7c78cf", "text": "Glaucoma is a major global cause of blindness. An approach to automatically extract the main features in color fundus images is proposed in this paper. The optic cup-to-disc ratio (CDR) in retinal fundus images is one of the principle physiological characteristics in the diagnosis of glaucoma. The least square fitting algorithm aims to improve the accuracy of the boundary estimation. The technique used here is a core component of ARGALI (Automatic cup-to-disc Ratio measurement system for Glaucoma detection and AnaLysIs), a system for automated glaucoma risk assessment. The algorithm's effectiveness is demonstrated manually on segmented retina fundus images. By comparing the automatic cup height measurement to ground truth, we found that the method accurately detected neuro-retinal cup height. This work improves the efficiency of clinical interpretation of Glaucoma in fundus images of the eye. The tool utilized to accomplish the objective is MATLAB7.5.", "title": "" }, { "docid": "6dc4e4949d4f37f884a23ac397624922", "text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.", "title": "" }, { "docid": "15c0f63bb4ab47e47d2bb9789cf404f4", "text": "This review provides an account of the Study of Mathematically Precocious Youth (SMPY) after 35 years of longitudinal research. Findings from recent 20-year follow-ups from three cohorts, plus 5- or 10-year findings from all five SMPY cohorts (totaling more than 5,000 participants), are presented. SMPY has devoted particular attention to uncovering personal antecedents necessary for the development of exceptional math-science careers and to developing educational interventions to facilitate learning among intellectually precocious youth. Along with mathematical gifts, high levels of spatial ability, investigative interests, and theoretical values form a particularly promising aptitude complex indicative of potential for developing scientific expertise and of sustained commitment to scientific pursuits. Special educational opportunities, however, can markedly enhance the development of talent. Moreover, extraordinary scientific accomplishments require extraordinary commitment both in and outside of school. The theory of work adjustment (TWA) is useful in conceptualizing talent identification and development and bridging interconnections among educational, counseling, and industrial psychology. The lens of TWA can clarify how some sex differences emerge in educational settings and the world of work. 
For example, in the SMPY cohorts, although more mathematically precocious males than females entered math-science careers, this does not necessarily imply a loss of talent because the women secured similar proportions of advanced degrees and high-level careers in areas more correspondent with the multidimensionality of their ability-preference pattern (e.g., administration, law, medicine, and the social sciences). By their mid-30s, the men and women appeared to be happy with their life choices and viewed themselves as equally successful (and objective measures support these subjective impressions). Given the ever-increasing importance of quantitative and scientific reasoning skills in modern cultures, when mathematically gifted individuals choose to pursue careers outside engineering and the physical sciences, it should be seen as a contribution to society, not a loss of talent.", "title": "" }, { "docid": "57991cdfd00786c929d1a909ba22cbee", "text": "This system description explains how to use several bilingual dictionaries and aligned corpora in order to create translation candidates for novel language pairs. It proposes (1) a graph-based approach which does not depend on cyclical translations and (2) a combination of this method with a collocation-based model using the multilingually aligned Europarl corpus.", "title": "" }, { "docid": "cd8f880b2c290ac6066beb4010d90001", "text": "The miniaturization of integrated circuits based on complementary metal oxide semiconductor (CMOS) technology meets a significant slowdown in this decade due to several technological and scientific difficulties. Spintronic devices such as magnetic tunnel junction (MTJ) nanopillar become one of the most promising candidates for the next generation of memory and logic chips thanks to their non-volatility, infinite endurance, and high density. A magnetic processor based on spintronic devices is then expected to overcome the issue of increasing standby power due to leakage currents and high dynamic power dedicated to data moving. For the purpose of fabricating such a non-volatile magnetic processor, a new design of multi-bit magnetic adder (MA)-the basic element of arithmetic/logic unit for any processor-whose input and output data are stored in perpendicular magnetic anisotropy (PMA) domain wall (DW) racetrack memory (RM)-is presented in this paper. The proposed multi-bit MA circuit promises nearly zero standby power, instant ON/OFF capability, and smaller die area. By using an accurate racetrack memory spice model, we validated this design and simulated its performance such as speed, power and area, etc.", "title": "" }, { "docid": "5e24546cb92fa4445c044bd9bed46081", "text": "Previous research has demonstrated that when a close romantic partner views you and behaves toward you in a manner that is congruent with your ideal self, you experience movement toward your ideal self (termed the \"Michelangelo phenomenon\"; Drigotas, Rusbult, Wieselquist, & Whitton, 1999). The present research represents an attempt demonstrate the phenomenon's link to personal well-being. Results of a cross-sectional study of individuals in dating relationships, with a 2-month follow-up assessing breakup, replicated previous findings regarding relationship well-being and revealed strong links between the model and personal well-being, even when accounting for level of relationship satisfaction. 
Such results provide further evidence for the social construction of the self and personal well-being.", "title": "" }, { "docid": "cd23b0dfd98fb42513229070035e0aa9", "text": "Sixteen residents in long-term care with advanced dementia (14 women; average age = 88) showed significantly more constructive engagement (defined as motor or verbal behaviors in response to an activity), less passive engagement (defined as passively observing an activity), and more pleasure while participating in Montessori-based programming than in regularly scheduled activities programming. Principles of Montessori-based programming, along with examples of such programming, are presented. Implications of the study and methods for expanding the use of Montessori-based dementia programming are discussed.", "title": "" }, { "docid": "a05a953097e5081670f26e85c4b8e397", "text": "In European science and technology policy, various styles have been developed and institutionalised to govern the ethical challenges of science and technology innovations. In this paper, we give an account of the most dominant styles of the past 30 years, particularly in Europe, seeking to show their specific merits and problems. We focus on three styles of governance: a technocratic style, an applied ethics style, and a public participation style. We discuss their merits and deficits, and use this analysis to assess the potential of the recently established governance approach of 'Responsible Research and Innovation' (RRI). Based on this analysis, we reflect on the current shaping of RRI in terms of 'doing governance'.", "title": "" }, { "docid": "c5639c65908882291c29e147605c79ca", "text": "Dirofilariasis is a rare disease in humans. We report here a case of a 48-year-old male who was diagnosed with pulmonary dirofilariasis in Korea. On chest radiographs, a coin lesion of 1 cm in diameter was shown. Although it looked like a benign inflammatory nodule, malignancy could not be excluded. So, the nodule was resected by video-assisted thoracic surgery. Pathologically, chronic granulomatous inflammation composed of coagulation necrosis with rim of fibrous tissues and granulations was seen. In the center of the necrotic nodules, a degenerating parasitic organism was found. The parasite had prominent internal cuticular ridges and thick cuticle, a well-developed muscle layer, an intestinal tube, and uterine tubules. The parasite was diagnosed as an immature female worm of Dirofilaria immitis. This is the second reported case of human pulmonary dirofilariasis in Korea.", "title": "" }, { "docid": "7882d2d18bc8a30a63e9fdb726c48ff1", "text": "Flying ad-hoc networks (FANETs) are a very vibrant research area nowadays. They have many military and civil applications. Limited battery energy and the high mobility of micro unmanned aerial vehicles (UAVs) represent their two main problems, i.e., short flight time and inefficient routing. In this paper, we try to address both of these problems by means of efficient clustering. First, we adjust the transmission power of the UAVs by anticipating their operational requirements. Optimal transmission range will have minimum packet loss ratio (PLR) and better link quality, which ultimately save the energy consumed during communication. Second, we use a variant of the K-Means Density clustering algorithm for selection of cluster heads. Optimal cluster heads enhance the cluster lifetime and reduce the routing overhead. 
The proposed model outperforms the state of the art artificial intelligence techniques such as Ant Colony Optimization-based clustering algorithm and Grey Wolf Optimization-based clustering algorithm. The performance of the proposed algorithm is evaluated in term of number of clusters, cluster building time, cluster lifetime and energy consumption.", "title": "" }, { "docid": "e05ef8c7b20b91998ec8034c58177c85", "text": "We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.", "title": "" }, { "docid": "17bd8497b30045267f77572c9bddb64f", "text": "0007-6813/$ see front matter D 200 doi:10.1016/j.bushor.2004.11.006 * Corresponding author. E-mail addresses: cseelos@sscg.org jmair@iese.edu (J. Mair).", "title": "" }, { "docid": "55285f99e1783bcba47ab41e56171026", "text": "Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.", "title": "" }, { "docid": "1c8e47f700926cf0b6ab6ed7446a6e7a", "text": "Named Entity Recognition (NER) is a key task in biomedical text mining. Accurate NER systems require task-specific, manually-annotated datasets, which are expensive to develop and thus limited in size. Since such datasets contain related but different information, an interesting question is whether it might be possible to use them together to improve NER performance. To investigate this, we develop supervised, multi-task, convolutional neural network models and apply them to a large number of varied existing biomedical named entity datasets. Additionally, we investigated the effect of dataset size on performance in both single- and multi-task settings. We present a single-task model for NER, a Multi-output multi-task model and a Dependent multi-task model. 
We apply the three models to 15 biomedical datasets containing multiple named entities including Anatomy, Chemical, Disease, Gene/Protein and Species. Each dataset represent a task. The results from the single-task model and the multi-task models are then compared for evidence of benefits from Multi-task Learning. With the Multi-output multi-task model we observed an average F-score improvement of 0.8% when compared to the single-task model from an average baseline of 78.4%. Although there was a significant drop in performance on one dataset, performance improves significantly for five datasets by up to 6.3%. For the Dependent multi-task model we observed an average improvement of 0.4% when compared to the single-task model. There were no significant drops in performance on any dataset, and performance improves significantly for six datasets by up to 1.1%. The dataset size experiments found that as dataset size decreased, the multi-output model’s performance increased compared to the single-task model’s. Using 50, 25 and 10% of the training data resulted in an average drop of approximately 3.4, 8 and 16.7% respectively for the single-task model but approximately 0.2, 3.0 and 9.8% for the multi-task model. Our results show that, on average, the multi-task models produced better NER results than the single-task models trained on a single NER dataset. We also found that Multi-task Learning is beneficial for small datasets. Across the various settings the improvements are significant, demonstrating the benefit of Multi-task Learning for this task.", "title": "" }, { "docid": "762ddb81b6ba123f60decdb625c628bf", "text": "The automatic detection and recognition of car number plates has become an important application of artificial vision systems. Since the license plates can be replaced, stolen or simply tampered with, they are not the ultimate answer for vehicle identification. The objective is to develop a system whereby vehicle identification number (VIN) or vehicle chassis number is digitally photographed, and then identified electronically by segmenting the characters from the embossed VIN. In this paper we present a novel algorithm for vehicle chassis number identification based on optical character recognition (OCR) using artificial neural network. The algorithm is tested on over thousand vehicle images of different ambient illumination. While capturing these images, the VIN was kept in-focus, while the angle of view and the distance from the vehicle varied according to the experimental setup. These images were subjected to pre-processing which comprises of some standard image processing algorithms. The resultant images were then fed to the proposed OCR system. The OCR system is a three-layer artificial neural network (ANN) with topology 504-600-10. The major achievement of this work is the rate of correct identification, which is 95.49% with zero false identification.", "title": "" }, { "docid": "61ffc67f0e242afd8979d944cbe2bff4", "text": "Diprosopus is a rare congenital malformation associated with high mortality. Here, we describe a patient with diprosopus, multiple life-threatening anomalies, and genetic mutations. Prenatal diagnosis and counseling made a beneficial impact on the family and medical providers in the care of this case.", "title": "" }, { "docid": "907883af0e81f4157e81facd4ff4344c", "text": "This work presents a low-power low-cost CDR design for RapidIO SerDes. The design is based on phase interpolator, which is controlled by a synthesized standard cell digital block. 
Half-rate architecture is adopted to lessen the problems in routing high speed clocks and reduce power. An improved half-rate bang-bang phase detector is presented to assure the stability of the system. Moreover, the paper proposes a simplified control scheme for the phase interpolator to further reduce power and cost. The CDR takes an area of less than 0.05mm2, and post simulation shows that the CDR has a RMS jitter of UIpp/32 (11.4ps@3.125GBaud) and consumes 9.5mW at 3.125GBaud.", "title": "" }, { "docid": "b77b5d588c8f20825dd61bd2cc4e51c4", "text": "During crises such as natural disasters or other human tragedies, information needs of both civilians and responders often require urgent, specialized treatment. Monitoring and summarizing a text stream during such an event remains a difficult problem. We present a system for update summarization which predicts the salience of sentences with respect to an event and then uses these predictions to directly bias a clustering algorithm for sentence selection, increasing the quality of the updates. We use novel, disaster-specific features for salience prediction, including geo-locations and language models representing the language of disaster. Our evaluation on a standard set of retrospective events using ROUGE shows that salience prediction provides a significant improvement over other approaches.", "title": "" }, { "docid": "538ad3f32bbf333d73e619efc8ab4e9c", "text": "In order to learn effective control policies for dynamical systems, policy search methods must be able to discover successful executions of the desired task. While random exploration can work well in simple domains, complex and highdimensional tasks present a serious challenge, particularly when combined with high-dimensional policies that make parameter-space exploration infeasible. We present a method that uses trajectory optimization as a powerful exploration strategy that guides the policy search. A variational decomposition of a maximum likelihood policy objective allows us to use standard trajectory optimization algorithms such as differential dynamic programming, interleaved with standard supervised learning for the policy itself. We demonstrate that the resulting algorithm can outperform prior methods on two challenging locomotion tasks.", "title": "" }, { "docid": "9c1e518c80dfbf201291923c9c55f1fd", "text": "Computation underlies the organization of cells into higher-order structures, for example during development or the spatial association of bacteria in a biofilm. Each cell performs a simple computational operation, but when combined with cell–cell communication, intricate patterns emerge. Here we study this process by combining a simple genetic circuit with quorum sensing to produce more complex computations in space. We construct a simple NOR logic gate in Escherichia coli by arranging two tandem promoters that function as inputs to drive the transcription of a repressor. The repressor inactivates a promoter that serves as the output. Individual colonies of E. coli carry the same NOR gate, but the inputs and outputs are wired to different orthogonal quorum-sensing ‘sender’ and ‘receiver’ devices. The quorum molecules form the wires between gates. By arranging the colonies in different spatial configurations, all possible two-input gates are produced, including the difficult XOR and EQUALS functions. The response is strong and robust, with 5- to >300-fold changes between the ‘on’ and ‘off’ states. 
This work helps elucidate the design rules by which simple logic can be harnessed to produce diverse and complex calculations by rewiring communication between cells.", "title": "" } ]
scidocsrr
68b38a127a9fd4cf4af5b14ab05d63dc
Recommender Systems for the Semantic Web
[ { "docid": "0dd78cb46f6d2ddc475fd887a0dc687c", "text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.", "title": "" } ]
[ { "docid": "c9af9d5f461cb0aa196221c926ac4252", "text": "The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires quite some effort. To help solve this problem, we develop tools for metrics analysis of a large number of software projects (146 projects with ca. 70.000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not to be validated independently. Based on our statistical basis, we identify correlation between several metrics from well-known object-oriented metrics suites. Besides, we present early results of typical metrics values and possible thresholds.", "title": "" }, { "docid": "b8bee026b35868b62ef2ffe5029bfb7b", "text": "In this paper, we propose a novel network architecture, a recurrent convolutional neural network, which is trained to learn a joint spectral-spatial-temporal feature representation in a unified framework for change detection of multispectral images. To this end, we bring together a convolutional neural network (CNN) and a recurrent neural network (RNN) into one end-to-end network. The former is able to generate rich spectral-spatial feature representations while the latter effectively analyzes temporal dependency in bi-temporal images. Although both CNN and RNN are well-established techniques for remote sensing applications, to the best of our knowledge, we are the first to combine them for multitemporal data analysis in the remote sensing community. Both visual and quantitative analysis of experimental results demonstrates competitive performance in the proposed mode.", "title": "" }, { "docid": "f3f4cb6e7e33f54fca58c14ce82d6b46", "text": "In this letter, a novel slot array antenna with a substrate-integrated coaxial line (SICL) technique is proposed. The proposed antenna has radiation slots etched homolaterally along the mean line in the top metallic layer of SICL and achieves a compact transverse dimension. A prototype with 5 <inline-formula><tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> 10 longitudinal slots is designed and fabricated with a multilayer liquid crystal polymer (LCP) process. A maximum gain of 15.0 dBi is measured at 35.25 GHz with sidelobe levels of <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 28.2 dB (<italic>E</italic>-plane) and <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 33.1 dB (<italic>H</italic>-plane). The close correspondence between experimental results and designed predictions on radiation patterns has validated the proposed excogitation in the end.", "title": "" }, { "docid": "4255fd867660b8a6998c058508339e90", "text": "This paper presents the concept of a new robotic joint composed of two electric motors as inputs, an epicyclic gearing system for the transmission, and a single output. The proposed joint mechanism has a wider range of speed and torque performances comparatively to a traditional robot joint using a single motor and gearbox. The dynamic equations for the mechanical transmission system are given and a dual-motor joint mechanism is designed and prototyped to test this new concept of robotic joint. Also, the potential advantages of this joint concept for the design of manipulators for which a wide range of performances are desired are discussed. 
This work is motivated by the development of field robots designed for the operation and maintenance tasks in power distribution lines.", "title": "" }, { "docid": "aee62b585bb8a51b7bd9e0835bce72b4", "text": "Someone said, “It is a bad craftsman that blames his tools.” It should be obvious to the thoughtful observer that the problem may be the implementation of ISD, not a systematic approach itself. At the highest level of a systems approach one cannot imagine a design process that does not identify the training needs of an organization or the learning needs of the students. While learning occurs in many different environments, it is generally agreed that instruction requires that one first identify the goals of the instruction. It is equally difficult to imagine a process that does not involve planning, development, implementation, and evaluation. It is not these essential development activities that are in question but perhaps the fact that their detailed implementation in various incarnations of ISD do not represent the most efficient or effective method for designing instruction. A more significant element is the emphasis on the process involved in developing instruction rather than the basic learning principles that this process should emphasize. Merely following a series of steps, when there is insufficient guidance as to quality, is likely to result in an inferior product. A technology involves not only the steps involved but a set of specifications for what each step is to accomplish. Perhaps many ISD implementations have had insufficient specifications for the products of the process.", "title": "" }, { "docid": "eabb50988aeb711995ff35833a47770d", "text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.", "title": "" }, { "docid": "d864cc5603c97a8ff3c070dd385fe3a8", "text": "Nowadays, different protocols coexist in Internet that provides services to users. Unfortunately, control decisions and distributed management make it hard to control networks. These problems result in an inefficient and unpredictable network behaviour. Software Defined Networks (SDN) is a new concept of network architecture. It intends to be more flexible and to simplify the management in networks with respect to traditional architectures. Each of these aspects are possible because of the separation of control plane (controller) and data plane (switches) in network devices. OpenFlow is the most common protocol for SDN networks that provides the communication between control and data planes. Moreover, the advantage of decoupling control and data planes enables a quick evolution of protocols and also its deployment without replacing data plane switches. In this survey, we review the SDN technology and the OpenFlow protocol and their related works. Specifically, we describe some technologies as Wireless Sensor Networks and Wireless Cellular Networks and how SDN can be included within them in order to solve their challenges. 
We classify different solutions for each technology attending to the problem that is being fixed.", "title": "" }, { "docid": "2bdaaeb18db927e2140c53fcc8d4fa30", "text": "Many information gathering problems require determining the set of points, for which an unknown function takes value above or below some given threshold level. As a concrete example, in the context of environmental monitoring of Lake Zurich we would like to estimate the regions of the lake where the concentration of chlorophyll or algae is greater than some critical value, which would serve as an indicator of algal bloom phenomena. A critical factor in such applications is the high cost in terms of time, battery power, etc. that is associated with each measurement, therefore it is important to be careful about selecting “informative” locations to sample, in order to reduce the total sampling effort required. We formalize the task of level set estimation as a classification problem with sequential measurements, where the unknown function is modeled as a sample from a Gaussian process (GP). We propose LSE, an active learning algorithm that guides both sampling and classification based on GP-derived confidence bounds, and provide theoretical guarantees about its sample complexity. Furthermore, we extend LSE and its theory to two more natural settings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches. Based on the latter extension we also propose a simple path planning algorithm. We evaluate the effectiveness of our proposed methods on two problems of practical interest, namely the aforementioned autonomous monitoring of algal populations in Lake Zurich and geolocating network latency.", "title": "" }, { "docid": "8e5573b7ab9789a73d431b666bfb3c8a", "text": "Automated question answering has been a topic of research and development since the earliest AI applications. Computing power has increased since the first such systems were developed, and the general methodology has changed from the use of hand-encoded knowledge bases about simple domains to the use of text collections as the main knowledge source over more complex domains. Still, many research issues remain. The focus of this article is on the use of restricted domains for automated question answering. The article contains a historical perspective on question answering over restricted domains and an overview of the current methods and applications used in restricted domains. A main characteristic of question answering in restricted domains is the integration of domain-specific information that is either developed for question answering or that has been developed for other purposes. We explore the main methods developed to leverage this domain-specific information.", "title": "" }, { "docid": "faf547f09749d672177a8245612b7bbb", "text": "Most previous work on fashion recommendation focuses on designing visual features to enhance recommendations. Existing work neglects user comments of fashion items, which have been proved effective in generating explanations along with better recommendation results. We propose a novel neural network framework, neural fashion recommendation (NFR), that simultaneously provides fashion recommendations and generates abstractive comments. NFR consists of two parts: outfit matching and comment generation. 
For outfit matching, we propose a convolutional neural network with a mutual attention mechanism to extract visual features of outfits. The visual features are then decoded into a rating score for the matching prediction. For abstractive comment generation, we propose a gated recurrent neural network with a cross-modality attention mechanism to transform visual features into a concise sentence. The two parts are jointly trained based on a multi-task learning framework in an end-to-end back-propagation paradigm. Extensive experiments conducted on an existing dataset and a collected real-world dataset show NFR achieves significant improvements over state-of-the-art baselines for fashion recommendation. Meanwhile, our generated comments achieve impressive ROUGE and BLEU scores in comparison to human-written comments. The generated comments can be regarded as explanations for the recommendation results. We release the dataset and code to facilitate future research.", "title": "" }, { "docid": "19672ead8c41fa723099b30d152fb466", "text": "-Fractal dimension is an interesting parameter to characterize roughness in an image. It can be used in texture segmentation, estimation of three-dimensional (3D) shape and other information. A new method is proposed to estimate fractal dimension in a two-dimensional (2D) image which can readily be extended to a 3D image as well. The method has been compared with other existing methods to show that our method is both efficient and accurate. Fractal dimension Texture analysis Image roughness measure Image segmentation Computer vision", "title": "" }, { "docid": "2893a60090a15e2c913ae37e976c2bff", "text": "We propose Precoded SUbcarrier Nulling (PSUN), a transmission strategy for OFDM-based wireless communication networks (SCN, Secondary Communication Networks) that need to coexist with pulsed radar systems. It is a novel null-tone allocation method that effectively mitigates inter-carrier interference (ICI) remaining after pulse blanking (PB). When the power from the radar's pulse interference is high, the SCN Rx needs to employ PB to mitigate the interference power. Although PB is known to be an effective technique for suppressing pulsed interference, it magnifies the effect of ICI in OFDM waveforms, and thus degrades bit error rate (BER) performance. For more reliable performance evaluation, we take into account two characteristics of the incumbent radar significantly affect the performance of SCN: (i) antenna sidelobe and (ii) out-of-band emission. Our results show that PSUN effectively mitigates the impact of ICI remaining after PB.", "title": "" }, { "docid": "cddcd23b07837a93e4c6fe2b5d9765ec", "text": "Advanced driver assistance systems (ADAS) have a critical role in the development of the active safety systems for vehicles. There are various sub technologies like Adaptive cruise control (ACC), Collision avoidance system, Blind spot detection etc. under ADAS. All these technologies are also accepted as the preliminary technology of autonomous driving. Therefore, during development of these technologies using a system of system (SOS) control approach would help both decreasing the development costs and unifying all these technologies under autonomous driving. In this paper, a SOS based intelligent ACC system design is proposed. 
The ACC system has high-level control, low-level control, and sensor units.", "title": "" }, { "docid": "2fcd7e151c658e29cacda5c4f5542142", "text": "The connection between gut microbiota and energy homeostasis and inflammation and its role in the pathogenesis of obesity-related disorders are increasingly recognized. Animal models of obesity connect an altered microbiota composition to the development of obesity, insulin resistance, and diabetes in the host through several mechanisms: increased energy harvest from the diet, altered fatty acid metabolism and composition in adipose tissue and liver, modulation of gut peptide YY and glucagon-like peptide (GLP)-1 secretion, activation of the lipopolysaccharide toll-like receptor-4 axis, and modulation of intestinal barrier integrity by GLP-2. Instrumental for gut microbiota manipulation is the understanding of mechanisms regulating gut microbiota composition. Several factors shape the gut microflora during infancy: mode of delivery, type of infant feeding, hospitalization, and prematurity. Furthermore, the key importance of antibiotic use and dietary nutrient composition is increasingly recognized. The role of the Western diet in promoting an obesogenic gut microbiota is being confirmed in human subjects. Following encouraging results in animals, several short-term randomized controlled trials showed the benefit of prebiotics and probiotics on insulin sensitivity, inflammatory markers, postprandial incretins, and glucose tolerance. Future research is needed to unravel the hormonal, immunomodulatory, and metabolic mechanisms underlying microbe-microbe and microbiota-host interactions and the specific genes that determine the health benefit derived from probiotics. While awaiting further randomized trials assessing long-term safety and benefits on clinical end points, a healthy lifestyle--including breast lactation, appropriate antibiotic use, and the avoidance of excessive dietary fat intake--may ensure a friendly gut microbiota and positively affect prevention and treatment of metabolic disorders.", "title": "" }, { "docid": "11b9964816912595d3d001e547465a1e", "text": "Redundancy is at the heart of graphical applications. In fact, generating an animation typically involves the succession of extremely similar images. In terms of rendering these images, this behavior translates into the creation of many fragment programs with the exact same input data. We have measured this fragment redundancy for a set of commercial Android applications, and found that more than 40% of the fragments used in a frame have already been computed in a prior frame.\n In this paper we try to exploit this redundancy, using fragment memoization. Unfortunately, this is not an easy task as most of the redundancy exists across frames, rendering most HW-based schemes infeasible. We thus first take a step back and try to analyze the temporal locality of the redundant fragments, their complexity, and the number of inputs typically seen in fragment programs. The result of our analysis is a task-level memoization scheme that easily outperforms the current state-of-the-art in low-power GPUs.\n More specifically, our experimental results show that our scheme is able to remove 59.7% of the redundant fragment computations on average. 
This translates into a significant speedup of 17.6% on average, while also improving the overall energy efficiency by 8.9% on average.", "title": "" }, { "docid": "58ea0c36079f1b12ffa09a9b65f198c0", "text": "We propose a new class of vortex definitions for flows that are induced by rotating mechanical parts, such as stirring devices, helicopters, hydrocyclones, centrifugal pumps, or ventilators. Instead of a Galilean invariance, we enforce a rotation invariance, i.e., the invariance of a vortex under a uniform-speed rotation of the underlying coordinate system around a fixed axis. We provide a general approach to transform a Galilean invariant vortex concept to a rotation invariant one by simply adding a closed form matrix to the Jacobian. In particular, we present rotation invariant versions of the well-known Sujudi-Haimes, Lambda-2, and Q vortex criteria. We apply them to a number of artificial and real rotating flows, showing that for these cases rotation invariant vortices give better results than their Galilean invariant counterparts.", "title": "" }, { "docid": "50f7fd72dcd833c92efb56fb71918263", "text": "The input vocabulary for touch-screen interaction on handhelds is dramatically limited, especially when the thumb must be used. To enrich that vocabulary we propose to discriminate, among thumb gestures, those we call MicroRolls, characterized by zero tangential velocity of the skin relative to the screen surface. Combining four categories of thumb gestures, Drags, Swipes, Rubbings and MicroRolls, with other classification dimensions, we show that at least 16 elemental gestures can be automatically recognized. We also report the results of two experiments showing that the roll vs. slide distinction facilitates thumb input in a realistic copy and paste task, relative to existing interaction techniques.", "title": "" }, { "docid": "1dab5734e1e3e8e12eb533c8d2ca98f1", "text": "The significant growth of online shopping has made competition in this industry more intense. Maintaining customer loyalty has been recognized as one of the essential factors for business survival and growth. The purpose of this study is to examine empirically the influence of satisfaction, trust and commitment on customer loyalty in online shopping. This paper describes a theoretical model for investigating the influence of satisfaction, trust and commitment on customer loyalty toward online shopping. Based on the theoretical model, hypotheses were formulated. The primary data were collected from the respondents, who consisted of 300 students. Multiple regression and qualitative analysis were used to test the study hypotheses. The empirical study results revealed that satisfaction, trust and commitment have a significant impact on student loyalty toward online shopping.", "title": "" }, { "docid": "b83fc3d06ff877a7851549bcd23aaed2", "text": "Finding what is and what is not a salient object can be helpful in developing better features and models in salient object detection (SOD). In this paper, we investigate the images that are selected and discarded in constructing a new SOD dataset and find that many similar candidates, complex shape and low objectness are three main attributes of many non-salient objects. Moreover, objects may have diversified attributes that make them salient. As a result, we propose a novel salient object detector by ensembling linear exemplar regressors. 
We first select reliable foreground and background seeds using the boundary prior and then adopt locally linear embedding (LLE) to conduct manifold-preserving foregroundness propagation. In this manner, a foregroundness map can be generated to roughly pop out salient objects and suppress non-salient ones with many similar candidates. Moreover, we extract the shape, foregroundness and attention descriptors to characterize the extracted object proposals, and a linear exemplar regressor is trained to encode how to detect salient proposals in a specific image. Finally, various linear exemplar regressors are ensembled to form a single detector that adapts to various scenarios. Extensive experimental results on 5 datasets and the new SOD dataset show that our approach outperforms 9 state-of-the-art methods.", "title": "" }, { "docid": "4be71eccf611b7bdffb708f8cfa2613d", "text": "Many natural and social systems develop complex networks that are usually modeled as random graphs. The eigenvalue spectrum of these graphs provides information about their structural properties. While the semicircle law is known to describe the spectral densities of uncorrelated random graphs, much less is known about the spectra of real-world graphs, describing such complex systems as the Internet, metabolic pathways, networks of power stations, scientific collaborations, or movie actors, which are inherently correlated and usually very sparse. An important limitation in addressing the spectra of these systems is that the numerical determination of the spectra for systems with more than a few thousand nodes is prohibitively time and memory consuming. Making use of recent advances in algorithms for spectral characterization, here we develop methods to determine the eigenvalues of networks comparable in size to real systems, obtaining several surprising results on the spectra of adjacency matrices corresponding to models of real-world graphs. We find that when the number of links grows as the number of nodes, the spectral density of uncorrelated random matrices does not converge to the semicircle law. Furthermore, the spectra of real-world graphs have specific features, depending on the details of the corresponding models. In particular, scale-free graphs develop a triangle-like spectral density with a power-law tail, while small-world graphs have a complex spectral density consisting of several sharp peaks. These and further results indicate that the spectra of correlated graphs represent a practical tool for graph classification and can provide useful insight into the relevant structural properties of real networks.", "title": "" } ]
scidocsrr
7dd9b38e4c35df5948c8243263807a65
Gaze Estimation in the 3D Space Using RGB-D sensors Towards Head-Pose And User Invariance
[ { "docid": "1705ba479a7ff33eef46e0102d4d4dd0", "text": "Knowing the user's point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today's interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is significantly less time consuming than pure model-based approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.", "title": "" }, { "docid": "8092ba3c116d33900e72ff79994ac45c", "text": "We describe an expression-invariant method for face recognition by fitting an identity/expression separated 3D Morphable Model to shape data. The expression model greatly improves recognition and retrieval rates in the uncooperative setting, while achieving recognition rates on par with the best recognition algorithms in the face recognition great vendor test. The fitting is performed with a robust nonrigid ICP algorithm. It is able to perform face recognition in a fully automated scenario and on noisy data. The system was evaluated on two datasets, one with a high noise level and strong expressions, and the standard UND range scan database, showing that while expression invariance increases recognition and retrieval performance for the expression dataset, it does not decrease performance on the neutral dataset. The high recognition rates are achieved even with a purely shape based method, without taking image data into account.", "title": "" } ]
[ { "docid": "991c93c20d25636a8d91ba8326c48578", "text": "[1] Interferometric synthetic aperture radar (InSAR) provides a practical means of mapping creep along major strike-slip faults. The small amplitude of the creep signal (<10 mm/yr), combined with its short wavelength, makes it difficult to extract from long time span interferograms, especially in agricultural or heavily vegetated areas. We utilize two approaches to extract the fault creep signal from 37 ERS SAR images along the southern San Andreas Fault. First, amplitude stacking is utilized to identify permanent scatterers, which are then used to weight the interferogram prior to spatial filtering. This weighting improves correlation and also provides a mask for poorly correlated areas. Second, the unwrapped phase is stacked to reduce tropospheric and other short-wavelength noise. This combined processing enables us to recover the near-field ( 200 m) slip signal across the fault due to shallow creep. Displacement maps from 60 interferograms reveal a diffuse secular strain buildup, punctuated by localized interseismic creep of 4–6 mm/yr line of sight (LOS, 12–18 mm/yr horizontal). With the exception of Durmid Hill, this entire segment of the southern San Andreas experienced right-lateral triggered slip of up to 10 cm during the 3.5-year period spanning the 1992 Landers earthquake. The deformation change following the 1999 Hector Mine earthquake was much smaller (<1 cm) and broader than for the Landers event. Profiles across the fault during the interseismic phase show peak-to-trough amplitude ranging from 15 to 25 mm/yr (horizontal component) and the minimum misfit models show a range of creeping/locking depth values that fit the data.", "title": "" }, { "docid": "5b6daefbefd44eea4e317e673ad91da3", "text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.", "title": "" }, { "docid": "ef742ded3107fe9c5812a7c866835117", "text": "Much commentary has been circulating in academe regarding the research skills, or lack thereof, in members of ‘‘Generation Y,’’ the generation born between 1980 and 1994. The students currently on college campuses, as well as those due to arrive in the next few years, have grown up in front of electronic screens: television, movies, video games, computer monitors. It has been said that student critical thinking and other cognitive skills (as well as their physical well-being) are suffering because of the large proportion of time spent in sedentary pastimes, passively absorbing words and images, rather than in reading. 
It may be that students' cognitive skills are not fully developing due to ubiquitous electronic information technologies. However, it may also be that academe, and indeed the entire world, is currently in the middle of a massive and wide-ranging shift in the way knowledge is disseminated and learned.", "title": "" }, { "docid": "110a60612f701575268fe3dbcf0d338f", "text": "The Danish and Swedish male top football divisions were studied prospectively from January to June 2001. Exposure to football and injury incidence, severity and distribution were compared between the countries. Swedish players had greater exposure to training (171 vs. 123 h per season, P<0.001), whereas exposure to matches did not differ between the countries. There was a higher risk for injury during training in Denmark than in Sweden (11.8 vs. 6.0 per 1000 h, P<0.01), whereas for match play there was no difference (28.2 vs. 26.2 per 1000 h). The risk for incurring a major injury (absence from football more than 4 weeks) was greater in Denmark (1.8 vs. 0.7 per 1000 h, P = 0.002). The distribution of injuries according to type and location was similar in both countries. Of all injuries in Denmark and Sweden, overuse injury accounted for 39% and 38% (NS), and re-injury for 30% and 24% (P = 0.032), respectively. The greater training exposure and the long pre-season period in Sweden may explain some of the reported differences.", "title": "" }, { "docid": "16fa1af9571b623aa756d49fb269ecee", "text": "The subgraph isomorphism problem is one of the most important problems for pattern recognition in graphs. Its applications are found in many different disciplines, including chemistry, medicine, and social network analysis. Because of the NP-completeness of the problem, the existing exact algorithms exhibit an exponential worst-case running time. In this paper, we propose several improvements to the well-known Ullmann's algorithm for the problem. The improvements lower the time consumption as well as the space requirements of the algorithm. We experimentally demonstrate the efficiency of our improvement by comparing it to another set of improvements called FocusSearch, as well as other state-of-the-art algorithms, namely VF2 and LAD.", "title": "" }, { "docid": "81a9907ddc512cbf74e1b10ac620f910", "text": "Spent coffee grounds (SCG) were extracted with an environmentally friendly procedure and analyzed to evaluate the recovery of relevant natural antioxidants for use as nutritional supplements, foods, or cosmetic additives. SCG were characterized in terms of their total phenolic content by the Folin-Ciocalteu procedure and antioxidant activity by the DPPH scavenging assay. Flavonoid content was also determined by a colorimetric assay. The total phenolic content was strongly correlated with the DPPH scavenging activity, suggesting that phenolic compounds are mainly responsible for the antioxidant activity of SCG. A UHPLC-PDA-TOF-MS system was used to separate, identify, and quantify phenolic and nonphenolic compounds in the SCG extracts. Important amounts of chlorogenic acids (CGA) and related compounds as well as caffeine (CAF) evidenced the high potential of SCG, a waste material that is widely available in the world, as a source of natural phenolic antioxidants.", "title": "" }, { "docid": "6ee17f377956fd20432d117b0c001022", "text": "Social media has led to the democratisation of opinion sharing. 
A wealth of information about public opinions, current events, and authors' insights into specific topics can be gained by understanding the text written by users. However, there is a wide variation in the language used by different authors in different contexts on the web. This diversity in language makes interpretation an extremely challenging task. Crowdsourcing presents an opportunity to interpret the sentiment, or topic, of free-text. However, the subjectivity and bias of human interpreters raise challenges in inferring the semantics expressed by the text. To overcome this problem, we present a novel Bayesian approach to language understanding that relies on aggregated crowdsourced judgements. Our model encodes the relationships between labels and text features in documents, such as tweets, web articles, and blog posts, accounting for the varying reliability of human labellers. It allows inference of annotations that scales to arbitrarily large pools of documents. Our evaluation using two challenging crowdsourcing datasets shows that by efficiently exploiting language models learnt from aggregated crowdsourced labels, we can provide up to 25% improved classifications when only a small portion, less than 4%, of documents has been labelled. Compared to the six state-of-the-art methods, we reduce by up to 67% the number of crowd responses required to achieve comparable accuracy. Our method was a joint winner of the CrowdFlower - CrowdScale 2013 Shared Task challenge at the conference on Human Computation and Crowdsourcing (HCOMP 2013).", "title": "" }, { "docid": "2c39eafa87d34806dd1897335fdfe41c", "text": "One of the issues facing credit card fraud detection systems is that a significant percentage of transactions labeled as fraudulent are in fact legitimate. These “false alarms” delay the detection of fraudulent transactions and can cause unnecessary concerns for customers. In this study, over 1 million unique credit card transactions from 11 months of data from a large Canadian bank were analyzed. A meta-classifier model was applied to the transactions after being analyzed by the Bank's existing neural network based fraud detection algorithm. This meta-classifier model consists of 3 base classifiers constructed using the decision tree, naïve Bayesian, and k-nearest neighbour algorithms. The naïve Bayesian algorithm was also used as the meta-level algorithm to combine the base classifier predictions to produce the final classifier. Results from the research show that when a meta-classifier was deployed in series with the Bank's existing fraud detection algorithm, improvements of up to 28% over their existing system can be achieved.", "title": "" }, { "docid": "8833d9299c7a106f5fc3ad72a31327f8", "text": "The challenge in obtaining accurate recordings of biomedical signals such as EEG and ECG is to deal with the interference caused by the power line, which is a low frequency signal at 50Hz/60Hz. In order to yield accurate readings from ECG/EEG there is a need to eliminate this interfering signal. This paper presents the design of a discrete-time notch filter to eliminate the 60Hz interference. The filter is implemented in CMOS 180nm technology. The Tow-Thomas topology is used for the implementation of the filter, which includes a fully differential operational amplifier and switched-capacitor based resistors. 
The conventional approach to filter design, i.e., the continuous-time approach using resistors, consumes a large silicon area; hence a discrete-time approach is presented in this paper, which replaces the resistors with a switched-capacitor network. The results of the discrete-time filter show a notch depth of 62.3dB while consuming 9.936μW of power. The design is simulated using Cadence Virtuoso and the post-layout simulations are presented.", "title": "" }, { "docid": "013bdf7a7f2ad22b358637cacc1bc853", "text": "In this paper we propose an NLP-based method for Ontology Population from texts and apply it to semi-automatically instantiate a Generic Knowledge Base (Generic Domain Ontology) in the risk management domain. The approach is semi-automatic and uses domain expert intervention for validation. The proposed approach relies on a set of Instances Recognition Rules based on syntactic structures, and on the predicative power of verbs in the instantiation process. It is not domain dependent since it heavily relies on linguistic knowledge. A description of an experiment performed on a part of the ontology of the PRIMA project (supported by the European community) is given. A first validation of the method is done by populating this ontology with Chemical Fact Sheets from the Environmental Protection Agency. The results of this experiment complete the paper and support the hypothesis that relying on the predicative power of verbs in the instantiation process improves the performance. Keywords—Information Extraction, Instance Recognition Rules, Ontology Population, Risk Management, Semantic analysis.", "title": "" }, { "docid": "1ac8e3098f8ae082d2c0de658fc208e1", "text": "The ability to learn about and efficiently use tools constitutes a desirable property for general purpose humanoid robots, as it allows them to extend their capabilities beyond the limitations of their own body. Yet, it is a topic that has only recently been tackled by the robotics community. Most of the studies published so far make use of tool representations that allow their models to generalize the knowledge among similar tools in a very limited way. Moreover, most studies assume that the tool is always grasped in its common or canonical grasp position, thus not considering the influence of the grasp configuration on the outcome of the actions performed with them. In the current paper we present a method that tackles both issues simultaneously by using an extended set of functional features and a novel representation of the effect of the tool use. Together, they implicitly account for the grasping configuration and allow the iCub to generalize among tools based on their geometry. Moreover, learning happens in a self-supervised manner: First, the robot autonomously discovers the affordance categories of the tools by clustering the effect of their usage. These categories are subsequently used as a teaching signal to associate visually obtained functional features to the expected tool's affordance. In the experiments, we show how this technique can be effectively used to select, given a tool, the best action to achieve a desired effect.", "title": "" }, { "docid": "c63ce594f3e940783ae24494a6cb1aa9", "text": "In this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed. The new system contains two parts: a deep neural network (DNN) based sequence tagging model and a deep reinforcement learning (DRL) based augmented tagger. 
The augmented tagger helps improve system performance by modeling the data with minority tags. The new system is evaluated on SLU and NLU sequence tagging tasks using ATIS and CoNLL2003 benchmark datasets, to demonstrate the new system’s outstanding performance on general tagging tasks. Evaluated by F1 scores, it shows that the new system outperforms the current state-of-the-art model on ATIS dataset by 1.9 % and that on CoNLL-2003 dataset by 1.4 %.", "title": "" }, { "docid": "a7f29c88c2fb7423cffb153eec105b50", "text": "Collective cell migration is fundamental to gaining insights into various important biological processes such as wound healing and cancer metastasis. In particular, recent in vitro studies and in silico simulations suggest that mechanics can explain the social behavior of multicellular clusters to a large extent with minimal knowledge of various cellular signaling pathways. These results suggest that a mechanistic perspective is necessary for a comprehensive and holistic understanding of collective cell migration, and this review aims to provide a broad overview of such a perspective.", "title": "" }, { "docid": "615e43e2dc7c12c38c87a4a6649407c0", "text": "BACKGROUND\nThe management of chronic pain is a complex challenge worldwide. Cannabis-based medicines (CBMs) have proven to be efficient in reducing chronic pain, although the topic remains highly controversial in this field.\n\n\nOBJECTIVES\nThis study's aim is to conduct a conclusive review and meta-analysis, which incorporates all randomized controlled trials (RCTs) in order to update clinicians' and researchers' knowledge regarding the efficacy and adverse events (AEs) of CBMs for chronic and postoperative pain treatment.\n\n\nSTUDY DESIGN\nA systematic review and meta-analysis.\n\n\nMETHODS\nAn electronic search was conducted using Medline/Pubmed and Google Scholar with the use of Medical Subject Heading (MeSH) terms on all literature published up to July 2015. A follow-up manual search was conducted and included a complete cross-check of the relevant studies. The included studies were RCTs which compared the analgesic effects of CBMs to placebo. Hedges's g scores were calculated for each of the studies. A study quality assessment was performed utilizing the Jadad scale. A meta-analysis was performed utilizing random-effects models and heterogeneity between studies was statistically computed using I² statistic and tau² test.\n\n\nRESULTS\nThe results of 43 RCTs (a total of 2,437 patients) were included in this review, of which 24 RCTs (a total of 1,334 patients) were eligible for meta-analysis. This analysis showed limited evidence showing more pain reduction in chronic pain -0.61 (-0.78 to -0.43, P < 0.0001), especially by inhalation -0.93 (-1.51 to -0.35, P = 0.001) compared to placebo. Moreover, even though this review consisted of some RCTs that showed a clinically significant improvement with a decrease of pain scores of 2 points or more, 30% or 50% or more, the majority of the studies did not show an effect. Consequently, although the primary analysis showed that the results were favorable to CBMs over placebo, the clinical significance of these findings is uncertain. The most prominent AEs were related to the central nervous and the gastrointestinal (GI) systems.\n\n\nLIMITATIONS\nPublication limitation could have been present due to the inclusion of English-only published studies. Additionally, the included studies were extremely heterogeneous. 
Only 7 studies reported on the patients' history of prior consumption of CBMs. Furthermore, cannabinoids are surrounded by considerable controversy in the media and society and have marked effects, so that inadequate blinding of the placebo could constitute an important source of limitation in these types of studies.\n\n\nCONCLUSIONS\nThe current systematic review suggests that CBMs might be effective for chronic pain treatment, based on limited evidence, primarily for neuropathic pain (NP) patients. Additionally, GI AEs occurred more frequently when CBMs were administered via oral/oromucosal routes than by inhalation. Key words: Cannabis, CBMs, chronic pain, postoperative pain, review, meta-analysis.", "title": "" }, { "docid": "66f47f612c332ac9e3eee7a4f4024a17", "text": "The welfare of both women and men constitutes human welfare. At the turn of the century, amidst the glory of unprecedented growth in national income, India is experiencing the spread of rural distress. It is mainly due to the collapse of the agricultural economy. Structural adjustments and competition from large-scale enterprises result in loss of rural livelihoods. Poor delivery of public services and safety nets deepens the distress. The adverse impact is more on women than on men. This review examines the adverse impact of these events in terms of endowments, livelihood opportunities and nutritional outcomes on women in detail with the help of chosen indicators at two time-periods roughly representing the mid-nineties and early 2000s. The gender equality index computed and the major indicators of welfare show that the gender gap is increasing in many aspects. All the aspects of livelihoods, such as literacy, unemployment and wages, now have larger gender gaps than before. Survival indicators such as juvenile sex ratio, infant mortality and child labour have deteriorated for women compared to men, though there has been a narrowing of gender gaps in life expectancy and literacy. The overall gender gap has widened due to larger gaps in some indicators, which are not compensated by the smaller narrowing in other indicators, both in the rural and urban context.", "title": "" }, { "docid": "a57caf61fdae1ab9c1fc4d944ebe03cd", "text": "The handiness and ease of use of tele-technologies like mobile phones have spurred the growth of ICT in developing countries like India more than ever. Mobile phones are showing overwhelming responses and have helped farmers to do their work on a timely basis and stay connected with the outer farming world. But mobile phones are of no use when it comes to real-time farm monitoring or accessing accurate information, because of the little research on and application of mobile phones in the agricultural field for such uses. The current demand for WSN in agricultural fields has revolutionized the farming experience. In Precision Agriculture, the contributions of WSN are numerous, starting from monitoring soil health and plant health to the storage of crop yield. Due to population pressure and economic inflation, a lot of pressure is on farmers to produce more out of their fields with fewer resources. This paper gives a brief insight into plant disease prediction with the help of wireless sensor networks. Keywords— Plant Disease Monitoring, Precision Agriculture, Environmental Parameters, Wireless Sensor Network (WSN)", "title": "" }, { "docid": "5a912359338b6a6c011e0d0a498b3e8d", "text": "Learning Granger causality for general point processes is a very challenging task. 
In this paper, we propose an effective method for learning Granger causality for a special but significant type of point processes — the Hawkes process. According to the relationship between a Hawkes process's impact function and its Granger causality graph, our model represents impact functions using a series of basis functions and recovers the Granger causality graph via group sparsity of the impact functions' coefficients. We propose an effective learning algorithm combining a maximum likelihood estimator (MLE) with a sparse-group-lasso (SGL) regularizer. Additionally, the flexibility of our model allows us to incorporate the clustering structure of event types into the learning framework. We analyze our learning algorithm and propose an adaptive procedure to select basis functions. Experiments on both synthetic and real-world data show that our method can learn the Granger causality graph and the triggering patterns of the Hawkes processes simultaneously.", "title": "" }, { "docid": "57c780448d8771a0d22c8ed147032a71", "text": "“Social TV” is a term that broadly describes the online social interactions occurring between viewers while watching television. In this paper, we show that TV networks can derive value from social media content placed in shows because it leads to increased word of mouth via online posts, and it highly correlates with TV show related sales. In short, we show that TV event triggers change the online behavior of viewers. In this paper, we first show that using social media content on the televised American reality singing competition, The Voice, led to increased social media engagement during the TV broadcast. We then illustrate that social media buzz about a contestant after a performance is highly correlated with song sales from that contestant's performance. We believe this to be the first study linking TV content to buzz and sales in real time.", "title": "" }, { "docid": "6d323f8dbfd7d2883a4926b80097727c", "text": "This work presents a novel geospatial mapping service, based on OpenStreetMap, which has been designed and developed in order to provide personalized paths to users with special needs. This system gathers data related to barriers and facilities of the urban environment via crowdsourcing and sensing done by users. It also considers open data provided by bus operating companies to identify the actual accessibility features and the real arrival times of buses at the stops. The resulting service supports citizens with reduced mobility (users with disabilities and/or elderly people) by suggesting urban paths accessible to them and providing information related to travelling time, which are tailored to their abilities to move and to the bus arrival times. The manuscript demonstrates the effectiveness of the approach by means of a case study focusing on the differences between the solutions provided by our system and the ones computed by mainstream geospatial mapping services.", "title": "" }, { "docid": "47a12c3101f0aa6cd7f9675a211bcfae", "text": "This paper describes the OpenViBE software platform which enables researchers to design, test, and use brain-computer interfaces (BCIs). BCIs are communication systems that enable users to send commands to computers solely by means of brain activity. BCIs are gaining interest among the virtual reality (VR) community since they have appeared as promising interaction devices for virtual environments (VEs). 
The key features of the platform are (1) high modularity, (2) embedded tools for visualization and feedback based on VR and 3D displays, (3) BCI design made available to non-programmers thanks to visual programming, and (4) various tools offered to the different types of users. The platform features are illustrated in this paper with two entertaining VR applications based on a BCI. In the first one, users can move a virtual ball by imagining hand movements, while in the second one, they can control a virtual spaceship using real or imagined foot movements. Online experiments with these applications together with the evaluation of the platform computational performances showed its suitability for the design of VR applications controlled with a BCI. OpenViBE is a free software distributed under an open-source license.", "title": "" } ]
scidocsrr
2c77904bdd00a17cb190281c6f7ae669
Scene-based automatic image annotation
[ { "docid": "9eaab923986bf74bdd073f6766ca45b2", "text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.", "title": "" }, { "docid": "ba8e974e77d49749c6b8ad2ce950fb64", "text": "We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval.", "title": "" } ]
[ { "docid": "66e11df441e2e5d09dc89be2ab470708", "text": "The current IEEE 802.11 standard mandates multiple transmission rates at the physical layer by employing different modulation and coding schemes. However, it does not specify any rate adaptation mechanism. It is left up to the researchers and vendors to implement adaptation algorithms that utilize the multi-rate capability. The rate adaptation algorithm is a critical component of wireless system performance. The design of such an algorithm is, however, not trivial due to the time-varying characteristics of the wireless channel (attenuation, collisions, interference, etc.). This has attracted the attention of researchers during the last few years. Previous work tends to select bit rates based on either frame loss statistics or physical layer (PHY) metrics, e.g., signal-to-noise ratio. While decisions in frame-based approaches are based on narrow information that limits their adaptability, the decisions in PHY-based approaches are more precise. However, the latter come with the overhead cost of transferring the channel information from the receiver to the transmitter. In this thesis we try to compromise between the channel adaptability and the cost of transferring channel information by signaling a minimized amount of information with respect to channel variations. This thesis presents a novel On-demand Feedback Rate Adaptation (OFRA) algorithm. The novelty of OFRA is that it allows receiver-based adaptation through signaling channel information at a rate close to the channel coherence time. Hence, it eliminates the unnecessary overhead of transferring channel information at a fixed rate oblivious to the channel speed. In OFRA, the receiving node assesses the channel conditions by tracking the signal-to-noise ratio. Once it detects variations in the channel, it selects a new bit-rate and signals it back to the sending node. OFRA is, to the best of our knowledge, the first rate adaptation algorithm that can work in the absence of acknowledgments. This makes OFRA especially beneficial to non-acknowledged traffic that so far had to operate with a fixed bit-rate scheme. The throughput gains using OFRA stem from its ability to react fast in rapidly fluctuating channels while keeping the overhead low. Evaluation results obtained using the NS-3 simulator show that OFRA consistently performs well in static as well as in mobile environments and outperforms ARF, Minstrel and Onoe.", "title": "" }, { "docid": "abb01393c17bf9e5dbb07952a80fd2ab", "text": "We report a case of a 48-year-old male patient with “krokodil” drug-related osteonecrosis of both jaws. Patient history included 1.5 years of “krokodil” use, with 8-month drug withdrawal prior to surgery. The patient was HCV positive. On the maxilla, sequestrectomy was performed. On the mandible, sequestrectomy was combined with bone resection. From ramus to ramus, a segmental defect was formed, which was not reconstructed with any method. The post-operative follow-up period was 3 years and no disease recurrence was noted. On the 3-year post-operative orthopantomogram, newly formed mandibular bone was found. This phenomenon shows that spontaneous bone formation is possible after mandible segmental resection in osteonecrosis patients.", "title": "" }, { "docid": "173d791e05859ec4cc28b9649c414c62", "text": "Breast cancer is the most common invasive cancer in females worldwide. It usually presents with a lump in the breast with or without other manifestations. 
Diagnosis of breast cancer depends on physical examination, mammographic findings and biopsy results. Treatment of breast cancer depends on the stage of the disease. Lines of treatment mainly include surgical removal of the tumor followed by radiotherapy or chemotherapy. Other lines, including immunotherapy, thermochemotherapy and alternative medicine, may represent a hope for breast cancer", "title": "" }, { "docid": "ce71b390fb70bf17186bbd1f6233b085", "text": "This report provides a detailed description and the necessary derivations for the BackPropagation Through Time (BPTT) algorithm. BPTT is often used to learn recurrent neural networks (RNN). Contrary to feed-forward neural networks, the RNN is characterized by the ability to encode longer past information, and is thus very suitable for sequential models. The BPTT extends the ordinary BP algorithm to suit the recurrent neural architecture. 1 Basic Definitions. For a two-layer feed-forward neural network, we notate the input layer as x indexed by variable i, the hidden layer as s indexed by variable j, and the output layer as y indexed by variable k. The weight matrix that maps the input vector to the hidden layer is V, while the hidden layer is propagated through the weight matrix W to the output layer. In a simple recurrent neural network, we attach a time subscript t to every neural layer. The input layer consists of two components, x(t) and the previous activation of the hidden layer s(t − 1) indexed by variable h. The corresponding weight matrix is U. Table 1 lists all the notations used in this report. Neural layers (index variable): x(t) input layer (i); s(t − 1) previous hidden (state) layer (h); s(t) hidden (state) layer (j); y(t) output layer (k). Weight matrices (index variables): V, input layer → hidden layer (i, j); U, previous hidden layer → hidden layer (h, j); W, hidden layer → output layer (j, k). Then, the recurrent neural network can be processed as the following: Input layer → Hidden layer: s_j(t) = f(net_j(t)) (1)", "title": "" }, { "docid": "3a2740b7f65841f7eb4f74a1fb3c9b65", "text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events, and in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attention mechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness of CSM on three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). 
Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.", "title": "" }, { "docid": "c62a2f7fae5d56617b71ffc070a30839", "text": "Digitization brings new possibilities to ease our daily life activities by the means of assistive technology. Amazon Alexa, Microsoft Cortana, Samsung Bixby, to name only a few, heralded the age of smart personal assistants (SPAs), personified agents that combine artificial intelligence, machine learning, natural language processing and various actuation mechanisms to sense and influence the environment. However, SPA research seems to be highly fragmented among different disciplines, such as computer science, human-computer-interaction and information systems, which leads to ‘reinventing the wheel approaches’ and thus impede progress and conceptual clarity. In this paper, we present an exhaustive, integrative literature review to build a solid basis for future research. We have identified five functional principles and three research domains which appear promising for future research, especially in the information systems field. Hence, we contribute by providing a consolidated, integrated view on prior research and lay the foundation for an SPA classification scheme.", "title": "" }, { "docid": "74ab7137119c5a1d462d8b6375b89f18", "text": "This paper describes an intelligent computer-aided architectural design system (ICAAD) called ICADS. ICADS encapsulates different types of design knowledge into independent “critic” modules. Each “critic” module possesses expertise in evaluating an architect’s work in different areas of architectural design and can offer expert advice when needed. This research focuses on the representation of spatial information encoded in architectural floor plans and the representation of expert design knowledge. Described in this paper is our research in designing and developing two particular “critic” modules. The first module, FPDX, checks a residential apartment floor plan, verifies that the plan meets a set of government regulations, and offers suggestions for floor plan changes if regulations are not met. The second module, IDX, analyzes room and furniture layout according to a set of interior design guidelines and offers ideas on how furniture should be moved if the placement does not follow good design principles.", "title": "" }, { "docid": "718676a1639fbf3c68df01049f606b14", "text": "Interleukin 6 (IL-6) has a broad effect on cells of the immune system and those not of the immune system and often displays hormone-like characteristics that affect homeostatic processes. IL-6 has context-dependent pro- and anti-inflammatory properties and is now regarded as a prominent target for clinical intervention. However, the signaling cassette that controls the activity of IL-6 is complicated, and distinct intervention strategies can inhibit this pathway. Clinical experience with antagonists of IL-6 has raised new questions about how and when to block this cytokine to improve disease outcome and patient wellbeing. 
Here we discuss the effect of IL-6 on innate and adaptive immunity and the possible advantages of various antagonists of IL-6 and consider how the immunobiology of IL-6 may inform clinical decisions.", "title": "" }, { "docid": "24620b7089f3057e82f3f4e518ccc2d3", "text": "This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.", "title": "" }, { "docid": "25ae604f6e56aae8baf92693fa4df3d4", "text": "Many automatic image annotation methods are based on the learning by example paradigm. Image tagging, through manual image inspection, is the first step towards this end. However, manual image annotation, even for creating the training sets, is time-consuming, complicated and contains human subjectivity errors. Thus, alternative ways for automatically creating training examples, i.e., pairs of images and tags, are crucial. As we showed in one of our previous studies, tags accompanying photos in social media and especially the Instagram hashtags can be used for image annotation. However, it turned out that only a 20% of the Instagram hashtags are actually relevant to the content of the image they accompany. Identifying those hashtags through crowdsourcing is a plausible solution. In this work, we investigate the effectiveness of the HITS algorithm for identifying the right tags in a crowdsourced image tagging scenario. For this purpose, we create a bipartite graph in which the first type of nodes corresponds to the annotators and the second type to the tags they select, among the hashtags, to annotate a particular Instagram image. From the results, we conclude that the authority value of the HITS algorithm provides an accurate estimation of the appropriateness of each Instagram hashtag to be used as a tag for the image it accompanies while the hub value can be used to filter out the dishonest annotators.", "title": "" }, { "docid": "09b273c9e77f6fc1b2de20f50227c44d", "text": "Age and gender are complementary soft biometric traits for face recognition. Successful estimation of age and gender from facial images taken under real-world conditions can contribute improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation. We have explored transferability of existing deep convolutional neural network (CNN) models for age and gender classification. The generic AlexNet-like architecture and domain specific VGG-Face CNN model are employed and fine-tuned with the Adience dataset prepared for age and gender classification in uncontrolled environments. In addition, task specific GilNet CNN model has also been utilized and used as a baseline method in order to compare with transferred models. 
Experimental results show that both transferred deep CNN models outperform the GilNet CNN model, which is the state-of-the-art age and gender classification approach on the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy, respectively. This outcome indicates that transferring a deep CNN model can provide better classification performance than a task-specific CNN model, which has a limited number of layers and is trained from scratch using a limited amount of data, as in the case of GilNet. The domain-specific VGG-Face CNN model has been found to be more useful and provided better performance for both age and gender classification tasks when compared with the generic AlexNet-like model, which shows that transferring from a closer domain is more useful.", "title": "" }, { "docid": "0f54451a622d06d23eec83db4429f52e", "text": "This brief presents a novel scalable architecture for in-place fast Fourier transform (IFFT) computation for real-valued signals. The proposed computation is based on a modified radix-2 algorithm, which removes the redundant operations from the flow graph. A new processing element (PE) is proposed using two radix-2 butterflies that can process four inputs in parallel. A novel conflict-free memory-addressing scheme is proposed to ensure the continuous operation of the FFT processor. Furthermore, the addressing scheme is extended to support multiple parallel PEs. The proposed real-FFT processor simultaneously requires fewer computation cycles and lower hardware cost compared to prior work. For example, the proposed design with two PEs reduces the computation cycles by a factor of 2 for a 256-point real fast Fourier transform (RFFT) compared to a prior work while maintaining a lower hardware complexity. The number of computation cycles is reduced proportionately with the increase in the number of PEs.", "title": "" }, { "docid": "bcf525a37e87ca084e5a39c63cfdde77", "text": "BACKGROUND\nObesity in people with chronic kidney disease (CKD) is associated with longer survival. The purpose of this study was to determine if a relationship exists between body condition score (BCS) and survival in dogs with CKD.\n\n\nHYPOTHESIS/OBJECTIVES\nHigher BCS is a predictor of prolonged survival in dogs with CKD.\n\n\nANIMALS\nOne hundred dogs were diagnosed with CKD (International Renal Interest Society stages II, III or IV) between 2008 and 2009.\n\n\nMETHODS\nRetrospective case review. Data regarding initial body weight and BCS, clinicopathologic values and treatments were collected from medical records and compared with survival times.\n\n\nRESULTS\nFor dogs with BCS recorded (n = 72), 13 were underweight (BCS = 1-3; 18%), 49 were moderate (BCS = 4-6; 68%), and 10 were overweight (BCS = 7-9; 14%). For dogs with at least 2 body weights recorded (n = 77), 21 gained weight, 47 lost weight, and 9 had no change in weight. Dogs classified as underweight at the time of diagnosis (median survival = 25 days) had a significantly shorter survival time compared to that in both moderate (median survival = 190 days; P < .001) and overweight dogs (median survival = 365 days; P < .001). There was no significant difference in survival between moderate and overweight dogs (P = .95).\n\n\nCONCLUSIONS AND CLINICAL IMPORTANCE\nHigher BCS at the time of diagnosis was significantly associated with improved survival. 
Further research on the effects of body composition could enhance the management of dogs with CKD.", "title": "" }, { "docid": "05e6cbaf225c03de8a3e1f97f8690014", "text": "Despite the fact that seizures are commonly associated with autism spectrum disorder (ASD), the effectiveness of treatments for seizures has not been well studied in individuals with ASD. This manuscript reviews both traditional and novel treatments for seizures associated with ASD. Studies were selected by systematically searching major electronic databases and by a panel of experts that treat ASD individuals. Only a few anti-epileptic drugs (AEDs) have undergone carefully controlled trials in ASD, but these trials examined outcomes other than seizures. Several lines of evidence point to valproate, lamotrigine, and levetiracetam as the most effective and tolerable AEDs for individuals with ASD. Limited evidence supports the use of traditional non-AED treatments, such as the ketogenic and modified Atkins diet, multiple subpial transections, immunomodulation, and neurofeedback treatments. Although specific treatments may be more appropriate for specific genetic and metabolic syndromes associated with ASD and seizures, there are few studies which have documented the effectiveness of treatments for seizures for specific syndromes. Limited evidence supports l-carnitine, multivitamins, and N-acetyl-l-cysteine in mitochondrial disease and dysfunction, folinic acid in cerebral folate abnormalities and early treatment with vigabatrin in tuberous sclerosis complex. Finally, there is limited evidence for a number of novel treatments, particularly magnesium with pyridoxine, omega-3 fatty acids, the gluten-free casein-free diet, and low-frequency repetitive transcranial magnetic simulation. Zinc and l-carnosine are potential novel treatments supported by basic research but not clinical studies. This review demonstrates the wide variety of treatments used to treat seizures in individuals with ASD as well as the striking lack of clinical trials performed to support the use of these treatments. Additional studies concerning these treatments for controlling seizures in individuals with ASD are warranted.", "title": "" }, { "docid": "cb00a4440fdad04f7fd4d372e005315e", "text": "This paper presents a robust physics-based motion control system for realtime synthesis of human grasping. Given an object to be grasped, our system automatically computes physics-based motion control that advances the simulation to achieve realistic manipulation with the object. Our solution leverages prerecorded motion data and physics-based simulation for human grasping. We first introduce a data-driven synthesis algorithm that utilizes large sets of prerecorded motion data to generate realistic motions for human grasping. Next, we present an online physics-based motion control algorithm to transform the synthesized kinematic motion into a physically realistic one. In addition, we develop a performance interface for human grasping that allows the user to act out the desired grasping motion in front of a single Kinect camera. We demonstrate the power of our approach by generating physics-based motion control for grasping objects with different properties such as shapes, weights, spatial orientations, and frictions. 
We show our physics-based motion control for human grasping is robust to external perturbations and changes in physical quantities.", "title": "" }, { "docid": "56587879aeb4ecce05513e94bc019956", "text": "In opportunistic networks, the nodes usually exploit a contact opportunity to perform hop-by-hop routing, since an end-to-end path between the source node and destination node may not exist. Most social-based routing protocols use social information extracted from real-world encounter networks to select an appropriate message relay. A protocol based on encounter history, however, takes time to build up a knowledge database from which to take routing decisions. An opportunistic routing protocol which extracts social information from multiple social networks, can be an alternative approach to avoid suboptimal paths due to partial information on encounters. While contact information changes constantly and it takes time to identify strong social ties, online social network ties remain rather stable and can be used to augment available partial contact information. In this paper, we propose a novel opportunistic routing approach, called ML-SOR (Multi-layer Social Network based Routing), which extracts social network information from multiple social contexts. To select an effective forwarding node, ML-SOR measures the forwarding capability of a node when compared to an encountered node in terms of node centrality, tie strength and link prediction. These metrics are computed by ML-SOR on different social network layers. Trace driven simulations show that ML-SOR, when compared to other schemes, is able to deliver messages with high probability while keeping overhead ratio very small.", "title": "" }, { "docid": "670b35833f96a62bce9e2ddd58081fc4", "text": "Although video summarization has achieved great success in recent years, few approaches have realized the influence of video structure on the summarization results. As we know, the video data follow a hierarchical structure, i.e., a video is composed of shots, and a shot is composed of several frames. Generally, shots provide the activity-level information for people to understand the video content. While few existing summarization approaches pay attention to the shot segmentation procedure. They generate shots by some trivial strategies, such as fixed length segmentation, which may destroy the underlying hierarchical structure of video data and further reduce the quality of generated summaries. To address this problem, we propose a structure-adaptive video summarization approach that integrates shot segmentation and video summarization into a Hierarchical Structure-Adaptive RNN, denoted as HSA-RNN. We evaluate the proposed approach on four popular datasets, i.e., SumMe, TVsum, CoSum and VTW. The experimental results have demonstrated the effectiveness of HSA-RNN in the video summarization task.", "title": "" }, { "docid": "17a475b655134aafde0f49db06bec127", "text": "Estimating the number of persons in a public place provides useful information for video-based surveillance and monitoring applications. In the case of oblique camera setup, counting is either achieved by detecting individuals or by statistically establishing relations between values of simple image features (e.g. amount of moving pixels, edge density, etc.) to the number of people. 
While the methods of the first category exhibit poor accuracy in cases of occlusion, the second category of methods is sensitive to perspective distortions and requires people to move in order to be counted. In this paper we investigate the possibilities of developing a robust statistical method for people counting. To maximize its scope of applicability, we choose, in contrast to the majority of existing methods in this category, not to require prior learning of categories corresponding to different numbers of people. Second, we search for a suitable way of correcting the perspective distortion. Finally, we link the estimation to a confidence value that takes into account the known factors that influence the result. The confidence is then used to refine the final results.", "title": "" },
The results indicate that the recognition performance using local prosodic features is better than that obtained with global prosodic features. Words in the final position of sentences and syllables in the final position of words exhibit more emotion-discriminative information than the words and syllables present in other positions.", "title": "" },
scidocsrr
e18a93ce51158ae804fb1e27c742d2fd
Grand average ERP-image plotting and statistics: A method for comparing variability in event-related single-trial EEG activities across subjects and conditions
[ { "docid": "9807eace5f1f89f395fb8dff9dda13ab", "text": "This article provides a new, more comprehensive view of event-related brain dynamics founded on an information-based approach to modeling electroencephalographic (EEG) dynamics. Most EEG research focuses either on peaks 'evoked' in average event-related potentials (ERPs) or on changes 'induced' in the EEG power spectrum by experimental events. Although these measures are nearly complementary, they do not fully model the event-related dynamics in the data, and cannot isolate the signals of the contributing cortical areas. We propose that many ERPs and other EEG features are better viewed as time/frequency perturbations of underlying field potential processes. The new approach combines independent component analysis (ICA), time/frequency analysis, and trial-by-trial visualization that measures EEG source dynamics without requiring an explicit head model.", "title": "" } ]
[ { "docid": "2df316f30952ffdb4da1e9797b9658bb", "text": "Breast cancer is a leading disease worldwide, and the success of medical therapies is heavily related to the availability of breast cancer imaging techniques. While current methods, mainly ultrasound, x-ray mammography, and magnetic resonance imaging, all exhibit some disadvantages, a possible alternative investigated in recent years is based on microwave and mm-wave imaging system. A key point for these systems is their reliability in terms of safety, in particular exposure limits. This paper presents a feasibility study for a mm-wave breast cancer imaging system, with the aim of ensuring safety and compliance with the widely adopted European ICNIRP recommendations. The study is based on finite element method models of human tissues, experimentally characterized by measures obtained at one of the most important European clinical center for cancer treatments. Results prove the feasibility of the system, which can meet the exposure limits while providing the required dynamic range to let the receiver detect the cancer anomaly. In addition, the dosimetric quantities used at the present and their maximum limits at mm-waves are taking into discussion and the possibility of needing moderns quantities and limitations is discussed.", "title": "" }, { "docid": "a00f344024cc1df9049485a5c548551a", "text": "This paper describes the first achievement of over 20,000 quality factors among on-chip relaxation oscillators. The proposed Power Averaging Feedback with a Chopped Amplifier enables such a high Q which is close to MEMS oscillators. 1/f noise free design and rail-to-rail oscillation result in low phase noise with small area and low power consumption. The proposed oscillator can be applied to low noise applications (e.g. digital audio players) implemented onto a System on a Chip.", "title": "" }, { "docid": "68971b7efc9663c37113749206b5382b", "text": "Trehalose 6-phosphate (Tre6P), the intermediate of trehalose biosynthesis, has a profound influence on plant metabolism, growth, and development. It has been proposed that Tre6P acts as a signal of sugar availability and is possibly specific for sucrose status. Short-term sugar-feeding experiments were carried out with carbon-starved Arabidopsis thaliana seedlings grown in axenic shaking liquid cultures. Tre6P increased when seedlings were exogenously supplied with sucrose, or with hexoses that can be metabolized to sucrose, such as glucose and fructose. Conditional correlation analysis and inhibitor experiments indicated that the hexose-induced increase in Tre6P was an indirect response dependent on conversion of the hexose sugars to sucrose. Tre6P content was affected by changes in nitrogen status, but this response was also attributable to parallel changes in sucrose. The sucrose-induced rise in Tre6P was unaffected by cordycepin but almost completely blocked by cycloheximide, indicating that de novo protein synthesis is necessary for the response. There was a strong correlation between Tre6P and sucrose even in lines that constitutively express heterologous trehalose-phosphate synthase or trehalose-phosphate phosphatase, although the Tre6P:sucrose ratio was shifted higher or lower, respectively. 
It is proposed that the Tre6P:sucrose ratio is a critical parameter for the plant and forms part of a homeostatic mechanism to maintain sucrose levels within a range that is appropriate for the cell type and developmental stage of the plant.", "title": "" }, { "docid": "0fc24042efb2fcfd0626c3016372f89e", "text": "OBJECTIVE\nTo investigate the relationship between quantitative EEG (QEEG) scores and \"complicating factors\" (psychopathology, true pharmacoresistance, neurological symptoms) in idiopathic generalised epilepsy (IGE).\n\n\nMETHODS\n35 newly referred, newly diagnosed, unmedicated IGE patients were collected in a prospective and random manner. Standard neuro-psychiatric and EEG examination was done. The patients were treated and controlled at regular visits. After 2 years of follow-up, clinical data were summarised and were compared to QEEG results. Clinical target items were neurologic and psychiatric abnormalities, proven pharmacoresistance. Patients with at least one of these items were labelled \"complicated\", whereas patients without these additional handicap were labelled as \"uncomplicated\". The 12 QEEG target variables were: Z-transformed absolute power values for three (anterior, central, posterior) brain regions and four frequency bands (1.5-3.5; 3.5-7.5; 7.5-12.5; 12.5-25.0 Hz). QEEG scores outside the +/- 2.5 Z range were accepted as abnormal. The overall QEEG result was classified as normal (0-2 abnormal scores), or pathological (3 or more abnormal scores). Clinical and QEEG results were correlated.\n\n\nRESULTS\nAll patients with psychopathology showed 4-8 positive pathological scores (power excess not confined to a single cortical region or frequency band). The two patients with pure pharmacoresistance showed pathological negative values (delta power deficit) all over the scalp. Statistically significant (P < 0.001) association was found between patients with uncomplicated IGE and normal QEEG, and between complicated IGE and pathological QEEG. Patients with neurological items had normal QEEG.\n\n\nCONCLUSION\nHigher degree of cortical dysfunction (as assessed in the clinical setting) is reflected by higher degree of QEEG abnormalities. QEEG analysis can differentiate between IGE patients with or without psychopathology. Forecasting psychopathology may be the practical application of the findings.", "title": "" }, { "docid": "00d44e09b62be682b902b01a3f3a56c2", "text": "A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image-space, and which is even amenable for implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.", "title": "" }, { "docid": "59b1cbd4f94c231c7d5a1f06672c3faf", "text": "Life stress is a major predictor of the course of bipolar disorder. Few studies have used laboratory paradigms to examine stress reactivity in bipolar disorder, and none have assessed autonomic reactivity to laboratory stressors. In the present investigation we sought to address this gap in the literature. 
Participants, 27 diagnosed with bipolar I disorder and 24 controls with no history of mood disorder, were asked to complete a complex working memory task presented as \"a test of general intelligence.\" Self-reported emotions were assessed at baseline and after participants were given task instructions; autonomic physiology was assessed at baseline and continuously during the stressor task. Compared to controls, individuals with bipolar disorder reported greater increases in pretask anxiety from baseline and showed greater cardiovascular threat reactivity during the task. Group differences in cardiovascular threat reactivity were significantly correlated with comorbid anxiety in the bipolar group. Our results suggest that a multimethod approach to assessing stress reactivity, including the use of physiological parameters that differentiate between maladaptive and adaptive profiles of stress responding, can yield valuable information regarding stress sensitivity and its associations with negative affectivity in bipolar disorder. (PsycINFO Database Record (c) 2015 APA, all rights reserved).", "title": "" }, { "docid": "4d3ca12b25de97da5ec6f9b0989d7109", "text": "In a context where personal mobility accounts for about two thirds of the total transportation energy use, assessing an individual's personal contribution to the emissions of a city becomes highly valuable. Prior efforts in this direction have resulted in web-based CO2 emissions calculators, smartphone-based applications, and wearable sensors that detect a user's transportation modes. Yet, high energy consumption and ad-hoc sensors have limited the potential adoption of these methodologies. In this technical report we outline an approach that could make it possible to assess the individual carbon footprint of an unlimited number of people. Our application can be run on standard smartphones for long periods of time and can operate transparently. Given that we make use of an existing platform (smartphones) that is widely adopted, our method has the potential of unprecedented data collection of mobility patterns. Our method estimates in real-time the CO2 emissions using inertial information gathered from mobile phone sensors. In particular, an algorithm automatically classifies the user's transportation mode into eight classes using a decision tree. The algorithm is trained on features computed from the Fast Fourier Transform (FFT) coefficients of the total acceleration measured by the mobile phone accelerometer. A working smartphone application for the Android platform has been developed and experimental data have been used to train and validate the proposed method.", "title": "" }, { "docid": "7d9f003bcce3f99b096e3dcd5d849f6d", "text": "Anti-Money Laundering (AML) can be seen as a central problem for financial institutions because of the need to detect compliance violations in various customer contexts. Changing regulations and the strict supervision of financial authorities create an even higher pressure to establish an effective working compliance program. To support financial institutions in building a simple but efficient compliance program we develop a reference model that describes the process and data view for one key process of AML based on literature analysis and expert interviews. Therefore, this paper describes the customer identification process (CIP) as a part of an AML program using reference modeling techniques. 
The contribution of this work is (i) the application of multi-perspective reference modeling resulting in (ii) a reference model for AML customer identification. Overall, the results help to understand the complexity of AML processes and to establish a sustainable compliance program.", "title": "" }, { "docid": "cf6f0a6d53c3b615f27a20907e6eb93f", "text": "OBJECTIVE\nWe sought to investigate whether a low-fat vegan diet improves glycemic control and cardiovascular risk factors in individuals with type 2 diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nIndividuals with type 2 diabetes (n = 99) were randomly assigned to a low-fat vegan diet (n = 49) or a diet following the American Diabetes Association (ADA) guidelines (n = 50). Participants were evaluated at baseline and 22 weeks.\n\n\nRESULTS\nForty-three percent (21 of 49) of the vegan group and 26% (13 of 50) of the ADA group participants reduced diabetes medications. Including all participants, HbA(1c) (A1C) decreased 0.96 percentage points in the vegan group and 0.56 points in the ADA group (P = 0.089). Excluding those who changed medications, A1C fell 1.23 points in the vegan group compared with 0.38 points in the ADA group (P = 0.01). Body weight decreased 6.5 kg in the vegan group and 3.1 kg in the ADA group (P < 0.001). Body weight change correlated with A1C change (r = 0.51, n = 57, P < 0.0001). Among those who did not change lipid-lowering medications, LDL cholesterol fell 21.2% in the vegan group and 10.7% in the ADA group (P = 0.02). After adjustment for baseline values, urinary albumin reductions were greater in the vegan group (15.9 mg/24 h) than in the ADA group (10.9 mg/24 h) (P = 0.013).\n\n\nCONCLUSIONS\nBoth a low-fat vegan diet and a diet based on ADA guidelines improved glycemic and lipid control in type 2 diabetic patients. These improvements were greater with a low-fat vegan diet.", "title": "" }, { "docid": "328abff1a187a71fe77ce078e9f1647b", "text": "A convenient way of analysing Riemannian manifolds is to embed them in Euclidean spaces, with the embedding typically obtained by flattening the manifold via tangent spaces. This general approach is not free of drawbacks. For example, only distances between points to the tangent pole are equal to true geodesic distances. This is restrictive and may lead to inaccurate modelling. Instead of using tangent spaces, we propose embedding into the Reproducing Kernel Hilbert Space by introducing a Riemannian pseudo kernel. We furthermore propose to recast a locality preserving projection technique from Euclidean spaces to Riemannian manifolds, in order to demonstrate the benefits of the embedding. Experiments on several visual classification tasks (gesture recognition, person re-identification and texture classification) show that in comparison to tangent-based processing and state-of-the-art methods (such as tensor canonical correlation analysis), the proposed approach obtains considerable improvements in discrimination accuracy.", "title": "" }, { "docid": "ec57b8a0fef914613be1109d4d79918c", "text": "Clickbaits are articles with misleading titles, exaggerating the content on the landing page. Their goal is to entice users to click on the title in order to monetize the landing page. The content on the landing page is usually of low quality. Their presence in user homepage stream of news aggregator sites (e.g., Yahoo news, Google news) may adversely impact user experience. Hence, it is important to identify and demote or block them on homepages. 
In this paper, we present a machine-learning model to detect clickbaits. We use a variety of features and show that the degree of informality of a webpage (as measured by different metrics) is a strong indicator of it being a clickbait. We conduct extensive experiments to evaluate our approach and analyze properties of clickbait and non-clickbait articles. Our model achieves high performance (74.9% F-1 score) in predicting clickbaits.", "title": "" }, { "docid": "55adc78a2fcd2e941aae142ed32c5033", "text": "Mobile cloud computing (MCC) has drawn significant research attention as the popularity and capability of mobile devices have been improved in recent years. In this paper, we propose a prototype MCC offloading system that considers multiple cloud resources such as mobile ad-hoc network, cloudlet and public clouds to provide an adaptive MCC service. We propose a context-aware offloading decision algorithm that, at runtime, selects the wireless medium and the potential cloud resource to use as the offloading location based on the device context. We also conduct real experiments on the implemented system to evaluate the performance of the algorithm. Results indicate that the system and the embedded decision algorithm can select a suitable wireless medium and cloud resources based on the different contexts of the mobile devices, and achieve significant performance improvement.", "title": "" }, { "docid": "0d0f9576ba5ccc442f531d4222bb1a12", "text": "This tutorial introduces fingerprint recognition systems and their main components: sensing, feature extraction and matching. The basic technologies are surveyed and some state-of-the-art algorithms are discussed. Due to the extent of this topic it is not possible to provide here all the details and to cover a number of interesting issues such as classification, indexing and multimodal systems. Interested readers can find in [21] a complete and comprehensive guide to fingerprint recognition.", "title": "" }, { "docid": "00e5acdfb1e388b149bc729a7af108ee", "text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages. The proposed method reached up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low.", "title": "" }, { "docid": "de298bb631dd0ca515c161b6e6426a85", "text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. 
Other techniques require multiple images and are not tolerant to noise.", "title": "" }, { "docid": "b27038accdabab12d8e0869aba20a083", "text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.", "title": "" }, { "docid": "b78dfbd9640d53c6bd782af9be1f278a", "text": "Code analyzers such as Error Prone and FindBugs detect code patterns symptomatic of bugs, performance issues, or bad style. These tools express patterns as quick fixes that detect and rewrite unwanted code. However, it is difficult to come up with new quick fixes and decide which ones are useful and frequently appear in real code. We propose to rely on the collective wisdom of programmers and learn quick fixes from revision histories in software repositories. We present REVISAR, a tool for discovering common Java edit patterns in code repositories. Given code repositories and their revision histories, REVISAR (i) identifies code edits from revisions and (ii) clusters edits into sets that can be described using an edit pattern. The designers of code analyzers can then inspect the patterns and add the corresponding quick fixes to their tools. We ran REVISAR on nine popular GitHub projects, and it discovered 89 useful edit patterns that appeared in 3 or more projects. Moreover, 64% of the discovered patterns did not appear in existing tools. We then conducted a survey with 164 programmers from 124 projects and found that programmers significantly preferred eight out of the nine of the discovered patterns. Finally, we submitted 16 pull requests applying our patterns to 9 projects and, at the time of the writing, programmers accepted 6 (60%) of them. The results of this work aid toolsmiths in discovering quick fixes and making informed decisions about which quick fixes to prioritize based on patterns programmers actually apply in practice.", "title": "" }, { "docid": "8150f588c5eb3919d13f976fec58b736", "text": "We study how to effectively leverage expert feedback to learn sequential decision-making policies. 
We focus on problems with sparse rewards and long time horizons, which typically pose significant challenges in reinforcement learning. We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration. Using long-horizon benchmarks, including Montezuma's Revenge, we demonstrate that our approach can learn significantly faster than hierarchical RL, and be significantly more label-efficient than standard IL. We also theoretically analyze labeling cost for certain instantiations of our framework.", "title": "" }, { "docid": "3b302ce4b5b8b42a61c7c4c25c0f3cbf", "text": "This paper describes quorum leases, a new technique that allows Paxos-based systems to perform reads with high throughput and low latency. Quorum leases do not sacrifice consistency and have only a small impact on system availability and write latency. Quorum leases allow a majority of replicas to perform strongly consistent local reads, which substantially reduces read latency at those replicas (e.g., by two orders of magnitude in wide-area scenarios). Previous techniques for performing local reads in Paxos systems either (a) sacrifice consistency; (b) allow only one replica to read locally; or (c) decrease the availability of the system and increase the latency of all updates by requiring all replicas to be notified synchronously. We describe the design of quorum leases and evaluate their benefits compared to previous approaches through an implementation running in five geo-distributed Amazon EC2 datacenters.", "title": "" }, { "docid": "f3820e94a204cd07b04e905a9b1e4834", "text": "Successful analysis of player skills in video games has important impacts on the process of enhancing player experience without undermining their continuous skill development. Moreover, player skill analysis becomes more intriguing in team-based video games because such a form of study can help discover useful factors in effective team formation. In this paper, we consider the problem of skill decomposition in MOBA (MultiPlayer Online Battle Arena) games, with the goal of understanding what player skill factors are essential for the outcome of a game match. To understand the construct of MOBA player skills, we utilize various skill-based predictive models to decompose player skills into interpretative parts, the impact of which is assessed in statistical terms. We apply this analysis approach on two widely known MOBAs, namely League of Legends (LoL) and Defense of the Ancients 2 (DOTA2). The finding is that base skills of in-game avatars, base skills of players, and players' champion-specific skills are three prominent skill components influencing LoL's match outcomes, while those of DOTA2 are mainly impacted by in-game avatars' base skills but not much by the other two.", "title": "" } ]
scidocsrr
52684f444f91851852aae4d935c922f3
Curiosity-driven optimization
[ { "docid": "dd2267e380de2bc5ef71ee7ffd2eb00a", "text": "We propose a formal Bayesian definition of surprise to capture subjective aspects of sensory information. Surprise measures how data affects an observer, in terms of differences between posterior and prior beliefs about the world. Only data observations which substantially affect the observer's beliefs yield surprise, irrespectively of how rare or informative in Shannon's sense these observations are. We test the framework by quantifying the extent to which humans may orient attention and gaze towards surprising events or items while watching television. To this end, we implement a simple computational model where a low-level, sensory form of surprise is computed by simple simulated early visual neurons. Bayesian surprise is a strong attractor of human attention, with 72% of all gaze shifts directed towards locations more surprising than the average, a figure rising to 84% when focusing the analysis onto regions simultaneously selected by all observers. The proposed theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction.", "title": "" } ]
[ { "docid": "8f78f2efdd2fecaf32fbc7f5ffa79218", "text": "Evolutionary population dynamics (EPD) deal with the removal of poor individuals in nature. It has been proven that this operator is able to improve the median fitness of the whole population, a very effective and cheap method for improving the performance of meta-heuristics. This paper proposes the use of EPD in the grey wolf optimizer (GWO). In fact, EPD removes the poor search agents of GWO and repositions them around alpha, beta, or delta wolves to enhance exploitation. The GWO is also required to randomly reinitialize its worst search agents around the search space by EPD to promote exploration. The proposed GWO–EPD algorithm is benchmarked on six unimodal and seven multi-modal test functions. The results are compared to the original GWO algorithm for verification. It is demonstrated that the proposed operator is able to significantly improve the performance of the GWO algorithm in terms of exploration, local optima avoidance, exploitation, local search, and convergence rate.", "title": "" }, { "docid": "826e54e8e46dcea0451b53645e679d55", "text": "Microtia is a congenital disease with various degrees of severity, ranging from the presence of rudimentary and malformed vestigial structures to the total absence of the ear (anotia). The complex anatomy of the external ear and the necessity to provide good projection and symmetry make this reconstruction particularly difficult. The aim of this work is to report our surgical technique of microtic ear correction and to analyse the short and long term results. From 2000 to 2013, 210 patients affected by microtia were treated at the Maxillo-Facial Surgery Division, Head and Neck Department, University Hospital of Parma. The patient population consisted of 95 women and 115 men, aged from 7 to 49 years. A total of 225 reconstructions have been performed in two surgical stages basing of Firmin's technique with some modifications and refinements. The first stage consists in fabrication and grafting of a three-dimensional costal cartilage framework. The second stage is performed 5-6 months later: the reconstructed ear is raised up and an additional cartilaginous graft is used to increase its projection. A mastoid fascial flap together with a skin graft are then used to protect the cartilage graft. All reconstructions were performed without any major complication. The results have been considered satisfactory by all patients starting from the first surgical step. Low morbidity, the good results obtained and a high rate of patient satisfaction make our protocol an optimal choice for treatment of microtia. The surgeon's experience and postoperative patient care must be considered as essential aspects of treatment.", "title": "" }, { "docid": "0e218dd5654ae9125d40bdd5c0a326d6", "text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. 
The innovation of clock races is that detecting them does not rely on concrete locks and avoids the heavy basic overhead of tracking the happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware-based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at a 1% sampling rate, whereas Pacer and DataCollider incurred more than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than Pacer and DataCollider.", "title": "" }, { "docid": "e39cafd4de135ccb17f7cf74cbd38a97", "text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis, the career of metaphor, postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.", "title": "" }, { "docid": "1aa8d47eed17e1dcbe6fa3f8c5656ed8", "text": "Recent work has introduced CASCADE, an algorithm for creating a globally-consistent taxonomy by crowdsourcing microwork from many individuals, each of whom may see only a tiny fraction of the data (Chilton et al. 2013). While CASCADE needs only unskilled labor and produces taxonomies whose quality approaches that of human experts, it uses significantly more labor than experts. This paper presents DELUGE, an improved workflow that produces taxonomies with comparable quality but demands significantly less crowd labor. Our proposed method for solving the novel problem of crowdsourcing multi-label classification optimizes CASCADE's categorization step, its most costly step, using less than 10% of the labor required by the original approach. DELUGE's savings come from the use of decision theory, machine learning, and probabilistic inference, which allow it to pose microtasks that aim to maximize infor-", "title": "" }, { "docid": "e276068ede51c081c71a483b260e546c", "text": "The selection of hyper-parameters plays an important role in the performance of least-squares support vector machines (LS-SVMs). In this paper, a novel hyper-parameter selection method for LS-SVMs is presented based on particle swarm optimization (PSO). The proposed method does not need any a priori knowledge of the analytic properties of the generalization performance measure and can be used to determine multiple hyper-parameters at the same time. The feasibility of this method is examined on benchmark data sets. Different kinds of kernel families are investigated by using the proposed method. Experimental results show that the best or quasi-best test performance could be obtained by using the scaling radial basis kernel function (SRBF) and RBF kernel functions, respectively. © 2008 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "ee55a72568868837e11da7fabca169fe", "text": "Tying suture knots is a time-consuming task performed frequently during minimally invasive surgery (MIS). Automating this task could greatly reduce total surgery time for patients. Current solutions to this problem replay manually programmed trajectories, but a more general and robust approach is to use supervised machine learning to smooth surgeon-given training trajectories and generalize from them. Since knot tying generally requires a controller with internal memory to distinguish between identical inputs that require different actions at different points along a trajectory, it would be impossible to teach the system using traditional feedforward neural nets or support vector machines. Instead we exploit more powerful recurrent neural networks (RNNs) with adaptive internal states. Results obtained using LSTM RNNs trained by the recent Evolino algorithm show that this approach can significantly increase the efficiency of suture knot tying in MIS over preprogrammed control", "title": "" }, { "docid": "86d196a616e4ae0d28fb6d7099508c49", "text": "As applications are becoming increasingly dynamic, the notion that a schema can be created in advance for an application and remain relatively stable is becoming increasingly unrealistic. This has pushed application developers away from traditional relational database systems and away from the SQL interface, despite their many well-established benefits. Instead, developers often prefer self-describing data models such as JSON, and NoSQL systems designed specifically for their relaxed semantics.\n In this paper, we discuss the design of a system that enables developers to continue to represent their data using self-describing formats without moving away from SQL and traditional relational database systems. Our system stores arbitrary documents of key-value pairs inside physical and virtual columns of a traditional relational database system, and adds a layer above the database system that automatically provides a dynamic relational view to the user against which fully standard SQL queries can be issued. We demonstrate that our design can achieve an order of magnitude improvement in performance over alternative solutions, including existing relational database JSON extensions, MongoDB, and shredding systems that store flattened key-value data inside a relational database.", "title": "" }, { "docid": "4e6709bf897352c4e8b24a5b77e4e2c5", "text": "Large-scale classification is an increasingly critical Big Data problem. So far, however, very little has been published on how this is done in practice. In this paper we describe Chimera, our solution to classify tens of millions of products into 5000+ product types at WalmartLabs. We show that at this scale, many conventional assumptions regarding learning and crowdsourcing break down, and that existing solutions cease to work. We describe how Chimera employs a combination of learning, rules (created by in-house analysts), and crowdsourcing to achieve accurate, continuously improving, and cost-effective classification. We discuss a set of lessons learned for other similar Big Data systems. In particular, we argue that at large scales crowdsourcing is critical, but must be used in combination with learning, rules, and in-house analysts. 
We also argue that using rules (in conjunction with learning) is a must, and that more research attention should be paid to helping analysts create and manage (tens of thousands of) rules more effectively.", "title": "" }, { "docid": "46658067ffc4fd2ecdc32fbaaa606170", "text": "Adolescent resilience research differs from risk research by focusing on the assets and resources that enable some adolescents to overcome the negative effects of risk exposure. We discuss three models of resilience (the compensatory, protective, and challenge models) and describe how resilience differs from related concepts. We describe issues and limitations related to resilience and provide an overview of recent resilience research related to adolescent substance use, violent behavior, and sexual risk behavior. We then discuss implications that resilience research has for intervention and describe some resilience-based interventions.", "title": "" }, { "docid": "b0e3249bbea278ceee2154aba5ea99d8", "text": "Much of the current research in learning Bayesian Networks fails to effectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data.", "title": "" }, { "docid": "29db0699c332efd2d2dd1612defab65c", "text": "Denial of Service (DoS) attacks are important topics for security courses that teach ethical hacking techniques and intrusion detection. This paper presents a case study of the implementation of comprehensive offensive hands-on lab exercises about three common DoS attacks. The exercises teach students how to practically perform the DoS attacks in an isolated network laboratory environment. The paper also discusses some ethical and legal issues related to teaching ethical hacking, and then lists steps that schools and educators should take to improve the chances of having a successful and problem-free information security program.", "title": "" }, { "docid": "a58ede53f0f2452e60528d5a470c0d7e", "text": "Background. Controversies still prevail as to how exactly epigastric hernia occurs. Both the vascular lacunae hypothesis and the tendinous fibre decussation hypothesis have proved to be widely accepted as possible explanations for the etiology. Patient. We present a patient who suffered from early-onset epigastric hernia. Conclusions. We believe the identification of the ligamentum teres and its accompanying vessel at its fascial defect supports the vascular lacunae hypothesis. However, to further our understanding, biopsy of the linea alba in patients with epigastric hernias is indicated.", "title": "" }, { "docid": "04cabaa5db68da668dd607a696bb59b8", "text": "The current information analysis capabilities of legal professionals are still lagging behind the explosive growth in legal document availability through digital means, driving the need for higher efficiency Legal Information Retrieval (IR) and Question Answering (QA) methods. 
The IR task in particular has a set of unique challenges that invite the use of semantically motivated NLP techniques. In this work, a two-stage method for Legal Information Retrieval is proposed, combining lexical statistics and distributional sentence representations in the context of the Competition on Legal Information Extraction/Entailment (COLIEE). The combination is done by means of disambiguation rules, applied over the lexical rankings when those are deemed unreliable for a given query. Competition and experimental results indicate small gains in overall retrieval performance using the proposed approach. Additionally, an analysis of error and improvement cases is presented for a better understanding of the contributions.", "title": "" }, { "docid": "1d427a8473f0a35d14d5fdf357752d63", "text": "We examine Deep Canonically Correlated LSTMs as a way to learn nonlinear transformations of variable length sequences and embed them into a correlated, fixed dimensional space. We use LSTMs to transform multi-view time-series data non-linearly while learning temporal relationships within the data. We then perform correlation analysis on the outputs of these neural networks to find a correlated subspace through which we get our final representation via projection. This work follows from previous work done on Deep Canonical Correlation (DCCA), in which deep feed-forward neural networks were used to learn nonlinear transformations of data while maximizing correlation.", "title": "" }, { "docid": "35712c761dfabeb20904976c8b1a917c", "text": "Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.", "title": "" }, { "docid": "2bcd1284676b58d8a9556a57936abe74", "text": "Traditional whole building energy modeling suffers from several factors, including the large number of inputs required to characterize the building, the specificity required to accurately model building materials and components, simplifying assumptions made by underlying simulation algorithms, and the gap between the as-designed and as-built building. 
Prior work has attempted to mitigate these problems by using sensor-based machine learning approaches to statistically model energy consumption. We refer to this approach as sensor-based energy modeling (sBEM). However, a majority of the prior sBEM work focuses only on commercial buildings. The sBEM work that focuses on modeling residential buildings primarily focuses on monthly electrical consumption, while commercial sensor-based models focus on hourly consumption. This means there is not a clear indicator of which machine learning approach best predicts next hour residential consumption, since these methods are only evaluated using low-resolution data. We address this issue by testing seven different machine learning algorithms on a unique residential data set, which contains 140 different sensor measurements, collected every 15 minutes. In addition, we validate each learner's correctness on the ASHRAE Great Energy Prediction Shootout, using the original competition metrics. Our validation results confirm existing conclusions that Neural Network-based methods perform best on commercial buildings. However, the results from testing on our residential data set show that Feed Forward Neural Networks (FFNN), Support Vector Regression (SVR), and Linear Regression methods perform poorly, and that Least Squares Support Vector Machines (LS-SVM) perform best, a technique not previously applied to this domain.", "title": "" }, { "docid": "038c10660f6dcd354dd54027bd9e65eb", "text": "A new architecture for a very fast and secure public key crypto-coprocessor Crypto@1408Bit usable in Smart Card ICs is presented. The key elements of the Crypto@1408Bit architecture are a very fast Look Ahead Algorithm for modular multiplication, a very fast and secure serial-parallel adder, a fast and chip-area-efficient carry handling and a sufficient number of working registers enabling easy programming. With this architecture a new dimension of crypto performance and security against side channel attacks is achieved. Compared to crypto-coprocessors currently available on the Smart Card IC market, Crypto@1408Bit offers a performance more than an order of magnitude faster. The security of the crypto-coprocessor relies on hardware and software security features like dual-rail security logic against differential power attacks, highly secure registers for critical operands and a register length with up to a 128-bit buffer for randomization of operands.", "title": "" }, { "docid": "d23cde1d8cbe3be0e7376e9f58ded04c", "text": "This research proposes a procedure that maps a PMSM torque request onto optimal state (current) references. Combining the procedure with a dynamic (current) controller yields a torque controller. The maximum torque per ampere (MTPA) criterion is used to minimize conduction and switching losses. This research extends the concept to field-weakening operation to obtain high efficiency at any machine speed. The resulting constrained MTPA criterion is formalized as an optimization problem. Since it is difficult to solve directly, the maximum and intersection torque subproblems are identified. An algorithm is obtained that maps a torque onto an optimal state reference, and it is sufficiently efficient for real-time implementation. This method is compatible with a variety of state (current) controllers with/without PWM, SPM and IPM machines with saliency and reverse saliency, and a variable dc-link voltage. 
The proposed procedure relies on a sufficiently accurate torque model that may not be provided using rated machine parameters. Thus, an approach to compute locally optimized machine parameters is proposed that takes magnetic saturation into account. The concept is developed on a software-in-the-loop platform and evaluated on an experimental test bench.", "title": "" }, { "docid": "cc9ff40f0c210ad0669bce44b5043e48", "text": "Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).", "title": "" } ]
scidocsrr
cd7dfd8b561e799271920bfa04c7c3b1
Big Data Privacy in the Internet of Things Era
[ { "docid": "86cb3c072e67bed8803892b72297812c", "text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of", "title": "" } ]
[ { "docid": "8f9d5cd416ac038a4cbdf64737039053", "text": "This paper proposes a method to extract the feature points from faces automatically. It provides a feasible way to locate the positions of two eyeballs, near and far corners of eyes, midpoint of nostrils and mouth corners from face image. This approach would help to extract useful features on human face automatically and improve the accuracy of face recognition. The experiments show that the method presented in this paper could locate feature points from faces exactly and quickly.", "title": "" }, { "docid": "ecf90b3e40eb695eb8b4d6d6701d6b06", "text": "Digital forensic visualization is an understudied area despite its potential to achieve significant improvements in the efficiency of an investigation, criminal or civil. In this study, a three-stage forensic data storage and visualization life cycle is presented. The first stage is the decoding of data, which involves preparing both structured and unstructured data for storage. In the storage stage, data are stored within our proposed database schema designed for ensuring data integrity and speed of storage and retrieval. The final stage is the visualization of stored data in a manner that facilitates user interaction. These functionalities are implemented in a proof of concept to demonstrate the utility of the proposed life cycle. The proof of concept demonstrates the utility of the proposed approach for the storage and visualization of digital forensic data.", "title": "" }, { "docid": "d39800cd22bd86ec986ad647805e56a3", "text": "Depression is the major cause of years lived in disability world-wide; however, its diagnosis and tracking methods still rely mainly on assessing self-reported depressive symptoms, methods that originated more than fifty years ago. These methods, which usually involve filling out surveys or engaging in face-to-face interviews, provide limited accuracy and reliability and are costly to track and scale. In this paper, we develop and test the efficacy of machine learning techniques applied to objective data captured passively and continuously from E4 wearable wristbands and from sensors in an Android phone for predicting the Hamilton Depression Rating Scale (HDRS). Input data include electrodermal activity (EDA), sleep behavior, motion, phone-based communication, location changes, and phone usage patterns. We introduce our feature generation and transformation process, imputing missing clinical scores from self-reported measures, and predicting depression severity from continuous sensor measurements. While HDRS ranges between 0 and 52, we were able to impute it with 2.8 RMSE and predict it with 4.5 RMSE which are low relative errors. Analyzing the features and their relation to depressive symptoms, we found that poor mental health was accompanied by more irregular sleep, less motion, fewer incoming messages, less variability in location patterns, and higher asymmetry of EDA between the right and the left wrists.", "title": "" }, { "docid": "64389907530dd26392e037f1ab2d1da5", "text": "Most current license plate (LP) detection and recognition approaches are evaluated on a small and usually unrepresentative dataset since there are no publicly available large diverse datasets. In this paper, we introduce CCPD, a large and comprehensive LP dataset. All images are taken manually by workers of a roadside parking management company and are annotated carefully. 
To our best knowledge, CCPD is the largest publicly available LP dataset to date with over 250k unique car images, and the only one provides vertices location annotations. With CCPD, we present a novel network model which can predict the bounding box and recognize the corresponding LP number simultaneously with high speed and accuracy. Through comparative experiments, we demonstrate our model outperforms current object detection and recognition approaches in both accuracy and speed. In real-world applications, our model recognizes LP numbers directly from relatively high-resolution images at over 61 fps and 98.5% accuracy.", "title": "" }, { "docid": "428de42a8b3091728724ea9abefffb0b", "text": "BACKGROUND\nIn developed countries, regular breakfast consumption is inversely associated with excess weight and directly associated with better dietary and improved physical activity behaviors. Our objective was to describe the frequency of breakfast consumption among school-going adolescents in Delhi and evaluate its association with overweight and obesity as well as other dietary, physical activity, and sedentary behaviors.\n\n\nMETHODS\n\n\n\nDESIGN\nCross-sectional study.\n\n\nSETTING\nEight schools (Private and Government) of Delhi in the year 2006.\n\n\nPARTICIPANTS\n1814 students from 8th and 10th grades; response rate was 87.2%; 55% were 8th graders, 60% were boys and 52% attended Private schools.\n\n\nMAIN OUTCOME MEASURES\nBody mass index, self-reported breakfast consumption, diet and physical activity related behaviors, and psychosocial factors.\n\n\nDATA ANALYSIS\nMixed effects regression models were employed, adjusting for age, gender, grade level and school type (SES).\n\n\nRESULTS\nSignificantly more Government school (lower SES) students consumed breakfast daily as compared to Private school (higher SES) students (73.8% vs. 66.3%; p<0.01). More 8th graders consumed breakfast daily vs.10th graders (72.3% vs. 67.0%; p<0.05). A dose-response relationship was observed such that overall prevalence of overweight and obesity among adolescents who consumed breakfast daily (14.6%) was significantly lower vs. those who only sometimes (15.2%) or never (22.9%) consumed breakfast (p<0.05 for trend). This relationship was statistically significant for boys (15.4 % vs. 16.5% vs. 26.0; p<0.05 for trend) but not for girls. Intake of dairy products, fruits and vegetables was 5.5 (95% CI 2.4-12.5), 1.7 (95% CI 1.1-2.5) and 2.2 (95% CI 1.3-3.5) times higher among those who consumed breakfast daily vs. those who never consumed breakfast. Breakfast consumption was associated with greater physical activity vs. those who never consumed breakfast. Positive values and beliefs about healthy eating; body image satisfaction; and positive peer and parental influence were positively associated with daily breakfast consumption, while depression was negatively associated.\n\n\nCONCLUSION\nDaily breakfast consumption is associated with less overweight and obesity and with healthier dietary- and physical activity-related behaviors among urban Indian students. Although prospective studies should confirm the present results, intervention programs to prevent or treat childhood obesity in India should consider emphasizing regular breakfast consumption.", "title": "" }, { "docid": "0f853c6ccf6ce4cf025050135662f725", "text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). 
A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.", "title": "" }, { "docid": "51f437b8631ba8494d3cf3bad7578794", "text": "The notion of a <italic>program slice</italic>, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point <italic>p</italic> and a variable <italic>x</italic>; the slice consists of all statements of the program that might affect the value of <italic>x</italic> at point <italic>p</italic>. This paper concerns the problem of interprocedural slicing—generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a <italic>system dependence graph</italic>, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point <italic>p</italic> and an <italic>arbitrary</italic> variable, a slice must be taken with respect to a variable that is <italic>defined</italic> or <italic>used</italic> at <italic>p</italic>.)\nThe chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent <italic>transitive</italic> dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.", "title": "" }, { "docid": "68104b1000f8bc381436f7ac0cfbd247", "text": "Even though machine learning has become the major scene in dialogue research community, the real breakthrough has been blocked by the scale of data available. To address this fundamental obstacle, we introduce the MultiDomain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics. At a size of 10k dialogues, it is at least one order of magnitude larger than all previous annotated task-oriented corpora. The contribution of this work apart from the open-sourced dataset labelled with dialogue belief states and dialogue actions is two-fold: firstly, a detailed description of the data collection procedure along with a summary of data structure and analysis is provided. 
The proposed data-collection pipeline is entirely based on crowd-sourcing without the need of hiring professional annotators; secondly, a set of benchmark results of belief tracking, dialogue act and response generation is reported, which shows the usability of the data and sets a baseline for future studies.", "title": "" }, { "docid": "fbb71a8a7630350a7f33f8fb90b57965", "text": "As the Web of Things (WoT) broadens real world interaction via the internet, there is an increasing need for a user centric model for managing and interacting with real world objects. We believe that online social networks can provide that capability and can enhance existing and future WoT platforms leading to a Social WoT. As both social overlays and user interface containers, online social networks (OSNs) will play a significant role in the evolution of the web of things. As user interface containers and social overlays, they can be used by end users and applications as an on-line entry point for interacting with things, both receiving updates from sensors and controlling things. Conversely, access to user identity and profile information, content and social graphs can be useful in physical social settings like cafés. In this paper we describe some of the key features of social networks used by existing social WoT systems. We follow this with a discussion of open research questions related to integration of OSNs and how OSNs may evolve to be more suitable for integration with places and things. Several ongoing projects in our lab leverage OSNs to connect places and things to online communities.", "title": "" }, { "docid": "7b5331b0e6ad693fc97f5f3b543bf00c", "text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.", "title": "" }, { "docid": "b0e58ee4008fbf0e2555851c7889300d", "text": "Projection technology typically places several constraints on the geometric relationship between the projector and the projection surface to obtain an undistorted, properly sized image. In this paper we describe a simple, robust, fast, and low-cost method for automatic projector calibration that eliminates many of these constraints. 
We embed light sensors in the target surface, project Gray-coded binary patterns to discover the sensor locations, and then prewarp the image to accurately fit the physical features of the projection surface. This technique can be expanded to automatically stitch multiple projectors, calibrate onto non-planar surfaces for object decoration, and provide a method for simple geometry acquisition.", "title": "" }, { "docid": "1a02d963590683c724a814f341f94f92", "text": "The concept of the quality attribute scenario was introduced in 2003 to support the development of software architectures. This concept is useful because it provides an operational means to represent the quality requirements of a system. It also provides a more concrete basis with which to teach software architecture. Teaching this concept however has some unexpected issues. In this paper, I present my experiences of teaching quality attribute scenarios and outline Bus Tracker, a case study I have developed to support my teaching.", "title": "" }, { "docid": "d2afe7dcd2b31d3b8dc3ba80f450980d", "text": "The Speech Training, Assessment, and Remediation (STAR) system is intended to assist Speech and Language Pathologists in treating children with articulation problems. The system is embedded in an interactive video game that is set in a spaceship and involves teaching aliens to “understand” selected words by spoken example. The sequence of events leads children through a series of successively more diff icult speech production tasks, beginning with CV syllables and progressing to words/phrases. Word selection is further tailored to emphasize the contrastive nature of phonemes by the use of minimal pairs (e.g., run/won) in production sets. To assess children’s speech, a discrete hidden Markov model recognition engine is used[1]. Phone models were trained on the CMU Kids database[2]. Performance of the HMM recognizer was compared to perceptual ratings of speech recorded from children who substitute /w/ for /r/. The difference in log likelihood between /r/ and /w/ models correlates well with perceptual ratings of utterances containing substitution errors, but very poorly for correctly articulated examples. The poor correlation between perceptual and machine ratings for correctly articulated utterances may be due to very restricted variance in the perceptual data for those utterances.", "title": "" }, { "docid": "dce75562a7e8b02364d39fd7eb407748", "text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.", "title": "" }, { "docid": "86ef6d2d822e6ff2c57b653901150a6f", "text": "Aspect-Oriented Programming (AOP) is a programming paradigm that supports the modular implementation of crosscutting concerns. Thereby, AOP improves the maintainability, reusability, and configurability of software in general. 
Although already popular in the Java domain, AOP is still not commonly used in conjunction with C/C++. For a broad adoption of AOP by the software industry, it is crucial to provide solid language and tool support. However, research and tool development for C++ is known to be an extremely hard and tedious task, as the language is overwhelmed with interacting features and hard to analyze. Getting AOP into the C++ domain is not just technical challenge. It is also the question of integrating AOP concepts with the philosophy of the C++ language, which is very different from Java. This paper describes the design and development of the AspectC++ language and weaver, which brings fully-fledged AOP support into the C++ domain.", "title": "" }, { "docid": "665da3a85a548d12864de5fad517e3ee", "text": "To characterize the neural correlates of being personally involved in social interaction as opposed to being a passive observer of social interaction between others we performed an fMRI study in which participants were gazed at by virtual characters (ME) or observed them looking at someone else (OTHER). In dynamic animations virtual characters then showed socially relevant facial expressions as they would appear in greeting and approach situations (SOC) or arbitrary facial movements (ARB). Differential neural activity associated with ME>OTHER was located in anterior medial prefrontal cortex in contrast to the precuneus for OTHER>ME. Perception of socially relevant facial expressions (SOC>ARB) led to differentially increased neural activity in ventral medial prefrontal cortex. Perception of arbitrary facial movements (ARB>SOC) differentially activated the middle temporal gyrus. The results, thus, show that activation of medial prefrontal cortex underlies both the perception of social communication indicated by facial expressions and the feeling of personal involvement indicated by eye gaze. Our data also demonstrate that distinct regions of medial prefrontal cortex contribute differentially to social cognition: whereas the ventral medial prefrontal cortex is recruited during the analysis of social content as accessible in interactionally relevant mimic gestures, differential activation of a more dorsal part of medial prefrontal cortex subserves the detection of self-relevance and may thus establish an intersubjective context in which communicative signals are evaluated.", "title": "" }, { "docid": "22160219ffa40e4e42f1519fe25ecb6a", "text": "We propose a new prior distribution for classical (non-hierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-t prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. Cross-validation on a corpus of datasets shows the Cauchy class of prior distributions to outperform existing implementations of Gaussian and Laplace priors. We recommend this prior distribution as a default choice for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small) and also automatically applying more shrinkage to higherorder interactions. 
This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation. We implement a procedure to fit generalized linear models in R with the Student-t prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several applications, including a series of logistic regressions predicting voting preferences, a small bioassay experiment, and an imputation model for a public health data set.", "title": "" }, { "docid": "095f8d5c3191d6b70b2647b562887aeb", "text": "Hardware specialization, in the form of datapath and control circuitry customized to particular algorithms or applications, promises impressive performance and energy advantages compared to traditional architectures. Current research in accelerators relies on RTL-based synthesis flows to produce accurate timing, power, and area estimates. Such techniques not only require significant effort and expertise but also are slow and tedious to use, making large design space exploration infeasible. To overcome this problem, the authors developed Aladdin, a pre-RTL, power-performance accelerator modeling framework and demonstrated its application to system-on-chip (SoC) simulation. Aladdin estimates performance, power, and area of accelerators within 0.9, 4.9, and 6.6 percent with respect to RTL implementations. Integrated with architecture-level general-purpose core and memory hierarchy simulators, Aladdin provides researchers with a fast but accurate way to model the power and performance of accelerators in an SoC environment.", "title": "" }, { "docid": "6eace0f6216d17b9041f1bed42459c40", "text": "Predicting possible code-switching points can help develop more accurate methods for automatically processing mixed-language text, such as multilingual language models for speech recognition systems and syntactic analyzers. We present in this paper exploratory results on learning to predict potential codeswitching points in Spanish-English. We trained different learning algorithms using a transcription of code-switched discourse. To evaluate the performance of the classifiers, we used two different criteria: 1) measuring precision, recall, and F-measure of the predictions against the reference in the transcription, and 2) rating the naturalness of artificially generated code-switched sentences. Average scores for the code-switched sentences generated by our machine learning approach were close to the scores of those generated by humans.", "title": "" } ]
scidocsrr
06b8928286f8fb258d2a9a043708d27a
Efficient inverse kinematics for general 6R manipulators
[ { "docid": "1b98568349b1a1e8239013385e9c6023", "text": "We present fast and robust algorithms for the inverse kinematics of serial manipulators consisting of six or fewer joints. When stated mathematically, the problem of inverse kinematics reduces to simultaneously solving a system of algebraic equations. In this paper, we use a series of algebraic and numeric transformations to reduce the problem to computing the eigenstructure of a matrix pencil. To e ciently compute the eigenstructure, we make use of the symbolic formulation of the matrix and use a number of techniques from linear algebra and matrix computations. The resulting algorithm computes all the solution of a serial manipulator with six or fewer joints in the order of tens of milliseconds on the current workstations. It has been implemented as part of a generic package, KINEM, for the inverse kinematics of serial manipulators.", "title": "" } ]
[ { "docid": "3e922369ad05877f08fcd8f50e425453", "text": "The “Big Data” revolution is spawning systems designed to make decisions from data. In particular, deep learning methods have emerged as the state of the art method in many important breakthroughs [18, 20, 28]. This is due to the statistical flexibility and computational scalability of large and deep neural networks which allows them to harness the information of large and rich datasets. At the same time, elementary decision theory shows that the only admissible decision rules are Bayesian [5, 30]. Colloquially, this means that any decision rule which is not Bayesian can be strictly improved (or even exploited) by some Bayesian alternative [6]. The implication of these results is clear: combine deep learning with Bayesian inference for the best decisions from data.", "title": "" }, { "docid": "67a3f670b310afdac5967086902840eb", "text": "The tint of forehead skin so exactly matches that of the face and nose that a forehead flap must be the first choice for reconstruction of a nasal defect. The forehead flap makes by far the best nose. With some plastic surgery juggling, the forehead defect can be camouflaged effectively. This article describes the author's technique in two-stage and three-stage forehead flap procedures.", "title": "" }, { "docid": "f0958d2c952c7140c998fa13a2bf4374", "text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.", "title": "" }, { "docid": "d7301a0dadc035ed1a0d676f8da2c037", "text": "This study describes a novel active rotary-legs mechanism for a stair-climbing mobility vehicle. We have previously developed a stair-climbing up and down wheelchair with lever propelled rotary-legs operated using the human upper body. 
The previous results indicated that the required torque for the stair-climbing up and down procedure can be reduced based on the user's posture transition, which was achieved via the procedure using only the human upper body. The design principle of the proposed active rotary-legs mechanism in this study is to develop a more compact and lightweight mechanism for a stair-climbing up and down mobility vehicle. We achieve this objective based on the previous observations by appropriately combining the active and passive components. The developed mechanism consists of a four-bar linkage mechanism with motors and gas springs as active and passive components, respectively. The gas springs are connected to the linkage mechanism as parallel elastic actuators, which can reduce the required motor torque and make the complete mechanism compact and lightweight. We describe herein a detailed design of the active rotary-legs mechanism and conduct simulations and preliminary experiments to investigate the effectiveness of our proposed methodology.", "title": "" }, { "docid": "812c41737bb2a311d45c5566f773a282", "text": "Acceleration, sprint and agility performance are crucial in sports like soccer. There are few studies regarding the effect of training on youth soccer players in agility performance and in sprint distances shorter than 30 meter. Therefore, the aim of the recent study was to examine the effect of a high-intensity sprint and plyometric training program on 13-year-old male soccer players. A training group of 14 adolescent male soccer players, mean age (±SD) 13.5 years (±0.24) followed an eight week intervention program for one hour per week, and a group of 12 adolescent male soccer players of corresponding age, mean age 13.5 years (±0.23) served as control a group. Preand post-tests assessed 10-m linear sprint, 20-m linear sprint and agility performance. Results showed a significant improvement in agility performance, pre 8.23 s (±0.34) to post 7.69 s (± 0.34) (p<0.01), and a significant improvement in 0-20m linear sprint, pre 3.54s (±0.17) to post 3.42s (±0.18) (p<0.05). In 0-10m sprint the participants also showed an improvement, pre 2.02s (±0.11) to post 1.96s (± 0.11), however this was not significant. The correlation between 10-m sprint and agility was r = 0.53 (p<0.01), and between 20-m linear sprint and agility performance, r = 0.67 (p<0.01). The major finding in the study is the significant improvement in agility performance and in 0-20 m linear sprint in the intervention group. These findings suggest that organizing the training sessions with short-burst high-intensity sprint and plyometric exercises interspersed with adequate recovery time, may result in improvements in both agility and in linear sprint performance in adolescent male soccer players. Another finding is the correlation between linear sprint and agility performance, indicating a difference when compared to adults. 4 | Mathisen: EFFECT OF HIGH-SPEED...", "title": "" }, { "docid": "c7bc0bc901d1a32bd255f68cf4b63c97", "text": "One of the key challenges for operations researchers solving real-world problems is designing and implementing high-quality heuristics to guide their search procedures. In the past, machine learning techniques have failed to play a major role in operations research approaches, especially in terms of guiding branching and pruning decisions. 
We integrate deep neural networks into a heuristic tree search procedure to decide which branch to choose next and to estimate a bound for pruning the search tree of an optimization problem. We call our approach Deep Learning assisted heuristic Tree Search (DLTS) and apply it to a well-known problem from the container terminals literature, the container pre-marshalling problem (CPMP). Our approach is able to learn heuristics customized to the CPMP solely through analyzing the solutions to CPMP instances, and applies this knowledge within a heuristic tree search to produce the highest quality heuristic solutions to the CPMP to date.", "title": "" }, { "docid": "f18aefe00103d33ae256a6fd161531ff", "text": "Conventional database optimizers take full advantage of associativity and commutativity properties of join to implement e cient and powerful optimizations on select/project/join queries. However, only limited optimization is performed on other binary operators. In this paper, we present the theory and algorithms needed to generate alternative evaluation orders for the optimization of queries containing outerjoins. Our results include both a complete set of transformation rules, suitable for new-generation, transformation-based optimizers, and a bottom-up join enumeration algorithm compatible with those used by traditional optimizers.", "title": "" }, { "docid": "d09dddd8a678370375c30dd14b3f2482", "text": "Deep learning on graphs and in particular, graph convolutional neural networks, have recently attracted significant attention in the machine learning community. Many of such techniques explore the analogy between the graph Laplacian eigenvectors and the classical Fourier basis, allowing to formulate the convolution as a multiplication in the spectral domain. One of the key drawback of spectral CNNs is their explicit assumption of an undirected graph, leading to a symmetric Laplacian matrix with orthogonal eigendecomposition. In this work we propose MotifNet, a graph CNN capable of dealing with directed graphs by exploiting local graph motifs. We present experimental evidence showing the advantage of our approach on real data.", "title": "" }, { "docid": "180a8fbad7d1810e9c7f8a2aeb5e6a8d", "text": "Hybrid systems are dynamical systems exhibiting both continuous and discrete behavior. Having states that can evolve continuously or discretely, hybrid dynamical systems permit modeling and simulation of systems in a wide range of applications including robotics, automotive systems, power systems, biological systems, to just list a few. Key motivation for studying hybrid systems comes from the recognition of the capabilities of hybrid feedback in robust stabilization of nonlinear systems. Numerous frameworks for modeling and analysis of hybrid systems have appeared in the literature. These include the work of Tavernini [26], Michel and Hu [11], Lygeros et al. [10], Aubin et al. [2], and van der Schaft and Schumacher [28], among others. In this paper, we consider the hybrid systems framework in [7, 6], where the continuous dynamics (or flows) of a hybrid system are modeled using differential inclusions while the discrete dynamics (or jumps) are captured by difference inclusions. Trajectories to a hybrid system conveniently use two parameters: an ordinary time parameter t ∈ [0,+∞), which is incremented continuously as flows occur, an a discrete time parameter j ∈ {0, 1, 2, . . .}, which is incremented at unitary steps when jumps occur. 
The conditions determining whether a trajectory to a hybrid system should flow or jump are captured by subsets of the state space and input space. In simple terms, given an input (t, j) 7→ u(t, j), a trajectory (t, j) 7→ x(t, j) to a hybrid system satisfies, over intervals of flow,", "title": "" }, { "docid": "3d90cdb88faee8794a9fd08143f7046e", "text": "Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.", "title": "" }, { "docid": "e4cba1a4ebef9fa18c3ee11258160a8b", "text": "Subocclusive hymenal variants, such as microperforate or septate hymen, impair somatic functions (e.g., vaginal intercourse or menstrual hygiene) and can negatively impact the quality of life of young women. We know little about the prevalence and inheritance of subocclusive hymenal variants. So far, eight cases of familial occurrence of occlusive hymenal anomalies (imperforate hymen) have been reported. In one of these cases, monozygotic twins were affected. We are reporting the first case of subocclusive hymenal variants (microperforate hymen and septate hymen) in 16-year-old white dizygotic twins. In addition, we review and discuss the current evidence. Conclusion: The mode of inheritance of hymenal variants has not been determined so far. Because surgical corrections of hymenal variants should be carried out in asymptomatic patients (before menarche), gynecologists and pediatricians should keep in mind that familial occurrences may occur.", "title": "" }, { "docid": "0a55717b9efe122c8559f34ac858c282", "text": "Semantic role labeling (SRL) is dedicated to recognizing the predicate-argument structure of a sentence. Previous studies have shown syntactic information has a remarkable contribution to SRL performance. However, such perception was challenged by a few recent neural SRL models which give impressive performance without a syntactic backbone. This paper intends to quantify the importance of syntactic information to dependency SRL in deep learning framework. We propose an enhanced argument labeling model companying with an extended korder argument pruning algorithm for effectively exploiting syntactic information. Our model achieves state-of-the-art results on the CoNLL-2008, 2009 benchmarks for both English and Chinese, showing the quantitative significance of syntax to neural SRL together with a thorough empirical survey over existing models.", "title": "" }, { "docid": "0674479836883d572b05af6481f27a0d", "text": "Contents Preface vii Chapter 1. Graph Theory in the Information Age 1 1.1. Introduction 1 1.2. Basic definitions 3 1.3. Degree sequences and the power law 6 1.4. History of the power law 8 1.5. Examples of power law graphs 10 1.6. An outline of the book 17 Chapter 2. Old and New Concentration Inequalities 21 2.1. The binomial distribution and its asymptotic behavior 21 2.2. General Chernoff inequalities 25 2.3. 
More concentration inequalities 30 2.4. A concentration inequality with a large error estimate 33 2.5. Martingales and Azuma's inequality 35 2.6. General martingale inequalities 38 2.7. Supermartingales and Submartingales 41 2.8. The decision tree and relaxed concentration inequalities 46 Chapter 3. A Generative Model — the Preferential Attachment Scheme 55 3.1. Basic steps of the preferential attachment scheme 55 3.2. Analyzing the preferential attachment model 56 3.3. A useful lemma for rigorous proofs 59 3.4. The peril of heuristics via an example of balls-and-bins 60 3.5. Scale-free networks 62 3.6. The sharp concentration of preferential attachment scheme 64 3.7. Models for directed graphs 70 Chapter 4. Duplication Models for Biological Networks 75 4.1. Biological networks 75 4.2. The duplication model 76 4.3. Expected degrees of a random graph in the duplication model 77 4.4. The convergence of the expected degrees 79 4.5. The generating functions for the expected degrees 83 4.6. Two concentration results for the duplication model 84 4.7. Power law distribution of generalized duplication models 89 Chapter 5. Random Graphs with Given Expected Degrees 91 5.1. The Erd˝ os-Rényi model 91 5.2. The diameter of G n,p 95 iii iv CONTENTS 5.3. A general random graph model 97 5.4. Size, volume and higher order volumes 97 5.5. Basic properties of G(w) 100 5.6. Neighborhood expansion in random graphs 103 5.7. A random power law graph model 107 5.8. Actual versus expected degree sequence 109 Chapter 6. The Rise of the Giant Component 113 6.1. No giant component if w < 1? 114 6.2. Is there a giant component if˜w > 1? 115 6.3. No giant component if˜w < 1? 116 6.4. Existence and uniqueness of the giant component 117 6.5. A lemma on neighborhood growth 126 6.6. The volume of the giant component 129 6.7. Proving the volume estimate of the giant component 131 6.8. Lower bounds for the volume of the giant component 136 6.9. The complement of the giant component and its size 138 6.10. …", "title": "" }, { "docid": "3d2a072f265b259169fce33ccd6dd11a", "text": "gem5-gpu is a new simulator that models tightly integrated CPU-GPU systems. It builds on gem5, a modular full-system CPU simulator, and GPGPUSim, a detailed GPGPU simulator. gem5-gpu routes most memory accesses through Ruby, which is a highly configurable memory system in gem5. By doing this, it is able to simulate many system configurations, ranging from a system with coherent caches and a single virtual address space across the CPU and GPU to a system that maintains separate GPU and CPU physical address spaces. gem5gpu can run most unmodified CUDA 3.2 source code. Applications can launch non-blocking kernels, allowing the CPU and GPU to execute simultaneously. We present gem5-gpu's software architecture and a brief performance validation. We also discuss possible extensions to the simulator. gem5-gpu is open source and available at gem5-gpu.cs.wisc.edu.", "title": "" }, { "docid": "d01b8d59f5e710bcf75978d1f7dcdfa3", "text": "Over the last few decades, the use of electroencephalography (EEG) signals for motor imagery based brain-computer interface (MI-BCI) has gained widespread attention. Deep learning have also gained widespread attention and used in various application such as natural language processing, computer vision and speech processing. However, deep learning has been rarely used for MI EEG signal classification. 
In this paper, we present a deep learning approach for classification of MI-BCI that uses adaptive method to determine the threshold. The widely used common spatial pattern (CSP) method is used to extract the variance based CSP features, which is then fed to the deep neural network for classification. Use of deep neural network (DNN) has been extensively explored for MI-BCI classification and the best framework obtained is presented. The effectiveness of the proposed framework has been evaluated using dataset IVa of the BCI Competition III. It is found that the proposed framework outperforms all other competing methods in terms of reducing the maximum error. The framework can be used for developing BCI systems using wearable devices as it is computationally less expensive and more reliable compared to the best competing methods.", "title": "" }, { "docid": "7bd5d9a477d563ffe5782241ddc4c5cd", "text": "Research on code reviews has often focused on defect counts instead of defect types, which offers an imperfect view of code review benefits. In this paper, we classified the defects of nine industrial (C/C++) and 23 student (Java) code reviews, detecting 388 and 371 defects, respectively. First, we discovered that 75 percent of defects found during the review do not affect the visible functionality of the software. Instead, these defects improved software evolvability by making it easier to understand and modify. Second, we created a defect classification consisting of functional and evolvability defects. The evolvability defect classification is based on the defect types found in this study, but, for the functional defects, we studied and compared existing functional defect classifications. The classification can be useful for assigning code review roles, creating checklists, assessing software evolvability, and building software engineering tools. We conclude that, in addition to functional defects, code reviews find many evolvability defects and, thus, offer additional benefits over execution-based quality assurance methods that cannot detect evolvability defects. We suggest that code reviews may be most valuable for software products with long life cycles as the value of discovering evolvability defects in them is greater than for short life cycle systems.", "title": "" }, { "docid": "ba3e1e2996e3c2a736bd090605b59ee3", "text": "Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze the optimal equilibrium of such a game and find that it amounts to maximizing the uncertainty of inferring the detrimental factor given the representation while maximizing the certainty of making task-specific predictions. On three benchmark tasks, namely fair and bias-free classification, language-independent generation, and lighting-independent image classification, we show that the proposed framework induces an invariant representation, and leads to better generalization evidenced by the improved performance.", "title": "" }, { "docid": "3e037f897b96e778eb01bd063357a527", "text": "Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit certain stationarity properties in time such as smoke, vegetation and fire. 
The analysis of DT is important for recognition, segmentation, synthesis or retrieval for a range of applications including surveillance, medical imaging and remote sensing. Deep learning methods have shown impressive results and are now the new state of the art for a wide range of computer vision tasks including image and video recognition and segmentation. In particular, Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis with a design similar to a filter bank approach. In this paper, we develop a new approach to DT analysis based on a CNN method applied on three orthogonal planes x y , xt and y t . We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier. Our results on a wide range of commonly used DT classification benchmark datasets prove the robustness of our approach. Significant improvement of the state of the art is shown on the larger datasets.", "title": "" }, { "docid": "8fc05d9e26c0aa98ffafe896d8c5a01b", "text": "We describe our clinical question answering system implemented for the Text Retrieval Conference (TREC 2016) Clinical Decision Support (CDS) track. We submitted five runs using a combination of knowledge-driven (based on a curated knowledge graph) and deep learning-based (using key-value memory networks) approaches to retrieve relevant biomedical articles for answering generic clinical questions (diagnoses, treatment, and test) for each clinical scenario provided in three forms: notes, descriptions, and summaries. The submitted runs were varied based on the use of notes, descriptions, or summaries in association with different diagnostic inferencing methodologies applied prior to biomedical article retrieval. Evaluation results demonstrate that our systems achieved best or close to best scores for 20% of the topics and better than median scores for 40% of the topics across all participants considering all evaluation measures. Further analysis shows that on average our clinical question answering system performed best with summaries using diagnostic inferencing from the knowledge graph whereas our key-value memory network model with notes consistently outperformed the knowledge graph-based system for notes and descriptions. ∗The author is also affiliated with Worcester Polytechnic Institute (szhao@wpi.edu). †The author is also affiliated with Northwestern University (kathy.lee@eecs.northwestern.edu). ‡The author is also affiliated with Brandeis University (aprakash@brandeis.edu).", "title": "" }, { "docid": "7550448608d03a79ee6e281ea511b772", "text": "ii Development of a Robotic Wheelchair Abstract Robotic wheelchairs extend the capabilities of traditional powered devices by introducing control and navigational intelligence. These devices can ease the lives of many disabled people, particularly those with severe impairments, by increasing their range of mobility. A robotic wheelchair has been under development at the University of Wollongong for some years. This thesis describes ongoing work towards the ultimate aim of an intelligent and useful device.", "title": "" } ]
scidocsrr
48a96acc6c87d453c96d0a3540089755
How To Extract Fashion Trends From Social Media? A Robust Object Detector With Support For Unsupervised Learning
[ { "docid": "fb87648c3bb77b1d9b162a8e9dbc5e86", "text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "title": "" } ]
[ { "docid": "2ad76db05382d5bbdae27d5192cccd72", "text": "Very large-scale classification taxonomies typically have hundreds of thousands of categories, deep hierarchies, and skewed category distribution over documents. However, it is still an open question whether the state-of-the-art technologies in automated text categorization can scale to (and perform well on) such large taxonomies. In this paper, we report the first evaluation of Support Vector Machines (SVMs) in web-page classification over the full taxonomy of the Yahoo! categories. Our accomplishments include: 1) a data analysis on the Yahoo! taxonomy; 2) the development of a scalable system for large-scale text categorization; 3) theoretical analysis and experimental evaluation of SVMs in hierarchical and non-hierarchical settings for classification; 4) an investigation of threshold tuning algorithms with respect to time complexity and their effect on the classification accuracy of SVMs. We found that, in terms of scalability, the hierarchical use of SVMs is efficient enough for very large-scale classification; however, in terms of effectiveness, the performance of SVMs over the Yahoo! Directory is still far from satisfactory, which indicates that more substantial investigation is needed.", "title": "" }, { "docid": "1c5ab22135bb293919022585bae160ef", "text": "Job satisfaction and employee performance has been a topic of research for decades. Whether job satisfaction influences employee satisfaction in organizations remains a crucial issue to managers and psychologists. That is where the problem lies. Therefore, the objective of this paper is to trace the relationship between job satisfaction and employee performance in organizations with particular reference to Nigeria. Related literature on the some theories of job satisfaction such as affective events, two-factor, equity and job characteristics was reviewed and findings from these theories indicate that a number of factors like achievement, recognition, responsibility, pay, work conditions and so on, have positive influence on employee performance in organizations. The paper adds to the theoretical debate on whether job satisfaction impacts positively on employee performance. It concludes that though the concept of job satisfaction is complex, using appropriate variables and mechanisms can go a long way in enhancing employee performance. It recommends that managers should use those factors that impact employee performance to make them happy, better their well being and the environment. It further specifies appropriate mechanisms using a theoretical approach to support empirical approaches which often lack clarity as to why the variables are related.", "title": "" }, { "docid": "4d8f38413169a572c0087fd180a97e44", "text": "As continued scaling of silicon FETs grows increasingly challenging, alternative paths for improving digital system energy efficiency are being pursued. These paths include replacing the transistor channel with emerging nanomaterials (such as carbon nanotubes), as well as utilizing negative capacitance effects in ferroelectric materials in the FET gate stack, e.g., to improve sub-threshold slope beyond the 60 mV/decade limit. However, which path provides the largest energy efficiency benefits—and whether these multiple paths can be combined to achieve additional energy efficiency benefits—is still unclear. 
Here, we experimentally demonstrate the first negative capacitance carbon nanotube FETs (CNFETs), combining the benefits of both carbon nanotube channels and negative capacitance effects. We demonstrate negative capacitance CNFETs, achieving sub-60 mV/decade sub-threshold slope with an average sub-threshold slope of 55 mV/decade at room temperature. The average ON-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{ON}}$ </tex-math></inline-formula>) of these negative capacitance CNFETs improves by <inline-formula> <tex-math notation=\"LaTeX\">$2.1\\times $ </tex-math></inline-formula> versus baseline CNFETs, (i.e., without negative capacitance) for the same OFF-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{OFF}}$ </tex-math></inline-formula>). This work demonstrates a promising path forward for future generations of energy-efficient electronic systems.", "title": "" }, { "docid": "2059db0707ffc28fd62b7387ba6d09ae", "text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.", "title": "" }, { "docid": "4e28055d48d6c00aebb7ddb6a287636d", "text": "BACKGROUND\nIt is commonly assumed that motion sickness caused by moving visual scenes arises from the illusion of self-motion (i.e., vection).\n\n\nHYPOTHESES\nBoth studies reported here investigated whether sickness and vection were correlated. The first study compared sickness and vection created by real and virtual visual displays. The second study investigated whether visual fixation to suppress eye movements affected motion sickness or vection.\n\n\nMETHOD\nIn the first experiment subjects viewed an optokinetic drum and a virtual simulation of the optokinetic drum. The second experiment investigated two conditions on a virtual display: a) moving black and white stripes; and b) moving black and white stripes with a stationary cross on which subjects fixated to reduce eye movements.\n\n\nRESULTS\nIn the first study, ratings of motion sickness were correlated between the conditions (real and the virtual drum), as were ratings of vection. With both conditions, subjects with poor visual acuity experienced greater sickness. There was no correlation between ratings of vection and ratings of sickness in either condition. In the second study, fixation reduced motion sickness but had no affect on vection. Motion sickness was correlated with visual acuity without fixation, but not with fixation. Again, there was no correlation between vection and motion sickness.\n\n\nCONCLUSIONS\nVection is not the primary cause of sickness with optokinetic stimuli. 
Vection appears to be influenced by peripheral vision whereas motion sickness is influenced by central vision. When the eyes are free to track moving stimuli, there is an association between visual acuity and motion sickness. Virtual displays can create vection and may be used to investigate visually induced motion sickness.", "title": "" }, { "docid": "e40228513cb17052c182dd1f421c659a", "text": "This manuscript describes our participation in the International Skin Imaging Collaboration’s 2017 Skin Lesion Analysis Towards Melanoma Detection competition. We participated in Part 3: Lesion Classification. The two stated goals of this binary image classification challenge were to distinguish between (a) melanoma and (b) nevus and seborrheic keratosis, followed by distinguishing between (a) seborrheic keratosis and (b) nevus and melanoma. We chose a deep neural network approach with a transfer learning strategy, using a pre-trained Inception V3 network as both a feature extractor to provide input for a multi-layer perceptron as well as fine-tuning an augmented Inception network. This approach yielded validation set AUC’s of 0.84 on the second task and 0.76 on the first task, for an average AUC of 0.80. We joined the competition unfortunately late, and we look forward to improving on these results. Keywords—transfer learning; melanoma; seborrheic keratosis; nevus;", "title": "" }, { "docid": "10b7ce647229f3c9fe5aeced5be85e38", "text": "The proliferation of deep learning methods in natural language processing (NLP) and the large amounts of data they often require stands in stark contrast to the relatively data-poor clinical NLP domain. In particular, large text corpora are necessary to build high-quality word embeddings, yet often large corpora that are suitably representative of the target clinical data are unavailable. This forces a choice between building embeddings from small clinical corpora and less representative, larger corpora. This paper explores this trade-off, as well as intermediate compromise solutions. Two standard clinical NLP tasks (the i2b2 2010 concept and assertion tasks) are evaluated with commonly used deep learning models (recurrent neural networks and convolutional neural networks) using a set of six corpora ranging from the target i2b2 data to large open-domain datasets. While combinations of corpora are generally found to work best, the single-best corpus is generally task-dependent.", "title": "" }, { "docid": "a0ca6986d59905cea49ed28fa378c69e", "text": "The epidemic of type 2 diabetes and impaired glucose tolerance is one of the main causes of morbidity and mortality worldwide. In both disorders, tissues such as muscle, fat and liver become less responsive or resistant to insulin. This state is also linked to other common health problems, such as obesity, polycystic ovarian disease, hyperlipidaemia, hypertension and atherosclerosis. The pathophysiology of insulin resistance involves a complex network of signalling pathways, activated by the insulin receptor, which regulates intermediary metabolism and its organization in cells. But recent studies have shown that numerous other hormones and signalling events attenuate insulin action, and are important in type 2 diabetes.", "title": "" }, { "docid": "4b2510dfa7b0d9de17a9a1e43a362e85", "text": "Stakeholder marketing has established foundational support for redefining and broadening the marketing discipline. 
An extensive literature review of 58 marketing articles that address six primary stakeholder groups (i.e., customers, suppliers, employees, shareholders, regulators, and the local community) provides evidence of the important role the groups play in stakeholder marketing. Based on this review and in conjunction with established marketing theory, we define stakeholder marketing as “activities and processes within a system of social institutions that facilitate and maintain value through exchange relationships with multiple stakeholders.” In an effort to focus on the stakeholder marketing field of study, we offer both a conceptual framework for understanding the pivotal role of stakeholder marketing and research questions for examining the linkages among stakeholder exchanges, value creation, and marketing outcomes.", "title": "" }, { "docid": "fd03cf7e243571e9b3e81213fe91fd29", "text": "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.", "title": "" }, { "docid": "6ba73f29a71cda57450f1838ef012356", "text": "Addressing the challenges of feeding the burgeoning world population with limited resources requires innovation in sustainable, efficient farming. The practice of precision agriculture offers many benefits towards addressing these challenges, such as improved yield and efficient use of such resources as water, fertilizer and pesticides. We describe the design and development of a light-weight, multi-spectral 3D imaging device that can be used for automated monitoring in precision agriculture. The sensor suite consists of a laser range scanner, multi-spectral cameras, a thermal imaging camera, and navigational sensors. We present techniques to extract four key data products - plant morphology, canopy volume, leaf area index, and fruit counts - using the sensor suite. We demonstrate its use with two systems: multi-rotor micro aerial vehicles and on a human-carried, shoulder-mounted harness. We show results of field experiments conducted in collaboration with growers and agronomists in vineyards, apple orchards and orange groves.", "title": "" }, { "docid": "d07d6fe33b01fbfb21ba5adc76ec786f", "text": "Dunaliella salina (Dunal) Teod, a unicellular eukaryotic green alga, is a highly salt-tolerant organism. To identify novel genes with potential roles in salinity tolerance, a salt stress-induced D. salina cDNA library was screened based on the expression in Haematococcus pluvialis, an alga also from Volvocales but one that is hypersensitive to salt. Five novel salt-tolerant clones were obtained from the library. Among them, Ds-26-16 and Ds-A3-3 contained the same open reading frame (ORF) and encoded a 6.1 kDa protein. 
Transgenic tobacco overexpressing Ds-26-16 and Ds-A3-3 exhibited increased leaf area, stem height, root length, total chlorophyll, and glucose content, but decreased proline content, peroxidase activity, and ascorbate content, and enhanced transcript level of Na+/H+ antiporter salt overly sensitive 1 gene (NtSOS1) expression, compared to those in the control plants under salt condition, indicating that Ds-26-16 enhanced the salt tolerance of tobacco plants. The transcript of Ds-26-16 in D. salina was upregulated in response to salt stress. The expression of Ds-26-16 in Escherichia coli showed that the ORF contained the functional region and changed the protein(s) expression profile. A mass spectrometry assay suggested that the most abundant and smallest protein that changed is possibly a DNA-binding protein or Cold shock-like protein. Subcellular localization analysis revealed that Ds-26-16 was located in the nuclei of onion epidermal cells or nucleoid of E. coli cells. In addition, the possible use of shoots regenerated from leaf discs to quantify the salt tolerance of the transgene at the initial stage of tobacco transformation was also discussed.", "title": "" }, { "docid": "d73d16ff470669b4935e85e2de815cb8", "text": "As organizations aggressively deploy radio frequency identification systems, activists are increasingly concerned about RFID's potential to invade user privacy. This overview highlights potential threats and how they might be addressed using both technology and public policy.", "title": "" }, { "docid": "5d7f5a6981824a257fe3868375f1d18f", "text": "This paper describes a mobile robotic assistant, developed to assist elderly individuals with mild cognitive and physical impairments, as well as support nurses in their daily activities. We present three software modules relevant to ensure successful human–robot interaction: an automated reminder system; a people tracking and detection system; and finally a high-level robot controller that performs planning under uncertainty by incorporating knowledge from low-level modules, and selecting appropriate courses of actions. During the course of experiments conducted in an assisted living facility, the robot successfully demonstrated that it could autonomously provide reminders and guidance for elderly residents.", "title": "" }, { "docid": "f15508a8cd342cb6ea0ec2d0328503d7", "text": "An order book consists of a list of all buy and sell offers, represented by price and quantity, available to a market agent. The order book changes rapidly, within fractions of a second, due to new orders being entered into the book. The volume at a certain price level may increase due to limit orders, i.e. orders to buy or sell placed at the end of the queue, or decrease because of market orders or cancellations. In this paper a high-dimensional Markov chain is used to represent the state and evolution of the entire order book. The design and evaluation of optimal algorithmic strategies for buying and selling is studied within the theory of Markov decision processes. General conditions are provided that guarantee the existence of optimal strategies. Moreover, a value-iteration algorithm is presented that enables finding optimal strategies numerically. As an illustration a simple version of the Markov chain model is calibrated to high-frequency observations of the order book in a foreign exchange market. 
In this model, using an optimally designed strategy for buying one unit provides a significant improvement, in terms of the expected buy price, over a naive buy-one-unit strategy.", "title": "" }, { "docid": "8df1395775e139c281512e4e4c1920d9", "text": "Over the past 20 years, breakthrough discoveries of chromatin-modifying enzymes and associated mechanisms that alter chromatin in response to physiological or pathological signals have transformed our knowledge of epigenetics from a collection of curious biological phenomena to a functionally dissected research field. Here, we provide a personal perspective on the development of epigenetics, from its historical origins to what we define as 'the modern era of epigenetic research'. We primarily highlight key molecular mechanisms of and conceptual advances in epigenetic control that have changed our understanding of normal and perturbed development.", "title": "" }, { "docid": "5ee410ddc75170aa38c39281a8d86827", "text": "Research in automotive safety leads to the conclusion that modern vehicles should utilize active and passive sensors for the recognition of the environment surrounding them. Thus, the development of tracking systems utilizing efficient state estimators is very important. In this case, problems such as moving platform carrying the sensor and maneuvering targets could introduce large errors in the state estimation and in some cases can lead to the divergence of the filter. In order to avoid sub-optimal performance, the unscented Kalman filter is chosen, while a new curvilinear model is applied which takes into account both the turn rate of the detected object and its tangential acceleration, leading to a more accurate modeling of its movement. The performance of the unscented filter using the proposed model in the case of automotive applications is proven to be superior compared to the performance of the extended and linear Kalman filter.", "title": "" }, { "docid": "71bd071b09ba6323877f7e9a51145751", "text": "We introduce multilingual image description, the task of generating descriptions of images given data in multiple languages. This can be viewed as visually-grounded machine translation, allowing the image to play a role in disambiguating language. We present models for this task that are inspired by neural models for image description and machine translation. Our multilingual image description models generate target-language sentences using features transferred from separate models: multimodal features from a monolingual source-language image description model and visual features from an object recognition model. In experiments on a dataset of images paired with English and German sentences, using BLEU and Meteor as a metric, our models substantially improve upon existing monolingual image description models.", "title": "" }, { "docid": "fb60eb0a7334ce5c5d3c62b812b9f4f8", "text": "The structure and culture of an organization does affect implementation of projects. In this paper we try to identify organizational factors that could affect the implementation efforts of an Integrated Financial Management Information System (IFMIS). The information system in question has taken an overly long time and it's not complete yet. We set out to find out whether organizational issues are at play in this particular project. The project under study is a large-scale integrated information system which aims at strengthening and further developing Financial Management Information in the wider public service in Kenya. 
We borrow concepts from Structuration Theory (ST) as applied in sociology to understand the organizational perspective in the project. We use the theory to help explain some of the meanings, norms and issues of power experienced during the implementation of the IFMIS. Without ruling out problems of technological nature, the findings suggest that many of the problems in the IFMIS implementation may be attributed to organizational factors, and that certain issues are related to the existing organization culture within government.", "title": "" }, { "docid": "7691fba64da5d36d57d11d7319f742a4", "text": "The design of flow control systems remains a challenge due to the nonlinear nature of the equations that govern fluid flow. However, recent advances in computational fluid dynamics (CFD) have enabled the simulation of complex fluid flows with high accuracy, opening the possibility of using learning-based approaches to facilitate controller design. We present a method for learning the forced and unforced dynamics of airflow over a cylinder directly from CFD data. The proposed approach, grounded in Koopman theory, is shown to produce stable dynamical models that can predict the time evolution of the cylinder system over extended time horizons. Finally, by performing model predictive control with the learned dynamical models, we are able to find a straightforward, interpretable control law for suppressing vortex shedding in the wake of the cylinder.", "title": "" } ]
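As an aside on the flow-control passage that closes the record above: the cited work learns Koopman-style dynamics from CFD snapshots and then applies model predictive control. The fragment below is only a minimal, illustrative sketch of the underlying idea, fitting a linear one-step predictor to snapshot data by least squares (a DMD-like surrogate). The synthetic data, the two-dimensional state, and the prediction horizon are assumptions for illustration and are not taken from the cited paper.

```python
import numpy as np

# Synthetic snapshots standing in for measurements of a flow observable.
rng = np.random.default_rng(1)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])   # hypothetical linear dynamics
X = np.zeros((2, 200))
X[:, 0] = rng.normal(size=2)
for k in range(199):
    X[:, k + 1] = A_true @ X[:, k] + 0.01 * rng.normal(size=2)

# DMD-style least-squares fit: the A minimizing ||X_next - A X_prev||_F.
X_prev, X_next = X[:, :-1], X[:, 1:]
A_fit = X_next @ np.linalg.pinv(X_prev)

# Roll the fitted model forward for multi-step prediction.
preds = []
x = X[:, -1].copy()
for _ in range(10):
    x = A_fit @ x
    preds.append(x)

print(np.round(A_fit, 3))
```

The same least-squares fit generalizes to lifted (nonlinear) observables, which is where Koopman-style models depart from this plain linear sketch.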
scidocsrr
381c5a230a8ee8d6c2c00a14c0282d59
Financial time series forecasting using support vector machines
[ { "docid": "f9824ae0b73ebecf4b3a893392e77d67", "text": "This paper proposes genetic algorithms (GAs) approach to feature discretization and the determination of connection weights for artificial neural networks (ANNs) to predict the stock price index. Previous research proposed many hybrid models of ANN and GA for the method of training the network, feature subset selection, and topology optimization. In most of these studies, however, GA is only used to improve the learning algorithm itself. In this study, GA is employed not only to improve the learning algorithm, but also to reduce the complexity in feature space. GA optimizes simultaneously the connection weights between layers and the thresholds for feature discretization. The genetically evolved weights mitigate the well-known limitations of the gradient descent algorithm. In addition, globally searched feature discretization reduces the dimensionality of the feature space and eliminates irrelevant factors. Experimental results show that GA approach to the feature discretization model outperforms the other two conventional models. q 2000 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "131517391d81c321f922e2c1507bb247", "text": "This study was undertaken to apply recurrent neural networks to the recognition of stock price patterns, and to develop a new method for evaluating the networks. In stock tradings, triangle patterns indicate an important clue to the trend of future change in stock prices, but the patterns are not clearly defined by rule-based approaches. From stock price data for all names of corporations listed in The First Section of Tokyo Stock Exchange, an expert called c h a d reader extracted sixteen triangles. These patterns were divided into two groups, 15 training patterns and one test pattern. Using stock data during past 3 years for 16 names, 16 experiments for the recognition were carried out, where the groups were cyclically used. The experiments revealed that the given test triangle was accurately recognized in 15 out of 16 experiments, and that the number of the mismatching patterns was 1.06 per name on the average. A new method was developed for evaluating recurrent networks with context transition performances, in particular, temporal transition performances. The method for the triangle sequences is applicable to decrease in mismatching patterns. By applying a cluster analysis to context vectors generated in the networks at recognition stage, a transition chart for context vector categorization was obtained for each stock price sequence. The finishing categories for the context vectors in the charts indicated that this method was effective in decreasing mismatching patterns.", "title": "" }, { "docid": "247c8cd5e076809a208849abe4dce3e5", "text": "This paper deals with the application of a novel neural network technique, support vector machine (SVM), in !nancial time series forecasting. The objective of this paper is to examine the feasibility of SVM in !nancial time series forecasting by comparing it with a multi-layer back-propagation (BP) neural network. Five real futures contracts that are collated from the Chicago Mercantile Market are used as the data sets. The experiment shows that SVM outperforms the BP neural network based on the criteria of normalized mean square error (NMSE), mean absolute error (MAE), directional symmetry (DS) and weighted directional symmetry (WDS). 
Since there is no structured way to choose the free parameters of SVMs, the variability in performance with respect to the free parameters is investigated in this study. Analysis of the experimental results proved that it is advantageous to apply SVMs to forecast financial time series. © 2001 Elsevier Science Ltd. All rights reserved.", "title": "" } ]
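For readers who want a concrete picture of the windowed SVM forecasting setup described in the positive passages above, here is a minimal, illustrative sketch using scikit-learn's SVR on lagged values of a synthetic price series, reporting NMSE and a simple directional-symmetry proxy. The lag length, kernel, and hyperparameters are arbitrary assumptions, not the settings used in the cited studies.

```python
import numpy as np
from sklearn.svm import SVR

def make_windows(series, lag):
    """Turn a 1-D series into (lagged window, next value) pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=500)) + 100.0   # synthetic price path
X, y = make_windows(prices, lag=5)
split = int(0.8 * len(X))                          # chronological train/test split

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)    # assumed hyperparameters
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
actual = y[split:]

nmse = np.mean((pred - actual) ** 2) / np.var(actual)                # normalized MSE
ds = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(actual)))     # direction-match proxy
print(f"NMSE={nmse:.3f}  DS={ds:.2%}")
```

In practice the free parameters C and epsilon would be tuned by validation, which is exactly the sensitivity the passage above highlights.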
[ { "docid": "ce0b0543238a81c3f02c43e63a285605", "text": "Hatebusters is a web application for actively reporting YouTube hate speech, aiming to establish an online community of volunteer citizens. Hatebusters searches YouTube for videos with potentially hateful comments, scores their comments with a classifier trained on human-annotated data and presents users those comments with the highest probability of being hate speech. It also employs gamification elements, such as achievements and leaderboards, to drive user engagement.", "title": "" }, { "docid": "3876701c2d9c91d06436e3c5eef9a877", "text": "This work uses enhanced symmetric key encryption algorithm, in which same structure of encryption and decryption procedure algorithm is used. In conventional encryption methods the key for encryption and decryption is same and remain secret. The algorithm uses key generation method by random number in algorithm for increasing efficiency of algorithm. The algorithm use key size of 512 bits for providing better security and it also provide the concept of internal key generation at receiver end on the basis of 512 bits key which will entered by the sender. This internal key will store in the sender end database and send to the receiver end by other path for preventing brute force attack and other harmful attacks on security. This algorithm is more efficient for large data where existing algorithms provides efficient encryption and decryption only for 2MB data. This work provides better speed in comparison to existing algorithms for large size of files with less overhead.", "title": "" }, { "docid": "d23d22c773f15e120e95e8a160833404", "text": "In this work we describe and evaluate methods to learn musical embeddings. Each embedding is a vector that represents four contiguous beats of music and is derived from a symbolic representation. We consider autoencoding-based methods including denoising autoencoders, and context reconstruction, and evaluate the resulting embeddings on a forward prediction and a classification task.", "title": "" }, { "docid": "bf37ea1cfab3b13ffd1bead9d9ead0e7", "text": "We present a new tool for training neural network language mo dels (NNLMs), scoring sentences, and generating text. The to ol has been written using Python library Theano, which allows r esearcher to easily extend it and tune any aspect of the traini ng process. Regardless of the flexibility, Theano is able to gen erate extremely fast native code that can utilize a GPU or multi ple CPU cores in order to parallelize the heavy numerical com putations. The tool has been evaluated in difficult Finnish a nd English conversational speech recognition tasks, and sign ifica t improvement was obtained over our best back-off n-gram models. The results that we obtained in the Finnish task were com pared to those from existing RNNLM and RWTHLM toolkits, and found to be as good or better, while training times were an order of magnitude shorter.", "title": "" }, { "docid": "45ac3cf0d48352bce84c576b4205fd97", "text": "We propose a new large-scale database containing grasps that are applied to a large set of objects from numerous categories. These grasps are generated in simulation and are annotated with different grasp stability metrics. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Given this data, we present a two-fold analysis: (i) We use crowdsourcing to analyze the correlation of the metrics with grasp success as predicted by humans. 
The results show that the metric based on physics simulation is a more consistent predictor for grasp success than the standard υ-metric. The results also support the hypothesis that human labels are not required for good ground truth grasp data. Instead the physics-metric can be used to generate datasets in simulation that may then be used to bootstrap learning in the real world. (ii) We apply a deep learning method and show that it can better leverage the large-scale database for prediction of grasp success compared to logistic regression. Furthermore, the results suggest that labels based on the physics-metric are less noisy than those from the υ-metric and therefore lead to a better classification performance.", "title": "" }, { "docid": "d9ca4757091ae8e6ac5da18820faa823", "text": "Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this article, we introduce a new dimensionality reduction technique, which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower-bounding, but very tight, Euclidean distance approximation, and show how they can support fast exact searching and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority.", "title": "" }, { "docid": "1e07453726e1f1095136f005bb94b1b7", "text": "Efficient wireless power transfer across tissue is highly desirable for removing bulky energy storage components. Most existing power transfer systems are conceptually based on coils linked by slowly varying magnetic fields (less than 10 MHz). These systems have many important capabilities, but are poorly suited for tiny, millimeter-scale implants where extreme asymmetry between the source and the receiver results in weak coupling. This paper first surveys the analysis of near-field power transfer and associated strategies to optimize efficiency. It then reviews analytical models that show that significantly higher efficiencies can be obtained in the electromagnetic midfield. The performance limits of such systems are explored through optimization of the source, and a numerical example of a cardiac implant demonstrates that millimeter-sized devices are feasible.", "title": "" }, { "docid": "28b23fc65a17b2b29e4e2a6b78ab401b", "text": "In 1980, the N400 event-related potential was described in association with semantic anomalies within sentences. 
When, in 1992, a second waveform, the P600, was reported in association with syntactic anomalies and ambiguities, the story appeared to be complete: the brain respected a distinction between semantic and syntactic representation and processes. Subsequent studies showed that the P600 to syntactic anomalies and ambiguities was modulated by lexical and discourse factors. Most surprisingly, more than a decade after the P600 was first described, a series of studies reported that semantic verb-argument violations, in the absence of any violations or ambiguities of syntax can evoke robust P600 effects and no N400 effects. These observations have raised fundamental questions about the relationship between semantic and syntactic processing in the brain. This paper provides a comprehensive review of the recent studies that have demonstrated P600s to semantic violations in light of several proposed triggers: semantic-thematic attraction, semantic associative relationships, animacy and semantic-thematic violations, plausibility, task, and context. I then discuss these findings in relation to a unifying theory that attempts to bring some of these factors together and to link the P600 produced by semantic verb-argument violations with the P600 evoked by unambiguous syntactic violations and syntactic ambiguities. I suggest that normal language comprehension proceeds along at least two competing neural processing streams: a semantic memory-based mechanism, and a combinatorial mechanism (or mechanisms) that assigns structure to a sentence primarily on the basis of morphosyntactic rules, but also on the basis of certain semantic-thematic constraints. I suggest that conflicts between the different representations that are output by these distinct but interactive streams lead to a continued combinatorial analysis that is reflected by the P600 effect. I discuss some of the implications of this non-syntactocentric, dynamic model of language processing for understanding individual differences, language processing disorders and the neuroanatomical circuitry engaged during language comprehension. Finally, I suggest that that these two processing streams may generalize beyond the language system to real-world visual event comprehension.", "title": "" }, { "docid": "d46916f82e8f6ac8f4f3cb3df1c6875f", "text": "Mobile devices are becoming the prevalent computing platform for most people. TouchDevelop is a new mobile development environment that enables anyone with a Windows Phone to create new apps directly on the smartphone, without a PC or a traditional keyboard. At the core is a new mobile programming language and editor that was designed with the touchscreen as the only input device in mind. Programs written in TouchDevelop can leverage all phone sensors such as GPS, cameras, accelerometer, gyroscope, and stored personal data such as contacts, songs, pictures. Thousands of programs have already been written and published with TouchDevelop.", "title": "" }, { "docid": "1090297224c76a5a2c4ade47cb932dba", "text": "Global illumination drastically improves visual realism of interactive applications. Although many interactive techniques are available, they have some limitations or employ coarse approximations. For example, general instant radiosity often has numerical error, because the sampling strategy fails in some cases. This problem can be reduced by a bidirectional sampling strategy that is often used in off-line rendering. However, it has been complicated to implement in real-time applications. 
This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing by using rasterization on a commodity DirectX® 11 capable GPU. Moreover, for glossy surfaces, a simple and efficient artifact suppression technique is also introduced.", "title": "" }, { "docid": "d0cdbd1137e9dca85d61b3d90789d030", "text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).", "title": "" }, { "docid": "d39ada44eb3c1c9b5dfa1abd0f1fbc22", "text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.", "title": "" }, { "docid": "0d9420b97012ce445fdf39fb009e32c4", "text": "Greater numbers of young children with complicated, serious physical health, mental health, or developmental problems are entering foster care during the early years when brain growth is most active. Every effort should be made to make foster care a positive experience and a healing process for the child. Threats to a child’s development from abuse and neglect should be understood by all participants in the child welfare system. Pediatricians have an important role in assessing the child’s needs, providing comprehensive services, and advocating on the child’s behalf. 
The developmental issues important for young children in foster care are reviewed, including: 1) the implications and consequences of abuse, neglect, and placement in foster care on early brain development; 2) the importance and challenges of establishing a child’s attachment to caregivers; 3) the importance of considering a child’s changing sense of time in all aspects of the foster care experience; and 4) the child’s response to stress. Additional topics addressed relate to parental roles and kinship care, parent-child contact, permanency decision-making, and the components of comprehensive assessment and treatment of a child’s development and mental health needs. More than 500 000 children are in foster care in the United States.1,2 Most of these children have been the victims of repeated abuse and prolonged neglect and have not experienced a nurturing, stable environment during the early years of life. Such experiences are critical in the shortand long-term development of a child’s brain and the ability to subsequently participate fully in society.3–8 Children in foster care have disproportionately high rates of physical, developmental, and mental health problems1,9 and often have many unmet medical and mental health care needs.10 Pediatricians, as advocates for children and their families, have a special responsibility to evaluate and help address these needs. Legal responsibility for establishing where foster children live and which adults have custody rests jointly with the child welfare and judiciary systems. Decisions about assessment, care, and planning should be made with sufficient information about the particular strengths and challenges of each child. Pediatricians have an important role in helping to develop an accurate, comprehensive profile of the child. To create a useful assessment, it is imperative that complete health and developmental histories are available to the pediatrician at the time of these evaluations. Pediatricians and other professionals with expertise in child development should be proactive advisors to child protection workers and judges regarding the child’s needs and best interests, particularly regarding issues of placement, permanency planning, and medical, developmental, and mental health treatment plans. For example, maintaining contact between children and their birth families is generally in the best interest of the child, and such efforts require adequate support services to improve the integrity of distressed families. However, when keeping a family together may not be in the best interest of the child, alternative placement should be based on social, medical, psychological, and developmental assessments of each child and the capabilities of the caregivers to meet those needs. Health care systems, social services systems, and judicial systems are frequently overwhelmed by their responsibilities and caseloads. Pediatricians can serve as advocates to ensure each child’s conditions and needs are evaluated and treated properly and to improve the overall operation of these systems. Availability and full utilization of resources ensure comprehensive assessment, planning, and provision of health care. Adequate knowledge about each child’s development supports better placement, custody, and treatment decisions. Improved programs for all children enhance the therapeutic effects of government-sponsored protective services (eg, foster care, family maintenance). 
The following issues should be considered when social agencies intervene and when physicians participate in caring for children in protective services. EARLY BRAIN AND CHILD DEVELOPMENT More children are entering foster care in the early years of life when brain growth and development are most active.11–14 During the first 3 to 4 years of life, the anatomic brain structures that govern personality traits, learning processes, and coping with stress and emotions are established, strengthened, and made permanent.15,16 If unused, these structures atrophy.17 The nerve connections and neurotransmitter networks that are forming during these critical years are influenced by negative environmental conditions, including lack of stimulation, child abuse, or violence within the family.18 It is known that emotional and cognitive disruptions in the early lives of children have the potential to impair brain development.18 Paramount in the lives of these children is their need for continuity with their primary attachment figures and a sense of permanence that is enhanced The recommendations in this statement do not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate. PEDIATRICS (ISSN 0031 4005). Copyright © 2000 by the American Acad-", "title": "" }, { "docid": "298b65526920c7a094f009884439f3e4", "text": "Big Data concerns massive, heterogeneous, autonomous sources with distributed and decentralized control. These characteristics make it an extreme challenge for organizations using traditional data management mechanism to store and process these huge datasets. It is required to define a new paradigm and re-evaluate current system to manage and process Big Data. In this paper, the important characteristics, issues and challenges related to Big Data management has been explored. Various open source Big Data analytics frameworks that deal with Big Data analytics workloads have been discussed. Comparative study between the given frameworks and suitability of the same has been proposed.", "title": "" }, { "docid": "5d48cd6c8cc00aec5f7f299c346405c9", "text": ".................................................................................................................................... iii Acknowledgments..................................................................................................................... iv Table of", "title": "" }, { "docid": "21d41b706b4dbac414003e2f0cc68c18", "text": "Cortical computation arises from the interaction of multiple neuronal types, including pyramidal (Pyr) cells and interneurons expressing Sst, Vip, or Pvalb. To study the circuit underlying such interactions, we imaged these four types of cells in mouse primary visual cortex (V1). Our recordings in darkness were consistent with a \"disinhibitory\" model in which locomotion activates Vip cells, thus inhibiting Sst cells and disinhibiting Pyr cells. However, the disinhibitory model failed when visual stimuli were present: locomotion increased Sst cell responses to large stimuli and Vip cell responses to small stimuli. A recurrent network model successfully predicted each cell type's activity from the measured activity of other types. Capturing the effects of locomotion, however, required allowing it to increase feedforward synaptic weights and modulate recurrent weights. 
This network model summarizes interneuron interactions and suggests that locomotion may alter cortical computation by changing effective synaptic connectivity.", "title": "" }, { "docid": "c8db1af44dccc23bf0e06dcc8c43bca6", "text": "A reconfigurable mechanism for varying the footprint of a four-wheeled omnidirectional vehicle is developed and applied to wheelchairs. The variable footprint mechanism consists of a pair of beams intersecting at a pivotal point in the middle. Two pairs of ball wheels at the diagonal positions of the vehicle chassis are mounted, respectively, on the two beams intersecting in the middle. The angle between the two beams varies actively so that the ratio of the wheel base to the tread may change. Four independent servo motors driving the four ball wheels allow the vehicle to move in an arbitrary direction from an arbitrary configuration as well as to change the angle between the two beams and thereby change the footprint. The objective of controlling the beam angle is threefold. One is to augment static stability by varying the footprint so that the mass centroid of the vehicle may be kept within the footprint at all times. The second is to reduce the width of the vehicle when going through a narrow doorway. The third is to apparently change the gear ratio relating the vehicle speed to individual actuator speeds. First the concept of the varying footprint mechanism is described, and its kinematic behavior is analyzed, followed by the three control algorithms for varying the footprint. A prototype vehicle for an application as a wheelchair platform is designed, built, and tested.", "title": "" }, { "docid": "af9945b69a5f6b33ced38b6d030c6197", "text": "This paper presents a physical random number generator for mainly cryptographical applications based on alpha decay of Americium 241. A simple and low-cost implementation is shown to detect the decay events of a radioactive source often found in common household smoke detectors. Three different algorithms for the extraction of random bits from the exponentially distributed impulses are discussed. In the concrete application a speed optimized method was chosen to gain a reasonable high data rate from a moderate radiation source (0,1 μCi). To the author’s best knowledge this technique has not been applied so far in the context of radiation-based random generators. A tentative application of statistical suits of tests indicates a high quality of the data delivered by the device.", "title": "" }, { "docid": "08d8e372c5ae4eef9848552ee87fbd64", "text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. 
In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …", "title": "" }, { "docid": "93d4d58e974e66c11c9b41d12a833da0", "text": "OBJECTIVE\nButyrate enemas may be effective in the treatment of active distal ulcerative colitis. Because colonic fermentation of Plantago ovata seeds (dietary fiber) yields butyrate, the aim of this study was to assess the efficacy and safety of Plantago ovata seeds as compared with mesalamine in maintaining remission in ulcerative colitis.\n\n\nMETHODS\nAn open label, parallel-group, multicenter, randomized clinical trial was conducted. A total of 105 patients with ulcerative colitis who were in remission were randomized into groups to receive oral treatment with Plantago ovata seeds (10 g b.i.d.), mesalamine (500 mg t.i.d.), and Plantago ovata seeds plus mesalamine at the same doses. The primary efficacy outcome was maintenance of remission for 12 months.\n\n\nRESULTS\nOf the 105 patients, 102 were included in the final analysis. After 12 months, treatment failure rate was 40% (14 of 35 patients) in the Plantago ovata seed group, 35% (13 of 37) in the mesalamine group, and 30% (nine of 30) in the Plantago ovata plus mesalamine group. Probability of continued remission was similar (Mantel-Cox test, p = 0.67; intent-to-treat analysis). Therapy effects remained unchanged after adjusting for potential confounding variables with a Cox's proportional hazards survival analysis. Three patients were withdrawn because of the development of adverse events consisting of constipation and/or flatulence (Plantago ovata seed group = 1 and Plantago ovata seed plus mesalamine group = 2). A significant increase in fecal butyrate levels (p = 0.018) was observed after Plantago ovata seed administration.\n\n\nCONCLUSIONS\nPlantago ovata seeds (dietary fiber) might be as effective as mesalamine to maintain remission in ulcerative colitis.", "title": "" } ]
scidocsrr
0e2a6fa1dcde051c2679771e108d7e84
Searching optimal product bundles by means of GA-based Engine and Market Basket Analysis
[ { "docid": "44c9526319039305edf89ce58deb6398", "text": "Networks of constraints fundamental properties and applications to picture processing Sketchpad: a man-machine graphical communication system Using auxiliary variables and implied constraints to model non-binary problems Solving constraint satisfaction problems using neural-networks C. Search Backtracking algorithms for constraint satisfaction problems; a survey", "title": "" }, { "docid": "55b405991dc250cd56be709d53166dca", "text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.", "title": "" } ]
[ { "docid": "0fc3976820ca76c630476647761f9c21", "text": "Paper Mechatronics is a novel interdisciplinary design medium, enabled by recent advances in craft technologies: the term refers to a reappraisal of traditional papercraft in combination with accessible mechanical, electronic, and computational elements. I am investigating the design space of paper mechatronics as a new hands-on medium by developing a series of examples and building a computational tool, FoldMecha, to support non-experts to design and construct their own paper mechatronics models. This paper describes how I used the tool to create two kinds of paper mechatronics models: walkers and flowers and discuss next steps.", "title": "" }, { "docid": "87a7e7fe82a5768633b606e95727244d", "text": "Hashing is fundamental to many algorithms and data structures widely used in practice. For theoretical analysis of hashing, there have been two main approaches. First, one can assume that the hash function is truly random, mapping each data item independently and uniformly to the range. This idealized model is unrealistic because a truly random hash function requires an exponential number of bits to describe. Alternatively, one can provide rigorous bounds on performance when explicit families of hash functions are used, such as 2-universal or O(1)-wise independent families. For such families, performance guarantees are often noticeably weaker than for ideal hashing.\n In practice, however, it is commonly observed that simple hash functions, including 2-universal hash functions, perform as predicted by the idealized analysis for truly random hash functions. In this paper, we try to explain this phenomenon. We demonstrate that the strong performance of universal hash functions in practice can arise naturally from a combination of the randomness of the hash function and the data. Specifially, following the large body of literature on random sources and randomness extraction, we model the data as coming from a \"block source,\" whereby each new data item has some \"entropy\" given the previous ones. As long as the (Renyi) entropy per data item is sufficiently large, it turns out that the performance when choosing a hash function from a 2-universal family is essentially the same as for a truly random hash function. We describe results for several sample applications, including linear probing, balanced allocations, and Bloom filters.", "title": "" }, { "docid": "7dc9afa44cc609a658b11a949829e2b9", "text": "To achieve security in wireless sensor networks, it is important to he able to encrypt messages sent among sensor nodes. Keys for encryption purposes must he agreed upon by communicating nodes. Due to resource constraints, achieving such key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and public-key based schemes, are not suitable for wireless sensor networks. Pre-distribution of secret keys for all pairs of nodes is not viable due to the large amount of memory used when the network size is large. Recently, a random key pre-distribution scheme and its improvements have been proposed. A common assumption made by these random key pre-distribution schemes is that no deployment knowledge is available. Noticing that in many practical scenarios, certain deployment knowledge may be available a priori, we propose a novel random key pre-distribution scheme that exploits deployment knowledge and avoids unnecessary key assignments. 
We show that the performance (including connectivity, memory usage, and network resilience against node capture) of sensor networks can be substantially improved with the use of our proposed scheme. The scheme and its detailed performance evaluation are presented in this paper.", "title": "" }, { "docid": "355d040cf7dd706f08ef4ce33d53a333", "text": "Conversational participants tend to immediately and unconsciously adapt to each other's language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner's immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don't receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males.", "title": "" }, { "docid": "409baee7edaec587727624192eab93aa", "text": "It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus highly interesting that the FN400 effect was not induced, although color manipulation of recognition memory was behaviorally shown, as seen in previous studies. 
Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.", "title": "" }, { "docid": "6ee8efea33f518d68f5582097c4c2929", "text": "The COMPOSE project aims to provide an open Marketplace for the Internet of Things as well as the necessary platform to support it. A necessary component of COMPOSE is an API that allows things, COMPOSE users and the platform to communicate. The COMPOSE API allows for things to push data to the platform, the platform to initiate asynchronous actions on the things, and COMPOSE users to retrieve and process data from the things. In this paper we present the design and implementation of the COMPOSE API, as well as a detailed description of the main key requirements that the API must satisfy. The API documentation and the source code for the platform are available online.", "title": "" }, { "docid": "56cfaf2e85696a9b42762c1f863a11ff", "text": "With an increasing inflow and outflow of users from social media, understanding the factors the drive their adoption becomes even more pressing. This paper reports on a study with 494 users of Facebook and WhatsApp. Different from traditional uses & gratifications studies that probe into typical uses of social media, we sampled users' single recent, outstanding (either satisfying or unsatisfying) experiences, based on a contemporary theoretical and methodological framework of 10 universal human needs. Using quantitative and qualitative analyses, we found WhatsApp to unlock new opportunities for intimate communications, Facebook to be characterized by primarily non-social uses, and both media to be powerful lifelogging tools. Unsatisfying experiences were primarily rooted in the tools' breach of offline social norms, as well in content fatigue and exposure to undesirable content in the case of Facebook. We discuss the implications of the findings for the design of social media. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c04cf54a40cd84961657bf50153ff68b", "text": "Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text(local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACER, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.", "title": "" }, { "docid": "35ab98f6e5b594261e52a21740c70336", "text": "Artificial Bee Colony (ABC) algorithm which is one of the most recently introduced optimization algorithms, simulates the intelligent foraging behavior of a honey bee swarm. 
Clustering analysis, used in many disciplines and applications, is an important tool and a descriptive task seeking to identify homogeneous groups of objects based on the values of their attributes. In this work, ABC is used for data clustering on benchmark problems and the performance of ABC algorithm is compared with Particle Swarm Optimization (PSO) algorithm and other nine classification techniques from the literature. Thirteen of typical test data sets from the UCI Machine Learning Repository are used to demonstrate the results of the techniques. The simulation results indicate that ABC algorithm can efficiently be used for multivariate data clustering. © 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "defb837e866948e5e092ab64476d33b5", "text": "Recent multicoil polarised pads called Double D pads (DDP) and Bipolar Pads (BPP) show excellent promise when used in lumped charging due to having single sided fields and high native Q factors. However, improvements to field leakage are desired to enable higher power transfer while keeping the leakage flux within ICNIRP levels. This paper proposes a method to reduce the leakage flux which a lumped inductive power transfer (IPT) system exhibits by modifying the ferrite structure of its pads. The DDP and BPP pads ferrite structures are both modified by extending them past the ends of the coils in each pad with the intention of attracting only magnetic flux generated by the primary pad not coupled onto the secondary pad. Simulated improved ferrite structures are validated through practical measurements.", "title": "" }, { "docid": "15e7feebdbcafc58aca3abdf9a8c093a", "text": "Aqueous solutions of lead salts (1, 2) and saturated solutions of lead hydroxide (1) have been used as stains to enhance the electron-scattering properties of components of biological materials examined in the electron microscope. Saturated solutions of lead hydroxide (1), while staining more intensely than either lead acetate or monobasic lead acetate (l , 2), form insoluble lead carbonate upon exposure to air. The avoidance of such precipitates which contaminate surfaces of sections during staining has been the stimulus for the development of elaborate procedures for exclusion of air or carbon dioxide (3, 4). Several modifications of Watson's lead hydroxide stain (1) have recently appeared (5-7). All utilize relatively high pH (approximately 12) and one contains small amounts of tartrate (6), a relatively weak complexing agent (8), in addition to lead. These modified lead stains are less liable to contaminate the surface of the section with precipitated stain products. The stain reported here differs from previous alkaline lead stains in that the chelating agent, citrate, is in sufficient excess to sequester all lead present. Lead citrate, soluble in high concentrations in basic solutions, is a chelate compound with an apparent association constant (log Ka) between ligand and lead ion of 6.5 (9). Tissue binding sites, presumably organophosphates, and other anionic species present in biological components following fixation, dehydration, and plastic embedding apparently have a greater affinity for this cation than lead citrate inasmuch as cellular and extracellular structures in the section sequester lead from the staining solution. Alkaline lead citrate solutions are less likely to contaminate sections, as no precipitates form when droplets of fresh staining solution are exposed to air for periods of up to 30 minutes. 
The resultant staining of the sections is of high intensity in sections of Aralditeor Epon-embedded material. Cytoplasmic membranes, ribosomes, glycogen, and nuclear material are stained (Figs. 1 to 3). STAIN SOLUTION: Lead citrate is prepared by", "title": "" }, { "docid": "39bf990d140eb98fa7597de1b6165d49", "text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.", "title": "" }, { "docid": "b575cc4b98ab5c0f704b92e0bf50ed5f", "text": "The emerging Asian market of Korean broadcasting programs is pushing forward a new phase of cultural marketing. The Korean trend in Asia brought issues such as cultural proximity, and the issues have been analyzed by structural analysis. This article suggests which kind of program Asians adopted as the favorites based on the factors of cultural frame in the aspect of performance. The results of analysis shows that Korean programs satisfy Asian emotional needs as being easy to assimilate to similar life styles, cultural proximity and expressiveness. The preference of Korean programs shows that Asians express sympathy for Asian culture frames including family morals, high morality, love and sacrifice. Additionally, as a case study this paper analyzes the characteristics of the most favorite Korean programs in Asia using five categories: harmony, tension, compromise, participation and agreement. The result of the case study showed that Asian people have a similar culture frame and like stories dealing with love, harmony oriented stories, stories with tension in daily life, low participation and the agreement and reinforcement with their traditional values.", "title": "" }, { "docid": "87a296ad9c3dd7b32b7ed876b9132fb2", "text": "Reservoir Computing is an attractive paradigm of recurrent neural network architecture, due to the ease of training and existing neuromorphic implementations. Successively applied on speech recognition and time series forecasting, few works have so far studied the behavior of such networks on computer vision tasks. Therefore we decided to investigate the ability of Echo State Networks to classify the digits of the MNIST database. We show that even if ESNs are not able to outperform state-of-the-art convolutional networks, they allow low error thanks to a suitable preprocessing of images. 
The best performance is obtained with a large reservoir of 4,000~neurons, but committees of smaller reservoirs are also appealing and might be further investigated.", "title": "" }, { "docid": "b7433fd28642e8ae9a6532a01cfe5301", "text": "Self-organizing sensor networks are one of the systems that would benefit from the new local positioning features offered by the new generation of wireless technologies. Location dependent sensor data transfers could be optimized by means of local positioning services. Many start-up companies have available proprietary positioning systems meeting the unique requirements of each application. Therefore, the use of a standard like Bluetooth/spl trade/ is a step towards a universal solution. Our system provides location services for mobile industrial and biomedical Bluetooth/spl trade/ -enabled sensors. The RSSI distance estimation, together with a GPS-like triangulation algorithm, lead to a 3 m error positioning system with remote and self positioning topologies. A real time positioning tracking system for one mobile sensor is provided in this article.", "title": "" }, { "docid": "11229bf95164064f954c25681c684a16", "text": "This article proposes integrating the insights generated by framing, priming, and agenda-setting research through a systematic effort to conceptualize and understand their larger implications for political power and democracy. The organizing concept is bias, that curiously undertheorized staple of public discourse about the media. After showing how agenda setting, framing and priming fit together as tools of power, the article connects them to explicit definitions of news slant and the related but distinct phenomenon of bias. The article suggests improved measures of slant and bias. Properly defined and measured, slant and bias provide insight into how the media influence the distribution of power: who gets what, when, and how. Content analysis should be informed by explicit theory linking patterns of framing in the media text to predictable priming and agenda-setting effects on audiences. When unmoored by such underlying theory, measures and conclusions of media bias are suspect.", "title": "" }, { "docid": "585c589cdab52eaa63186a70ac81742d", "text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). 
Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).", "title": "" }, { "docid": "58aecdf120e7ba887354f9c1a40200b2", "text": "The ant colony optimization (ACO) algorithms are multi-agent systems in which the behaviour of each ant is inspired by the foraging behaviour of real ants to solve optimization problem. This paper presents the ACO based algorithm to find global minimum. Algorithm is based on that each ant searches only around the best solution of the previous iteration. This algorithm was experimented on test problems, and successful results were obtained. The algorithm was compared with other methods which had been experimented on the same test problems, and observed to be better. 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "79fa41307c5d13d355f05f8aad30e2a2", "text": "Millimeter-wave (mmw) frequency bands, especially 60 GHz unlicensed band, are considered as a promising solution for gigabit short range wireless communication systems. IEEE standard 802.11ad, also known as WiGig, is standardized for the usage of the 60 GHz unlicensed band for wireless local area networks (WLANs). By using this mmw WLAN, multi-Gbps rate can be achieved to support bandwidthintensive multimedia applications. Exhaustive search along with beamforming (BF) is usually used to overcome 60 GHz channel propagation loss and accomplish data transmissions in such mmw WLANs. Because of its short range transmission with a high susceptibility to path blocking, multiple number of mmw access points (APs) should be used to fully cover a typical target environment for future high capacity multi-Gbps WLANs. Therefore, coordination among mmw APs is highly needed to overcome packet collisions resulting from un-coordinated exhaustive search BF and to increase total capacity of mmw WLANs. In this paper, we firstly give the current status of mmw WLANs with our developed WiGig AP prototype. Then, we highlight the great need for coordinated transmissions among mmw APs as a key enabler for future high capacity mmw WLANs. Two different types of coordinated mmw WLAN architecture are introduced. One is distributed antenna type architecture to realize centralized coordination, while the other is autonomous coordination with the assistance of legacy Wi-Fi signaling. Moreover, two heterogeneous network (HetNet) architectures are also introduced to efficiently extend the coordinated mmw WLANs to be used for future 5th Generation (5G) cellular networks. 
key words: millimeter wave, IEEE802.11ad, coordinated mmw WLAN, 5G cellular networks, heterogeneous networks", "title": "" }, { "docid": "3b2db7bd323243676cc24b2af506564b", "text": "Scenarios are possible future states of the world that represent alternative plausible conditions under different assumptions. Often, scenarios are developed in a context relevant to stakeholders involved in their applications since the evaluation of scenario outcomes and implications can enhance decision-making activities. This paper reviews the state-of-the-art of scenario development and proposes a formal approach to scenario development in environmental decision-making. The discussion of current issues in scenario studies includes advantages and obstacles in utilizing a formal scenario development framework, and the different forms of uncertainty inherent in scenario development, as well as how they should be treated. An appendix for common scenario terminology has been attached for clarity. Major recommendations for future research in this area include proper consideration of uncertainty in scenario studies in particular in relation to stakeholder relevant information, construction of scenarios that are more diverse in nature, and sharing of information and resources among the scenario development research community. 2008 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
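The artificial bee colony (ABC) clustering passage in the negative-passage list above treats each candidate solution as a set of cluster centroids that employed, onlooker, and scout bees refine against a clustering objective. The following is only an illustrative sketch of that idea, not the cited study's implementation: the 1/(1+SSE) fitness, the colony size, cycle count, abandonment limit, and the synthetic 2-D data are all assumptions introduced here.

```python
# Minimal artificial-bee-colony-style clustering sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sse(centroids, X):
    # Sum of squared distances from each point to its nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

def fitness(centroids, X):
    return 1.0 / (1.0 + sse(centroids, X))

def abc_cluster(X, k=3, colony=20, cycles=200, limit=20):
    dim = X.shape[1]
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Each food source is one candidate set of k centroids.
    foods = rng.uniform(lo, hi, size=(colony, k, dim))
    trials = np.zeros(colony, dtype=int)

    def try_neighbor(i):
        j = rng.integers(colony)
        phi = rng.uniform(-1, 1, size=(k, dim))
        cand = np.clip(foods[i] + phi * (foods[i] - foods[j]), lo, hi)
        if fitness(cand, X) > fitness(foods[i], X):   # greedy selection
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(colony):                        # employed bees
            try_neighbor(i)
        fits = np.array([fitness(f, X) for f in foods])
        for i in rng.choice(colony, size=colony, p=fits / fits.sum()):
            try_neighbor(i)                            # onlooker bees
        worn = np.argmax(trials)                       # scout bee
        if trials[worn] > limit:
            foods[worn] = rng.uniform(lo, hi, size=(k, dim))
            trials[worn] = 0

    best = max(range(colony), key=lambda i: fitness(foods[i], X))
    return foods[best]

# Toy usage with synthetic 2-D blobs.
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 3], [0, 4])])
print(abc_cluster(X, k=3))
```

The scout step is what the passage's comparison against PSO hinges on: stagnant candidates are reinitialized rather than merely perturbed, which is one plausible reading of why ABC was reported to cluster the benchmark data efficiently.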
085ebefe816232938cc19fd0651aace6
72 dB SNR, 240 Hz Frame Rate Readout IC With Differential Continuous-Mode Parallel Architecture for Larger Touch-Screen Panel Applications
[ { "docid": "e191dc25d17c79dbbfc5e6e09ad4e3e0", "text": "Capacitive touch-screen technology introduces new concepts to user interfaces, such as multi-touch, pinch zoom-in/out gestures, thus expanding the smartphone market. However, capacitive touch-screen technology still suffers from performance degradation like a low frame scan rate and poor accuracy, etc. One of the key performance factors is the immunity to external noise, which intrudes randomly into the touch-screen system. HUM, display noise, and SMPS are such noise sources. The main electrical power source produces HUM, one of the most important sources of noise, which has a 50 or 60Hz component. Display noise is emitted when an LCD or OLED is driven by the internal timing controller, which generates the driving signal in the tens of kHz range. The touch performance of On-Cell or In-Cell touch displays is seriously affected by this kind of noise, because the distance between the display pixel layer and the capacitive touchscreen panel is getting smaller. SMPS is another noise source that ranges up to 300kHz. The charger for a smart-phone, the USB port in a computer, a tri-phosphor fluorescent light bulb are all examples of sources of SMPS. There have been many attempts to remove such noise. Amplitude modulation with frequency hopping is proposed in [1]. However, when the noise environment changes, this method needs recalibration, resulting in non-constant touch response time. Another method tries to filter the noise from the display [2], but it does not remove other noise sources like HUM or SMPS.", "title": "" }, { "docid": "53ada9fce2d0af2208c4c312870a2912", "text": "This paper describes a CMOS capacitive sensing amplifier for a monolithic MEMS accelerometer fabricated by post-CMOS surface micromachining. This chopper stabilized amplifier employs capacitance matching with optimal transistor sizing to minimize sensor noise floor. Offsets due to sensor and circuit are reduced by ac offset calibration and dc offset cancellation based on a differential difference amplifier (DDA). Low-duty-cycle periodic reset is used to establish robust dc bias at the sensing electrodes with low noise. This work shows that continuous-time voltage sensing can achieve lower noise than switched-capacitor charge integration for sensing ultra-small capacitance changes. A prototype accelerometer integrated with this circuit achieves 50g Hz acceleration noise floor and 0.02-aF Hz capacitance noise floor while chopped at 1 MHz.", "title": "" } ]
[ { "docid": "4ee5931bf57096913f7e13e5da0fbe7e", "text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.", "title": "" }, { "docid": "4aa17982590e86fea90267e4386e2ef1", "text": "There are many promising psychological interventions on the horizon, but there is no clear methodology for preparing them to be scaled up. Drawing on design thinking, the present research formalizes a methodology for redesigning and tailoring initial interventions. We test the methodology using the case of fixed versus growth mindsets during the transition to high school. Qualitative inquiry and rapid, iterative, randomized \"A/B\" experiments were conducted with ~3,000 participants to inform intervention revisions for this population. Next, two experimental evaluations showed that the revised growth mindset intervention was an improvement over previous versions in terms of short-term proxy outcomes (Study 1, N=7,501), and it improved 9th grade core-course GPA and reduced D/F GPAs for lower achieving students when delivered via the Internet under routine conditions with ~95% of students at 10 schools (Study 2, N=3,676). Although the intervention could still be improved even further, the current research provides a model for how to improve and scale interventions that begin to address pressing educational problems. It also provides insight into how to teach a growth mindset more effectively.", "title": "" }, { "docid": "f765a0c29c6d553ae1c7937b48416e9c", "text": "Although the topic of psychological well-being has generated considerable research, few studies have investigated how adults themselves define positive functioning. To probe their conceptions of well-being, interviews were conducted with a community sample of 171 middle-aged (M = 52.5 years, SD = 8.7) and older (M = 73.5 years, SD = 6.1) men and women. Questions pertained to general life evaluations, past life experiences, conceptions of well-being, and views of the aging process. Responses indicated that both age groups and sexes emphasized an \"others orientation\" (being a caring, compassionate person, and having good relationships) in defining well-being. Middle-aged respondents stressed self-confidence, self-acceptance, and self-knowledge, whereas older persons cited accepting change as an important quality of positive functioning. In addition to attention to positive relations with others as an index of well-being, lay views pointed to a sense of humor, enjoying life, and accepting change as criteria of successful aging.", "title": "" }, { "docid": "0acfa73c168328e33a92be4cc9de9c61", "text": "This article reviews recent advances in applying natural language processing NLP to Electronic Health Records EHRs for computational phenotyping. 
NLP-based computational phenotyping has numerous applications including diagnosis categorization, novel phenotype discovery, clinical trial screening, pharmacogenomics, drug-drug interaction DDI, and adverse drug event ADE detection, as well as genome-wide and phenome-wide association studies. Significant progress has been made in algorithm development and resource construction for computational phenotyping. Among the surveyed methods, well-designed keyword search and rule-based systems often achieve good performance. However, the construction of keyword and rule lists requires significant manual effort, which is difficult to scale. Supervised machine learning models have been favored because they are capable of acquiring both classification patterns and structures from data. Recently, deep learning and unsupervised learning have received growing attention, with the former favored for its performance and the latter for its ability to find novel phenotypes. Integrating heterogeneous data sources have become increasingly important and have shown promise in improving model performance. Often, better performance is achieved by combining multiple modalities of information. Despite these many advances, challenges and opportunities remain for NLP-based computational phenotyping, including better model interpretability and generalizability, and proper characterization of feature relations in clinical narratives.", "title": "" }, { "docid": "b6b63aa72904f9b7e24e3750c0db12f0", "text": "The explosion of the learning materials in personal learning environments has caused difficulties to locate appropriate learning materials to learners. Personalized recommendations have been used to support the activities of learners in personal learning environments and this technology can deliver suitable learning materials to learners. In order to improve the quality of recommendations, this research considers the multidimensional attributes of material, rating of learners, and the order and sequential patterns of the learner's accessed material in a unified model. The proposed approach has two modules. In the sequential-based recommendation module, latent patterns of accessing materials are discovered and presented in two formats including the weighted association rules and the compact tree structure (called Pattern-tree). In the attribute-based module, after clustering the learners using latent patterns by K-means algorithm, the learner preference tree (LPT) is introduced to consider the multidimensional attributes of materials, rating of learners, and also order of the accessed materials. The mixed, weighted, and cascade hybrid methods are employed to generate the final combined recommendations. The experiments show that the proposed approach outperforms the previous algorithms in terms of precision, recall, and intra-list similarity measure. The main contributions are improvement of the recommenda-tions' quality and alleviation of the sparsity problem by combining the contextual information, including order and sequential patterns of the accessed material, rating of learners, and the multidimensional attributes of materials. With the explosion of learning materials available on personal learning environments (PLEs), it is difficult for learners to discover the most appropriate materials according to keyword searching method. One way to address this challenge is the use of recom-mender systems [16]. 
In addition, up to very recent years, several researches have expressed the need for personalization in e-learning environments. In fact, one of the new forms of personalization in e-learning environments is to provide recommendations to learners to support and help them through the e-learning process [19]. According to the strategies applied, recommender systems can be segmented into three major categories: content-based, collabo-rative, and hybrid recommendation [1]. Hybrid recommendation mechanisms attempt to deal with some of the limitations and overcome the drawbacks of pure content-based approach and pure collaborative approach by combining the two approaches. The majority of the traditional recommendation algorithms have been developed for e-commerce applications, which are unable to cover the entire requirements of learning environments. One of these drawbacks is that they do not consider the learning process in their recommendation …", "title": "" }, { "docid": "c4b5c4c94faa6e77486a95457cdf502f", "text": "In this paper, we implement an optical fiber communication system as an end-to-end deep neural network, including the complete chain of transmitter, channel model, and receiver. This approach enables the optimization of the transceiver in a single end-to-end process. We illustrate the benefits of this method by applying it to intensity modulation/direct detection (IM/DD) systems and show that we can achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold. We model all componentry of the transmitter and receiver, as well as the fiber channel, and apply deep learning to find transmitter and receiver configurations minimizing the symbol error rate. We propose and verify in simulations a training method that yields robust and flexible transceivers that allow—without reconfiguration—reliable transmission over a large range of link dispersions. The results from end-to-end deep learning are successfully verified for the first time in an experiment. In particular, we achieve information rates of 42 Gb/s below the HD-FEC threshold at distances beyond 40 km. We find that our results outperform conventional IM/DD solutions based on two- and four-level pulse amplitude modulation with feedforward equalization at the receiver. Our study is the first step toward end-to-end deep learning based optimization of optical fiber communication systems.", "title": "" }, { "docid": "4b10247d93eeda55d35e43e232611b4c", "text": "To make the business accessible to a large number of customers worldwide, many companies small and big have put up their presence on the internet. Online businesses gave birth to e-commerce platforms which in turn use digital modes of transaction such as credit-card, debit card etc. This kind of digital transaction attracted millions of users to transact on the internet. 
Along came the risk of online credit card frauds.", "title": "" }, { "docid": "3428f44611fad7c42b621da9384008a0", "text": "In this issue, “Best of the Web” presents the modified National Institute of Standards and Technology (MNIST) resources, consisting of a collection of handwritten digit images used extensively in optical character recognition and machine learning research.", "title": "" }, { "docid": "4688caf6a80463579f293b2b762da5b5", "text": "To accelerate the implementation of network functions/middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan/latency of the overall VNFs' schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators' revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%-20% in the simulated scenarios.", "title": "" }, { "docid": "978deffd9337932a217dde27130be0e4", "text": "Semantic memory includes all acquired knowledge about the world and is the basis for nearly all human activity, yet its neurobiological foundation is only now becoming clear. Recent neuroimaging studies demonstrate two striking results: the participation of modality-specific sensory, motor, and emotion systems in language comprehension, and the existence of large brain regions that participate in comprehension tasks but are not modality-specific. These latter regions, which include the inferior parietal lobe and much of the temporal lobe, lie at convergences of multiple perceptual processing streams. These convergences enable increasingly abstract, supramodal representations of perceptual experience that support a variety of conceptual functions including object recognition, social cognition, language, and the remarkable human capacity to remember the past and imagine the future.", "title": "" }, { "docid": "a28267004a26f08550d2b2b129fff860", "text": "Falls accounted for 5.9% of the childhood deaths due to trauma in a review of the medical examiner's files in a large urban county. 
Falls represented the seventh leading cause of traumatic death in all children 15 years of age or younger, but the third leading cause of death in children 1 to 4 years old. The mean age of those with accidental falls was 2.3 years, which is markedly younger than that seen in hospital admission series, suggesting that infants are much more likely to die from a fall than older children. Forty-one per cent of the deaths occurred from \"minor\" falls such as falls from furniture or while playing; 50% were falls from a height of one story or greater; the remainder were falls down stairs. Of children falling from less than five stories, death was due to a lethal head injury in 86%. Additionally, 61.3% of the children with head injuries had mass lesions which would have required acute neurosurgical intervention. The need for an organized pediatric trauma system is demonstrated as more than one third of the children were transferred to another hospital, with more than half of these deteriorating during the delay. Of the patients with \"minor\" falls, 38% had parental delay in seeking medical attention, with deterioration of all. The trauma system must also incorporate the education of parents and medical personnel to the potential lethality of \"minor\" falls in infants and must legislate injury prevention programs.", "title": "" }, { "docid": "5a770f72b9d47f4ce654cdb58919b925", "text": "DNA mismatch repair (MMR) is one of the biological pathways, which plays a critical role in DNA homeostasis, primarily by repairing base-pair mismatches and insertion/deletion loops that occur during DNA replication. MMR also takes part in other metabolic pathways and regulates cell cycle arrest. Defects in MMR are associated with genomic instability, predisposition to certain types of cancers and resistance to certain therapeutic drugs. Moreover, genetic and epigenetic alterations in the MMR system demonstrate a significant relationship with human fertility and related treatments, which helps us to understand the etiology and susceptibility of human infertility. Alterations in the MMR system may also influence the health of offspring conceived by assisted reproductive technology in humans. However, further studies are needed to explore the specific mechanisms by which the MMR system may affect human infertility. This review addresses the physiological mechanisms of the MMR system and associations between alterations of the MMR system and human fertility and related treatments, and potential effects on the next generation. 简要概括DNA 损伤修复系统在人体中的作用和机制,并探讨其改变与人类生殖能力以及通过辅助生殖技术诞生的子代之间的相互影响。希望更多相关工作的进行能够为人类不孕症的预防、诊断和治疗工作建立一个更好的医疗体系。", "title": "" }, { "docid": "4f57590f8bbf00d35b86aaa1ff476fc0", "text": "Pedestrian detection has been used in applications such as car safety, video surveillance, and intelligent vehicles. In this paper, we present a pedestrian detection scheme using HOG, LUV and optical flow features with AdaBoost Decision Stump classifier. Our experiments on Caltech-USA pedestrian dataset show that the proposed scheme achieves promising results of about 16.7% log-average miss rate.", "title": "" }, { "docid": "c15f36dccebee50056381c41e6ddb2dc", "text": "Instance-level object segmentation is an important yet under-explored task. Most of state-of-the-art methods rely on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating reliable region proposals itself is a quite challenging and unsolved task. 
In this work, we propose a Proposal-Free Network (PFN) to address the instance-level object segmentation problem, which outputs the numbers of instances of different categories and the pixel-level information on i) the coordinates of the instance bounding box each pixel belongs to, and ii) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate the effectiveness of the proposed PFN solution without relying on any proposal generation methods.", "title": "" }, { "docid": "cccc206a025f6ae2a47a4068b6ded4c6", "text": "Most existing methods for audio sentiment analysis use automatic speech recognition to convert speech to text, and feed the textual input to text-based sentiment classifiers. This study shows that such methods may not be optimal, and proposes an alternate architecture where a single keyword spotting system (KWS) is developed for sentiment detection. In the new architecture, the text-based sentiment classifier is utilized to automatically determine the most powerful sentiment-bearing terms, which is then used as the term list for KWS. In order to obtain a compact yet powerful term list, a new method is proposed to reduce text-based sentiment classifier model complexity while maintaining good classification accuracy. Finally, the term list information is utilized to build a more focused language model for the speech recognition system. The result is a single integrated solution which is focused on vocabulary that directly impacts classification. The proposed solution is evaluated on videos from YouTube.com and UT-Opinion corpus (which contains naturalistic opinionated audio collected in real-world conditions). Our experimental results show that the KWS based system significantly outperforms the traditional architecture in difficult practical tasks.", "title": "" }, { "docid": "5f6d142860a4bd9ff1fa9c4be9f17890", "text": "Local conditioning (LC) is an exact algorithm for computing probability in Bayesian networks, developed as an extension of Kim and Pearl’s algorithm for singly-connected networks. A list of variables associated to each node guarantees that only the nodes inside a loop are conditioned on the variable which breaks it. The main advantage of this algorithm is that it computes the probability directly on the original network instead of building a cluster tree, and this can save time when debugging a model and when the sparsity of evidence allows a pruning of the network. The algorithm is also advantageous when some families in the network interact through AND/OR gates. A parallel implementation of the algorithm with a processor for each node is possible even in the case of multiply-connected networks.", "title": "" }, { "docid": "be29c412c17f9a87829cfe86fd3b1040", "text": "Nowadays there is a continuously increasing worldwide concern for the development of wastewater treatment technologies. The utilization of iron oxide nanomaterials has received much attention due to their unique properties, such as extremely small size, high surface-area-to-volume ratio, surface modifiability, excellent magnetic properties and great biocompatibility. 
A range of environmental clean-up technologies have been proposed in wastewater treatment which applied iron oxide nanomaterials as nanosorbents and photocatalysts. Moreover, iron oxide based immobilization technology for enhanced removal efficiency tends to be an innovative research point. This review outlined the latest applications of iron oxide nanomaterials in wastewater treatment, and gaps which limited their large-scale field applications. The outlook for potential applications and further challenges, as well as the likely fate of nanomaterials discharged to the environment were discussed.", "title": "" }, { "docid": "362301e0a25d8e14054b2eee20d9ba31", "text": "Preterm birth is “a birth which takes place after at least 20, but less than 37, completed weeks of gestation. This includes both live births, and stillbirths” [15]. Preterm birth may cause problems such as perinatal mortality, serious neonatal morbidity and moderate to severe childhood disability. Between 6-10% of all births in Western countries are preterm and preterm deaths are the cause for more than two-third of all perinatal deaths [9]. While the recent advances in neonatal medicine has greatly increase the chance of survival of infants born after 20 weeks of gestation, these infants still frequently suffer from lifelong handicaps, and their care can exceed a million dollars during the first year of life [5 as cited in 6]. As a first step for preventing preterm birth, decision support tools are needed to help doctors predict preterm birth [6].", "title": "" }, { "docid": "c4d0dc9ef6e982fbfd218fb7b4c92f68", "text": "In this paper, we present new theoretical and experimental results for bidirectional A∗ search. Unlike most previous research on this topic, our results do not require assumptions of either consistent or balanced heuristic functions for the search. Our theoretical work examines new results on the worst-case number of node expansions for inconsistent heuristic functions with bounded estimation errors. Additionally, we consider several alternative termination criteria in order to more quickly terminate the bidirectional search, and we provide worst-case approximation bounds for our suggested criteria. We prove that our approximation bounds are purely additive in nature (a general improvement over previous multiplicative approximations). Experimental evidence on large-scale road networks suggests that the errors introduced are truly quite negligible in practice, while the performance gains are significant.", "title": "" }, { "docid": "d24331326c59911f9c1cdc5dd5f14845", "text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The soft-switching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zero-voltage switching condition at turn off. It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters", "title": "" } ]
scidocsrr
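Among the negative passages of the record above, the audio sentiment passage describes building a keyword-spotting term list from the most powerful sentiment-bearing terms of a text classifier. The snippet below is a rough sketch of that selection step only, under assumptions that are not taken from the cited work: a TF-IDF plus logistic-regression classifier, a four-sentence toy corpus, and a term-list size of five.

```python
# Illustrative sketch: derive a compact keyword-spotting term list from the
# weights of a text sentiment classifier (toy data, assumed model choice).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "i really love this movie, wonderful acting",
    "great plot and a fantastic soundtrack",
    "terrible pacing, i hated every minute",
    "boring, predictable and a waste of time",
]
labels = [1, 1, 0, 0]  # 1 = positive sentiment, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Rank vocabulary terms by the magnitude of their learned weight and keep the
# top-N as the list a keyword spotter would listen for in the audio stream.
weights = clf.coef_[0]
vocab = np.array(vec.get_feature_names_out())
top_n = 5
order = np.argsort(-np.abs(weights))[:top_n]
term_list = [(vocab[i], float(weights[i])) for i in order]
print(term_list)
```

In practice the term list would be drawn from a much larger labelled corpus; the point of the sketch is only that a small, high-weight vocabulary can stand in for full speech transcription in the architecture the passage outlines.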
cf4f9621794aa42dc8e18d986bc3c6a5
Text to 3D Scene Generation with Rich Lexical Grounding
[ { "docid": "9eb701e36cb353643b8fdd773dff387e", "text": "As robots become more ubiquitous and capable, it becomes ever more important for untrained users to easily interact with them. Recently, this has led to study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to the physical world. We present an approach for joint learning of language and perception models for grounded attribute induction. The perception model includes classifiers for physical characteristics and a language model based on a probabilistic categorial grammar that enables the construction of compositional meaning representations. We evaluate on the task of interpreting sentences that describe sets of objects in a physical workspace, and demonstrate accurate task performance and effective latent-variable concept induction in physical grounded scenes.", "title": "" } ]
[ { "docid": "71e786ccfc57ad62e90dd4a7b85cbedd", "text": "Studies addressing behavioral functions of dopamine (DA) in the nucleus accumbens septi (NAS) are reviewed. A role of NAS DA in reward has long been suggested. However, some investigators have questioned the role of NAS DA in rewarding effects because of its role in aversive contexts. As findings supporting the role of NAS DA in mediating aversively motivated behaviors accumulate, it is necessary to accommodate such data for understanding the role of NAS DA in behavior. The aim of the present paper is to provide a unifying interpretation that can account for the functions of NAS DA in a variety of behavioral contexts: (1) its role in appetitive behavioral arousal, (2) its role as a facilitator as well as an inducer of reward processes, and (3) its presently undefined role in aversive contexts. The present analysis suggests that NAS DA plays an important role in sensorimotor integrations that facilitate flexible approach responses. Flexible approach responses are contrasted with fixed instrumental approach responses (habits), which may involve the nigro-striatal DA system more than the meso-accumbens DA system. Functional properties of NAS DA transmission are considered in two stages: unconditioned behavioral invigoration effects and incentive learning effects. (1) When organisms are presented with salient stimuli (e.g., novel stimuli and incentive stimuli), NAS DA is released and invigorates flexible approach responses (invigoration effects). (2) When proximal exteroceptive receptors are stimulated by unconditioned stimuli, NAS DA is released and enables stimulus representations to acquire incentive properties within specific environmental context. It is important to make a distinction that NAS DA is a critical component for the conditional formation of incentive representations but not the retrieval of incentive stimuli or behavioral expressions based on over-learned incentive responses (i.e., habits). Nor is NAS DA essential for the cognitive perception of environmental stimuli. Therefore, even without normal NAS DA transmission, the habit response system still allows animals to perform instrumental responses given that the tasks take place in fixed environment. Such a role of NAS DA as an incentive-property constructor is not limited to appetitive contexts but also aversive contexts. This dual action of NAS DA in invigoration and incentive learning may explain the rewarding effects of NAS DA as well as other effects of NAS DA in a variety of contexts including avoidance and unconditioned/conditioned increases in open-field locomotor activity. Particularly, the present hypothesis offers the following interpretation for the finding that both conditioned and unconditioned aversive stimuli stimulate DA release in the NAS: NAS DA invigorates approach responses toward 'safety'. Moreover, NAS DA modulates incentive properties of the environment so that organisms emit approach responses toward 'safety' (i.e., avoidance responses) when animals later encounter similar environmental contexts. There may be no obligatory relationship between NAS DA release and positive subjective effects, even though these systems probably interact with other brain systems which can mediate such effects. 
The present conceptual framework may be valuable in understanding the dynamic interplay of NAS DA neurochemistry and behavior, both normal and pathophysiological.", "title": "" }, { "docid": "41b6a43f720fc67a3bf0b8136d7a8db9", "text": "☆ The authors would like to acknowledge the financial ash Research Graduate School (MRGS) and the Faculty Monash University. ☆☆The authors would like to thank the two anonymou and invaluable feedback. ⁎ Corresponding author at: Department of Marketin 197, Caulfield East, Victoria, 3145. Tel.: +61 3 9903 256 E-mail addresses: munyaradzi.nyadzayo@monash.ed margaret.matanda@monash.edu (M.J. Matanda), michae (M.T. Ewing). 1 Tel.: +61 3 990 31286. 2 Tel.: +61 3 990 44021.", "title": "" }, { "docid": "be80f1f3411725aa5105f38721735616", "text": "The plethora of biomedical relations which are embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical focuses were restricted on traditional machine learning techniques. However, these methods are susceptible to the issues of \"vocabulary gap\" and data sparseness and the unattainable automation process in feature extraction. To address aforementioned issues, in this work, we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model has the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions in word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For DDI task, our system achieved an overall f-score of 70.2% compared to the standard linear SVM based system (e.g., 67.0%) on DDIExtraction 2013 challenge dataset. And for PPI task, we evaluated our system on Aimed and BioInfer PPI corpus; our system exceeded the state-of-art ensemble SVM system by 2.7% and 5.6% on f-scores.", "title": "" }, { "docid": "373d3549865647bd469b160d60db71c8", "text": "The encoding of time and its binding to events are crucial for episodic memory, but how these processes are carried out in hippocampal–entorhinal circuits is unclear. Here we show in freely foraging rats that temporal information is robustly encoded across time scales from seconds to hours within the overall population state of the lateral entorhinal cortex. Similarly pronounced encoding of time was not present in the medial entorhinal cortex or in hippocampal areas CA3–CA1. When animals’ experiences were constrained by behavioural tasks to become similar across repeated trials, the encoding of temporal flow across trials was reduced, whereas the encoding of time relative to the start of trials was improved. The findings suggest that populations of lateral entorhinal cortex neurons represent time inherently through the encoding of experience. This representation of episodic time may be integrated with spatial inputs from the medial entorhinal cortex in the hippocampus, allowing the hippocampus to store a unified representation of what, where and when. 
Temporal information that is useful for episodic memory is encoded across a wide range of timescales in the lateral entorhinal cortex, arising inherently from its representation of ongoing experience.", "title": "" }, { "docid": "340f4f9336dd0884bb112345492b47f9", "text": "Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary. We use a novel sentence-level policy gradient method to bridge the nondifferentiable computation between these two neural networks in a hierarchical way, while maintaining language fluency. Empirically, we achieve the new state-of-theart on all metrics (including human evaluation) on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores. Moreover, by first operating at the sentence-level and then the word-level, we enable parallel decoding of our neural generative model that results in substantially faster (10-20x) inference speed as well as 4x faster training convergence than previous long-paragraph encoder-decoder models. We also demonstrate the generalization of our model on the test-only DUC2002 dataset, where we achieve higher scores than a state-of-the-art model.", "title": "" }, { "docid": "28c142db30818e7e5012074e31cfd1c3", "text": "The design of an IC on-chip oscillator including a temperature compensation circuitry, a robust spread reduction technique and digital trimming is described. The IC oscillator provides a 12.8MHz clock signal with a frequency spread of ±25% before the 8-bits digital trimming. After centering the oscillator at the target frequency, a temperature compensated voltage and current reference circuit allows for less than ±5% frequency variation when operating from 3 to 5V of power supply and from -40 to 125°C of temperature range. The oscillator is implemented in a 0.5 µm CMOS technology, occupies an area of 420x440 µm2 and dissipates less than 400 µW at 3V of supply without requiring any external reference or components.", "title": "" }, { "docid": "6b73e2bf2c8de87e9ab749b1d72d3515", "text": "We present a robust framework for estimating non-rigid 3D shape and motion in video sequences. Given an input video sequence, and a user-specified region to reconstruct, the algorithm automatically solves for the 3D time-varying shape and motion of the object, and estimates which pixels are outliers, while learning all system parameters, including a PDF over non-rigid deformations. There are no user-tuned parameters (other than initialization); all parameters are learned by maximizing the likelihood of the entire image stream. We apply our method to both rigid and non-rigid shape reconstruction, and demonstrate it in challenging cases of occlusion and variable illumination.", "title": "" }, { "docid": "d9eed063ea6399a8f33c6cbda3a55a62", "text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. 
The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. r 2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "e971fd6eac427df9a68f10cad490b2db", "text": "We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the 'PICO' elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine.", "title": "" }, { "docid": "a645f2b68ced60099d8ae93f79e1714a", "text": "The purpose of this study was to examine the extent to which fundamental movement skills and physical fitness scores assessed in early adolescence predict self-reported physical activity assessed 6 years later. The sample comprised 333 (200 girls, 133 boys; M age = 12.41) students. The effects of previous physical activity, sex, and body mass index (BMI) were controlled in the main analyses. Adolescents' fundamental movement skills, physical fitness, self-report physical activity, and BMI were collected at baseline, and their self-report energy expenditure (metabolic equivalents: METs) and intensity of physical activity were collected using the International Physical Activity Questionnaire 6 years later. Results showed that fundamental movement skills predicted METs, light, moderate, and vigorous intensity physical activity levels, whereas fitness predicted METs, moderate, and vigorous physical activity levels. Hierarchical regression analyses also showed that after controlling for previous levels of physical activity, sex, and BMI, the size of the effect of fundamental movement skills and physical fitness on energy expenditure and physical activity intensity was moderate (R(2) change between 0.06 and 0.15), with the effect being stronger for high intensity physical activity.", "title": "" }, { "docid": "1d5cd4756e424f3d282545f029c1e9bb", "text": "Anomaly detection systems deployed for monitoring in oil and gas industries are mostly WSN based systems or SCADA systems which all suffer from noteworthy limitations. WSN based systems are not homogenous or incompatible systems. They lack coordinated communication and transparency among regions and processes. On the other hand, SCADA systems are expensive, inflexible, not scalable, and provide data with long delay. 
In this paper, a novel IoT based architecture is proposed for Oil and gas industries to make data collection from connected objects as simple, secure, robust, reliable and quick. Moreover, it is suggested that how this architecture can be applied to any of the three categories of operations, upstream, midstream and downstream. This can be achieved by deploying a set of IoT based smart objects (devices) and cloud based technologies in order to reduce complex configurations and device programming. Our proposed IoT architecture supports the functional and business requirements of upstream, midstream and downstream oil and gas value chain of geologists, drilling contractors, operators, and other oil field services. Using our proposed IoT architecture, inefficiencies and problems can be picked and sorted out sooner ultimately saving time and money and increasing business productivity.", "title": "" }, { "docid": "6bae81e837f4a498ae4c814608aac313", "text": "person’s ability to focus on his or her primary task. Distractions occur especially in mobile environments, because walking, driving, or other real-world interactions often preoccupy the user. A pervasivecomputing environment that minimizes distraction must be context aware, and a pervasive-computing system must know the user’s state to accommodate his or her needs. Context-aware applications provide at least two fundamental services: spatial awareness and temporal awareness. Spatially aware applications consider a user’s relative and absolute position and orientation. Temporally aware applications consider the time schedules of public and private events. With an interdisciplinary class of Carnegie Mellon University (CMU) students, we developed and implemented a context-aware, pervasive-computing environment that minimizes distraction and facilitates collaborative design.", "title": "" }, { "docid": "8335faee33da234e733d8f6c95332ec3", "text": "Myanmar script uses no space between words and syllable segmentation represents a significant process in many NLP tasks such as word segmentation, sorting, line breaking and so on. In this study, a rulebased approach of syllable segmentation algorithm for Myanmar text is proposed. Segmentation rules were created based on the syllable structure of Myanmar script and a syllable segmentation algorithm was designed based on the created rules. A segmentation program was developed to evaluate the algorithm. A training corpus containing 32,283 Myanmar syllables was tested in the program and the experimental results show an accuracy rate of 99.96% for segmentation.", "title": "" }, { "docid": "0397514e0d4a87bd8b59d9b317f8c660", "text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. 
Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.", "title": "" }, { "docid": "465c1ecc79617d96c9509106badc8673", "text": "Bacterial replicative DNA polymerases such as Polymerase III (Pol III) share no sequence similarity with other polymerases. The crystal structure, determined at 2.3 A resolution, of a large fragment of Pol III (residues 1-917), reveals a unique chain fold with localized similarity in the catalytic domain to DNA polymerase beta and related nucleotidyltransferases. The structure of Pol III is strikingly different from those of members of the canonical DNA polymerase families, which include eukaryotic replicative polymerases, suggesting that the DNA replication machinery in bacteria arose independently. A structural element near the active site in Pol III that is not present in nucleotidyltransferases but which resembles an element at the active sites of some canonical DNA polymerases suggests that, at a more distant level, all DNA polymerases may share a common ancestor. The structure also suggests a model for interaction of Pol III with the sliding clamp and DNA.", "title": "" }, { "docid": "a99785b0563ca5922da304f69aa370c0", "text": "Marcel Fritz, Christian Schlereth, Stefan Figge Empirical Evaluation of Fair Use Flat Rate Strategies for Mobile Internet The fair use flat rate is a promising tariff concept for the mobile telecommunication industry. Similar to classical flat rates it allows unlimited usage at a fixed monthly fee. Contrary to classical flat rates it limits the access speed once a certain usage threshold is exceeded. Due to the current global roll-out of the LTE (Long Term Evolution) technology and the related economic changes for telecommunication providers, the application of fair use flat rates needs a reassessment. We therefore propose a simulation model to evaluate different pricing strategies and their contribution margin impact. The key input element of the model is provided by socalled discrete choice experiments that allow the estimation of customer preferences. Based on this customer information and the simulation results, the article provides the following recommendations. Classical flat rates do not allow profitable provisioning of mobile Internet access. Instead, operators should apply fair use flat rates with a lower usage threshold of 1 or 3 GB which leads to an improved contribution margin. Bandwidth and speed are secondary and do merely impact customer preferences. The main motivation for new mobile technologies such as LTE should therefore be to improve the cost structure of an operator rather than using it to skim an assumed higher willingness to pay of mobile subscribers.", "title": "" }, { "docid": "4d5bba781cac8b78040e7c3baeed4f3a", "text": "Area efficient architecture is today's major concern in the field of VLSI, Digital signal processing circuits, cryptographic algorithms, wireless communications and Internet of Things (IOT). Majority of the architectures use multiplication. Realization of multiplication by using repetitive addition and shift and add methods consumes more area, power and delay. Vedic is one of the efficient multipliers. Design of Vedic multiplier using different sutras reduces area and power. 
From the structure of Vedic multiplier, it is clearly observed that there is scope to design an efficient architecture. In this research, Vedic multiplier is designed using modified full adder which consumes less number of LUT's, slices and delay when compared to normal conventional Vedic multiplier. Simulation and synthesis are carried on XILINX ISE 12.2 software. FPGA results of the proposed multiplier show that number of LUT's is less by 13.8% in the modified Vedic Multiplier (4×4) and less by 7.5% in modified Vedic Multiplier(8×8). Delay is less by 10% in modified Vedic Multiplier (4×4) and 7.2 % in modified Vedic Multiplier (8×8).", "title": "" }, { "docid": "45cc3369df084b22642cfc7288bc0abb", "text": "This paper proposes a novel unsupervised feature selection method by jointing self-representation and subspace learning. In this method, we adopt the idea of self-representation and use all the features to represent each feature. A Frobenius norm regularization is used for feature selection since it can overcome the over-fitting problem. The Locality Preserving Projection (LPP) is used as a regularization term as it can maintain the local adjacent relations between data when performing feature space transformation. Further, a low-rank constraint is also introduced to find the effective low-dimensional structures of the data, which can reduce the redundancy. Experimental results on real-world datasets verify that the proposed method can select the most discriminative features and outperform the state-of-the-art unsupervised feature selection methods in terms of classification accuracy, standard deviation, and coefficient of variation.", "title": "" }, { "docid": "0cd9577750b6195c584e55aac28cc2ba", "text": "The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, incentives are becoming as important to dependability as technical design. The new field provides valuable insights not just into ‘security’ topics such as privacy, bugs, spam, and phishing, but into more general areas such as system dependability (the design of peer-to-peer systems and the optimal balance of effort by programmers and testers), and policy (particularly digital rights management). This research program has been starting to spill over into more general security questions (such as law-enforcement strategy), and into the interface between security and sociology. Most recently it has started to interact with psychology, both through the psychology-and-economics tradition and in response to phishing. The promise of this research program is a novel framework for analyzing information security problems – one that is both principled and effective.", "title": "" }, { "docid": "2d86f517026d93454bb1761dd21c7e9d", "text": "This article presents a new approach to movement planning, on-line trajectory modification, and imitation learning by representing movement plans based on a set of nonlinear differential equations with well-defined attractor dynamics. In contrast to non-autonomous movement representations like splines, the resultant movement plan remains an autonomous set of nonlinear differential equations that forms a control policy (CP) which is robust to strong external perturbations and that can be modified on-line by additional perceptual variables. 
The attractor landscape of the control policy can be learned rapidly with a locally weighted regression technique with guaranteed convergence of the learning algorithm and convergence to the movement target. This property makes the system suitable for movement imitation and also for classifying demonstrated movement according to the parameters of the learning system. We evaluate the system with a humanoid robot simulation and an actual humanoid robot. Experiments are presented for the imitation of three types of movements: reaching movements with one arm, drawing movements of 2-D patterns, and tennis swings. Our results demonstrate (a) that multi-joint human movements can be encoded successfully by the CPs, (b) that a learned movement policy can readily be reused to produce robust trajectories towards different targets, (c) that a policy fitted for one particular target provides a good predictor of human reaching movements towards neighboring targets, and (d) that the parameter space which encodes a policy is suitable for measuring to which extent two trajectories are qualitatively similar.", "title": "" } ]
scidocsrr
8462bebaecad0e8b6b912393fab9706b
Estimating driving behavior by a smartphone
[ { "docid": "75d76315376a1770c4be06d420a0bf96", "text": "Motor vehicles greatly influence human life but are also a major cause of death and road congestion, which is an obstacle to future economic development. We believe that by learning driving patterns, useful navigation support can be provided for drivers. In this paper, we present a simple and reliable method for the recognition of driving events using hidden Markov models (HMMs), popular stochastic tools for studying time series data. A data acquisition system was used to collect longitudinal and lateral acceleration and speed data from a real vehicle in a normal driving environment. Data were filtered, normalized, segmented, and quantified to obtain the symbolic representation necessary for use with discrete HMMs. Observation sequences for training and evaluation were manually selected and classified as events of a particular type. An appropriate model size was selected, and the model was trained for each type of driving events. Observation sequences from the training set were evaluated by multiple models, and the highest probability decides what kind of driving event this sequence represents. The recognition results showed that HMMs could recognize driving events very accurately and reliably.", "title": "" }, { "docid": "3c444d8918a31831c2dc73985d511985", "text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.", "title": "" } ]
[ { "docid": "93e2a4357573c446b2747f7b21d9d443", "text": "Social Network Systems pioneer a paradigm of access control that is distinct from traditional approaches to access control. Gates coined the term Relationship-Based Access Control (ReBAC) to refer to this paradigm. ReBAC is characterized by the explicit tracking of interpersonal relationships between users, and the expression of access control policies in terms of these relationships. This work explores what it takes to widen the applicability of ReBAC to application domains other than social computing. To this end, we formulate an archetypical ReBAC model to capture the essence of the paradigm, that is, authorization decisions are based on the relationship between the resource owner and the resource accessor in a social network maintained by the protection system. A novelty of the model is that it captures the contextual nature of relationships. We devise a policy language, based on modal logic, for composing access control policies that support delegation of trust. We use a case study in the domain of Electronic Health Records to demonstrate the utility of our model and its policy language. This work provides initial evidence of the feasibility and utility of ReBAC as a general-purpose paradigm of access control.", "title": "" }, { "docid": "5802a9b6f95783d78ceb22410b0d6c18", "text": "Social Internet of Things (SIoT) is a new paradigm where the Internet of Things (IoT) merges with social networks, allowing people and devices to interact, and facilitating information sharing. However, security and privacy issues are a great challenge for IoT but they are also enabling factors to create a “trust ecosystem.” In fact, the intrinsic vulnerabilities of IoT devices, with limited resources and heterogeneous technologies, together with the lack of specifically designed IoT standards, represent a fertile ground for the expansion of specific cyber threats. In this paper, we try to bring order to the IoT security panorama by providing a taxonomic analysis from the perspective of the three main key layers of the IoT system model: 1) perception; 2) transportation; and 3) application levels. As a result of the analysis, we will highlight the most critical issues with the aim of guiding future research directions.", "title": "" }, { "docid": "0e27a00b36626b0454b11f4f8b1fb522", "text": "Although active islanding detection techniques have smaller non-detection zones than passive techniques, active methods could degrade the system power quality and are not as simple and easy to implement as passive methods. The islanding detection strategy proposed in this paper combines the advantages of both active and passive islanding detection methods. The distributed generation (DG) interface was designed so that the DG maintains stable operation while being grid connected and loses its stability once islanded. Thus, the over/under-voltage and reactive power variation methods are sufficient to detect islanding. The main advantage of the proposed technique is that it relies on a simple approach for islanding detection and has a negligible non-detection zone. The proposed system was simulated in MATLAB/SIMULINK and simulation results are presented to highlight the effectiveness of the proposed technique.", "title": "" }, { "docid": "14d5fe4a4af7c6d2e530eae57d359a9f", "text": "A new formulation of the stochastic vortex particle method is presented.
The main elements of the algorithm (the construction of the particles, the governing equations, stretching modeling and boundary condition enforcement) are described. The test case is the unsteady flow past a spherical body. Sample results concerning patterns in velocity and vorticity fields, streamlines, pressure and aerodynamic forces are presented.", "title": "" }, { "docid": "209b304009db4a04400da178d19fe63e", "text": "Mecanum wheels give vehicles and robots autonomous omni-directional capabilities, while regular wheels don’t. The omni-directionality that such wheels provide makes the vehicle extremely maneuverable, which could be very helpful in different indoor and outdoor applications. However, current Mecanum wheel designs can only operate on flat hard surfaces, and perform very poorly on rough terrains. This paper presents two modified Mecanum wheel designs targeted for complex rough terrains and discusses their advantages and disadvantages in comparison to regular Mecanum wheels. The wheels proposed here are particularly advantageous for overcoming obstacles up to 75% of the overall wheel diameter in lateral motion, which significantly facilitates the lateral motion of vehicles on hard rough surfaces and on soft soils such as sand, something that cannot be achieved using other types of wheels. The paper also presents control aspects that need to be considered when controlling autonomous vehicles/robots using the proposed wheels.", "title": "" }, { "docid": "815fe60934f0313c56e631d73b998c95", "text": "The scientific credibility of findings from clinical trials can be undermined by a range of problems including missing data, endpoint switching, data dredging, and selective publication. Together, these issues have contributed to systematically distorted perceptions regarding the benefits and risks of treatments. While these issues have been well documented and widely discussed within the profession, legislative intervention has seen limited success. Recently, a method was described for using a blockchain to prove the existence of documents describing pre-specified endpoints in clinical trials. Here, we extend the idea by using smart contracts - code and data that reside at a specific address in a blockchain, and whose execution is cryptographically validated by the network - to demonstrate how trust in clinical trials can be enforced and data manipulation eliminated. We show that blockchain smart contracts provide a novel technological solution to the data manipulation problem, by acting as trusted administrators and providing an immutable record of trial history.", "title": "" }, { "docid": "e29296607c63951174a7a5e942f653c7", "text": "Corresponding author: Douglas Kunda, Mulungushi University, School of Science, Engineering and Technology, Kabwe, Zambia. Email: dkunda@mu.edu.zm. Abstract: Agile development is a software development process that advocates adaptive planning, early delivery, evolutionary development and continuous improvement, and supports rapid and flexible response to change. The purpose of Agile development is to minimize project failure through customer interactions and responding to change. However, Agile development is vulnerable to failure because of a number of factors, and these factors can be categorized under four dimensions, namely: organizational, people, process and technical. This paper reports the results of a study aimed at identifying factors that influence success and/or failure of Agile development in a developing country, Zambia.
A multiple case study approach and a grounded theory approach were used for this study. The study shows that there are challenges that are unique to developing countries and therefore measures should be developed to address these unique problems when implementing Agile projects in developing countries.", "title": "" }, { "docid": "e7522c776e1219196aa52147834b6f61", "text": "Machine learning deals with the issue of how to build programs that improve their performance at some task through experience. Machine learning algorithms have proven to be of great practical value in a variety of application domains. They are particularly useful for (a) poorly understood problem domains where little knowledge exists for humans to develop effective algorithms; (b) domains where there are large databases containing valuable implicit regularities to be discovered; or (c) domains where programs must adapt to changing conditions. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development tasks could be formulated as learning problems and approached in terms of learning algorithms. In this paper, we first take a look at the characteristics and applicability of some frequently utilized machine learning algorithms. We then provide formulations of some software development tasks using learning algorithms. Finally, a brief summary is given of the existing work.", "title": "" }, { "docid": "81a9c8a0314703f2c73789f46b394bfe", "text": "In order to reproduce jaw motions and mechanics that faithfully match human jaw function, following the concept of bionics, a novel human jaw movement robot based on mechanical biomimetic principles is proposed. Firstly, based on the biomechanical properties of the mandibular muscles, a jaw robot is built on the 6-PSS parallel mechanism. Secondly, the inverse kinematics solution equations are derived. Finally, kinematic performance measures, such as the workspace with constant orientation, manipulability, and dexterity of the jaw robot, are obtained. These indices show that the parallel mechanism has a sufficiently large and flexible workspace, no singularities, and good motion transfer performance for human chewing movement.", "title": "" }, { "docid": "104c9347338f4e725e3c1907a4991977", "text": "This paper derives a speech parameter generation algorithm for HMM-based speech synthesis, in which a speech parameter sequence is generated from HMMs whose observation vector consists of a spectral parameter vector and its dynamic feature vectors. In the algorithm, we assume that the state sequence (state and mixture sequence for the multi-mixture case) or a part of the state sequence is unobservable (i.e., hidden or latent). As a result, the algorithm iterates the forward-backward algorithm and the parameter generation algorithm for the case where the state sequence is given. Experimental results show that by using the algorithm, we can reproduce clear formant structure from multi-mixture HMMs as compared with that produced from single-mixture HMMs.", "title": "" }, { "docid": "561e9f599e5dc470ca6f57faa62ebfce", "text": "Rapid learning requires flexible representations to quickly adapt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a dynamic representation space and use it for one-shot learning.
In the task of one-shot classification on the Omniglot dataset, we achieve the state of the art performance with an error rate of 1.5%. This represents the first super-human result achieved for this task with a generic model that uses only pixel information.", "title": "" }, { "docid": "80f098f2cee2f0cef196c946ba93cb99", "text": "In this paper we propose a new approach to incrementally initialize a manifold surface for automatic 3D reconstruction from images. More precisely we focus on the automatic initialization of a 3D mesh as close as possible to the final solution; indeed many approaches require a good initial solution for further refinement via multi-view stereo techniques. Our novel algorithm automatically estimates an initial manifold mesh for surface evolving multi-view stereo algorithms, where the manifold property needs to be enforced. It bootstraps from 3D points extracted via Structure from Motion, then iterates between a state-of-the-art manifold reconstruction step and a novel mesh sweeping algorithm that looks for new 3D points in the neighborhood of the reconstructed manifold to be added in the manifold reconstruction. The experimental results show quantitatively that the mesh sweeping improves the resolution and the accuracy of the manifold reconstruction, allowing a better convergence of state-of-the-art surface evolution multi-view stereo algorithms.", "title": "" }, { "docid": "91c937ddfcf7aa0957e1c9a997149f87", "text": "Generative adversarial training can be generally understood as minimizing certain moment matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation on the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds lights on understanding the practical performance of GANs.", "title": "" }, { "docid": "a64a83791259350d5d76dc1ea097a7fb", "text": "Today the channels for expressing opinions seem to increase daily. When these opinions are relevant to a company, they are important sources of business insight, whether they represent critical intelligence about a customer's defection risk, the impact of an influential reviewer on other people's purchase decisions, or early feedback on product releases, company news or competitors. Capturing and analyzing these opinions is a necessity for proactive product planning, marketing and customer service and it is also critical in maintaining brand integrity. The importance of harnessing opinion is growing as consumers use technologies such as Twitter to express their views directly to other consumers. 
Tracking the disparate sources of opinion is hard - but even harder is quickly and accurately extracting the meaning so companies can analyze and act. The language of tweets is complicated and contextual, especially when people are expressing opinions, and it requires reliable sentiment analysis based on parsing many linguistic shades of gray. This article argues that using the R programming platform for analyzing tweets programmatically simplifies the task of sentiment analysis and opinion mining. An R programming technique has been used for testing different sentiment lexicons as well as different scoring schemes. Experiments on analyzing users' tweets about six NHL hockey teams reveal the effectiveness of using the opinion lexicon and the Latent Dirichlet Allocation (LDA) scoring scheme.", "title": "" }, { "docid": "3ef23f2c076837f804819e11f39734f9", "text": "Non-wet solder joints in processor sockets are causing motherboard failures. These board failures can escape to customers, resulting in returns and dissatisfaction. The current process to identify these non-wets is to use a 2D or advanced X-ray tool with multidimension capability to image solder joints in processor sockets. The images are then examined by an operator who determines if each individual joint is good or bad. There can be an average of 150 images for an operator to examine for each socket. Each image contains more than 30 joints. These factors make the inspection process time consuming and the output variable, depending on the skill and alertness of the operator. This paper presents an automatic defect identification and classification system for the detection of non-wet solder joints. The main components of the proposed system consist of region of interest (ROI) segmentation, feature extraction, reference-free classification, and automatic mapping. The ROI segmentation process is a noise-resilient segmentation method for the joint area. The centroids of the segmented joints (ROIs) are used as feature parameters to detect the suspect joints. The proposed reference-free classification can detect defective joints in the considered images with high accuracy without the need for training data or reference images. An automatic mapping procedure which maps the positions of all joints to a known Master Ball Grid Array file is used to get the precise label and location of the suspect joint for display to the operator and collection of non-wet statistics. The accuracy of the proposed system was determined to be 95.8% based on the examination of 56 sockets (76 496 joints). The false alarm rate is 1.1%. In comparison, the detection rate of a currently available advanced X-ray tool with multidimension capability is in the range of 43% to 75%. The proposed method reduces the operator effort to examine individual images by 89.6% (from looking at 154 images to 16 images) by presenting only images with suspect joints for inspection. When non-wet joints are missed, the presented system has been shown to identify the neighboring joints. This fact provides the operator with the capability to make 100% detection of all non-wets when utilizing a user interface that highlights the suspect joint area. The system works with a 2D X-ray imaging device, which saves cost over more expensive advanced X-ray tools with multidimension capability.
The proposed scheme is relatively inexpensive to implement, easy to set up and can work with a variety of 2D X-ray tools.", "title": "" }, { "docid": "ea937e1209c270a7b6ab2214e0989fed", "text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.", "title": "" }, { "docid": "4fe5c25f57d5fa5b71b0c2b9dae7db29", "text": "Position control of a quad tilt-wing UAV via a nonlinear hierarchical adaptive control approach is presented. The hierarchy consists of two levels. In the upper level, a model reference adaptive controller creates virtual control commands so as to make the UAV follow a given desired trajectory. The virtual control inputs are then converted to desired attitude angle references which are fed to the lower level attitude controller. Lower level controller is a nonlinear adaptive controller. The overall controller is developed for the full nonlinear dynamics of the tilt-wing UAV and thus no linearization is required. In addition, since the approach is adaptive, uncertainties in the UAV dynamics can be handled. Performance of the controller is presented via simulation results.", "title": "" }, { "docid": "baa59c53346e16f4c55b6fef20f19a89", "text": "Incoming and outgoing processing for a given TCP connection often execute on different cores: an incoming packet is typically processed on the core that receives the interrupt, while outgoing data processing occurs on the core running the relevant user code. As a result, accesses to read/write connection state (such as TCP control blocks) often involve cache invalidations and data movement between cores' caches. These can take hundreds of processor cycles, enough to significantly reduce performance.\n We present a new design, called Affinity-Accept, that causes all processing for a given TCP connection to occur on the same core. Affinity-Accept arranges for the network interface to determine the core on which application processing for each new connection occurs, in a lightweight way; it adjusts the card's choices only in response to imbalances in CPU scheduling. Measurements show that for the Apache web server serving static files on a 48-core AMD system, Affinity-Accept reduces time spent in the TCP stack by 30% and improves overall throughput by 24%.", "title": "" }, { "docid": "e6e74971af2576ff119d277927727659", "text": "In Germany there is limited information available about the distribution of the tropical rat mite (Ornithonyssus bacoti) in rodents. A few case reports show that this hematophagous mite species may also cause dermatitis in man. 
Whether there has been close body contact with small rodents is an important question for patients with pruritic dermatoses. The definitive diagnosis of this ectoparasitosis requires the detection of the parasite, which is more likely to be found in the environment of its host (in the cages, in the litter or in corners or cracks of the living area) than on the hosts' skin itself. A case of infestation with tropical rat mites in a family is reported here. Three mice that had been removed from the home two months before were the reservoir. The mites were detected in a room where the cage with the mice had been placed months ago. Treatment requires the eradication of the parasites on their hosts (by a veterinarian) and in the environment (by an exterminator) with adequate acaricides such as permethrin.", "title": "" }, { "docid": "626cbfd87a6582d36cd1a98342ce2cc2", "text": "We apply the two-player game assumptions of limited search horizon and commitment to moves in constant time, to single-agent heuristic search problems. We present a variation of minimax lookahead search, and an analog to alpha-beta pruning that significantly improves the efficiency of the algorithm. Paradoxically, the search horizon reachable with this algorithm increases with increasing branching factor. In addition, we present a new algorithm, called Real-Time-A*, for interleaving planning and execution. We prove that the algorithm makes locally optimal decisions and is guaranteed to find a solution. We also present a learning version of this algorithm that improves its performance over successive problem solving trials by learning more accurate heuristic values, and prove that the learned values converge to their exact values along every optimal path. These algorithms effectively solve significantly larger problems than have previously been solvable using heuristic evaluation functions.", "title": "" } ]
scidocsrr
aee763c4a87ee06c276571279a9dbb3a
Accelerating vector graphics rendering using the graphics hardware pipeline
[ { "docid": "d2836880ac69bf35e53f5bc6de8bc5dc", "text": "There is currently significant interest in freeform, curve-based authoring of graphic images. In particular, \"diffusion curves\" facilitate graphic image creation by allowing an image designer to specify naturalistic images by drawing curves and setting colour values along either side of those curves. Recently, extensions to diffusion curves based on the biharmonic equation have been proposed which provide smooth interpolation through specified colour values and allow image designers to specify colour gradient constraints at curves. We present a Boundary Element Method (BEM) for rendering diffusion curve images with smooth interpolation and gradient constraints, which generates a solved boundary element image representation. The diffusion curve image can be evaluated from the solved representation using a novel and efficient line-by-line approach. We also describe \"curve-aware\" upsampling, in which a full resolution diffusion curve image can be upsampled from a lower resolution image using formula-evaluated corrections near curves. The BEM solved image representation is compact. It therefore offers advantages in scenarios where solved image representations are transmitted to devices for rendering and where PDE solving at the device is undesirable due to time or processing constraints.", "title": "" } ]
[ { "docid": "b341d0317db66608eeedbe25a7bbe6d8", "text": "We developed a compact hybrid-integrated 100 Gbit/s TOSA using an EADFB laser array with a spot-size converter and a silica-based AWG multiplexer. Error-free operation for a 40-km transmission was demonstrated at an operating temperature of 55 °C.", "title": "" }, { "docid": "d2f929806163b2be07c57f0b34fdb3da", "text": "This article reviews the use of robotic technology for otolaryngologic surgery. The authors discuss the development of the technology and its current uses in the operating room. They address procedures such as oropharyngeal transoral robotic surgery (TORS), laryngeal TORS, and thyroidectomy, and also note the role of robotics in teaching.", "title": "" }, { "docid": "4a632b9913c88fb9fceae81809c2c119", "text": "Intraarticular fractures carry a significant risk for posttraumatic osteoarthritis, and this risk varies across different joint surfaces of the lower extremity. These differences are likely due to the anatomic and biomechanical specifics of each joint surface. High-quality human studies are lacking to delineate the threshold articular incongruity that significantly increases risk for posttraumatic osteoarthritis and diminished clinical outcomes for many joint surfaces. Even with anatomic reduction of the articular surface, close attention must be paid to mechanical axis and joint stability to optimize outcomes.", "title": "" }, { "docid": "c4f86b84282df841bd5ee7bcca3b01eb", "text": "Image binarization is the process of separating pixel values into two groups, white as background and black as foreground. Thresholding plays a major role in the binarization of images. Thresholding can be categorized into global thresholding and local thresholding. In images with a uniform contrast distribution of background and foreground, like document images, global thresholding is more appropriate. In degraded document images, where considerable background noise or variation in contrast and illumination exists, there exist many pixels that cannot be easily classified as foreground or background. In such cases, binarization with local thresholding is more appropriate. This paper describes a locally adaptive thresholding technique that removes background by using the local mean and mean deviation. Normally the local mean computation time depends on the window size. Our technique uses an integral sum image as a preprocessing step to calculate the local mean. It does not involve calculations of standard deviations as in other local adaptive techniques.
This, along with the fact that the calculation of the mean is independent of the window size, speeds up the process compared to other local thresholding techniques.", "title": "" }, { "docid": "7d7121df4a1ff79db0ffdb3d43ea4e47", "text": "BACKGROUND\nPeer teaching has been shown to enhance student learning and levels of self efficacy.\n\n\nOBJECTIVES\nThe purpose of the current study was to examine the impact of peer-teaching learning experiences on nursing students in roles of tutee and tutor in a clinical lab environment.\n\n\nSETTINGS\nThis study was conducted over a three-semester period at a South Central University that provides baccalaureate nursing education.\n\n\nPARTICIPANTS\nOver three semesters, 179 first year nursing students and 51 third year nursing students participated in the study.\n\n\nMETHODS\nThis mixed methods study, through concurrent use of a quantitative intervention design and qualitative survey data, examined differences during three semesters in perceptions of a clinical lab experience, self-efficacy beliefs, and clinical knowledge for two groups: those who received peer teaching-learning in addition to faculty instruction (intervention group) and those who received faculty instruction only (control group). Additionally, peer teachers' perceptions of the peer teaching learning experience were examined.\n\n\nRESULTS\nResults indicated a positive response from the peer tutors, with no statistically significant differences for knowledge acquisition and self-efficacy beliefs between the tutee intervention and control groups. In contrast to previous research, students receiving peer tutoring in conjunction with faculty instruction were statistically more anxious about performing lab skills with their peer tutor than with their instructors. Additionally, some students found instructors' feedback moderately more helpful than their peers' and reported greater gains in knowledge and responsibility for preparation and practice with instructors than with peer tutors.\n\n\nCONCLUSIONS\nThe findings in this study differ from previous research in that the use of peer tutors did not decrease anxiety in first year students, and no differences were found between the intervention and control groups related to self efficacy or cognitive improvement. These findings may indicate the need to better prepare peer tutors, and research should be conducted using more complex skills.", "title": "" }, { "docid": "b207f2efab5abaf254ec34a8c1559d49", "text": "Image processing algorithms used in surveillance systems are designed to work under good weather conditions. For example, on a rainy day, raindrops adhere to camera lenses and windshields, resulting in partial occlusions in acquired images and significantly degrading the performance of image processing algorithms. To improve the performance of surveillance systems on a rainy day, raindrops have to be automatically detected and removed from images. Addressing this problem, this paper proposes an adherent raindrop detection method that works from a single image and does not need training data or special devices. The proposed method employs image segmentation using Maximally Stable Extremal Regions (MSER) and qualitative metrics to detect adherent raindrops from the result of MSER-based image segmentation.
Through a set of experiments, we demonstrate that the proposed method performs adherent raindrop detection efficiently compared with conventional methods.", "title": "" }, { "docid": "71509dc8fdd1783e360a9b534ff59cba", "text": "This paper proposes a zero-voltage-switching (ZVS) pulse-width modulation three-level converter with current-doubler-rectifier, which achieves ZVS for all the switches over a wide load range and a wide line range. The rectifier diodes commutate naturally, therefore no oscillation occurs. The determination of the output filter inductance and the blocking capacitor is discussed in detail. The experimental results are presented to verify the operation principle of the proposed converter.", "title": "" }, { "docid": "59291cb1c13ab274f06b619698784e23", "text": "We present a new class of Byzantine-tolerant State Machine Replication protocols for asynchronous environments that we term Byzantine Chain Replication. We demonstrate two implementations that present different trade-offs between performance and security, and compare these with related work. Leveraging an external reconfiguration service, these protocols are not based on Byzantine consensus, do not require majority-based quorums during normal operation, and the set of replicas is easy to reconfigure. One of the implementations is instantiated with t + 1 replicas to tolerate t failures and is useful in situations where perimeter security makes malicious attacks unlikely. Applied to in-memory BerkeleyDB replication, it supports 20,000 transactions per second while a fully Byzantine implementation supports 12,000 transactions per second—about 70% of the throughput of a non-replicated database.", "title": "" }, { "docid": "d07ebefd02d5e7e732a5570aa6a7dec8", "text": "Starting from the principle and strategy of changing the diameter of a walking wheel, ANSYS, FEMBS and SIMPACK are applied to construct a model of flexible connectors with geometric stiffness. Based on the theory of continuous collision, we introduce a mobile marker to define the nonlinear contact between the wheel's caster and the ground. A rigid-flexible simulation analysis was carried out on the wheel body system, which consists of six flexible spring connectors, to obtain the dynamic response in the mode of changing the diameter in situ and the minimum torque needed for changing the diameter. The simulation results provide a theoretical basis for subsequent prototype development and can be used as a reference for further experimental study.", "title": "" }, { "docid": "7bf64a2dbfa14b52d0ee46d0c61bf8d2", "text": "Mobility prediction allows estimating the stability of paths in mobile wireless ad hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in ad hoc networks. This method consists of a multi-layer recurrent neural network using the backpropagation-through-time algorithm for training.", "title": "" }, { "docid": "20b6881a9faf4811b504fd1791babe68", "text": "When users post photos on Facebook, they have the option of allowing their friends, followers, or anyone at all to subsequently reshare the photo. A portion of the billions of photos posted to Facebook generates cascades of reshares, enabling many additional users to see, like, comment, and reshare the photos.
In this paper we present characteristics of such cascades in aggregate, finding that a small fraction of photos account for a significant proportion of reshare activity and generate cascades of non-trivial size and depth. We also show that the true influence chains in such cascades can be much deeper than what is visible through direct attribution. To illuminate how large cascades can form, we study the diffusion trees of two widely distributed photos: one posted on President Barack Obama’s page following his reelection victory, and another posted by an individual Facebook user hoping to garner enough likes for a cause. We show that the two cascades, despite achieving comparable total sizes, are markedly different in their time evolution, reshare depth distribution, predictability of subcascade sizes, and the demographics of users who propagate them. The findings suggest not only that cascades can achieve considerable size but that they can do so in distinct ways.", "title": "" }, { "docid": "6a2584657154d6c9fd0976c30469349a", "text": "A major challenge for managers in turbulent environments is to make sound decisions quickly. Dynamic capabilities have been proposed as a means for addressing turbulent environments by helping managers extend, modify, and reconfigure existing operational capabilities into new ones that better match the environment. However, because dynamic capabilities have been viewed as an elusive black box, it is difficult for managers to make sound decisions in turbulent environments if they cannot effectively measure dynamic capabilities. Therefore, we first seek to propose a measurable model of dynamic capabilities by conceptualizing, operationalizing, and measuring dynamic capabilities. Specifically, drawing upon the dynamic capabilities literature, we identify a set of capabilities—sensing the environment, learning, coordinating, and integrating— that help reconfigure existing operational capabilities into new ones that better match the environment. Second, we propose a structural model where dynamic capabilities influence performance by reconfiguring existing operational capabilities in the context of new product development (NPD). Data from 180 NPD units support both the measurable model of dynamic capabilities and also the structural model by which dynamic capabilities influence performance in NPD by reconfiguring operational capabilities, particularly in higher levels of environmental turbulence. The study’s implications for managerial decision making in turbulent environments by capturing the elusive black box of dynamic capabilities are discussed. Subject Areas: Decision Making in Turbulent Environments, Dynamic Capabilities, Environmental Turbulence, New Product Development, and Operational Capabilities.", "title": "" }, { "docid": "7547da5f5e33051dcbbb8a2d7abe46ce", "text": "We introduce the joint time-frequency scattering transform, a time shift invariant descriptor of time-frequency structure for audio classification. It is obtained by applying a two-dimensional wavelet transform in time and log-frequency to a time-frequency wavelet scalogram. We show that this descriptor successfully characterizes complex time-frequency phenomena such as time-varying filters and frequency modulated excitations. 
State-of-the-art results are achieved for signal reconstruction and phone segment classification on the TIMIT dataset.", "title": "" }, { "docid": "a8a01603c67c98cad7f0b13ba453161a", "text": "A computational fluid dynamics study of three-dimensional turbulent flow over a backward facing step is presented. An available experimental study is investigated computationally using an open source tool. The wall static pressure distribution, the skin friction distribution and the reattachment length have been calculated and compared with the experimental data. Two different mathematical models were implemented using the OpenFOAM computational fluid dynamics (CFD) package. The long term goals for this research are to investigate and actively control the wake dynamics behind the step which will be useful to study the wake characteristics behind different types of bodies.", "title": "" }, { "docid": "5d08089a4f80a6ad67d5362fe0f055d6", "text": "This paper considers the problem of embedding trees into the hyperbolic plane. We show that any tree can be realized as the Delaunay graph of its embedded vertices. Particularly, a weighted tree can be embedded such that the weight on each edge is realized as the hyperbolic distance between its embedded vertices. Thus the embedding preserves the metric information of the tree along with its topology. The distance distortion between non adjacent vertices can be made arbitrarily small – less than a (1 + ε) factor for any given ε. Existing results on low distortion of embedding discrete metrics into trees carry over to hyperbolic metric through this result. The Delaunay character implies useful properties such as guaranteed greedy routing and realization as minimum spanning trees.", "title": "" }, { "docid": "b5997c5c88f57b387e56dc68445b38e2", "text": "Identifying the relationship between two text objects is a core research problem underlying many natural language processing tasks. A wide range of deep learning schemes have been proposed for text matching, mainly focusing on sentence matching, question answering or query document matching. We point out that existing approaches do not perform well at matching long documents, which is critical, for example, to AI-based news article understanding and event or story formation. The reason is that these methods either omit or fail to fully utilize complicated semantic structures in long documents. In this paper, we propose a graph approach to text matching, especially targeting long document matching, such as identifying whether two news articles report the same event in the real world, possibly with different narratives. We propose the Concept Interaction Graph to yield a graph representation for a document, with vertices representing different concepts, each being one or a group of coherent keywords in the document, and with edges representing the interactions between different concepts, connected by sentences in the document. Based on the graph representation of document pairs, we further propose a Siamese Encoded Graph Convolutional Network that learns vertex representations through a Siamese neural network and aggregates the vertex features though Graph Convolutional Networks to generate the matching result. 
Extensive evaluation of the proposed approach based on two labeled news article datasets created at Tencent for its intelligent news products show that the proposed graph approach to long document matching significantly outperforms a wide range of state-of-the-art methods.", "title": "" }, { "docid": "2ff60b62850c325fa55904ccf4cb4070", "text": "In DSM-IV-TR, trichotillomania (TTM) is classified as an impulse control disorder (not classified elsewhere), skin picking lacks its own diagnostic category (but might be diagnosed as an impulse control disorder not otherwise specified), and stereotypic movement disorder is classified as a disorder usually first diagnosed in infancy, childhood, or adolescence. ICD-10 classifies TTM as a habit and impulse disorder, and includes stereotyped movement disorders in a section on other behavioral and emotional disorders with onset usually occurring in childhood and adolescence. This article provides a focused review of nosological issues relevant to DSM-V, given recent empirical findings. This review presents a number of options and preliminary recommendations to be considered for DSM-V: (1) Although TTM fits optimally into a category of body-focused repetitive behavioral disorders, in a nosology comprised of relatively few major categories it fits best within a category of motoric obsessive-compulsive spectrum disorders, (2) available evidence does not support continuing to include (current) diagnostic criteria B and C for TTM in DSM-V, (3) the text for TTM should be updated to describe subtypes and forms of hair pulling, (4) there are persuasive reasons for referring to TTM as \"hair pulling disorder (trichotillomania),\" (5) diagnostic criteria for skin picking disorder should be included in DSM-V or in DSM-Vs Appendix of Criteria Sets Provided for Further Study, and (6) the diagnostic criteria for stereotypic movement disorder should be clarified and simplified, bringing them in line with those for hair pulling and skin picking disorder.", "title": "" }, { "docid": "0374d93d82ec404b7beee18aaa9bfbf1", "text": "A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma’s Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to encourage exploration and improve performance on hardexploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through exploiting any available means (including by introducing determinism), then robustify (create a policy that can reliably perform the solution) via imitation learning. The combined effect of these principles generates dramatic performance improvements on hardexploration problems. On Montezuma’s Revenge, without being provided any domain knowledge, Go-Explore scores over 43,000 points, almost 4 times the previous state of the art. Go-Explore can also easily harness human-provided domain knowledge, and when augmented with it Go-Explore scores a mean of over 650,000 points on Montezuma’s Revenge. 
Its max performance of nearly 18 million surpasses the human world record by an order of magnitude, thus meeting even the strictest definition of “superhuman” performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean performance of almost 60,000 points also exceeds expert human performance. Because GoExplore can produce many high-performing demonstrations automatically and cheaply, it also outperforms previous imitation learning work in which the solution was provided in the form of a human demonstration. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in a variety of domains, especially the many that often harness a simulator during training (e.g. robotics).", "title": "" }, { "docid": "ca93e2d0af218e5c8a286ff5f3e0e02b", "text": "Educational justice is a major global challenge. In most underdeveloped countries, many students do not have access to education and in most advanced democracies, school attainment and success are still, to a large extent, dependent on a student’s social background. However, it has often been argued that social justice is an essential part of teachers’ work in a democracy. This article raises an important overriding question: how can we realize the goal of educational justice in the field of teaching? In this essay, I examine culturally responsive teaching as an educational practice and conclude that it is possible to realize educational justice in the field of teaching because in its true implementation, culturally responsive teaching conceptualizes the connection between education and social justice and creates the space needed for discussing social change in society.", "title": "" }, { "docid": "37cfea7e4395aa2df109d2ce024b1bd5", "text": "We develop and extend social capital theory by exploring the creation of organizational social capital within a highly pervasive, yet often overlooked organizational form: family firms. We argue that family firms are unique in that, although they work as a single entity, at least two forms of social capital coexist: the family’s and the firm’s. We investigate mechanisms that link a family’s social capital to the creation of the family firm’s social capital and examine how factors underlying the family’s social capital affect this creation. Moreover, we identify contingency dimensions that affect these relationships and the potential risks associated with family social capital. Finally, we suggest these insights are generalizable to several other types of organizations with similar characteristics.", "title": "" } ]
scidocsrr
2834812682bfff8580d1441f7145699a
Facial Expression Recognition in Video with Multiple Feature Fusion
[ { "docid": "23afac6bd3ed34fc0c040581f630c7bd", "text": "Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly used facial expression databases. However, lack of a common evaluation protocol and lack of sufficient details to reproduce the reported individual results make it difficult to compare systems to each other. This in turn hinders the progress of the field. A periodical challenge in Facial Expression Recognition and Analysis would allow this comparison in a fair manner. It would clarify how far the field has come, and would allow us to identify new goals, challenges and targets. In this paper we present the first challenge in automatic recognition of facial expressions to be held during the IEEE conference on Face and Gesture Recognition 2011, in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. It outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.", "title": "" }, { "docid": "6e8f02cfdab45ed1277e8649bd73c6cf", "text": "Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.", "title": "" } ]
[ { "docid": "c0d7ba264ca5b8a4effeca047f416763", "text": "We propose a novel dependency-based hybrid tree model for semantic parsing, which converts natural language utterance into machine interpretable meaning representations. Unlike previous state-of-the-art models, the semantic information is interpreted as the latent dependency between the natural language words in our joint representation. Such dependency information can capture the interactions between the semantics and natural language words. We integrate a neural component into our model and propose an efficient dynamicprogramming algorithm to perform tractable inference. Through extensive experiments on the standard multilingual GeoQuery dataset with eight languages, we demonstrate that our proposed approach is able to achieve state-ofthe-art performance across several languages. Analysis also justifies the effectiveness of using our new dependency-based representation.1", "title": "" }, { "docid": "cf1c04b4d0c61632d7a3969668d5e751", "text": "A 3 dB power divider/combiner in substrate integrated waveguide (SIW) technology is presented. The divider consists of an E-plane SIW bifurcation with an embedded thick film resistor. The transition divides a full-height SIW into two SIWs of half the height. The resistor provides isolation between these two. The divider is fabricated in a multilayer process using high frequency substrates. For the resistor carbon paste is printed on the middle layer of the stack-up. Simulation and measurement results are presented. The measured divider exhibits an isolation of better than 22 dB within a bandwidth of more than 3GHz at 20 GHz.", "title": "" }, { "docid": "8c63ce71aaa0409372efeb3ea392394f", "text": "This paper describes the application of evolutionary fuzzy systems for subgroup discovery to a medical problem, the study on the type of patients who tend to visit the psychiatric emergency department in a given period of time of the day. In this problem, the objective is to characterise subgroups of patients according to their time of arrival at the emergency department. To solve this problem, several subgroup discovery algorithms have been applied to determine which of them obtains better results. The multiobjective evolutionary algorithm MESDIF for the extraction of fuzzy rules obtains better results and so it has been used to extract interesting information regarding the rate of admission to the psychiatric emergency department.", "title": "" }, { "docid": "89dbc16a2510e3b0e4a248f428a9ffc0", "text": "Complex networks are ubiquitous in our daily life, with the World Wide Web, social networks, and academic citation networks being some of the common examples. It is well understood that modeling and understanding the network structure is of crucial importance to revealing the network functions. One important problem, known as community detection, is to detect and extract the community structure of networks. More recently, the focus in this research topic has been switched to the detection of overlapping communities. In this paper, based on the matrix factorization approach, we propose a method called bounded nonnegative matrix tri-factorization (BNMTF). Using three factors in the factorization, we can explicitly model and learn the community membership of each node as well as the interaction among communities. 
Based on a unified formulation for both directed and undirected networks, the optimization problem underlying BNMTF can use either the squared loss or the generalized KL-divergence as its loss function. In addition, to address the sparsity problem as a result of missing edges, we also propose another setting in which the loss function is defined only on the observed edges. We report some experiments on real-world datasets to demonstrate the superiority of BNMTF over other related matrix factorization methods.", "title": "" }, { "docid": "0ccfe04a4426e07dcbd0260d9af3a578", "text": "We present an efficient algorithm to perform approximate offsetting operations on geometric models using GPUs. Our approach approximates the boundary of an object with point samples and computes the offset by merging the balls centered at these points. The underlying approach uses Layered Depth Images (LDI) to organize the samples into structured points and performs parallel computations using multiple cores. We use spatial hashing to accelerate intersection queries and balance the workload among various cores. Furthermore, the problem of offsetting with a large distance is decomposed into successive offsetting using smaller distances. We derive bounds on the accuracy of offset computation as a function of the sampling rate of LDI and offset distance. In practice, our GPU-based algorithm can accurately compute offsets of models represented using hundreds of thousands of points in a few seconds on GeForce GTX 580 GPU. We observe more than 100 times speedup over prior serial CPU-based approximate offset computation algorithms.", "title": "" }, { "docid": "58858f0cd3561614f1742fe7b0380861", "text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. This projection, as an extension of one-self is reactive to user's cognitive status, aware of its environment, context and user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.", "title": "" }, { "docid": "425cf4dceac465543820e2ff212e90df", "text": "Auto-enucleation is a sign of untreated psychosis. We describe two patients who presented with attempted auto-enucleation while being incarcerated. This is an observation two-case series of two young men who suffered untreated psychosis while being incarcerated. These young men showed severe self-inflicted ocular trauma during episodes of untreated psychosis. Injuries included orbital bone fracture and dehiscence of the lateral rectus in one patient and severe retinal hemorrhage and partial optic nerve avulsion in the second patient. Auto-enucleation is a severe symptom of untreated psychosis. This urgent finding can occur in a jail setting in which psychiatric care may be minimal.", "title": "" }, { "docid": "e3a7d060b145fae69d6b956b7d701a9e", "text": "[1] Eruptive activity of individual monogenetic volcanoes usually lasts a few days or weeks. However, their short lifetime does not always mean that their dynamics and structure are simple. Monogenetic cones construction is rarely witnessed from the beginning to the end, and conditions for observing their internal structure are hardly reached. 
We provide high-resolution electrical resistivity sections (10m electrode spacing) of three monogenetic cones from northeastern Spain, comparing our results to geological observations to interpret their underground continuation. The 100m maximum depth of exploration provides information on almost the entire edifices, highlighting the relationships between Strombolian and hydromagmatic deposits in two multiphase edifices. A main observation is a column of distinct resistivity centered on the Puig d’Adri volcano, which we interpret as the eruptive conduit. This method can provide valuable information on the past volcanic dynamics of monogenetic volcanic fields, which has real implications for the forecast of future activity. Citation: Barde-Cabusson, S., X. Bolós, D. Pedrazzi, R. Lovera, G. Serra, J. Martí, and A. Casas (2013), Electrical resistivity tomography revealing the internal structure of monogenetic volcanoes, Geophys. Res. Lett., 40, doi:10.1002/grl.50538.", "title": "" }, { "docid": "3aff2b8faba77dc2466ed63e0f6eb809", "text": "OBJECTIVE\nAlthough recent neuroimaging studies have shown that painful stimuli can produce activity in multiple cortical areas, the question remains as to the role of each area in particular aspects of human pain perception. To solve this problem we used transcranial magnetic stimulation (TMS) as an 'interference approach' tool to test the consequence on pain perception of disrupting activity in several areas of cortex known to be activated by painful input.\n\n\nMETHODS\nWeak CO(2) laser stimuli at an intensity around the threshold for pain were given to the dorsum of the left hand in 9 normal subjects. At variable delays (50, 150, 250, 350 ms) after the onset of the laser stimulus, pairs of TMS pulses (dTMS: interpulse interval of 50 ms, and stimulus intensity of 120% resting motor threshold) were applied in separate blocks of trials over either the right sensorimotor cortex (SMI), midline occipital cortex (OCC), second somatosensory cortex (SII), or medial frontal cortex (MFC). Subjects were instructed to judge whether or not the stimulus was painful and to point to the stimulated spot on a drawing of subject's hand.\n\n\nRESULTS\nSubjects judged that the stimulus was painful on more trials than control when dTMS was delivered over SMI at 150-200 ms after the laser stimulus; the opposite occurred when dTMS was delivered over MFC at 50-100 ms. dTMS over the SII or OCC failed to alter the pain threshold.\n\n\nCONCLUSIONS\nThese results suggest that TMS to SMI can facilitate whereas stimulation over MFC suppresses central processing of pain perception. Since there was no effect of dTMS at any of the scalp sites on the localization task, the cortical locus for point localization of pain may be different from that for perception of pain intensity or may involve a more complex mechanism than the latter.\n\n\nSIGNIFICANCE\nThis is the first report that TMS of SMI facilitates while that of MFC suppresses the central processing of pain perception. This raises the possibility of using TMS as a therapeutic device to control pain.", "title": "" }, { "docid": "30f7c423ac49cfcd19a46b487d660c9d", "text": "This letter presents two different waveguide-to-microstrip transition designs for the 76-81 GHz frequency band. Both transitions are fabricated on a grounded single layer substrate using a standard printed circuit board (PCB) fabrication process. A coplanar patch antenna and a feed technique at the non-radiating edge are used for the impedance transformation. 
In the first design, a conventional WR-10 waveguide is connected. In the second design, a WR-10 waveguide flange with an additional inductive waveguide iris is employed to improve the bandwidth. Both designs were developed for the integration of multi-channel array systems allowing an element spacing of λ0/2 or less. Measurement results of the first transition without the iris show a bandwidth of 8.5 GHz (11%) for 10 dB return loss and a minimum insertion loss (IL) of 0.35 dB. The transition using the iris increases the bandwidth to 12 GHz (15%) for 10 dB return loss and shows a minimum insertion loss of 0.6 dB at 77 GHz.", "title": "" }, { "docid": "dc813db85741a56d0f47044b9c2276d0", "text": "We study the complexity required for the implementation of multi-agent contracts under a variety of solution concepts. A contract is a mapping from strategy profiles to outcomes. Practical implementation of a contract requires it to be ''simple'', an illusive concept that needs to be formalized. A major source of complexity is the burden involving verifying the contract fulfillment (for example in a court of law). Contracts which specify a small number of outcomes are easier to verify and are less prone to disputes. We therefore measure the complexity of a contract by the number of outcomes it specifies. Our approach is general in the sense that all strategic interaction represented by a normal form game are allowed. The class of solution concepts we consider is rather exhaustive and includes Nash equilibrium with both pure and mixed strategies, dominant strategy implementation, iterative elimination of dominated strategies and strong equilibria.\n Some interesting insights can be gained from our analysis: Firstly, our results indicate that the complexity of implementation is independent of the size of the strategy spaces of the players but for some solution concepts grows with the number of players. Second, the complexity of {\\em unique} implementation is sometimes slightly larger, but not much larger than non-unique implementation. Finally and maybe surprisingly, for most solution concepts implementation with optimal cost usually does not require higher complexity than the complexity necessary for implementation at all.", "title": "" }, { "docid": "7f6b4a74f88d5ae1a4d21948aac2e260", "text": "The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To the end to provide further information on the PEP-R's psychometric properties, a large sample (N = 137) of children presenting Autistic Disorder symptoms under the age of 12 years, including low-functioning individuals, was examined. Results yielded data of interest especially in terms of: Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.", "title": "" }, { "docid": "6b57fc913894f639e023dfaf3f156003", "text": "The actions of an autonomous vehicle on the road affect and are affected by those of other drivers, whether overtaking, negotiating a merge, or avoiding an accident. 
This mutual dependence, best captured by dynamic game theory, creates a strong coupling between the vehicle’s planning and its predictions of other drivers’ behavior, and constitutes an open problem with direct implications on the safety and viability of autonomous driving technology. Unfortunately, dynamic games are too computationally demanding to meet the real-time constraints of autonomous driving in its continuous state and action space. In this paper, we introduce a novel game-theoretic trajectory planning algorithm for autonomous driving, that enables real-time performance by hierarchically decomposing the underlying dynamic game into a long-horizon “strategic” game with simplified dynamics and full information structure, and a short-horizon “tactical” game with full dynamics and a simplified information structure. The value of the strategic game is used to guide the tactical planning, implicitly extending the planning horizon, pushing the local trajectory optimization closer to global solutions, and, most importantly, quantitatively accounting for the autonomous vehicle and the human driver’s ability and incentives to influence each other. In addition, our approach admits non-deterministic models of human decisionmaking, rather than relying on perfectly rational predictions. Our results showcase richer, safer, and more effective autonomous behavior in comparison to existing techniques.", "title": "" }, { "docid": "f1c2af06078b6b5c802d773a72fc22ad", "text": "Virtual environments have the potential to become important new research tools in environment behavior research. They could even become the future (virtual) laboratories, if reactions of people to virtual environments are similar to those in real environments. The present study is an exploration of the comparability of research findings in real and virtual environments. In the study, 101 participants explored an identical space, either in reality or in a computer-simulated environment. Additionally, the presence of plants in the space was manipulated, resulting in a 2 (environment) 2 (plants) between-subjects design. Employing a broad set of measurements, we found mixed results. Performances on size estimations and a cognitive mapping task were significantly better in the real environment. Factor analyses of bipolar adjectives indicated that, although four dimensions were similar for both environments, a fifth dimension of environmental assessmenttermedarousalwas absent in the virtual environment. In addition, we found significant differences on the scores of four of the scales. However, no significant interactions appeared between environment and plants. Experience of and behavior in virtual environments have similarities to that in real environments, but there are important differences as well. We conclude that this is not only a necessary, but also a very interesting research subject for environmental psychology.", "title": "" }, { "docid": "3ef6a2d1c125d5c7edf60e3ceed23317", "text": "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent’s belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, MonteCarlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. 
These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10^18 and 10^56 states respectively. Our Monte-Carlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.", "title": "" }, { "docid": "9211b376005ca17615a326883c13458b", "text": "In this paper, we describe the ATR multilingual speech-to-speech translation (S2ST) system, which is mainly focused on translation between English and Asian languages (Japanese and Chinese). There are three main modules of our S2ST system: large-vocabulary continuous speech recognition, machine text-to-text (T2T) translation, and text-to-speech synthesis. All of them are multilingual and are designed using state-of-the-art technologies developed at ATR. A corpus-based statistical machine learning framework forms the basis of our system design. We use a parallel multilingual database consisting of over 600 000 sentences that cover a broad range of travel-related conversations. Recent evaluation of the overall system showed that speech-to-speech translation quality is high, being at the level of a person having a Test of English for International Communication (TOEIC) score of 750 out of the perfect score of 990.", "title": "" }, { "docid": "3d126bfd0404d2c57f3bf49b9e612889", "text": "S Extended Abstracts. Proceedings of the 6th International Workshop/12th L. H. Gray Workshop: Microbeam Probes of Cellular Radiation Response – 278 ABUNDANCE Analysis of Extra-Terrestrial Materials by Muon Capture: Developing a New Technique for the Armory – 139 Global Mapping of Elemental Abundance on Lunar Surface by SELENE Gamma-Ray Spectrometer – 362 Lunar and Planetary Science XXXVI, Part 12 – 381 Modal Abundances of Carbon in Ureilites: Implications for the Petrogenesis of Ureilites – 403 New Results of Metal/Silicate Partitioning of Ni and Co at Elevated Pressures and Temperatures – 156 Revised Thorium Abundances for Lunar Red Spots – 345 The Earth/Mars Dichotomy in Mg/Si and Al/Si Ratios: Is It Real? 
– 404 ACCELERATED LIFE TESTS Accelerated Concept Exploration of Future Combat Systems Using Evolutionary Algorithms and Enterprise Software – 257 ACCELERATORS Cupronickel Rotating Band Pion Production Target for Muon Colliders – 266 Electron Model of an FFAG Muon Accelerator – 270 Gas Lasers for Strong-Field Applications – 274 High Power RF Coupler Design for Muon Cooling RF Cavities – 265 Higher Order Hard Edge End Field Effects – 270 High-Intensity, High Charge-State Heavy Ion Sources – 274 ICOOL: A Simulation Code for Ionization Cooling of Muon Beams – 267 Instrumentation Channel for the MUCOOL Experiment – 269 Muon Colliders Ionization Cooling and Solenoids – 269 Muon Colliders: The Ultimate Neutrino Beamlines – 267 Potential Hazards from Neutrino Radiation at Muon Colliders – 267 RF Accelerating Structure for the Muon Cooling Experiment – 268 RHIC Beam Loss Monitor System Initial Operation – 265 Simulation, Generation, and Characterization of High Brightness Electron Source at 1 GV/m Gradient – 265 Studies for Muon Colliders at Center-ofMass Energies of 10 TeV and 100 TeV – 267 Targetry for a Mu+MuCollider – 268 Towards Advanced Electron Beam Brightness Enhancement and Conditioning – 276 V123 Beam Synchronous Encoder Module – 266 ACCESS CONTROL Underwater Acoustic Networks: Evaluation of the Impact of Media Access Control on Latency, in a Delay Constrained Network – 262 ACCIDENTS Genesis: Removing Contamination from Sample Collectors – 319 ACCUMULATORS Genesis: Removing Contamination from Sample Collectors – 319 ACCURACY Accuracy of Western North Pacific Tropical Cyclone Intensity Guidance – 171 Analysis of the Predictive Accuracy of the Recruiter Assessment Battery – 229 Observations in Improved Geolocation Accuracy Based on Signal-Dependent and Non-Signal Dependent Errors – 123 Relative Accuracy of Several LowDispersion Finite-Difference TimeDomain Schemes – 75 The Design of High-Order, Leap-Frog Integrators for Maxwell’s Equations – 247 ACETYL COMPOUNDS Interactions of Subsymptomatic Doses of Sarin with Pyridostigmine -Neurochemical, Behavioral, and Physiological Effects – 185 Low-Level Effects of VX Vapor Exposure on Pupil Size and Cholinesterase Levels in Rats – 35 ACETYLCHOLINE Interactions of Subsymptomatic Doses of Sarin with Pyridostigmine -Neurochemical, Behavioral, and Physiological Effects – 185 ACHONDRITES FeO-rich Xenoliths in the Staroye Pesyanoe Aubrite – 383 NWA 2736: An Unusual New Graphitebearing Aubrite – 396 Petrology and Multi-Isotopic Composition of Olivine Diogenite NWA 1877: A Mantle Peridotite in the Proposed HEDO Group of Meteorites – 331 Potassium-bearing Iron-Nickel Sulfides in Nature and High-Pressure Experiments: Geochemical Consequences of Potassium in the Earth’s Core – 157 ACOUSTIC ATTENUATION Improved Acoustic Blanket Developed and Tested – 283 ACOUSTIC EMISSION Acoustic Emission Based Surveillance System for Prediction of Stress Fractures – 207", "title": "" }, { "docid": "eb44e4ac9f1a3345df85ced155909661", "text": "Domain adaptation (DA) attempts to enhance the generalization capability of classifier through narrowing the gap of the distributions across domains. This paper focuses on unsupervised domain adaptation where labels are not available in target domain. Most existing approaches explore the domaininvariant features shared by domains but ignore the discriminative information of source domain. 
To address this issue, we propose a discriminative domain adaptation method (DDA) to reduce domain shift by seeking a common latent subspace jointly using supervised sparse coding (SSC) and discriminative regularization term. Particularly, DDA adapts SSC to yield discriminative coefficients of target data and further unites with discriminative regularization term to induce a common latent subspace across domains. We show that both strategies can boost the ability of transferring knowledge from source to target domain. Experiments on two real world datasets demonstrate the effectiveness of our proposed method over several existing state-of-the-art domain adaptation methods.", "title": "" }, { "docid": "c76cfe38185146f60a416eedac962750", "text": "OBJECTIVE\nRepeated public inquiries into child abuse tragedies in Britain demonstrate the level of public concern about the services designed to protect children. These inquiries identify faults in professionals' practice but the similarities in their findings indicate that they are having insufficient impact on improving practice. This study is based on the hypothesis that the recurrent errors may be explicable as examples of the typical errors of human reasoning identified by psychological research.\n\n\nMETHODS\nThe sample comprised all child abuse inquiry reports published in Britain between 1973 and 1994 (45 in total). Using a content analysis and a framework derived from psychological research on reasoning, a study was made of the reasoning of the professionals involved and the findings of the inquiries.\n\n\nRESULTS\nIt was found that professionals based assessments of risk on a narrow range of evidence. It was biased towards the information readily available to them, overlooking significant data known to other professionals. The range was also biased towards the more memorable data, that is, towards evidence that was vivid, concrete, arousing emotion and either the first or last information received. The evidence was also often faulty, due, in the main, to biased or dishonest reporting or errors in communication. A critical attitude to evidence was found to correlate with whether or not the new information supported the existing view of the family. A major problem was that professionals were slow to revise their judgements despite a mounting body of evidence against them.\n\n\nCONCLUSIONS\nErrors in professional reasoning in child protection work are not random but predictable on the basis of research on how people intuitively simplify reasoning processes in making complex judgements. These errors can be reduced if people are aware of them and strive consciously to avoid them. Aids to reasoning need to be developed that recognize the central role of intuitive reasoning but offer methods for checking intuitive judgements more rigorously and systematically.", "title": "" } ]
scidocsrr
f9c29e7897c148e59f5399c3e82b882a
Secure Distributed Deduplication Systems with Improved Reliability
[ { "docid": "d9c244815775043d47b09cbb79a7b122", "text": "Cloud storage is an emerging service model that enables individuals and enterprises to outsource the storage of data backups to remote cloud providers at a low cost. However, cloud clients must enforce security guarantees of their outsourced data backups. We present Fade Version, a secure cloud backup system that serves as a security layer on top of today's cloud storage services. Fade Version follows the standard version-controlled backup design, which eliminates the storage of redundant data across different versions of backups. On top of this, Fade Version applies cryptographic protection to data backups. Specifically, it enables fine-grained assured deletion, that is, cloud clients can assuredly delete particular backup versions or files on the cloud and make them permanently inaccessible to anyone, while other versions that share the common data of the deleted versions or files will remain unaffected. We implement a proof-of-concept prototype of Fade Version and conduct empirical evaluation atop Amazon S3. We show that Fade Version only adds minimal performance overhead over a traditional cloud backup service that does not support assured deletion.", "title": "" }, { "docid": "528b17b55172cbf22e77a14db4334ba6", "text": "Recently, Halevi et al. (CCS '11) proposed a cryptographic primitive called proofs of ownership (PoW) to enhance security of client-side deduplication in cloud storage. In a proof of ownership scheme, any owner of the same file F can prove to the cloud storage that he/she owns file F in a robust and efficient way, in the bounded leakage setting where a certain amount of efficiently-extractable information about file F is leaked. Following this work, we propose a secure client-side deduplication scheme, with the following advantages: our scheme protects data confidentiality (and some partial information) against both outside adversaries and honest-but-curious cloud storage server, while Halevi et al. trusts cloud storage server in data confidentiality; our scheme is proved secure w.r.t. any distribution with sufficient min-entropy, while Halevi et al. (the last and the most practical construction) is particular to a specific type of distribution (a generalization of \"block-fixing\" distribution) of input files.\n The cost of our improvements is that we adopt a weaker leakage setting: We allow a bounded amount one-time leakage of a target file before our scheme starts to execute, while Halevi et al. allows a bounded amount multi-time leakage of the target file before and after their scheme starts to execute. To the best of our knowledge, previous works on client-side deduplication prior Halevi et al. do not consider any leakage setting.", "title": "" } ]
[ { "docid": "bed50a61cb79e20ff13243a9ddf8151c", "text": "Conventional copy-move forgery detection methods mostly make use of hand-crafted features to conduct feature extraction and patch matching. However, the discriminative capability and the invariance to particular transformations of hand-crafted features are not good enough, which imposes restrictions on the performance of copy-move forgery detection. To solve this problem, we propose to utilize Convolutional Kernel Network to conduct copy-move forgery detection. Convolutional Kernel Network is a kind of data-driven local descriptor with the deep convolutional architecture. It can achieve competitive performance for its excellent discriminative capability. To well adapt to the condition of copy-move forgery detection, three significant improvements are made: First of all, our Convolutional Kernel Network is reconstructed for GPU. The GPU-based reconstruction results in high efficiency and makes it possible to apply to thousands of patches matching in copy-move forgery detection. Second, a segmentation-based keypoint distribution strategy is proposed to generate homogeneous distributed keypoints. Last but not least, an adaptive oversegmentation method is adopted. Experiments on the publicly available datasets are conducted to testify the state-of-the-art performance of the proposed method.", "title": "" }, { "docid": "b5ab4c11feee31195fdbec034b4c99d9", "text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends", "title": "" }, { "docid": "efbaec32e42bdb9f12341d6be588a985", "text": "Bacterial quorum sensing (QS) is a density dependent communication system that regulates the expression of certain genes including production of virulence factors in many pathogens. Bioactive plant extract/compounds inhibiting QS regulated gene expression may be a potential candidate as antipathogenic drug. In this study anti-QS activity of peppermint (Mentha piperita) oil was first tested using the Chromobacterium violaceum CVO26 biosensor. Further, the findings of the present investigation revealed that peppermint oil (PMO) at sub-Minimum Inhibitory Concentrations (sub-MICs) strongly interfered with acyl homoserine lactone (AHL) regulated virulence factors and biofilm formation in Pseudomonas aeruginosa and Aeromonas hydrophila. The result of molecular docking analysis attributed the QS inhibitory activity exhibited by PMO to menthol. 
Assessment of the ability of menthol to interfere with QS systems of various Gram-negative pathogens comprising diverse AHL molecules revealed that it reduced the AHL-dependent production of violacein, virulence factors, and biofilm formation, indicating broad-spectrum anti-QS activity. Using two Escherichia coli biosensors, MG4/pKDT17 and pEAL08-2, we also confirmed that menthol inhibited both the las and pqs QS systems. Further, findings of the in vivo studies with menthol on the nematode model Caenorhabditis elegans showed significantly enhanced survival of the nematode. Our data identified menthol as a novel broad-spectrum QS inhibitor.", "title": "" }, { "docid": "003be771526441c38f91f96b7ecb802f", "text": "Robotics research and education have gained significant attention in recent years due to increased development and commercial deployment of industrial and service robots. A majority of researchers working on robot grasping and object manipulation tend to utilize commercially available robot-manipulators equipped with various end effectors for experimental studies. However, commercially available robotic grippers are often expensive and are not easy to modify for specific purposes. To extend the choice of robotic end effectors freely available to researchers and educators, we present an open-source low-cost three-finger robotic gripper platform for research and educational purposes. The 3-D design model of the gripper is presented and manufactured with a minimal number of 3-D-printed components and an off-the-shelf servo actuator. An underactuated finger and gear train mechanism, with an overall gripper assembly design, are described in detail, followed by illustrations and a discussion of the gripper grasping performance and possible gripper platform modifications. The presented open-source gripper platform computer-aided design model is released for downloading on the authors' research lab website (www.alaris.kz) and can be utilized by robotics researchers and educators as a design platform to build their own robotic end effector solutions for research and educational purposes.", "title": "" }, { "docid": "7973cb32f19b61b0cc88671e4939e32b", "text": "Trolling behaviors are extremely diverse, varying by context, tactics, motivations, and impact. Definitions, perceptions of, and reactions to online trolling behaviors vary. Since not all trolling is equal or deviant, managing these behaviors requires context-sensitive strategies. This paper describes appropriate responses to various acts of trolling in context, based on perceptions of college students in North America. In addition to strategies for dealing with deviant trolling, this paper illustrates the complexity of dealing with socially and politically motivated trolling.", "title": "" }, { "docid": "bfa178f35027a55e8fd35d1c87789808", "text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. 
We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data.", "title": "" }, { "docid": "e5c76ea59f7de3a2351823347b4b126c", "text": "We present a deformation-driven approach to topology-varying 3D shape correspondence. In this paradigm, the best correspondence between two shapes is the one that results in a minimal-energy, possibly topology-varying, deformation that transforms one shape to conform to the other while respecting the correspondence. Our deformation model, called GeoTopo transform, allows both geometric and topological operations such as part split, duplication, and merging, leading to fine-grained and piecewise continuous correspondence results. The key ingredient of our correspondence scheme is a deformation energy that penalizes geometric distortion, encourages structure preservation, and simultaneously allows topology changes. This is accomplished by connecting shape parts using structural rods, which behave similarly to virtual springs but simultaneously allow the encoding of energies arising from geometric, structural, and topological shape variations. Driven by the combined deformation energy, an optimal shape correspondence is obtained via a pruned beam search. We demonstrate our deformation-driven correspondence scheme on extensive sets of man-made models with rich geometric and topological variation and compare the results to state-of-the-art approaches.", "title": "" }, { "docid": "a5b1c9d83283153cb46f062efec49f10", "text": "We present our experience with QUIC, an encrypted, multiplexed, and low-latency transport protocol designed from the ground up to improve transport performance for HTTPS traffic and to enable rapid deployment and continued evolution of transport mechanisms. QUIC has been globally deployed at Google on thousands of servers and is used to serve traffic to a range of clients including a widely-used web browser (Chrome) and a popular mobile video streaming app (YouTube). We estimate that 7% of Internet traffic is now QUIC. We describe our motivations for developing a new transport, the principles that guided our design, the Internet-scale process that we used to perform iterative experiments on QUIC, performance improvements seen by our various services, and our experience deploying QUIC globally. We also share lessons about transport design and the Internet ecosystem that we learned from our deployment.", "title": "" }, { "docid": "8b0850f168b0dc0493589eeb4be05eb5", "text": "Feature models describe the common and variable characteristics of a product line. Their advantages are well recognized in product line methods. Unfortunately, creating a feature model for an existing project is time-consuming and requires substantial effort from a modeler.\n We present procedures for reverse engineering feature models based on a crucial heuristic for identifying parents - the major challenge of this task. We also automatically recover constructs such as feature groups, mandatory features, and implies/excludes edges. We evaluate the technique on two large-scale software product lines with existing reference feature models--the Linux and eCos kernels--and FreeBSD, a project without a feature model. Our heuristic is effective across all three projects by ranking the correct parent among the top results for a vast majority of features. 
The procedures effectively reduce the information a modeler has to consider from thousands of choices to typically five or less.", "title": "" }, { "docid": "9384859ce11d5cb3de135ce156fef73c", "text": "Endosymbiosis is a mutualistic, parasitic or commensal symbiosis in which one symbiont is living within the body of another organism. Such symbiotic relationship with free-living amoebae and arthropods has been reported with a large biodiversity of microorganisms, encompassing various bacterial clades and to a lesser extent some fungi and viruses. By contrast, current knowledge on symbionts of nematodes is still mainly restricted to Wolbachia and its interaction with filarial worms that lead to increased pathogenicity of the infected nematode. In this review article, we aim to highlight the main characteristics of symbionts in term of their ecology, host cell interactions, parasitism and co-evolution, in order to stimulate future research in a field that remains largely unexplored despite the availability of modern tools.", "title": "" }, { "docid": "ba9de90efb41ef69e64a6880e420e0ac", "text": "The emergence of chronic inflammation during obesity in the absence of overt infection or well-defined autoimmune processes is a puzzling phenomenon. The Nod-like receptor (NLR) family of innate immune cell sensors, such as the nucleotide-binding domain, leucine-rich–containing family, pyrin domain–containing-3 (Nlrp3, but also known as Nalp3 or cryopyrin) inflammasome are implicated in recognizing certain nonmicrobial originated 'danger signals' leading to caspase-1 activation and subsequent interleukin-1β (IL-1β) and IL-18 secretion. We show that calorie restriction and exercise-mediated weight loss in obese individuals with type 2 diabetes is associated with a reduction in adipose tissue expression of Nlrp3 as well as with decreased inflammation and improved insulin sensitivity. We further found that the Nlrp3 inflammasome senses lipotoxicity-associated increases in intracellular ceramide to induce caspase-1 cleavage in macrophages and adipose tissue. Ablation of Nlrp3 in mice prevents obesity-induced inflammasome activation in fat depots and liver as well as enhances insulin signaling. Furthermore, elimination of Nlrp3 in obese mice reduces IL-18 and adipose tissue interferon-γ (IFN-γ) expression, increases naive T cell numbers and reduces effector T cell numbers in adipose tissue. Collectively, these data establish that the Nlrp3 inflammasome senses obesity-associated danger signals and contributes to obesity-induced inflammation and insulin resistance.", "title": "" }, { "docid": "fe517545fc4dcc7bde881b7c96e66ecc", "text": "Smoothness is characteristic of coordinated human movements, and stroke patients' movements seem to grow more smooth with recovery. We used a robotic therapy device to analyze five different measures of movement smoothness in the hemiparetic arm of 31 patients recovering from stroke. Four of the five metrics showed general increases in smoothness for the entire patient population. However, according to the fifth metric, the movements of patients with recent stroke grew less smooth over the course of therapy. 
This pattern was reproduced in a computer simulation of recovery based on submovement blending, suggesting that progressive blending of submovements underlies stroke recovery.", "title": "" }, { "docid": "1991322dce13ee81885f12322c0e0f79", "text": "The quality of the interpretation of the sentiment in the online buzz in the social media and the online news can determine the predictability of financial markets and cause huge gains or losses. That is why a number of researchers have turned their full attention to the different aspects of this problem lately. However, there is no well-rounded theoretical and technical framework for approaching the problem to the best of our knowledge. We believe the existing lack of such clarity on the topic is due to its interdisciplinary nature that involves at its core both behavioral-economic topics as well as artificial intelligence. We dive deeper into the interdisciplinary nature and contribute to the formation of a clear frame of discussion. We review the related works that are about market prediction based on onlinetext-mining and produce a picture of the generic components that they all have. We, furthermore, compare each system with the rest and identify their main differentiating factors. Our comparative analysis of the systems expands onto the theoretical and technical foundations behind each. This work should help the research community to structure this emerging field and identify the exact aspects which require further research and are of special significance. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0bd30308a11711f1dc71b8ff8ae8e80c", "text": "Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. 
To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.", "title": "" }, { "docid": "d44a76f19aa8292b156914e821b1361d", "text": "Current concepts in the steps of upper limb development and the way the limb is patterned along its 3 spatial axes are reviewed. Finally, the embryogenesis of various congenital hand anomalies is delineated with an emphasis on the pathogenetic basis for each anomaly.", "title": "" }, { "docid": "946a7243eb00f84d7ce1a804f4a86d51", "text": "This paper seeks to contribute to the growing literature on children and computer programming by focusing on a programming language for children in Kindergarten through second grade. Sixty-two students were exposed to a 6-week curriculum using ScartchJr. They learned foundational programming concepts and applied those concepts to create personally meaningful projects using the ScratchJr programming app. This paper addresses the following research question: Which ScratchJr programming blocks do young children choose to use in their own projects after they have learned them all through a tailored programming curriculum? Data was collected in the form of the students’ combined 977 projects, and analyzed for patterns and differences across grades. This paper summarizes findings and suggests potential directions for future research. Implications for the use of ScratchJr as an introductory programming language for young children are also discussed.", "title": "" }, { "docid": "1fd9db81e41fc3b9a76a52cc9a0618c1", "text": "Semantic parsing is a rich fusion of the logical and the statistical worlds.", "title": "" }, { "docid": "9097bf29a9ad2b33919e0667d20bf6d7", "text": "Object detection, though gaining popularity, has largely been limited to detection from the ground or from satellite imagery. Aerial images, where the target may be obfuscated from the environmental conditions, angle-of-attack, and zoom level, pose a more significant challenge to correctly detect targets in. This paper describes the implementation of a regional convolutional neural network to locate and classify objects across several categories in complex, aerial images. Our current results show promise in detecting and classifying objects. Further adjustments to the network and data input should increase the localization and classification accuracies.", "title": "" }, { "docid": "3fd14fcfe8240456bc38d5492c3510a4", "text": "This paper presents a study on adjacent channel interference in millimeter-wave small cell systems based on IEEE 802.11ad/WiGig. It includes hardware prototype development, interference measurements, and performance evaluation of an interference suppression technique. The access point prototype employs three RF modules with 120° beam steering capability, thus enabling 360° coverage. Using the developed prototype, interference measurements were performed and the packet error degradation due to adjacent channel interference was observed. To mitigate the performance degradation, an interference suppression technique using a two stream receiver architecture was applied. 
The subsequent measurements showed improvement in EVM and also expansion of the cell's coverage area, demonstrating the effectiveness of the applied technique for small cell systems using IEEE 802.11ad/WiGig.", "title": "" }, { "docid": "e975d09cb8ae84d709bce78328c77da8", "text": "Topic models are a family of statistical-based algorithms to summarize, explore and index large collections of text documents. After a decade of research led by computer scientists, topic models have spread to social science as a new generation of data-driven social scientists have searched for tools to explore large collections of unstructured text. Recently, social scientists have contributed to topic model literature with developments in causal inference and tools for handling the problem of multi-modality. In this paper, I provide a literature review on the evolution of topic modeling including extensions for document covariates, methods for evaluation and interpretation, and advances in interactive visualizations along with each aspect’s relevance and application for social science research. Keywords—computational social science, computer-assisted text analysis, visual analytics, structural topic model", "title": "" } ]
scidocsrr
87f245c9a2145313c26326b3afda0f85
An intelligent content-based image retrieval system for clinical decision support in brain tumor diagnosis
[ { "docid": "6101b3c76db195a68fc46cb99c0cda1c", "text": "We review two clustering algorithms (hard c-means and single linkage) and three indexes of crisp cluster validity (Hubert's statistics, the Davies-Bouldin index, and Dunn's index). We illustrate two deficiencies of Dunn's index which make it overly sensitive to noisy clusters and propose several generalizations of it that are not as brittle to outliers in the clusters. Our numerical examples show that the standard measure of interset distance (the minimum distance between points in a pair of sets) is the worst (least reliable) measure upon which to base cluster validation indexes when the clusters are expected to form volumetric clouds. Experimental results also suggest that intercluster separation plays a more important role in cluster validation than cluster diameter. Our simulations show that while Dunn's original index has operational flaws, the concept it embodies provides a rich paradigm for validation of partitions that have cloud-like clusters. Five of our generalized Dunn's indexes provide the best validation results for the simulations presented.", "title": "" }, { "docid": "999eda741a3c132ac8640e55721b53bb", "text": "This paper presents an overview of color and texture descriptors that have been approved for the Final Committee Draft of the MPEG-7 standard. The color and texture descriptors that are described in this paper have undergone extensive evaluation and development during the past two years. Evaluation criteria include effectiveness of the descriptors in similarity retrieval, as well as extraction, storage, and representation complexities. The color descriptors in the standard include a histogram descriptor that is coded using the Haar transform, a color structure histogram, a dominant color descriptor, and a color layout descriptor. The three texture descriptors include one that characterizes homogeneous texture regions and another that represents the local edge distribution. A compact descriptor that facilitates texture browsing is also defined. Each of the descriptors is explained in detail by their semantics, extraction and usage. Effectiveness is documented by experimental results.", "title": "" } ]
[ { "docid": "a9fd8529dc3511dbf10ca76e776e35c1", "text": "Several works have separated the pressure waveform p in systemic arteries into reservoir p(r) and excess p(exc) components, p = p(r) + p(exc), to improve pulse wave analysis, using windkessel models to calculate the reservoir pressure. However, the mechanics underlying this separation and the physical meaning of p(r) and p(exc) have not yet been established. They are studied here using the time-domain, inviscid and linear one-dimensional (1-D) equations of blood flow in elastic vessels. Solution of these equations in a distributed model of the 55 larger human arteries shows that p(r) calculated using a two-element windkessel model is space-independent and well approximated by the compliance-weighted space-average pressure of the arterial network. When arterial junctions are well-matched for the propagation of forward-travelling waves, p(r) calculated using a three-element windkessel model is space-dependent in systole and early diastole and is made of all the reflected waves originated at the terminal (peripheral) reflection sites, whereas p(exc) is the sum of the rest of the waves, which are obtained by propagating the left ventricular flow ejection without any peripheral reflection. In addition, new definitions of the reservoir and excess pressures from simultaneous pressure and flow measurements at an arbitrary location are proposed here. They provide valuable information for pulse wave analysis and overcome the limitations of the current two- and three-element windkessel models to calculate p(r).", "title": "" }, { "docid": "faa8bb95a4b05bed78dbdfaec1cd147c", "text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. Our system ranked first among the official submissions of subtask B.", "title": "" }, { "docid": "0f37f7306f879ca0b5d35516a64818fb", "text": "Much of empirical corporate finance focuses on sources of the demand for various forms of capital, not the supply. Recently, this has changed. Supply effects of equity and credit markets can arise from a combination of three ingredients: investor tastes, limited intermediation, and corporate opportunism. Investor tastes when combined with imperfectly competitive intermediaries lead prices and interest rates to deviate from fundamental values. Opportunistic firms respond by issuing securities with high prices and investing the proceeds. A link between capital market prices and corporate finance can in principle come from either supply or demand. This framework helps to organize empirical approaches that more precisely identify and quantify supply effects through variation in one of these three ingredients. Taken as a whole, the evidence shows that shifting equity and credit market conditions play an important role in dictating corporate finance and investment. 181 A nn u. R ev . F in . E co n. 2 00 9. 1: 18 120 5. D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by H ar va rd U ni ve rs ity o n 02 /1 1/ 14 . 
F or p er so na l u se o nl y.", "title": "" }, { "docid": "46fdb284160db9b9b10fed2745cd1f59", "text": "The TCB shall be found resistant to penetration. Near flawless penetration testing is a requirement for high-rated secure systems — those rated above B1 based on the Trusted Computer System Evaluation Criteria (TCSEC) and its Trusted Network and Database Interpretations (TNI and TDI). Unlike security functional testing, which demonstrates correct behavior of the product's advertised security controls, penetration testing is a form of stress testing which exposes weaknesses — that is, flaws — in the trusted computing base (TCB). This essay describes the Flaw Hypothesis Methodology (FHM), the earliest comprehensive and widely used method for conducting penetrations testing. It reviews motivation for penetration testing and penetration test planning, which establishes the goals, ground rules, and resources available for testing. The TCSEC defines \" flaw \" as \" an error of commission, omission, or oversight in a system that allows protection mechanisms to be bypassed. \" This essay amplifies the definition of a flaw as a demonstrated unspecified capability that can be exploited to violate security policy. The essay provides an overview of FHM and its analogy to a heuristic-based strategy game. The 10 most productive ways to generate hypothetical flaws are described as part of the method, as are ways to confirm them. A review of the results and representative generic flaws discovered over the past 20 years is presented. The essay concludes with the assessment that FHM is applicable to the European ITSEC and with speculations about future methods of penetration analysis using formal methods, that is, mathematically 270 Information Security specified design, theorems, and proofs of correctness of the design. One possible development could be a rigorous extension of FHM to be integrated into the development process. This approach has the potential of uncovering problems early in the design , enabling iterative redesign. A security threat exists when there are the opportunity, motivation, and technical means to attack: the when, why, and how. FHM deals only with the \" how \" dimension of threats. It is a requirement for high-rated secure systems (for example, TCSEC ratings above B1) that penetration testing be completed without discovery of security flaws in the evaluated product, as part of a product or system evaluation [DOD85, NCSC88b, NCSC92]. Unlike security functional testing, which demonstrates correct behavior of the product's advertised security controls, penetration testing is a form of stress testing, which exposes weaknesses or flaws in the trusted computing base (TCB). It has …", "title": "" }, { "docid": "2de69420e8062f267b64bcf3342bd8b0", "text": "This paper describes a direct-sequence spread-spectrum superregenerative receiver using a PN code synchronization loop based on the tan-dither technique. The receiver minimizes overall complexity by using a single signal processing path for data detection and PN code synchronization. An analytical study on the loop dynamics is presented, and the conditions for optimum performance are examined. 
Experimental results in the 433 MHz European ISM band confirm the receiver's ability to perform acquisition and tracking, achieving a sensitivity of -103 dBm and an input dynamic range of 65 dB.", "title": "" }, { "docid": "d58425a613f9daea2677d37d007f640e", "text": "Recently, the improved bag of features (BoF) model with locality-constrained linear coding (LLC) and spatial pyramid matching (SPM) achieved state-of-the-art performance in image classification. However, only adopting SPM to exploit spatial information is not enough for satisfactory performance. In this paper, we use hierarchical temporal memory (HTM) cortical learning algorithms to extend this LLC & SPM based model. HTM regions consisting of HTM cells are constructed to spatially pool the LLC codes. Each cell receives a subset of LLC codes, and adjacent subsets are overlapped so that more spatial information can be captured. Additionally, HTM cortical learning algorithms have two processes: a learning phase, which makes each HTM cell receive only the most frequent LLC codes, and an inhibition phase, which ensures that the output of HTM regions is sparse. The experimental results on the Caltech 101 and UIUC-Sport datasets show the improvement over the original LLC & SPM based model.", "title": "" }, { "docid": "4de597faec62e1f6091cb72a721bc5ea", "text": "In this paper, we propose a unified facial beautification framework with respect to skin homogeneity, lighting, and color. A novel region-aware mask is constructed for skin manipulation, which can automatically select the edited regions with great precision. Inspired by the state-of-the-art edit propagation techniques, we present an adaptive edge-preserving energy minimization model with a spatially variant parameter and a high-dimensional guided feature space for mask generation. Using region-aware masks, our method facilitates more flexible and accurate facial skin enhancement while the complex manipulations are simplified considerably. In our beautification framework, a portrait is decomposed into smoothness, lighting, and color layers by an edge-preserving operator. Next, facial landmarks and significant features are extracted as input constraints for mask generation. After three region-aware masks have been obtained, a user can perform facial beautification simply by adjusting the skin parameters. Furthermore, the combinations of parameters can be optimized automatically, depending on the data priors and psychological knowledge. We performed both qualitative and quantitative evaluation for our method using faces with different genders, races, ages, poses, and backgrounds from various databases. The experimental results demonstrate that our technique is superior to previous methods and comparable to commercial systems, for example, PicTreat, Portrait+, and Portraiture.", "title": "" }, { "docid": "edb17cb58e7fd5862c84b53e9c9f2915", "text": "Online gaming has gained millions of users around the globe, who have been shown to virtually connect, to befriend, and to accumulate online social capital. Today, as online gaming has become a major leisure time activity, it seems worthwhile asking for the underlying factors of online social capital acquisition and whether online social capital increases offline social support. 
In the present study, we proposed that the online game players’ physical and social proximity as well as their mutual familiarity influence bridging and bonding social capital. Physical proximity was predicted to positively influence bonding social capital online. Social proximity and familiarity were hypothesized to foster both online bridging and bonding social capital. Additionally, we hypothesized that both social capital dimensions are positively related to offline social support. The hypotheses were tested with regard to members of e-sports clans. In an online survey, participants (N = 811) were recruited via the online portal of the Electronic Sports League (ESL) in several countries. The data confirmed all hypotheses, with the path model exhibiting an excellent fit. The results complement existing research by showing that online gaming may result in strong social ties, if gamers engage in online activities that continue beyond the game and extend these with offline activities. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a4164039cea3951982373edebd53d636", "text": "Vehicle detection with orientation estimation in aerial images has received widespread interest as it is important for intelligent traffic management. This is a challenging task, not only because of the complex background and relatively small size of the target, but also the various orientations of vehicles in aerial images captured from the top view. The existing methods for oriented vehicle detection need several post-processing steps to generate final detection results with orientation, which are not efficient enough. Moreover, they can only get discrete orientation information for each target. In this paper, we present an end-to-end single convolutional neural network to generate arbitrarily-oriented detection results directly. Our approach, named Oriented_SSD (Single Shot MultiBox Detector, SSD), uses a set of default boxes with various scales on each feature map location to produce detection bounding boxes. Meanwhile, offsets are predicted for each default box to better match the object shape, which contain the angle parameter for oriented bounding boxes’ generation. Evaluation results on the public DLR Vehicle Aerial dataset and Vehicle Detection in Aerial Imagery (VEDAI) dataset demonstrate that our method can detect both the location and orientation of the vehicle with high accuracy and fast speed. For test images in the DLR Vehicle Aerial dataset with a size of 5616× 3744, our method achieves 76.1% average precision (AP) and 78.7% correct direction classification at 5.17 s on an NVIDIA GTX-1060.", "title": "" }, { "docid": "fea10e9e5bf2c930d609d3fb48f1efaf", "text": "Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent and highly expressive SRL model is Markov Logic Networks (MLNs), but the expressivity comes at the cost of learning complexity. Most of the current methods for learning MLN structure follow a two-step approach where first they search through the space of possible clauses (i.e. structures) and then learn weights via gradient descent for these clauses. We present a functional-gradient boosting algorithm to learn both the weights (in closed form) and the structure of the MLN simultaneously. Moreover most of the learning approaches for SRL apply the closed-world assumption, i.e., whatever is not observed is assumed to be false in the world. We attempt to open this assumption. 
We extend our algorithm for MLN structure learning to handle missing data by using an EM-based approach and show this algorithm can also be used to learn Relational Dependency Networks and relational policies. Our results in many domains demonstrate that our approach can effectively learn MLNs even in the presence of missing data.", "title": "" }, { "docid": "59597ab549189c744aae774259f84745", "text": "This paper addresses the problem of multi-view people occupancy map estimation. Existing solutions either operate per-view, or rely on a background subtraction preprocessing. Both approaches lessen the detection performance as scenes become more crowded. The former does not exploit joint information, whereas the latter deals with ambiguous input due to the foreground blobs becoming more and more interconnected as the number of targets increases. Although deep learning algorithms have proven to excel on remarkably numerous computer vision tasks, such a method has not been applied yet to this problem. In large part this is due to the lack of large-scale multi-camera data-set. The core of our method is an architecture which makes use of monocular pedestrian data-set, available at larger scale than the multi-view ones, applies parallel processing to the multiple video streams, and jointly utilises it. Our end-to-end deep learning method outperforms existing methods by large margins on the commonly used PETS 2009 data-set. Furthermore, we make publicly available a new three-camera HD data-set.", "title": "" }, { "docid": "6fd511ffcdb44c39ecad1a9f71a592cc", "text": "s Providing Supporting Policy Compositional Operators Functional Composition Network Layered Abstract Topologies Topological Decomposition Packet Extensible Headers Policy & Network Abstractions Pyretic (Contributions)", "title": "" }, { "docid": "a2adeb9448c699bbcbb10d02a87e87a5", "text": "OBJECTIVE\nTo quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity.\n\n\nMETHODS\nThis study used a content analysis of 127 apps from Apple's (App Store) Health & Fitness category. Coders downloaded the apps and then used an established theory-based instrument to rate each app's inclusion of theoretical constructs from prominent behavior change theories. Five common items were used to measure 20 theoretical constructs, for a total of 100 items. A theory score was calculated for each app. Multiple regression analysis was used to identify factors associated with higher theory scores.\n\n\nRESULTS\nApps were generally observed to be lacking in theoretical content. Theory scores ranged from 1 to 28 on a 100-point scale. The health belief model was the most prevalent theory, accounting for 32% of all constructs. Regression analyses indicated that higher priced apps and apps that addressed a broader activity spectrum were associated with higher total theory scores.\n\n\nCONCLUSION\nIt is not unexpected that apps contained only minimal theoretical content, given that app developers come from a variety of backgrounds and many are not trained in the application of health behavior theory. The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive. There is an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of apps. 
These future collaborations between health behavior change experts and app developers could foster apps superior in both theory and programming possibly resulting in better health outcomes.", "title": "" }, { "docid": "e045619ede30efb3338e6278f23001d7", "text": "Particle filtering has become a standard tool for non-parametric estimation in computer vision tracking applications. It is an instance of stochastic search. Each particle represents a possible state of the system. Higher concentration of particles at any given region of the search space implies higher probabilities. One of its major drawbacks is the exponential growth in the number of particles for increasing dimensions in the search space. We present a graph based filtering framework for hierarchical model tracking that is capable of substantially alleviate this issue. The method relies on dividing the search space in subspaces that can be estimated separately. Low correlated subspaces may be estimated with parallel, or serial, filters and have their probability distributions combined by a special aggregator filter. We describe a new algorithm to extract parameter groups, which define the subspaces, from the system model. We validate our method with different graph structures within a simple hand tracking experiment with both synthetic and real data", "title": "" }, { "docid": "38c2508c0da3826f767336ae46cac505", "text": "Caricature generation is an interesting yet challenging task. The primary goal is to generate a plausible caricature with reasonable exaggerations given a face image. Conventional caricature generation approaches mainly use low-level geometric transformations such as image warping to generate exaggerated images, which lack richness and diversity in terms of content and style. The recent progress in generative adversarial networks (GANs) makes it possible to learn an image-to-image transformation from data so as to generate diverse output images. However, directly applying GAN-based models to this task leads to unsatisfactory results due to the large variance in the caricature distribution. Moreover, some models require pixel-wisely paired training data which largely limits their usage scenarios. In this paper, we model caricature generation as a weakly paired image-to-image translation task, and propose CariGAN to address these issues. Specifically, to enforce reasonable exaggeration and facial deformation, facial landmarks are adopted as an additional condition to constrain the generated image. Furthermore, an image fusion mechanism is designed to encourage our model to focus on the key facial parts so that more vivid details in these regions can be generated. Finally, a diversity loss is proposed to encourage the model to produce diverse results to help alleviate the “mode collapse” problem of the conventional GAN-based models. Extensive experiments on a large-scale “WebCaricature” dataset show that the proposed CariGAN can generate more plausible caricatures with larger diversity compared with the state-of-the-art models.", "title": "" }, { "docid": "466c0d9436e1f1878aaafa2297022321", "text": "Acetic acid was used topically at concentrations of between 0.5% and 5% to eliminate Pseudomonas aeruginosa from the burn wounds or soft tissue wounds of 16 patients. In-vitro studies indicated the susceptibility of P. aeruginosa to acetic acid; all strains exhibited a minimum inhibitory concentration of 2 per cent. P. aeruginosa was eliminated from the wounds of 14 of the 16 patients within two weeks of treatment. 
Acetic acid was shown to be an inexpensive and efficient agent for the elimination of P. aeruginosa from burn and soft tissue wounds.", "title": "" }, { "docid": "188d9e1b0244aa7f68610dab9d852ab9", "text": "We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user’s unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.", "title": "" }, { "docid": "48c8ee0758da2897513b2b5a18ebe7db", "text": "The Internet is smoothly migrating from an Internet of people towards an Internet of Things (IoT). By 2020, it is expected to have 50 billion things connected to the Internet. However, such a migration induces a strong level of complexity when handling interoperability between the heterogeneous Internet things, e.g., RFIDs (Radio Frequency Identification), mobile handheld devices, and wireless sensors. In this context, a couple of standards have been already set, e.g., IPv6, 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks), and M2M (Machine to Machine communications). In this paper, we focus on the integration of wireless sensor networks into IoT, and shed further light on the subtleties of such integration. We present a real-world test bed deployment where wireless sensors are used to control electrical appliances in a smart building. Encountered problems are highlighted and suitable solutions are presented.", "title": "" }, { "docid": "584456ef251fbf31363832fc82bd3d42", "text": "Neural network architectures found by sophistic search algorithms achieve strikingly good test performance, surpassing most human-crafted network models by significant margins. Although computationally efficient, their design is often very complex, impairing execution speed. Additionally, finding models outside of the search space is not possible by design. While our space is still limited, we implement undiscoverable expert knowledge into the economic search algorithm Efficient Neural Architecture Search (ENAS), guided by the design principles and architecture of ShuffleNet V2. While maintaining baselinelike 2.85% test error on CIFAR-10, our ShuffleNASNets are significantly less complex, require fewer parameters, and are two times faster than the ENAS baseline in a classification task. These models also scale well to a low parameter space, achieving less than 5% test error with little regularization and only 236K parameters.", "title": "" }, { "docid": "575da85b3675ceaec26143981dbe9b53", "text": "People are increasingly required to disclose personal information to computerand Internetbased systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity was used primarily by males in response to an income question in a high privacy condition. 
Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed.", "title": "" } ]
scidocsrr
505ece06e901cc2ffdacebd166dfcb5a
SphereFace: Deep Hypersphere Embedding for Face Recognition
[ { "docid": "c9ecb6ac5417b5fea04e5371e4250361", "text": "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor-made for learning a ranking for image information retrieval. Here we demonstrate, using various datasets, that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss possible future use as a framework for unsupervised learning.", "title": "" } ]
[ { "docid": "62b103e6316c82f51e5c8da090dd19a9", "text": "Data visualization systems have predominantly been developed for WIMP-based direct manipulation interfaces. Only recently have other forms of interaction begun to appear, such as natural language or touch-based interaction, though usually operating only independently. Prior evaluations of natural language interfaces for visualization have indicated potential value in combining direct manipulation and natural language as complementary interaction techniques. We hypothesize that truly multimodal interfaces for visualization, those providing users with freedom of expression via both natural language and touch-based direct manipulation input, may provide an effective and engaging user experience. Unfortunately, however, little work has been done in exploring such multimodal visualization interfaces. To address this gap, we have created an architecture and a prototype visualization system called Orko that facilitates both natural language and direct manipulation input. Specifically, Orko focuses on the domain of network visualization, one that has largely relied on WIMP-based interfaces and direct manipulation interaction, and has little or no prior research exploring natural language interaction. We report results from an initial evaluation study of Orko, and use our observations to discuss opportunities and challenges for future work in multimodal network visualization interfaces.", "title": "" }, { "docid": "9f94cb21b75a6a1c91bc7a46ff242978", "text": "Two requirements engineering techniques, i* and e3 value, work together to explore commercial e-services from a strategic-goal and profitability perspective. We demonstrate our approach using a case study on Internet radio", "title": "" }, { "docid": "8bb30efa3f14fa0860d1e5bc1265c988", "text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U", "title": "" }, { "docid": "b657aeceeee6c29330cf45dcc40d6198", "text": "A small form-factor 60-GHz SiGe BiCMOS radio with two antennas-in-package is presented. 
The fully-integrated feature-rich transceiver provides a complete RF solution for mobile WiGig/IEEE 802.11ad applications.", "title": "" }, { "docid": "341e0b7d04b333376674dac3c0888f50", "text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.", "title": "" }, { "docid": "b4e3d2f5e4bb1238cb6f4dad5c952c4c", "text": "Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure of one style, scenario effects obscure any explanation effects. Our results suggests there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.", "title": "" }, { "docid": "309a20834f17bd87e10f8f1c051bf732", "text": "Tamper-resistant cryptographic processors are becoming the standard way to enforce data-usage policies. Their origins lie with military cipher machines and PIN processing in banking payment networks, expanding in the 1990s into embedded applications: token vending machines for prepayment electricity and mobile phone credit. Major applications such as GSM mobile phone identification and pay TV set-top boxes have pushed low-cost cryptoprocessors toward ubiquity. In the last five years, dedicated crypto chips have been embedded in devices such as game console accessories and printer ink cartridges, to control product and accessory after markets. The \"Trusted Computing\" initiative will soon embed cryptoprocessors in PCs so they can identify each other remotely. 
This paper surveys the range of applications of tamper-resistant hardware and the array of attack and defense mechanisms which have evolved in the tamper-resistance arms race.", "title": "" }, { "docid": "51677dc68fac623815681ff45a91f1aa", "text": "A business process is a collection of activities to create more business values and its continuous improvement aligned with business goals is essential to survive in fast changing business environment. However, it is quite challenging to find out whether a change of business processes positively affects business goals or not, if there are problems in the changing, what the reasons of the problems are, what solutions exist for the problems and which solutions should be selected. Big data analytics along with a goal-orientation which helps find out insights from a large volume of data in a goal concept opens up a new way for an effective business process reengineering. In this paper, we suggest a novel modeling framework which consists of a conceptual modeling language, a process and a tool for effective business processes reengineering using big data analytics and a goal-oriented approach. The modeling language defines important concepts for business process reengineering with metamodels and shows the concepts with complementary views: Business Goal-Process-Big Analytics Alignment View, Transformational Insight View and Big Analytics Query View. Analyzers hypothesize problems and solutions of business processes by using the modeling language, and the problems and solutions will be validated by the results of Big Analytics Queries which supports not only standard SQL operation, but also analytics operation such as prediction. The queries are run in an execution engine of our tool on top of Spark which is one of big data processing frameworks. In a goal-oriented spirit, all concepts not only business goals and business processes, but also big analytics queries are considered as goals, and alternatives are explored and selections are made among the alternatives using trade-off analysis. To illustrate and validate our approach, we use an automobile logistics example, then compare previous work.", "title": "" }, { "docid": "22e9784442e3db65919c43362d2a9ac9", "text": "Multivariate gait data have traditionally been challenging to analyze. Part 1 of this review explored applications of fuzzy, multivariate statistical and fractal methods to gait data analysis. Part 2 extends this critical review to the applications of artificial neural networks and wavelets to gait data analysis. The review concludes with a practical guide to the selection of alternative gait data analysis methods. Neural networks are found to be the most prevalent non-traditional methodology for gait data analysis in the last 10 years. Interpretation of multiple gait signal interactions and quantitative comparisons of gait waveforms are identified as important data analysis topics in need of further research.", "title": "" }, { "docid": "5a232c84b76758acd1a44d42aaa3c064", "text": "The OpenStreetMap (OSM) project, founded in 2004, has gathered an exceptional amount of interest in recent years and counts as one of the most impressive sources of Volunteered Geographic Information (VGI) on the Internet. In total, more than half a million members had registered for the project by the end of 2011. However, while this number of contributors seems impressive, questions remain about the individual contributions that have been made by the project members. 
This research article contains several studies regarding the contributions by the community of the project. The results show that only 38% (192,000) of the registered members carried out at least one edit in the OSM database and that only 5% (24,000) of all members actively contributed to the project in a more productive way. The majority of the members are located in Europe (72%) and each member has an activity area whose size may range from one soccer field up to more than 50 km 2 . In addition to several more analyses conducted for this article, predictions will be made about how this newly acquired knowledge can be used for future research.", "title": "" }, { "docid": "a8c4b84175074e654cf1facfc65bde50", "text": "We propose monotonic classification with selection of monotonic features as a defense against evasion attacks on classifiers for malware detection. The monotonicity property of our classifier ensures that an adversary will not be able to evade the classifier by adding more features. We train and test our classifier on over one million executables collected from VirusTotal. Our secure classifier has 62% temporal detection rate at a 1% false positive rate. In comparison with a regular classifier with unrestricted features, the secure malware classifier results in a drop of approximately 13% in detection rate. Since this degradation in performance is a result of using a classifier that cannot be evaded, we interpret this performance hit as the cost of security in classifying malware.", "title": "" }, { "docid": "d2a14d9acd47a5de5fc4098565962cb4", "text": "Above 500V class superjunction (SJ) MOSFETs fabricated by deep-trench etching and epitaxial growth are investigated. These SJ-MOSFETs show the lowest specific on-resistance (RonA) of 21.3mOmegacm2 at a breakdown voltage (VB) of 540V, among reported trench-filling type of devices in the same voltage class. These RonA-VB trade-off characteristics are accomplished by optimizing doping concentrations of n- and p- column regions. In addition, low reverse biased leakage current has been achieved by filling deep trenches with defect-free single crystal silicon", "title": "" }, { "docid": "5f6d142860a4bd9ff1fa9c4be9f17890", "text": "Local conditioning (LC) is an exact algorithm for computing probability in Bayesian networks, developed as an extension of Kim and Pearl’s algorithm for singly-connected networks. A list of variables associated to each node guarantees that only the nodes inside a loop are conditioned on the variable which breaks it. The main advantage of this algorithm is that it computes the probability directly on the original network instead of building a cluster tree, and this can save time when debugging a model and when the sparsity of evidence allows a pruning of the network. The algorithm is also advantageous when some families in the network interact through AND/OR gates. A parallel implementation of the algorithm with a processor for each node is possible even in the case of multiply-connected networks.", "title": "" }, { "docid": "7f84e215df3d908249bde3be7f2b3cab", "text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. 
The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.", "title": "" }, { "docid": "a346607a5e2e6c48e07e3e34a2ec7b0d", "text": "The development and professionalization of a video game requires tools for analyzing the practice of the players and teams, their tactics and strategies. These games are very popular and by nature numerical, they provide many tracks that we analyzed in terms of team play. We studied Defense of the Ancients (DotA), a Multiplayer Online Battle Arena (MOBA), where two teams battle in a game very similar to rugby or American football. Through topological measures – area of polygon described by the players, inertia, diameter, distance to the base – that are independent of the exact nature of the game, we show that the outcome of the match can be relevantly predicted. Mining e-sport’s tracks is opening interest in further application of these tools for analyzing real time sport. © 2014. Published by Elsevier B.V. Selection and/or peer review under responsibility of American Applied Science Research Institute", "title": "" }, { "docid": "df3d91489c8c39ffb36f4c09a132c7d6", "text": "In this paper, we introduce a wheel-based cable climbing robot system developed for maintenance of the suspension bridges. The robot consists of three parts: a wheel based driving mechanism, adhesion mechanism, and safe landing mechanism. The driving mechanism is a combination of pantograph mechanism, and wheels driven by motors. In addition, we propose a special design of safe landing mechanism which can assure the safety of the robot on the cables when the power is lost. Finally, the proposed robotic system is manufactured and validated in the indoor experimental environments.", "title": "" }, { "docid": "63a9b11aac50821b0b0186a2c8c7ac0b", "text": "We can perceive pitch in whispered speech, although fundamental frequency (F0) does not exist physically or phonetically due to the lack of vocal-fold vibration. 
This study was carried out to determine how people generate such an unvoiced pitch. We conducted experiments in which speakers uttered five whispered Japanese vowels in accordance with the pitch of a guide pure tone. From the results, we derived a multiple regression function to convert the outputs of a mel-scaled filter bank of whispered speech into the perceived pitch value. Next, using this estimated pitch value as F0, we constructed a system for conversion of whispered speech to normal speech. Since the pitch varies with time according to the spectral shape, it was expected that the pitch accent would be kept by this conversion. Indeed, auditory experiments demonstrated that the correctly perceived rate of Japanese word accent was increased from 55.5% to 72.0% compared with that when a constant F0 was used.", "title": "" }, { "docid": "a283639ea8830be287650e6fc24ed082", "text": "Telephone networks first appeared more than a hundred years ago, long beforetransistors were invented. They, therefore, form the oldest large scale networkthat has grown to touch over 7 billion people. Telephony is now merging manycomplex technologies and because numerous services enabled by these technologiescan be monetized, telephony attracts a lot of fraud. In 2015, a telecom fraudassociation study estimated that the loss of revenue due to global telecom fraudwas worth 38 billion US dollars per year. Because of the convergence oftelephony with the Internet, fraud in telephony networks can also have anegative impact on security of online services. However, there is littleacademic work on this topic, in part because of the complexity of such networksand their closed nature. This paper aims to systematically explorefraud in telephony networks. Our taxonomy differentiates the root causes, thevulnerabilities, the exploitation techniques, the fraud types and finally theway fraud benefits fraudsters. We present an overview of eachof these and use CAller NAMe (CNAM) revenue share fraud as aconcrete example to illustrate how our taxonomy helps in better understandingthis fraud and to mitigate it.", "title": "" }, { "docid": "c221568e2ed4d6192ab04119046c4884", "text": "An efficient Ultra-Wideband (UWB) Frequency Selective Surface (FSS) is presented to mitigate the potential harmful effects of Electromagnetic Interference (EMI) caused by the radiations emitted by radio devices. The proposed design consists of circular and square elements printed on the opposite surfaces of FR4 substrate of 3.2 mm thickness. It ensures better angular stability by up to 600, bandwidth has been significantly enhanced by up to 16. 21 GHz to provide effective shielding against X-, Ka- and K-bands. While signal attenuation has also been improved remarkably in the desired band compared to the results presented in the latest research. Theoretical results are presented for TE and TM polarization for normal and oblique angles of incidence.", "title": "" }, { "docid": "6f4479d224c1546040bee39d50eaba55", "text": "Bag-of-words (BOW) is now the most popular way to model text in statistical machine learning approaches in sentiment analysis. However, the performance of BOW sometimes remains limited due to some fundamental deficiencies in handling the polarity shift problem. We propose a model called dual sentiment analysis (DSA), to address this problem for sentiment classification. We first propose a novel data expansion technique by creating a sentiment-reversed review for each training and test review. 
On this basis, we propose a dual training algorithm to make use of original and reversed training reviews in pairs for learning a sentiment classifier, and a dual prediction algorithm to classify the test reviews by considering two sides of one review. We also extend the DSA framework from polarity (positive-negative) classification to 3-class (positive-negative-neutral) classification, by taking the neutral reviews into consideration. Finally, we develop a corpus-based method to construct a pseudo-antonym dictionary, which removes DSA's dependency on an external antonym dictionary for review reversion. We conduct a wide range of experiments including two tasks, nine datasets, two antonym dictionaries, three classification algorithms, and two types of features. The results demonstrate the effectiveness of DSA in supervised sentiment classification.", "title": "" } ]
scidocsrr
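Each record in this dump repeats the same shape visible in the rows above: a query_id hash, a query string, a positive_passages list and a negative_passages list of docid/text/title objects, and a subset tag such as scidocsrr. For readers who want to work with the raw rows rather than eyeball them, the sketch below shows one way to iterate records of that shape. It is only a minimal sketch: the file name scidocs_rerank.jsonl, the JSON Lines layout, and the exact field names are assumptions inferred from the rows shown here, not something guaranteed by the original dump.

import json

# Minimal sketch (assumptions: a JSON Lines file with one record per line,
# using the field names seen in the rows above; the path is hypothetical).
def load_records(path="scidocs_rerank.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    for rec in load_records():
        positives = rec.get("positive_passages", [])
        negatives = rec.get("negative_passages", [])
        # Each passage is expected to be a dict with "docid", "text", and "title".
        print(rec.get("query_id"), "-", rec.get("query"))
        print("  subset:", rec.get("subset"),
              "| positives:", len(positives),
              "| negatives:", len(negatives))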
683a8093dcba1d6e7e1ef6bb5e6d0b1b
Advanced Local Binary Patterns for Remote Sensing Image Retrieval
[ { "docid": "4f58172c8101b67b9cd544b25d09f2e2", "text": "For years, researchers in the face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches always suffer from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, the local Gabor binary pattern histogram sequence (LGBPHS), in which no training procedure is needed to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a "histogram sequence" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. For recognition, histogram intersection is used to measure the similarity of different LGBPHSs, and the nearest neighbor is exploited for final classification. Additionally, we have further proposed to assign different weights to each histogram piece when measuring two LGBPHSs. Our experimental results on the AR and FERET face databases show the validity of the proposed approach, especially for partially occluded face images, and more impressively, we have achieved the best result on the FERET face database.", "title": "" } ]
[ { "docid": "59fa6669df5bb9281c821ba3ea564cf9", "text": "The outfits people wear contain latent fashion concepts capturing styles, seasons, events, and environments. Fashion theorists have proposed that these concepts are shaped by design elements such as color, material, and silhouette. A dress may be \"bohemian\" because of its pattern, material, trim, or some combination of them: it is not always clear how low-level elements translate to high-level styles. In this paper, we use polylingual topic modeling to learn latent fashion concepts jointly in two languages capturing these elements and styles. Using this latent topic formation we can translate between these two languages through topic space, exposing the elements of fashion style. We train the polylingual topic model (PLTM) on a set of more than half a million outfits collected from Polyvore, a popular fashion-based social net- work. We present novel, data-driven fashion applications that allow users to express their needs in natural language just as they would to a real stylist and produce tailored item recommendations for these style needs.", "title": "" }, { "docid": "db6bba69b5bd316da640b03749db1918", "text": "[1] Pore pressure changes are rigorously included in Coulomb stress calculations for fault interaction studies. These are considered changes under undrained conditions for analyzing very short term postseismic response. The assumption that pore pressure is proportional to faultnormal stress leads to the widely used concept of an effective friction coefficient. We provide an exact expression for undrained fault zone pore pressure changes to evaluate the validity of that concept. A narrow fault zone is considered whose poroelastic parameters are different from those in the surrounding medium, which is assumed to be elastically isotropic. We use conditions for mechanical equilibrium of stress and geometric compatibility of strain to express the effective normal stress change within the fault as a weighted linear combination of mean stress and faultnormal stress changes in the surroundings. Pore pressure changes are determined by fault-normal stress changes when the shear modulus within the fault zone is significantly smaller than in the surroundings but by mean stress changes when the elastic mismatch is small. We also consider an anisotropic fault zone, introducing a Skempton tensor for pore pressure changes. If the anisotropy is extreme, such that fluid pressurization under constant stress would cause expansion only in the fault-normal direction, then the effective friction coefficient concept applies exactly. We finally consider moderately longer timescales than those for undrained response. A sufficiently permeable fault may come to local pressure equilibrium with its surroundings even while that surrounding region may still be undrained, leading to pore pressure change determined by mean stress changes in those surroundings.", "title": "" }, { "docid": "ae287f0cce2d1652c7579c02b4692acf", "text": "Recent studies have shown that multiple brain areas contribute to different stages and aspects of procedural learning. On the basis of a series of studies using a sequence-learning task with trial-and-error, we propose a hypothetical scheme in which a sequential procedure is acquired independently by two cortical systems, one using spatial coordinates and the other using motor coordinates. They are active preferentially in the early and late stages of learning, respectively. 
Both of the two systems are supported by loop circuits formed with the basal ganglia and the cerebellum, the former for reward-based evaluation and the latter for processing of timing. The proposed neural architecture would operate in a flexible manner to acquire and execute multiple sequential procedures.", "title": "" }, { "docid": "be9ee800d10e2df666a794787d6061fd", "text": "IT governance is becoming a critical area for the improvement and business continuity in organizations. The value of the services, as a product of IT projects, must be improved. A good definition of IT projects, according to business needs, helps to improve product value. For this purpose, some frameworks and standards have been developed by internationally recognized institutions. Among them project management framework “PRINCE2” and the IT governance standard ISO/IEC 38500 are highlighted. The application of these frameworks and standards has meant figures have improved, but they are far from covering the expectations of organizations. A IT project approach that integrates the needs of business and technology that supports it is a key to achieve the expected success for IT projects in organizations. This paper presents a summary of the study performed to find out how PRINCE2 meets the expectations of IT governance according to ISO/IEC 38500. The study could help knowing and improving and expectations of success in PRINCE 2 projects.", "title": "" }, { "docid": "3962a6ca8200000b650d210dae7899ec", "text": "Mental fatigue is often characterized by reduced motivation for effortful activity and impaired task performance. We used subjective, behavioral (performance), and psychophysiological (P3, pupil diameter) measures during an n-back task to investigate the link between mental fatigue and task disengagement. After 2 h, we manipulated the rewards to examine a possible reengagement effect. Analyses showed that, with increasing fatigue and time-on-task, performance, P3 amplitude, and pupil diameter decreased. After increasing the rewards, all measures reverted to higher levels. Multilevel analysis revealed positive correlations between the used measures with time-on-task. We interpret these results as support for a strong link between task disengagement and mental fatigue.", "title": "" }, { "docid": "d662536cbd7dca2ce06b3e1e44362776", "text": "Internet of Things (IoT) devices such as the Amazon Echo e a smart speaker developed by Amazon e are undoubtedly great sources of potential digital evidence due to their ubiquitous use and their always-on mode of operation, constituting a human-life's black box. The Amazon Echo in particular plays a centric role for the cloud-based intelligent virtual assistant (IVA) Alexa developed by Amazon Lab126. The Alexaenabled wireless smart speaker is the gateway for all voice commands submitted to Alexa. Moreover, the IVA interacts with a plethora of compatible IoT devices and third-party applications that leverage cloud resources. Understanding the complex cloud ecosystem that allows ubiquitous use of Alexa is paramount on supporting digital investigations when need raises. This paper discusses methods for digital forensics pertaining to the IVA Alexa's ecosystem. The primary contribution of this paper consists of a new efficient approach of combining cloud-native forensics with client-side forensics (forensics for companion devices), to support practical digital investigations. 
Based on a deep understanding of the targeted ecosystem, we propose a proof-of-concept tool, CIFT, that supports identification, acquisition and analysis of both native artifacts from the cloud and client-centric artifacts from local devices (mobile applications", "title": "" }, { "docid": "04045b75734098639cc77dbf1f922f54", "text": "Using Brinley plots, this meta-analysis provides a quantitative examination of age differences in eight verbal span tasks. The main conclusions are these: (a) there are age differences in all verbal span tasks; (b) the data support the conclusion that working memory span is more age sensitive than short-term memory span; and (c) there is a linear relationship between span of younger adults and span of older adults. A linear model indicates the presence of three distinct functions, in increasing order of size of age effects: simple storage span; backward digit span; and working memory span.", "title": "" }, { "docid": "25d14017403c96eceeafcbda1cbdfd2c", "text": "We introduce a neural network model that marries together ideas from two prominent strands of research on domain adaptation through representation learning: structural correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural networks (NNs). Our model is a three-layer NN that learns to encode the non-pivot features of an input example into a lowdimensional representation, so that the existence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation. The low-dimensional representation is then employed in a learning algorithm for the task. Moreover, we show how to inject pre-trained word embeddings into our model in order to improve generalization across examples with similar pivot features. We experiment with the task of cross-domain sentiment classification on 16 domain pairs and show substantial improvements over strong baselines.1", "title": "" }, { "docid": "50044f80063441c9477acc40ac07e19a", "text": "Natural Language Inference (NLI) is fundamental to many Natural Language Processing (NLP) applications including semantic search and question answering. The NLI problem has gained significant attention due to the release of large scale, challenging datasets. Present approaches to the problem largely focus on learning-based methods that use only textual information in order to classify whether a given premise entails, contradicts, or is neutral with respect to a given hypothesis. Surprisingly, the use of methods based on structured knowledge – a central topic in artificial intelligence – has not received much attention vis-a-vis the NLI problem. While there are many open knowledge bases that contain various types of reasoning information, their use for NLI has not been well explored. To address this, we present a combination of techniques that harness external knowledge to improve performance on the NLI problem in the science questions domain. We present the results of applying our techniques on text, graph, and text-and-graph based models; and discuss the implications of using external knowledge to solve the NLI problem. Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.", "title": "" }, { "docid": "91d6b92dabf0f35007bd6f3b677789f4", "text": "We aim at summarizing answers in community question-answering (CQA). While most previous work focuses on factoid question-answering, we focus on the non-factoid question-answering. 
Unlike factoid CQA, non-factoid question-answering usually requires passages as answers. The shortness, sparsity and diversity of answers form interesting challenges for summarization. To tackle these challenges, we propose a sparse coding-based summarization strategy that includes three core ingredients: short document expansion, sentence vectorization, and a sparse-coding optimization framework. Specifically, we extend each answer in a question-answering thread to a more comprehensive representation via entity linking and sentence ranking strategies. From answers extended in this manner, each sentence is represented as a feature vector trained from a short text convolutional neural network model. We then use these sentence representations to estimate the saliency of candidate sentences via a sparse-coding framework that jointly considers candidate sentences and Wikipedia sentences as reconstruction items. Given the saliency vectors for all candidate sentences, we extract sentences to generate an answer summary based on a maximal marginal relevance algorithm. Experimental results on a benchmark data collection confirm the effectiveness of our proposed method in answer summarization of non-factoid CQA, and moreover, its significant improvement compared to state-of-the-art baselines in terms of ROUGE metrics.", "title": "" }, { "docid": "a6a7770857964e96f98bd4021d38f59f", "text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.", "title": "" }, { "docid": "6c9f3107fbf14f5bef1b8edae1b9d059", "text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.", "title": "" }, { "docid": "1eab5897252dae2313210c666c3dce8c", "text": "Bone marrow angiogenesis plays an important role in the pathogenesis and progression in multiple myeloma. Recent studies have shown that proteasome inhibitor bortezomib (Velcade, formerly PS-341) can overcome conventional drug resistance in vitro and in vivo; however, its antiangiogenic activity in the bone marrow milieu has not yet been defined. In the present study, we examined the effects of bortezomib on the angiogenic phenotype of multiple myeloma patient-derived endothelial cells (MMEC). 
At clinically achievable concentrations, bortezomib inhibited the proliferation of MMECs and human umbilical vein endothelial cells in a dose-dependent and time-dependent manner. In functional assays of angiogenesis, including chemotaxis, adhesion to fibronectin, capillary formation on Matrigel, and chick embryo chorioallantoic membrane assay, bortezomib induced a dose-dependent inhibition of angiogenesis. Importantly, binding of MM.1S cells to MMECs triggered multiple myeloma cell proliferation, which was also abrogated by bortezomib in a dose-dependent fashion. Bortezomib triggered a dose-dependent inhibition of vascular endothelial growth factor (VEGF) and interleukin-6 (IL-6) secretion by the MMECs, and reverse transcriptase-PCR confirmed drug-related down-regulation of VEGF, IL-6, insulin-like growth factor-I, Angiopoietin 1 (Ang1), and Ang2 transcription. These data, therefore, delineate the mechanisms of the antiangiogenic effects of bortezomib on multiple myeloma cells in the bone marrow milieu.", "title": "" }, { "docid": "12d564ad22b33ee38078f18a95ed670f", "text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER.", "title": "" }, { "docid": "a01abbced99f14ae198c6abef6454126", "text": "Coreference Resolution September 2014 Present Kevin Clark, Christopher Manning Stanford University Developed coreference systems that build up coreference chains with agglomerative clustering. These models are more accurate than the mention-pair systems commonly used in prior work. Developed neural coreference systems that do not require the large number of complex hand-engineered features commonly found in statistical coreference systems. Applied imitation and reinforcement learning to directly optimize coreference systems for evaluation metrics instead of relying on hand-tuned heuristic loss functions. Made substantial advancements to the current state-of-the-art for English and Chinese coreference. Publicly released all models through Stanford’s CoreNLP.", "title": "" }, { "docid": "872f224c2dbf06a335eee267bac4ec79", "text": "Shallow supervised 1-hidden layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but lack their representational power. 
Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, we focus on problems where deep learning is reported as critical for success. We thus study CNNs on two large-scale image recognition tasks: ImageNet and CIFAR-10. Using a simple set of ideas for architecture and training we find that solving sequential 1-hidden-layer auxiliary problems leads to a CNN that exceeds AlexNet performance on ImageNet. Extending our training methodology to construct individual layers by solving 2-and-3-hidden layer auxiliary problems, we obtain an 11-layer network that exceeds VGG-11 on ImageNet obtaining 89.8% top-5 single crop.To our knowledge, this is the first competitive alternative to end-to-end training of CNNs that can scale to ImageNet. We conduct a wide range of experiments to study the properties this induces on the intermediate layers.", "title": "" }, { "docid": "056918d2a113e8cf3b582e04de9f4092", "text": "Multioperator tasks often require complex cognitive processing at the team level. Many team cognitive processes, such as situation assessment and coordination, are thought to rely on team knowledge. Team knowledge is multifaceted and comprises relatively generic knowledge in the form of team mental models and more specific team situation models. In this methodological review paper, we review recent efforts to measure team knowledge in the context of mapping specific methods onto features of targeted team knowledge. Team knowledge features include type, homogeneity versus heterogeneity, and rate of knowledge change. Measurement features include knowledge elicitation method, team metric, and aggregation method. When available, we highlight analytical conclusions or empirical data that support a connection between team knowledge and measurement method. In addition, we present empirical results concerning the relation between team knowledge and performance for each measurement method and identify research and methodological needs. Addressing issues surrounding the measurement of team knowledge is a prerequisite to understanding team cognition and its relation to team performance and to designing training programs or devices to facilitate team cognition.", "title": "" }, { "docid": "5e4914e0eea3658f39a18feb655d955d", "text": "Taylor [Taylor, D.H., 1964. Drivers' galvanic skin response and the risk of accident. Ergonomics 7, 439-451] argued that drivers attempt to maintain a constant level of anxiety when driving which Wilde [Wilde, G.J.S., 1982. The theory of risk homeostasis: implications for safety and health. Risk Anal. 2, 209-225] interpreted to be coupled to subjective estimates of the probability of collision. This theoretical paper argues that what drivers attempt to maintain is a level of task difficulty. Naatanen and Summala [Naatanen, R., Summala, H., 1976. Road User Behaviour and Traffic Accidents. North Holland/Elsevier, Amsterdam, New York] similarly rejected the concept of statistical risk as a determinant of driver behaviour, but in so doing fell back on the learning process to generate a largely automatised selection of appropriate safety margins. However it is argued here that driver behaviour cannot be acquired and executed principally in such S-R terms. 
The concept of task difficulty is elaborated within the framework of the task-capability interface (TCI) model, which describes the dynamic interaction between the determinants of task demand and driver capability. It is this interaction which produces different levels of task difficulty. Implications of the model are discussed regarding variation in performance, resource allocation, hierarchical decision-making and the interdependence of demand and capability. Task difficulty homeostasis is proposed as a key sub-goal in driving and speed choice is argued to be the primary solution to the problem of keeping task difficulty within selected boundaries. The relationship between task difficulty and mental workload and calibration is clarified. Evidence is cited in support of the TCI model, which clearly distinguishes task difficulty from estimates of statistical risk. However, contrary to expectation, ratings of perceived risk depart from ratings of statistical risk but track difficulty ratings almost perfectly. It now appears that feelings of risk may inform driver decision making, as Taylor originally suggested, but not in terms of risk of collision, but rather in terms of task difficulty. Finally risk homeostasis is presented as a special case of task difficulty homeostasis.", "title": "" }, { "docid": "e5d323fe9bf2b5800043fa0e4af6849a", "text": "A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.", "title": "" } ]
scidocsrr
cde2002be84c9d1cabe9dd58392e5d6c
Multimodal Memory Modelling for Video Captioning
[ { "docid": "f33410ddc62c2c8479d7c68978b39fff", "text": "In this paper, we introduce Key-Value Memory Networks to a multimodal setting and a novel key-addressing mechanism to deal with sequence-to-sequence models. The proposed model naturally decomposes the problem of video captioning into vision and language segments, dealing with them as key-value pairs. More specifically, we learn a semantic embedding (v) corresponding to each frame (k) in the video, thereby creating (k, v) memory slots. We propose to find the next step attention weights conditioned on the previous attention distributions for the key-value memory slots in the memory addressing schema. Exploiting this flexibility of the framework, we additionally capture spatial dependencies while mapping from the visual to semantic embedding. Experiments done on the Youtube2Text dataset demonstrate usefulness of recurrent key-addressing, while achieving competitive scores on BLEU@4, METEOR metrics against state-of-the-art models.", "title": "" }, { "docid": "4f58d355a60eb61b1c2ee71a457cf5fe", "text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).", "title": "" }, { "docid": "cd45dd9d63c85bb0b23ccb4a8814a159", "text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization", "title": "" }, { "docid": "b5fea029d64084089de8e17ae9debffc", "text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. 
In this paper we present MSR-VTT (standing for \"MSR-Video to Text\"), which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Network-based approach, which combines single-frame and motion representations with a soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.", "title": "" } ]
[ { "docid": "68c32704e81e51e8cbdf401266ff1225", "text": "There are two types of information in each handwritten word image: explicit information which can be easily read or derived directly, such as lexical content or word length, and implicit attributes such as the author’s identity. Whether features learned by a neural network for one task can be used for another task remains an open question. In this paper, we present a deep adaptive learning method for writer identification based on single-word images using multi-task learning. An auxiliary task is added to the training process to enforce the emergence of reusable features. Our proposed method transfers the benefits of the learned features of a convolutional neural network from an auxiliary task such as explicit content recognition to the main task of writer identification in a single procedure. Specifically, we propose a new adaptive convolutional layer to exploit the learned deep features. A multi-task neural network with one or several adaptive convolutional layers is trained end-to-end, to exploit robust generic features for a specific main task, i.e., writer identification. Three auxiliary tasks, corresponding to three explicit attributes of handwritten word images (lexical content, word length and character attributes), are evaluated. Experimental results on two benchmark datasets show that the proposed deep adaptive learning method can improve the performance of writer identification based on singleword images, compared to non-adaptive and simple linear-adaptive approaches.", "title": "" }, { "docid": "d763198d3bfb1d30b153e13245c90c08", "text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.", "title": "" }, { "docid": "cac6da8b7ee88f95196651920a64486c", "text": "The classification of food images is an interesting and challenging problem since the high variability of the image content which makes the task difficult for current state-of-the-art classification methods. The image representation to be employed in the classification engine plays an important role. We believe that texture features have been not properly considered in this application domain. This paper points out, through a set of experiments, that textures are fundamental to properly recognize different food items. For this purpose the bag of visual words model (BoW) is employed. 
Images are processed with a bank of rotation and scale invariant filters and then a small codebook of Textons is built for each food class. The learned class-based Textons are hence collected in a single visual dictionary. The food images are represented as visual word distributions (Bag of Textons) and a Support Vector Machine is used for the classification stage. The experiments demonstrate that the image representation based on Bag of Textons is more accurate than existing (and more complex) approaches in classifying the 61 classes of the Pittsburgh Fast-Food Image Dataset.", "title": "" }, { "docid": "e022d5b292d391e201d15e8b2317bc30", "text": "This article describes the most prominent approaches to applying artificial intelligence technologies to information retrieval (IR). Information retrieval is a key technology for knowledge management. It deals with the search for information and the representation, storage and organization of knowledge. Information retrieval is concerned with search processes in which a user needs to identify a subset of information which is relevant for his information need within a large amount of knowledge. The information seeker formulates a query trying to describe his information need. The query is compared to document representations which were extracted during an indexing phase. The representations of documents and queries are typically matched by a similarity function such as the Cosine. The most similar documents are presented to the users, who can evaluate the relevance with respect to their problem (Belkin, 2000). The problem of properly representing documents and of matching imprecise representations soon led to the application of techniques developed within Artificial Intelligence to information retrieval.", "title": "" }, { "docid": "0cfc3e43029e9f19513dd54bf7e3c6a6", "text": "Stereo image matching is one of the research areas in computer vision. In stereo image matching, technological development has advanced from area-based matching techniques to feature-based matching techniques. In this paper we present a Harris corner detection algorithm for stereo image feature matching. This is an intensity-based feature matching algorithm and it controls the strong and weak corners with the help of a threshold value. Image processing algorithms are generally simulated in software, but to enable hardware co-simulation the model-based design is implemented here in Xilinx System Generator. Further, the architecture is synthesized on the Xilinx Virtex-5 FPGA. Simulation results are included in this paper to verify the performance of the proposed system. The complexity of the model-based design is lower than that of a script-level design.", "title": "" }, { "docid": "e724db907bb466c108b5322a2df073da", "text": "CRISPR/Cas9 is a versatile genome-editing technology that is widely used for studying the functionality of genetic elements, creating genetically modified organisms, and for preclinical research of genetic disorders. However, the high frequency of off-target activity (≥50%), that is, RGEN (RNA-guided endonuclease)-induced mutations at sites other than the intended on-target site, is one major concern, especially for therapeutic and clinical applications. Here, we review the basic mechanisms underlying off-target cutting in the CRISPR/Cas9 system, methods for detecting off-target mutations, and strategies for minimizing off-target cleavage. 
The improvement of off-target specificity in the CRISPR/Cas9 system will provide solid genotype-phenotype correlations, and thus enable faithful interpretation of genome-editing data, which will certainly facilitate the basic and clinical application of this technology.", "title": "" }, { "docid": "f09c6cf181c19e7ddd64121f2e9d368c", "text": "Authentication in biometric systems is vulnerable to impostor attacks. Recent research considers face anti-spoofing as a binary classification problem. To differentiate between genuine access and fake attacks, many systems are trained and the number of countermeasures is gradually increasing. In this paper, we propose a novel technique for face anti-spoofing. This method is based on spatio-temporal information to distinguish between legitimate access and impostor videos or video sequences of picture attacks. The idea is to utilize a convolutional neural network (CNN) with a handcrafted technique such as LBP-TOP for feature extraction and training of the classifier. The proposed approach requires no preprocessing steps such as face detection and refining face regions or enlarging the original images with particular re-scaling ratios. A CNN by itself cannot learn temporal features, but for face anti-spoofing spatio-temporal features are important. We cascade LBP-TOP with a CNN to extract spatio-temporal features from video sequences and capture the most discriminative clues between genuine access and impostor attacks. Extensive experiments are conducted on two very challenging, publicly available datasets, CASIA and REPLAY-ATTACK, and achieve a highly competitive score compared with the results of state-of-the-art techniques.", "title": "" }, { "docid": "091ca4353f7ce0d75c8f586006873de0", "text": "Preparing for a proteomic experiment will require a number of important decisions. Because of the complexity of most samples, one of the first important decisions is how to separate proteins prior to analysis by the mass spectrometer. There are two basic approaches; the first approach is gel-based electrophoresis, which typically separates proteins based on molecular weight and/or isoelectric point. The second approach is non-gel-based or liquid chromatography, which typically separates peptides based on hydrophobicity. We discuss some of the pros and cons of each separation method to allow the proper alignment of research objectives and scientific methodologies.", "title": "" }, { "docid": "6ad57a4d0cb814f6302e6c0c2dc01eaf", "text": "This paper presents a detailed description of a particular class of deterministic single-product maritime inventory routing problems (MIRPs), which we call deep-sea MIRPs with inventory tracking at every port. This class involves vessel travel times between ports that are significantly longer than the time spent in port and requires inventory levels at all ports to be monitored throughout the planning horizon. After providing a comprehensive literature survey of this class, we introduce a core model for it cast as a mixed-integer linear program. This formulation is quite general and incorporates assumptions and families of constraints that are most prevalent in practice. We also discuss other modeling features commonly found in the literature and how they can be incorporated into the core model. We then offer a unified discussion of some of the most common advanced techniques used for improving the bounds of these problems. 
Finally, we present a library, called MIRPLib, of publicly available test problem instances for MIRPs with inventory tracking at every port. Despite a growing interest in combined routing and inventory management problems in a maritime setting, no data sets are publicly available, which represents a significant “barrier to entry” for those interested in related research. Our main goal for MIRPLib is to help maritime inventory routing gain maturity as an important and interesting class of planning problems. As a means to this end, we (1) make available benchmark instances for this particular class of MIRPs; (2) provide the mixed-integer linear programming community with a set of optimization problem instances from the maritime transportation domain in LP and MPS format; and (3) provide a template for other researchers when specifying characteristics of MIRPs arising in other settings. Best known computational results are reported for each instance.", "title": "" }, { "docid": "63c550438679c0353c2f175032a73369", "text": "Large screens or projections in public and private settings have become part of our daily lives, as they enable the collaboration and presentation of information in many diverse ways. When discussing the shown information with other persons, we often point to a displayed object with our index finger or a laser pointer in order to talk about it. Although mobile phone-based interactions with remote screens have been investigated intensively in the last decade, none of them considered such direct pointing interactions for application in everyday tasks. In this paper, we present the concept and design space of PointerPhone which enables users to directly point at objects on a remote screen with their mobile phone and interact with them in a natural and seamless way. We detail the design space and distinguish three categories of interactions including low-level interactions using the mobile phone as a precise and fast pointing device, as well as an input and output device. We detail the category of widgetlevel interactions. Further, we demonstrate versatile high-level interaction techniques and show their application in a collaborative presentation scenario. Based on the results of a qualitative study, we provide design implications for application designs.", "title": "" }, { "docid": "f400b94dd5f4d4210bd6873b44697e3a", "text": "A system for monitoring and forecasting urban air pollution is presented in this paper. The system uses low-cost air-quality monitoring motes that are equipped with an array of gaseous and meteorological sensors. These motes wirelessly communicate to an intelligent sensing platform that consists of several modules. The modules are responsible for receiving and storing the data, preprocessing and converting the data into useful information, forecasting the pollutants based on historical information, and finally presenting the acquired information through different channels, such as mobile application, Web portal, and short message service. The focus of this paper is on the monitoring system and its forecasting module. Three machine learning (ML) algorithms are investigated to build accurate forecasting models for one-step and multi-step ahead of concentrations of ground-level ozone (O3), nitrogen dioxide (NO2), and sulfur dioxide (SO2). These ML algorithms are support vector machines, M5P model trees, and artificial neural networks (ANN). Two types of modeling are pursued: 1) univariate and 2) multivariate. 
The performance evaluation measures used are prediction trend accuracy and root mean square error (RMSE). The results show that using different features in multivariate modeling with M5P algorithm yields the best forecasting performances. For example, using M5P, RMSE is at its lowest, reaching 31.4, when hydrogen sulfide (H2S) is used to predict SO2. Contrarily, the worst performance, i.e., RMSE of 62.4, for SO2 is when using ANN in univariate modeling. The outcome of this paper can be significantly useful for alarming applications in areas with high air pollution levels.", "title": "" }, { "docid": "b6fdde5d6baeb546fd55c749af14eec1", "text": "Action recognition is an important research problem of human motion analysis (HMA). In recent years, 3D observation-based action recognition has been receiving increasing interest in the multimedia and computer vision communities, due to the recent advent of cost-effective sensors, such as depth camera Kinect. This work takes this one step further, focusing on early recognition of ongoing 3D human actions, which is beneficial for a large variety of time-critical applications, e.g., gesture-based human machine interaction, somatosensory games, and so forth. Our goal is to infer the class label information of 3D human actions with partial observation of temporally incomplete action executions. By considering 3D action data as multivariate time series (m.t.s.) synchronized to a shared common clock (frames), we propose a stochastic process called dynamic marked point process (DMP) to model the 3D action as temporal dynamic patterns, where both timing and strength information are captured. To achieve even more early and better accuracy of recognition, we also explore the temporal dependency patterns between feature dimensions. A probabilistic suffix tree is constructed to represent sequential patterns among features in terms of the variable-order Markov model (VMM). Our approach and several baselines are evaluated on five 3D human action datasets. Extensive results show that our approach achieves superior performance for early recognition of 3D human actions.", "title": "" }, { "docid": "9d316fae0354f3eb28540ea013b4f8a4", "text": "Natural language makes considerable use of recurrent formulaic patterns of words. This article triangulates the construct of formula from corpus linguistic, psycholinguistic, and educational perspectives. It describes the corpus linguistic extraction of pedagogically useful formulaic sequences for academic speech and writing. It determines English as a second language (ESL) and English for academic purposes (EAP) instructors’ evaluations of their pedagogical importance. It summarizes three experiments which show that different aspects of formulaicity affect the accuracy and fluency of processing of these formulas in native speakers and in advanced L2 learners of English. The language processing tasks were selected to sample an ecologically valid range of language processing skills: spoken and written, production and comprehension. Processing in all experiments was affected by various corpus-derived metrics: length, frequency, and mutual information (MI), but to different degrees in the different populations. For native speakers, it is predominantly the MI of the formula which determines processability; for nonnative learners of the language, it is predominantly the frequency of the formula. 
The implications of these findings are discussed for (a) the psycholinguistic validity of corpus-derived formulas, (b) a model of their acquisition, (c) ESL and EAP instruction and the prioritization of which formulas to teach.", "title": "" }, { "docid": "31dc81fe6b9e3e795498ddbfd41426f6", "text": "A Bloom filter is a very compact data structure that supports approximate membership queries on a set, allowing false positives.\n We propose several new variants of Bloom filters and replacements with similar functionality. All of them have a better cache-efficiency and need less hash bits than regular Bloom filters. Some use SIMD functionality, while the others provide an even better space efficiency. As a consequence, we get a more flexible trade-off between false-positive rate, space-efficiency, cache-efficiency, hash-efficiency, and computational effort. We analyze the efficiency of Bloom filters and the proposed replacements in detail, in terms of the false-positive rate, the number of expected cache-misses, and the number of required hash bits. We also describe and experimentally evaluate the performance of highly tuned implementations. For many settings, our alternatives perform better than the methods proposed so far.", "title": "" }, { "docid": "8448f57118fb3db90a4f793cbebc1bc8", "text": "Motivated by increased concern over energy consumption in modern data centers, we propose a new, distributed computing platform called Nano Data Centers (NaDa). NaDa uses ISP-controlled home gateways to provide computing and storage services and adopts a managed peer-to-peer model to form a distributed data center infrastructure. To evaluate the potential for energy savings in NaDa platform we pick Video-on-Demand (VoD) services. We develop an energy consumption model for VoD in traditional and in NaDa data centers and evaluate this model using a large set of empirical VoD access data. We find that even under the most pessimistic scenarios, NaDa saves at least 20% to 30% of the energy compared to traditional data centers. These savings stem from energy-preserving properties inherent to NaDa such as the reuse of already committed baseline power on underutilized gateways, the avoidance of cooling costs, and the reduction of network energy consumption as a result of demand and service co-localization in NaDa.", "title": "" }, { "docid": "39bc8559589f388bb6eca16a1b3b2e87", "text": "This paper presents a method to learn a decision tree to quantitatively explain the logic of each prediction of a pretrained convolutional neural networks (CNNs). Our method boosts the following two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e. which filters (or object parts) are used for prediction and how much they contribute in the prediction. To conduct such a quantitative explanation of a CNN, our method learns explicit representations of object parts in high conv-layers of the CNN and mines potential decision modes memorized in fully-connected layers. The decision tree organizes these potential decision modes in a coarse-to-fine manner. 
Experiments have demonstrated the effectiveness of the proposed method.", "title": "" }, { "docid": "c7da5f98e7c7705877e976710fd6204a", "text": "The past decades have witnessed FLOPS (Floating-point Operations per Second) as an important computation-centric performance metric. However, for datacenter (in short, DC) computing workloads, such as Internet services or big data analytics, previous work reports that they have extremely low floating point operation intensity, and the average FLOPS efficiency is only 0.1%, while the average IPC is 1.3 (the theoretic IPC is 4 on the Intel Xeon E5600 platform). Furthermore, we reveal that the traditional FLOPS-based Roofline performance model is not suitable for modern DC workloads, and gives misleading information for system optimization. These observations imply that FLOPS is inappropriate for evaluating DC computer systems. To address the above issue, we propose a new computation-centric metric BOPs (Basic OPerations) that measures the efficient work defined by the source code, which includes floating-point operations and the arithmetic, logical, comparing, and array addressing parts of integer operations. We define BOPS as the average number of BOPs per second, and propose replacing FLOPS with BOPS to measure DC computer systems. On the basis of BOPS, we propose a new Roofline performance model for DC computing, which we call the DC-Roofline model, with which we optimize DC workloads, with improvements varying from 119% to 325%.", "title": "" }, { "docid": "90dc36628f9262157ea8722d82830852", "text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of interest, and does not present every relevant result.", "title": "" }, { "docid": "9dadd96558791417495a5e1afa031851", "text": "INTRODUCTION\nLittle information is available on malnutrition-related factors among school-aged children ≥5 years in Ethiopia. 
This study describes the prevalence of stunting and thinness and their related factors in Libo Kemkem and Fogera, Amhara Regional State and assesses differences between urban and rural areas.\n\n\nMETHODS\nIn this cross-sectional study, anthropometrics and individual and household characteristics data were collected from 886 children. Height-for-age z-score for stunting and body-mass-index-for-age z-score for thinness were computed. Dietary data were collected through a 24-hour recall. Bivariate and backward stepwise multivariable statistical methods were employed to assess malnutrition-associated factors in rural and urban communities.\n\n\nRESULTS\nThe prevalence of stunting among school-aged children was 42.7% in rural areas and 29.2% in urban areas, while the corresponding figures for thinness were 21.6% and 20.8%. Age differences were significant in both strata. In the rural setting, fever in the previous 2 weeks (OR: 1.62; 95% CI: 1.23-2.32), consumption of food from animal sources (OR: 0.51; 95% CI: 0.29-0.91) and consumption of the family's own cattle products (OR: 0.50; 95% CI: 0.27-0.93), among others factors were significantly associated with stunting, while in the urban setting, only age (OR: 4.62; 95% CI: 2.09-10.21) and years of schooling of the person in charge of food preparation were significant (OR: 0.88; 95% CI: 0.79-0.97). Thinness was statistically associated with number of children living in the house (OR: 1.28; 95% CI: 1.03-1.60) and family rice cultivation (OR: 0.64; 95% CI: 0.41-0.99) in the rural setting, and with consumption of food from animal sources (OR: 0.26; 95% CI: 0.10-0.67) and literacy of head of household (OR: 0.24; 95% CI: 0.09-0.65) in the urban setting.\n\n\nCONCLUSION\nThe prevalence of stunting was significantly higher in rural areas, whereas no significant differences were observed for thinness. Various factors were associated with one or both types of malnutrition, and varied by type of setting. To effectively tackle malnutrition, nutritional programs should be oriented to local needs.", "title": "" }, { "docid": "b7e78ca489cdfb8efad03961247e12f2", "text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling", "title": "" } ]
scidocsrr
1b7808c3a73d4f9da1759988a9a83d4b
Approximation algorithms for the unit disk cover problem in 2D and 3D
[ { "docid": "9c1f7c4fc30a10f306354f83f6b8d9cd", "text": "A unified and powerful approach is presented for devising polynomial approximation schemes for many strongly NP-complete problems. Such schemes consist of families of approximation algorithms for each desired performance bound on the relative error ε > &Ogr;, with running time that is polynomial when ε is fixed. Though the polynomiality of these algorithms depends on the degree of approximation ε being fixed, they cannot be improved, owing to a negative result stating that there are no fully polynomial approximation schemes for strongly NP-complete problems unless NP = P.\nThe unified technique that is introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems. The method of using the technique and how it varies with problem parameters are illustrated. A similar technique, independently devised by B. S. Baker, was shown to be applicable for covering and packing problems on planar graphs.", "title": "" } ]
[ { "docid": "6a1d534737dcbe75ff7a7ac975bcc5ec", "text": "Crime is one of the most important social problems in the country, affecting public safety, children development, and adult socioeconomic status. Understanding what factors cause higher crime is critical for policy makers in their efforts to reduce crime and increase citizens' life quality. We tackle a fundamental problem in our paper: crime rate inference at the neighborhood level. Traditional approaches have used demographics and geographical influences to estimate crime rates in a region. With the fast development of positioning technology and prevalence of mobile devices, a large amount of modern urban data have been collected and such big data can provide new perspectives for understanding crime. In this paper, we used large-scale Point-Of-Interest data and taxi flow data in the city of Chicago, IL in the USA. We observed significantly improved performance in crime rate inference compared to using traditional features. Such an improvement is consistent over multiple years. We also show that these new features are significant in the feature importance analysis.", "title": "" }, { "docid": "83187228617d62fb37f99cf107c7602a", "text": "A very important class of spatial queries consists of nearestneighbor (NN) query and its variations. Many studies in the past decade utilize R-trees as their underlying index structures to address NN queries efficiently. The general approach is to use R-tree in two phases. First, R-tree’s hierarchical structure is used to quickly arrive to the neighborhood of the result set. Second, the R-tree nodes intersecting with the local neighborhood (Search Region) of an initial answer are investigated to find all the members of the result set. While R-trees are very efficient for the first phase, they usually result in the unnecessary investigation of many nodes that none or only a small subset of their including points belongs to the actual result set. On the other hand, several recent studies showed that the Voronoi diagrams are extremely efficient in exploring an NN search region, while due to lack of an efficient access method, their arrival to this region is slow. In this paper, we propose a new index structure, termed VoR-Tree that incorporates Voronoi diagrams into R-tree, benefiting from the best of both worlds. The coarse granule rectangle nodes of R-tree enable us to get to the search region in logarithmic time while the fine granule polygons of Voronoi diagram allow us to efficiently tile or cover the region and find the result. Utilizing VoR-Tree, we propose efficient algorithms for various Nearest Neighbor queries, and show that our algorithms have better I/O complexity than their best competitors.", "title": "" }, { "docid": "073704cb56e476ccda947cdb465c0d69", "text": "Submarine search-evasion path planning aims to acquire an evading route for a submarine so as to avoid the detection of hostile anti-submarine searchers such as helicopters, aircraft and surface ships. In this paper, we propose a numerical optimization model of search-evasion path planning for invading submarines. We use the Artificial Bee Colony (ABC) algorithm, which has been confirmed to be competitive compared to many other nature-inspired algorithms, to solve this numerical optimization problem. 
In this work, several search-evasion cases in the two-dimensional plane have been carefully studied, in which the anti-submarine vehicles are equipped with sensors with circular footprints that allow them to detect invading submarines within certain radii. An invading submarine is assumed to be able to acquire the real-time locations of all the anti-submarine searchers in the combat field. Our simulation results show the efficacy of our proposed dynamic route optimization model for the submarine search-evasion path planning mission.", "title": "" }, { "docid": "0e521af53f9faf4fee38843a22ec2185", "text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.", "title": "" }, { "docid": "4d11eca5601f5128801a8159a154593a", "text": "Polymorphic malware belong to the class of host based threats which defy signature based detection mechanisms. Threat actors use various code obfuscation methods to hide the code details of the polymorphic malware and each dynamic iteration of the malware bears different and new signatures therefore makes its detection harder by signature based antimalware programs. Sandbox based detection systems perform syntactic analysis of the binary files to find known patterns from the un-encrypted segment of the malware file. Anomaly based detection systems can detect polymorphic threats but generate enormous false alarms. In this work, authors present a novel cognitive framework using semantic features to detect the presence of polymorphic malware inside a Microsoft Windows host using a process tree based temporal directed graph. Fractal analysis is performed to find cognitively distinguishable patterns of the malicious processes containing polymorphic malware executables. The main contributions of this paper are; the presentation of a graph theoretic approach for semantic characterization of polymorphism in the operating system's process tree, and the cognitive feature extraction of the polymorphic behavior for detection over a temporal process space.", "title": "" }, { "docid": "4fca4df310f5c2501477d1699fc2781a", "text": "The publication of fake reviews by parties with vested interests has become a severe problem for consumers who use online product reviews in their decision making. To counter this problem a number of methods for detecting these fake reviews, termed opinion spam, have been proposed. However, to date, many of these methods focus on analysis of review text, making them unsuitable for many review systems where accompanying text is optional, or not possible. Moreover, these approaches are often computationally expensive, requiring extensive resources to handle text analysis over the scale of data typically involved. In this paper, we consider opinion spammers manipulation of average ratings for products, focusing on differences between spammer ratings and the majority opinion of honest reviewers. We propose a lightweight, effective method for detecting opinion spammers based on these differences. 
This method uses binomial regression to identify reviewers having an anomalous proportion of ratings that deviate from the majority opinion. Experiments on real-world and synthetic data show that our approach is able to successfully identify opinion spammers. Comparison with the current state-of-the-art approach, also based only on ratings, shows that our method is able to achieve similar detection accuracy while removing the need for assumptions regarding probabilities of spam and non-spam reviews and reducing the heavy computation required for learning.", "title": "" }, { "docid": "18738a644f88af299d9e94157f804812", "text": "Twitter is among the fastest-growing microblogging and online social networking services. Messages posted on Twitter (tweets) have been reporting everything from daily life stories to the latest local and global news and events. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. This article provides a survey of techniques for event detection from Twitter streams. These techniques aim at finding real-world occurrences that unfold over space and time. In contrast to conventional media, event detection from Twitter streams poses new challenges. Twitter streams contain large amounts of meaningless messages and polluted content, which negatively affect the detection performance. In addition, traditional text mining techniques are not suitable, because of the short length of tweets, the large number of spelling and grammatical errors, and the frequent use of informal and mixed language. Event detection techniques presented in literature address these issues by adapting techniques from various fields to the uniqueness of Twitter. This article classifies these techniques according to the event type, detection task, and detection method and discusses commonly used features. Finally, it highlights the need for public benchmarks to evaluate the performance of different detection approaches and various features.", "title": "" }, { "docid": "abbdc23d1c8833abda16f477dddb45fd", "text": "Recently introduced generative adversarial networks (GANs) have been shown numerous promising results to generate realistic samples. In the last couple of years, it has been studied to control features in synthetic samples generated by the GAN. Auxiliary classifier GAN (ACGAN), a conventional method to generate conditional samples, employs a classification layer in discriminator to solve the problem. However, in this paper, we demonstrate that the auxiliary classifier can hardly provide good guidance for training of the generator, where the classifier suffers from overfitting. Since the generator learns from classification loss, such a problem has a chance to hinder the training. To overcome this limitation, here, we propose a controllable GAN (ControlGAN) structure. By separating a feature classifier from the discriminator, the classifier can be trained with data augmentation technique, which can support to make a fine classifier. Evaluated with the CIFAR-10 dataset, ControlGAN outperforms AC-WGAN-GP which is an improved version of the ACGAN, where Inception score of the ControlGAN is 8.61 ± 0.10. Furthermore, we demonstrate that the ControlGAN can generate intermediate features and opposite features for interpolated input and extrapolated input labels that are not used in the training process. 
It implies that the ControlGAN can significantly contribute to the variety of generated samples.", "title": "" }, { "docid": "c7aea5f8b17f8f56fa8980c41573e28f", "text": "In the 1960s, ablative stereotactic surgery was employed for a variety of movement disorders and psychiatric conditions. Although largely abandoned in the 1970s because of highly effective drugs, such as levodopa for Parkinson's disease (PD), and a reaction against psychosurgery, the field has undergone a virtual renaissance, guided by a better understanding of brain circuitry and the circuit abnormalities underlying movement disorders such as PD and neuropsychiatric conditions, such as obsessive compulsive disorder. High-frequency electrical deep brain stimulation (DBS) of specific targets, introduced in the early 1990s for tremor, has gained widespread acceptance because of its less invasive, reversible, and adjustable features and is now utilized for an increasing number of brain disorders. This review summarizes the rationale behind DBS and the use of this technique for a variety of movement disorders and neuropsychiatric diseases.", "title": "" }, { "docid": "345328749b90f990e2f67415a067957a", "text": "Astrocyte swelling represents the major factor responsible for the brain edema associated with fulminant hepatic failure (FHF). The edema may be of such magnitude as to increase intracranial pressure leading to brain herniation and death. Of the various agents implicated in the generation of astrocyte swelling, ammonia has had the greatest amount of experimental support. This article reviews mechanisms of ammonia neurotoxicity that contribute to astrocyte swelling. These include oxidative stress and the mitochondrial permeability transition (MPT). The involvement of glutamine in the production of cell swelling will be highlighted. Evidence will be provided that glutamine induces oxidative stress as well as the MPT, and that these events are critical in the development of astrocyte swelling in hyperammonemia.", "title": "" }, { "docid": "56d0609fe4e68abbce27124dd5291033", "text": "Existing works indicate that the absence of explicit discourse connectives makes it difficult to recognize implicit discourse relations. In this paper we attempt to overcome this difficulty for implicit relation recognition by automatically inserting discourse connectives between arguments with the use of a language model. Then we propose two algorithms to leverage the information of these predicted connectives. One is to use these predicted implicit connectives as additional features in a supervised model. The other is to perform implicit relation recognition based only on these predicted connectives. Results on Penn Discourse Treebank 2.0 show that predicted discourse connectives help implicit relation recognition and the first algorithm can achieve an absolute average f-score improvement of 3% over a state of the art baseline system.", "title": "" }, { "docid": "1571fbb923755323e32ac7d023bd1025", "text": "Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder which is an extension of an Recurrent Neural Network based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. 
The Aligner is a conventional attention calculated over the encoded input information, while the Refiner is another attention or gating mechanism stacked over the attentive Aligner in order to further select and aggregate the semantic elements. The proposed model can be jointly trained both sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four different NLG domains, in which the experimental results showed that the proposed generator consistently outperforms the previous methods on all the NLG domains.", "title": "" }, { "docid": "6a04b4da4e77decf5f783e2edcc81d5b", "text": "Document-level sentiment classification is an important NLP task. The state of the art shows that attention mechanism is particularly effective on document-level sentiment classification. Despite the success of previous attention mechanism, it neglects the correlations among inputs (e.g., words in a sentence), which can be useful for improving the classification result. In this paper, we propose a novel Adaptive Attention Network (AAN) to explicitly model the correlations among inputs. Our AAN has a two-layer attention hierarchy. It first learns an attention score for each input. Given each input’s embedding and attention score, it then computes a weighted sum over all the words’ embeddings. This weighted sum is seen as a “context” embedding, aggregating all the inputs. Finally, to model the correlations among inputs, it computes another attention score for each input, based on the input embedding and the context embedding. These new attention scores are our final output of AAN. In document-level sentiment classification, we apply AAN to model words in a sentence and sentences in a review. We evaluate AAN on three public data sets, and show that it outperforms state-of-the-art baselines.", "title": "" }, { "docid": "142f47f01a81b7978f65ea63460d98e5", "text": "The developers of StarDog OWL/RDF DBMS have pioneered a new use of OWL as a schema language for RDF databases. This is achieved by adding integrity constraints (IC), also expressed in OWL syntax, to the traditional “open-world” OWL axioms. The new database paradigm requires a suitable visual schema editor. We propose here a two-level approach for integrated visual UML-style editing of extended OWL+IC ontologies: (i) introduce the notion of ontology splitter that can be used in conjunction with any OWL editor, and (ii) offer a custom graphical notation for axiom level annotations on the basis of compact UML-style OWL ontology editor OWLGrEd.", "title": "" }, { "docid": "5325beaeca7307b20d18b0ce79a2819e", "text": "It is becoming increasingly necessary for organizations to build a Cyber Threat Intelligence (CTI) platform to fight against sophisticated attacks. To reduce the risk of cyber attacks, security administrators and/or analysts can use a CTI platform to aggregate relevant threat information about adversaries, targets and vulnerabilities, analyze it and share key observations from the analysis with collaborators. In this paper, we introduce CyTIME (Cyber Threat Intelligence ManagEment framework) which is a framework for managing CTI data. CyTIME can periodically collect CTI data from external CTI data repositories via standard interfaces such as Trusted Automated Exchange of Indicator Information (TAXII). In addition, CyTIME is designed to automatically generate security rules without human intervention to mitigate discovered new cybersecurity threats in real time. 
To show the feasibility of CyTIME, we performed experiments to measure the time to complete the task of generating the security rule corresponding to a given CTI data. We used 1,000 different CTI files related to network attacks. Our experiment results demonstrate that CyTIME automatically generates security rules and store them into the internal database within 12.941 seconds on average (max = 13.952, standard deviation = 0.580).", "title": "" }, { "docid": "4464ba333313f77e986d4f9a04d5af61", "text": "Despite the recent success of deep learning for many speech processing tasks, single-microphone, speaker-independent speech separation remains challenging for two main reasons. The first reason is the arbitrary order of the target and masker speakers in the mixture permutation problem, and the second is the unknown number of speakers in the mixture output dimension problem. We propose a novel deep learning framework for speech separation that addresses both of these issues. We use a neural network to project the time-frequency representation of the mixture signal into a high-dimensional embedding space. A reference point attractor is created in the embedding space to represent each speaker which is defined as the centroid of the speaker in the embedding space. The time-frequency embeddings of each speaker are then forced to cluster around the corresponding attractor point which is used to determine the time-frequency assignment of the speaker. We propose three methods for finding the attractors for each source in the embedding space and compare their advantages and limitations. The objective function for the network is standard signal reconstruction error which enables end-to-end operation during both training and test phases. We evaluated our system using the Wall Street Journal dataset WSJ0 on two and three speaker mixtures and report comparable or better performance than other state-of-the-art deep learning methods for speech separation.", "title": "" }, { "docid": "90dc36628f9262157ea8722d82830852", "text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. 
This wide range of case studies serves as an overview of current problems of interest, and does not present every relevant result.", "title": "" }, { "docid": "f407ea856f2d00dca1868373e1bd9e2f", "text": "The software industry is heading towards centralized computing. Due to this trend, data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one’s own infrastructure to run applications, server time and storage space can be bought from an external service provider. From the customers’ point of view, the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hardware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing, a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salesforce.com and Google are examples of firms that already have working solutions on the market. Recently, Microsoft also released a preview version of its cloud platform, called Azure. Early adopters can test the platform and development tools free of charge [2, 3, 4]. The main purpose of this paper is to shed light on the internals of Microsoft’s Azure platform. In addition to examining how the Azure platform works, the benefits of the Azure platform are explored. The most important benefit of Microsoft’s solution is that it closely resembles the existing Windows environment. Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to the cloud is easy. This partially stems from the fact that Azure’s services can be exploited by an application whether it is run locally or in the cloud.", "title": "" }, { "docid": "9b0ed9c60666c36f8cf33631f791687d", "text": "The central notion of Role-Based Access Control (RBAC) is that users do not have discretionary access to enterprise objects. Instead, access permissions are administratively associated with roles, and users are administratively made members of appropriate roles. This idea greatly simplifies management of authorization while providing an opportunity for great flexibility in specifying and enforcing enterprise-specific protection policies. Users can be made members of roles as determined by their responsibilities and qualifications and can be easily reassigned from one role to another without modifying the underlying access structure. Roles can be granted new permissions as new applications and actions are incorporated, and permissions can be revoked from roles as needed. Some users and vendors have recognized the potential benefits of RBAC without a precise definition of what RBAC constitutes. Some RBAC features have been implemented in commercial products without a frame of reference as to the functional makeup and virtues of RBAC [1]. This lack of definition makes it difficult for consumers to compare products and for vendors to get credit for the effectiveness of their products in addressing known security problems. 
To correct these deficiencies, a number of government sponsored research efforts are underway to define RBAC precisely in terms of its features and the benefits it affords. This research includes: surveys to better understand the security needs of commercial and government users [2], the development of a formal RBAC model, architecture, prototype, and demonstrations to validate its use and feasibility. As a result of these efforts, RBAC systems are now beginning to emerge. The purpose of this paper is to provide additional insight as to the motivations and functionality that might go behind the official RBAC name.", "title": "" }, { "docid": "fb2ce776c503168e82cc3ffac9c205dd", "text": "Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data in linearly independent components (IC), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to the existing automated solutions the proposed method has two main advantages: First, it does not depend on direct recording of artifact signals, which then, e.g. have to be subtracted from the contaminated EEG. Second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.", "title": "" } ]
scidocsrr
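The RBAC passage in the record above describes permissions being attached to roles, with users gaining access only through role membership. A minimal Python sketch of that idea (the role names, permissions, and users below are invented placeholders, not drawn from any cited system) could look like this:

    # Roles aggregate permissions; users acquire permissions only through role membership.
    ROLE_PERMISSIONS = {
        "teller":  {"view_account", "post_deposit"},
        "auditor": {"view_account", "view_audit_log"},
        "manager": {"view_account", "post_deposit", "approve_loan"},
    }

    USER_ROLES = {
        "alice": {"teller"},
        "bob":   {"auditor", "teller"},
    }

    def has_permission(user: str, permission: str) -> bool:
        """A user holds a permission iff one of the user's roles grants it."""
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    assert has_permission("alice", "post_deposit")
    assert not has_permission("alice", "approve_loan")  # change roles, not individual grants

Reassigning a user then means changing role membership rather than editing individual permissions, which is the administrative simplification the passage emphasizes.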
b609955c49d4b41f3a21ebd551cea617
Image Captioning with Deep Bidirectional LSTMs
[ { "docid": "c879ee3945592f2e39bb3306602bb46a", "text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.", "title": "" }, { "docid": "4301af5b0c7910480af37f01847fb1fe", "text": "Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.", "title": "" } ]
[ { "docid": "73e804508e6ff5d9709be369640a2985", "text": "Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.", "title": "" }, { "docid": "030979f15cbbbea7c648f45166095322", "text": "We investigated grapheme–colour synaesthesia and found that: (1) The induced colours led to perceptual grouping and pop-out, (2) a grapheme rendered invisible through ‘crowding’ or lateral masking induced synaesthetic colours — a form of blindsight — and (3) peripherally presented graphemes did not induce colours even when they were clearly visible. Taken collectively, these and other experiments prove conclusively that synaesthesia is a genuine perceptual phenomenon, not an effect based on memory associations from childhood or on vague metaphorical speech. We identify different subtypes of number–colour synaesthesia and propose that they are caused by hyperconnectivity between colour and number areas at different stages in processing; lower synaesthetes may have cross-wiring (or cross-activation) within the fusiform gyrus, whereas higher synaesthetes may have cross-activation in the angular gyrus. This hyperconnectivity might be caused by a genetic mutation that causes defective pruning of connections between brain maps. The mutation may further be expressed selectively (due to transcription factors) in the fusiform or angular gyri, and this may explain the existence of different forms of synaesthesia. If expressed very diffusely, there may be extensive cross-wiring between brain regions that represent abstract concepts, which would explain the link between creativity, metaphor and synaesthesia (and the higher incidence of synaesthesia among artists and poets). Also, hyperconnectivity between the sensory cortex and amygdala would explain the heightened aversion synaesthetes experience when seeing numbers printed in the ‘wrong’ colour. Lastly, kindling (induced hyperconnectivity in the temporal lobes of temporal lobe epilepsy [TLE] patients) may explain the purported higher incidence of synaesthesia in these patients . We conclude with a synaesthesia-based theory of the evolution of language. Thus, our experiments on synaesthesia and our theoretical framework attempt to link several seemingly unrelated facts about the human mind. Far from being a mere curiosity, synaesthesia may provide a window into perception, thought and language. Journal of Consciousness Studies, 8, No. 12, 2001, pp. 3–34 Correspondence: Center for Brain and Cognition, University of California, San Diego, 9500 Gilman Dr. 
0109, La Jolla, CA 92093-0109, e-mail: vramacha@ucsd.edu", "title": "" }, { "docid": "04549adc3e956df0f12240c4d9c02bd7", "text": "Gamification, applying game mechanics to nongame contexts, has recently become a hot topic across a wide range of industries, and has been presented as a potential disruptive force in education. It is based on the premise that it can promote motivation and engagement and thus contribute to the learning process. However, research examining this assumption is scarce. In a set of studies we examined the effects of points, a basic element of gamification, on performance in a computerized assessment of mastery and fluency of basic mathematics concepts. The first study, with adult participants, found no effect of the point manipulation on accuracy of responses, although the speed of responses increased. In a second study, with 6–8 grade middle school participants, we found the same results for the two aspects of performance. In addition, middle school participants' reactions to the test revealed higher likeability ratings for the test under the points condition, but only in the first of the two sessions, and perceived effort during the test was higher in the points condition, but only for eighth grade students. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b1e2b2b18be40a22d506ee13bb5a43be", "text": "Single Shot MultiBox Detector (SSD) is one of the fastest algorithms in the current object detection field, which uses fully convolutional neural network to detect all scaled objects in an image. Deconvolutional Single Shot Detector (DSSD) is an approach which introduces more context information by adding the deconvolution module to SSD. And the mean Average Precision (mAP) of DSSD on PASCAL VOC2007 is improved from SSD's 77.5% to 78.6%. Although DSSD obtains higher mAP than SSD by 1.1%, the frames per second (FPS) decreases from 46 to 11.8. In this paper, we propose a single stage end-to-end image detection model called ESSD to overcome this dilemma. Our solution to this problem is to cleverly extend better context information for the shallow layers of the best single stage (e.g. SSD) detectors. Experimental results show that our model can reach 79.4% mAP, which is higher than DSSD and SSD by 0.8 and 1.9 points respectively. Meanwhile, our testing speed is 25 FPS in Titan X GPU which is more than double the original DSSD.", "title": "" }, { "docid": "24a4fb7f87d6ee75aa26aeb6b77f68bb", "text": "Networked learning is much more ambitious than previous approaches of ICT-support in education. It is therefore more difficult to evaluate the effectiveness and efficiency of the networked learning activities. Evaluation of learners' interactions in networked learning environments is a difficult, resource and expertise demanding task. Educators participating in online learning environments have very little support by integrated tools to evaluate students' activities and identify learners' online browsing behavior and interactions. As a consequence, educators are in need for non-intrusive and automatic ways to get feedback from learners' progress in order to better follow their learning process and appraise the online course effectiveness. They also need specialized tools for authoring, delivering, gathering and analysing data for evaluating the learning effectiveness of networked learning courses.
Thus, the aim of this paper is to propose a new set of services for the evaluator and lecturer so that he/she can easily evaluate the learners’ progress and produce evaluation reports based on learners’ behaviour within a Learning Management System. These services allow the evaluator to easily track down the learners’ online behavior at specific milestones set up, gather feedback in an automatic way and present them in a comprehensive way. The innovation of the proposed set of services lies on the effort to adopt/adapt some of the web usage mining techniques combining them with the use of semantic description of networked learning tasks", "title": "" }, { "docid": "119dd3f27e69d9ddc66aef9f0f0a30b8", "text": "Current business process modelling tools support neither restricting names nor using ontologies to describe process artefacts. This lack results in creating non-consistent process models which are difficult to understand, compare, evaluate and re-use, etc. Within this article we argue that the Business Functions Ontology (BFO) developed within the SUPER project may be effectively used while modelling processes as a mean for annotating them and thus help to avoid some of the above mentioned problems. We show the BFO structure as well as an example of its practical application within a tool for business process development.", "title": "" }, { "docid": "f6669d0b53dd0ca789219874d35bf14e", "text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. 
Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.", "title": "" }, { "docid": "227e82cb3f0fbede7c3e6278b5c7e8a8", "text": "Naturally occurring variations in maternal care alter the expression of genes that regulate behavioral and endocrine responses to stress, as well as hippocampal synaptic development. These effects form the basis for the development of stable, individual differences in stress reactivity and certain forms of cognition. Maternal care also influences the maternal behavior of female offspring, an effect that appears to be related to oxytocin receptor gene expression, and which forms the basis for the intergenerational transmission of individual differences in stress reactivity. Patterns of maternal care that increase stress reactivity in offspring are enhanced by stressors imposed on the mother. These findings provide evidence for the importance of parental care as a mediator of the effects of environmental adversity on neural development.", "title": "" }, { "docid": "2cbf690c565c6a201d4d8b6bda20b766", "text": "Visualizations that can handle flat files, or simple table data are most often used in data mining. In this paper we survey most visualizations that can handle more than three dimensions and fit our definition of Table Visualizations. We define Table Visualizations and some additional terms needed for the Table Visualization descriptions. For a preliminary evaluation of some of these visualizations see “Benchmark Development for the Evaluation of Visualization for Data Mining” also included in this volume. Data Sets Used Most of the datasets for the visualization examples are either the automobile or the Iris flower dataset. Nearly every data mining package comes with at least one of these two datasets. The datasets are available from the UC Irvine Machine Learning Repository [Uci97]. • Iris Plant Flowers – from Fischer 1936, physical measurements from three types of flowers. • Car (Automobile) – data concerning cars manufactured in America, Japan and Europe from 1970 to 1982 Definition of Table Visualizations A two-dimensional table of data is defined by M rows and N columns. A visualization of this data is termed a Table Visualization. In our definition, we define the columns to be the dimensions or the variates (also called fields or attributes), and the rows to be the data records. The data records are sometimes called n-dimensional points, or cases. For a more thorough discussion of the table model, see [Car99]. This very general definition only rules out some structured or hierarchical data. In the most general case, a visualization maps certain dimensions to certain features in the visualization. In geographical, scientific, and imaging visualizations, the spatial dimensions are normally assigned to the appropriate X, Y or Z spatial dimension. In a typical information visualization there is no inherent spatial dimension, but quite often the dimension mapped to height and width on the screen has a dominating effect. For example in a scatter plot of four-dimensional data one could map two features to the X- and Y-axis and the other two features to the color and shape of the plotted points. The dimensions assigned to the X- and Y-axis would dominate many aspects of analysis, such as clustering and outlier detection. Some Table Visualizations such as Parallel Coordinates, Survey Plots, or Radviz, treat all of the data dimensions equally.
We call these Regular Table Visualizations (RTVs). The data in a Table Visualizations is discrete. The data can be represented by different types, such as integer, real, categorical, nominal, etc. In most visualizations all data is converted to a real type before rendering the visualization. We are concerned with issues that arise from the various types of data, and use the more general term “Table Visualization.” These visualizations can also be called “Array Visualizations” because all the data are of the same type. Table Visualization data is not hierarchical. It does not explicitly contain internal structure or links. The data has a finite size (N and M are bounded). The data can be viewed as M points having N dimensions or features. The order of the table can sometimes be considered another dimension, which is an ordered sequence of integer values from 1 to M. If the table represents points in some other sequence such as a time series, that information should be represented as another column.", "title": "" }, { "docid": "53a55e8aa8b3108cdc8d015eabb3476d", "text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.", "title": "" }, { "docid": "7ac2f63821256491f45e2a9666333853", "text": "Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier’s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracybased performance assessment, many researchers have taken to report PrecisionRecall (PR) curves and associated areas as performance metric. We demonstrate in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions – e.g., the area under a PR curve takes the arithmetic mean of precision values whereas the Fβ score applies the harmonic mean. We show how to fix this by plotting PR curves in a different coordinate system, and demonstrate that the new Precision-Recall-Gain curves inherit all key advantages of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a PrecisionRecall-Gain curve allows us to calibrate the classifier’s scores so as to determine, for each operating point on the convex hull, the interval of β values for which the point optimises Fβ . 
We demonstrate experimentally that the area under traditional PR curves can easily favour models with lower expected F1 score than others, and so the use of Precision-Recall-Gain curves will result in better model selection.", "title": "" }, { "docid": "8dfd91ceadfcceea352975f9b5958aaf", "text": "The bag-of-words representation commonly used in text analysis can be analyzed very efficiently and retains a great deal of useful information, but it is also troublesome because the same thought can be expressed using many different terms or one term can have very different meanings. Dimension reduction can collapse together terms that have the same semantics, to identify and disambiguate terms with multiple meanings and to provide a lower-dimensional representation of documents that reflects concepts instead of raw terms. In this chapter, we survey two influential forms of dimension reduction. Latent semantic indexing uses spectral decomposition to identify a lower-dimensional representation that maintains semantic properties of the documents. Topic modeling, including probabilistic latent semantic indexing and latent Dirichlet allocation, is a form of dimension reduction that uses a probabilistic model to find the co-occurrence patterns of terms that correspond to semantic topics in a collection of documents. We describe the basic technologies in detail and expose the underlying mechanism. We also discuss recent advances that have made it possible to apply these techniques to very large and evolving text collections and to incorporate network structure or other contextual information.", "title": "" }, { "docid": "adc3e6b7768f79f9a7da2e597e033f6d", "text": "We present an investigation into the relation between design principles in Japanese gardens, and their associated perceptual effects. This leads to the realization that a set of design principles described in a Japanese gardening text by Shingen (1466), shows many parallels to the visual effects of perceptual grouping, studied by the Gestalt school of psychology. Guidelines for composition of rock clusters closely relate to perception of visual figure. Garden design elements are arranged into patterns that simplify figure-ground segmentation, while seemingly balancing the visual salience of subparts and the global arrangement. Visual ‘ground’ is analyzed via medial axis transformation (MAT), often associated with shape perception in humans. MAT analysis reveals implicit structure in the visual ground of a quintessential rock garden design. The MAT structure enables formal comparison of structure of figure and ground. They share some aesthetic qualities, with interesting differences. Both contain naturalistic asymmetric, self-similar, branching structures. While the branching pattern of the ground converges towards the viewer, that of the figure converges in the opposite direction.", "title": "" }, { "docid": "6bb318e50887e972cbfe52936c82c26f", "text": "We model the photo cropping problem as a cascade of attention box regression and aesthetic quality classification, based on deep learning. A neural network is designed that has two branches for predicting attention bounding box and analyzing aesthetics, respectively. The predicted attention box is treated as an initial crop window where a set of cropping candidates are generated around it, without missing important information. Then, aesthetics assessment is employed to select the final crop as the one with the best aesthetic quality. 
With our network, cropping candidates share features within full-image convolutional feature maps, thus avoiding repeated feature computation and leading to higher computation efficiency. Via leveraging rich data for attention prediction and aesthetics assessment, the proposed method produces high-quality cropping results, even with the limited availability of training data for photo cropping. The experimental results demonstrate the competitive results and fast processing speed (5 fps with all steps).", "title": "" }, { "docid": "5e71dbae22dabf2f6c25e5db46fb01ed", "text": "A Hamiltonian walk of a connected graph is a shortest closed walk that passes through every vertex at least once, and the length of a Hamiltonian walk is the total number of edges traversed by the walk. We show that every maximal planar graph with p (≥ 3) vertices has a Hamiltonian cycle or a Hamiltonian walk of length ≤ 3(p − 3)/2.", "title": "" }, { "docid": "3e8f290f9d19996feb6551cde8815307", "text": "Simplification of IT services is an imperative of the times we are in. Large legacy behemoths that exist at financial institutions are a result of years of patch work development on legacy landscapes that have developed in silos at various lines of businesses (LOBs). This increases costs -- for running financial services, changing the services as well as providing services to customers. We present here a basic guide to what constitutes complexity of IT landscape at financial institutions, what simplification means, and opportunities for simplification and how it can be carried out. We also explain a 4-phase approach to planning and executing Simplification of IT services at financial institutions.", "title": "" }, { "docid": "d4f836dc81cce657a54f1540f9c7e304", "text": "This paper describes the development work on single and multi beam laser grooving technology for 40nm node low-k/ULK semiconductor device. A Nd:YAG ultraviolet (UV) laser diode operating at a wavelength of 355 nm was used in this study. The effects of single and multi beam laser micromachining parameters, i.e. laser power, laser frequency, feed speed, and defocus amount were investigated. The laser processed die samples were thoroughly inspected and characterized. This includes the die edge and die sidewall grooving quality, the grooving shape/profile and the laser grooving depth analysis. Die strength is important and critical. Die damage from thermal and ablation caused by the laser around the die peripheral weakens the mechanical strength within the die, causing a reduction in die strength. The strength of a laser grooved die was improved by optimizing the laser process parameter. High power optical microscopy, Scanning Electron Microscopy (SEM), and focused ion beam (FIB) were the inspection tools/methods used in this study. Package reliability and stressing were carried out to confirm the robustness of the multi beam laser grooving process parameter and condition in a mass production environment. The dicing defects caused by the laser were validated by failure analysis. The advantages and limitations of conventional single beam compared to multi beam laser grooving process were also discussed. It was concluded that, multi beam laser grooving is possibly one of the best solutions to consider for dicing quality and throughput improvements for low-k/ULK wafer dicing.
The multi beam laser process is a feasible, efficient, and cost effective process compared to the conventional single beam laser ablation process.", "title": "" }, { "docid": "c7f0856c282d1039e44ba6ef50948d32", "text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.", "title": "" }, { "docid": "595635a707252c05e026eb6287d352b9", "text": "In real-life, it is easier to provide a visual cue when asking a question about a possibly unfamiliar topic, for example, asking the question, \"Where was this crop circle found?\". Providing an image of the instance is far more convenient than texting a verbose description of the visual properties, especially when the name of the query instance is not known. Nevertheless, having to identify the visual instance before processing the question and eventually returning the answer makes multimodal question-answering technically challenging. This paper addresses the problem of visual-to-text naming through the paradigm of answering-by-search in a two-stage computational framework, which is composed out of instance search (IS) and similar question ranking (QR). In IS, names of the instances are inferred from similar visual examples searched through a million-scale image dataset. For recalling instances of non-planar and non-rigid shapes, spatial configurations that emphasize topology consistency while allowing for local variations in matches have been incorporated. In QR, the candidate names of the instance are statistically identified from search results and directly utilized to retrieve similar questions from community-contributed QA (cQA) archives. By parsing questions into syntactic trees, a fuzzy matching between the inquirer's question and cQA questions is performed to locate answers and recommend related questions to the inquirer. The proposed framework is evaluated on a wide range of visual instances (e.g., fashion, art, food, pet, logo, and landmark) over various QA categories (e.g., factoid, definition, how-to, and opinion).", "title": "" }, { "docid": "db7a27dfe392005139fc44677a862bc7", "text": "LPWAN is a type of wireless telecommunication network designed to allow long range communications with relaxed requirements on data rate and latency between the core network and a high-volume of battery-operated devices. This article first reviews the leading LPWAN technologies on both unlicensed spectrum (SIGFOX, and LoRa) and licensed spectrum (LTE-M and NB-IoT). 
Although these technologies differ in many aspects, they do have one thing in common: they all utilize the narrow-band transmission mechanism as a leverage to achieve three fundamental goals, that is, high system capacity, long battery life, and wide coverage. This article introduces an effective bandwidth concept that ties these goals together with the transmission bandwidth, such that these contradicting goals are balanced for best overall system performance.", "title": "" } ]
scidocsrr
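The query of the record above, image captioning with deep bidirectional LSTMs, can be illustrated with a rough PyTorch skeleton in which a CNN image feature is projected into the word-embedding space and scored together with the caption tokens by a bidirectional LSTM. This is only a hedged sketch of the general idea, not the architecture of any cited paper; the layer sizes, vocabulary size, and tensor shapes are arbitrary assumptions.

    import torch
    import torch.nn as nn

    class BiLSTMCaptioner(nn.Module):
        """Schematic captioner: an image feature is prepended to the word sequence."""
        def __init__(self, vocab_size, feat_dim=2048, embed_dim=256, hidden_dim=512):
            super().__init__()
            self.img_proj = nn.Linear(feat_dim, embed_dim)   # map CNN feature into embedding space
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden_dim, vocab_size)  # both directions -> vocabulary scores

        def forward(self, img_feat, captions):
            # Prepend the projected image feature as a pseudo-token, then score each position.
            img_tok = self.img_proj(img_feat).unsqueeze(1)    # (B, 1, E)
            seq = torch.cat([img_tok, self.embed(captions)], dim=1)
            hidden, _ = self.lstm(seq)
            return self.out(hidden)                           # (B, T+1, vocab)

    # Smoke test with hypothetical shapes.
    model = BiLSTMCaptioner(vocab_size=10000)
    scores = model(torch.randn(4, 2048), torch.randint(0, 10000, (4, 12)))
    print(scores.shape)  # torch.Size([4, 13, 10000])

At generation time one would typically decode with the forward direction only, or use both directions to rescore candidate captions.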
82e86caca75d07f397df9e4a82016950
ByteFreq: Malware clustering using byte frequency
[ { "docid": "5694ebf4c1f1e0bf65dd7401d35726ed", "text": "Data collection is not a big issue anymore with available honeypot software and setups. However malware collections gathered from these honeypot systems often suffer from massive sample counts, data analysis systems like sandboxes cannot cope with. Sophisticated self-modifying malware is able to generate new polymorphic instances of itself with different message digest sums for each infection attempt, thus resulting in many different samples stored for the same specimen. Scaling analysis systems that are fed by databases that rely on sample uniqueness based on message digests is only feasible to a certain extent. In this paper we introduce a non cryptographic, fast to calculate hash function for binaries in the Portable Executable format that transforms structural information about a sample into a hash value. Grouping binaries by hash values calculated with the new function allows for detection of multiple instances of the same polymorphic specimen as well as samples that are broken e.g. due to transfer errors. Practical evaluation on different malware sets shows that the new function allows for a significant reduction of sample counts.", "title": "" }, { "docid": "b4c5ddab0cb3e850273275843d1f264f", "text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.", "title": "" } ]
[ { "docid": "7a51add2b313095506035e2ef39e4d18", "text": "Many Stochastic Optimal Control (SOC) approaches rely on samples to either obtain an estimate of the value function or a linearisation of the underlying system model. However, these approaches typically neglect the fact that the accuracy of the policy update depends on the closeness of the resulting trajectory distribution to these samples. The greedy operator does not consider such closeness constraint to the samples. Hence, the greedy operator can lead to oscillations or even instabilities in the policy updates. Such undesired behaviour is likely to result in an inferior performance of the estimated policy. We reuse inspiration from the reinforcement learning community and relax the greedy operator used in SOC with an information theoretic bound that limits the `distance' of two subsequent trajectory distributions in a policy update. The introduced bound ensures a smooth and stable policy update. Our method is also well suited for model-based reinforcement learning, where we estimate the system dynamics model from data. As this model is likely to be inaccurate, it might be dangerous to exploit the model greedily. Instead, our bound ensures that we generate new data in the vicinity of the current data, such that we can improve our estimate of the system dynamics model. We show that our approach outperforms several state of the art approaches on challenging simulated robot control tasks.", "title": "" }, { "docid": "b7d76bc189aa2e99886abcaddce7d61d", "text": "Currently, face recognition system is growing sustainably on a larger scope. A few years ago, face recognition was used as a personal identification with a limited scope, now this technology has grown in the field of security, in terms of preventing fraudsters, criminals, and terrorists. In addition, face recognition is also used in detecting how tired a driver is, reducing the occurrence of road accidents, as well as in marketing, advertising, health, and others. Many method are developed to give the best accuracy in face recognition. Deep learning approach become trend in this field because of stunning results, and fast computation. However, the problem about accuracy, complexity, and scalability become a challenges in face recognition. This paper focus on recognizing the importance of this technology, how to achieve high accuracy with low complexity. Deep learning and non-deep learning methods are discussed and compared to analyze their advantages and disadvantages. From critical analysis using experiment with YALE dataset, non-deep learning algorithm can reach up to 90.6% for low-high complexity and 94.67% in deep learning method for low-high complexity. Genetic algorithm combining with CNN and SVM was an optimization method for overcome accuracy and complexity problems.", "title": "" }, { "docid": "4d396614420b24265d05b265b7ae6cd5", "text": "The objective of this study was to characterise the antagonistic activity of cellular components of potential probiotic bacteria isolated from the gut of healthy rohu (Labeo rohita), a tropical freshwater fish, against the fish pathogen, Aeromonas hydrophila. Three potential probiotic strains (referred to as R1, R2, and R5) were screened using a well diffusion, and their antagonistic activity against A. hydrophila was determined. Biochemical tests and 16S rRNA gene analysis confirmed that R1, R2, and R5 were Lactobacillus plantarum VSG3, Pseudomonas aeruginosa VSG2, and Bacillus subtilis VSG1, respectively. 
Four different fractions of cellular components (i.e. the whole-cell product, heat-killed whole-cell product [HKWCP], intracellular product [ICP], and extracellular product) of these selected strains were effective in an in vitro sensitivity test against 6 A. hydrophila strains. Among the cellular components, the ICP of R1, HKWCP of R2, and ICP of R5 exhibited the strongest antagonistic activities, as evidenced by their inhibition zones. The antimicrobial compounds from these selected cellular components were partially purified by thin-layer and high-performance liquid chromatography, and their properties were analysed. The ranges of pH stability of the purified compounds were wide (3.0-10.0), and compounds were thermally stable up to 90 °C. Considering these results, isolated probiotic strains may find potential applications in the prevention and treatment of aquatic aeromonosis.", "title": "" }, { "docid": "b32d6bc2d14683c4bf3557dad560edca", "text": "In this paper, we describe the fabrication and testing of a stretchable fabric sleeve with embedded elastic strain sensors for state reconstruction of a soft robotic joint. The strain sensors are capacitive and composed of graphite-based conductive composite electrodes and a silicone elastomer dielectric. The sensors are screenprinted directly into the fabric sleeve, which contrasts the approach of pre-fabricating sensors and subsequently attaching them to a host. We demonstrate the capabilities of the sensor-embedded fabric sleeve by determining the joint angle and end effector position of a soft pneumatic joint with similar accuracy to a traditional IMU. Furthermore, we show that the sensory sleeve is capable of capturing more complex material states, such as fabric buckling and non-constant curvatures along linkages and joints.", "title": "" }, { "docid": "191b5477cd8ba0cc26a0f4a51604dc85", "text": "In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so called \" sleeping beauties \" , SBs) or have presented single publications as cases of SBs. Most recently, Ke et al. (2015) proposed the so called \" beauty coefficient \" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term \" smart girl \" (SG) is suggested to differentiate instant credit or \" flashes in the pan \" from SBs. While SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way for identifying SGs and SBs quantitatively – complementing the beauty coefficient B. The citation angles for all articles from 1980 (n=166870) in natural sciences are calculated for identifying SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.", "title": "" }, { "docid": "cebe12b8c990bfc68fa485b5046651c9", "text": "The retention of existing IT employees is crucial due to the expected shortage of the IT labor force in the U.S., Canada, and European countries. While much of the extant IT turnover literature implicitly assumes that IT employees are homogeneous, we contend that they are a diverse group and that exploring the group in depth would reveal further insights into why employees turnover. 
We examined a sample of employees by IT job type in a turnover model of the antecedents and impacts of perceived organizational support (POS), which is another infrequently studied concept in the literature but is a potentially important predictor of turnover. A survey of 302 IT employees at a large U.S.-based company showed that these employees are in fact diverse. The relationships between role ambiguity and POS and work schedule flexibility and POS were found to be significant for managerial employees, but not for technically-oriented employees. The relationship between career accommodations and POS, however, was found to be significant for technically-oriented employees, but not managerial employees. As a whole, this study suggests that by combining all IT employees together in our analyses, we may forego some of the unique insights about these employees that we can otherwise cultivate to strengthen the bond between the organization and its employees and to enhance our existing IT turnover literature. The results of this study provide implications for organizations on how they can better balance the tactics they use to retain their valued IT employees. IT managers can be in a better position to focus on building relationships with their employees based on what is generally important to those employees.", "title": "" }, { "docid": "dbd8c2e36deb9c17818b2031502857ba", "text": "This paper presents the mechanical design for a new five fingered, twenty degree-of-freedom dexterous hand patterned after human anatomy and actuated by Shape Memory Alloy artificial muscles. Two experimental prototypes of a finger, one fabricated by traditional means and another fabricated by rapid prototyping techniques, are described and used to evaluate the design. An important aspect of the Rapid Prototype technique used here is that this multi-articulated hand will be fabricated in one step, without requiring assembly, while maintaining its desired mobility. The use of Shape Memory Alloy actuators combined with the rapid fabrication of the non-assembly type hand, reduce considerably its weight and fabrication time. Therefore, the focus of this paper is the mechanical design of a dexterous hand that combines Rapid Prototype techniques and smart actuators. The type of robotic hand described in this paper can be utilized for applications requiring low weight, compactness, and dexterity such as prosthetic devices, space and planetary exploration.", "title": "" }, { "docid": "05a07644824dd85eb2251a642c506d18", "text": "BACKGROUND\nWe present a method utilizing Healthcare Cost and Utilization Project (HCUP) dataset for predicting disease risk of individuals based on their medical diagnosis history. The presented methodology may be incorporated in a variety of applications such as risk management, tailored health communication and decision support systems in healthcare.\n\n\nMETHODS\nWe employed the National Inpatient Sample (NIS) data, which is publicly available through Healthcare Cost and Utilization Project (HCUP), to train random forest classifiers for disease prediction. Since the HCUP data is highly imbalanced, we employed an ensemble learning approach based on repeated random sub-sampling. This technique divides the training data into multiple sub-samples, while ensuring that each sub-sample is fully balanced. We compared the performance of support vector machine (SVM), bagging, boosting and RF to predict the risk of eight chronic diseases.\n\n\nRESULTS\nWe predicted eight disease categories. 
Overall, the RF ensemble learning method outperformed SVM, bagging and boosting in terms of the area under the receiver operating characteristic (ROC) curve (AUC). In addition, RF has the advantage of computing the importance of each variable in the classification process.\n\n\nCONCLUSIONS\nIn combining repeated random sub-sampling with RF, we were able to overcome the class imbalance problem and achieve promising results. Using the national HCUP data set, we predicted eight disease categories with an average AUC of 88.79%.", "title": "" }, { "docid": "5928efbaaa1ec64bfaab575f1bce6bd5", "text": "Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.", "title": "" }, { "docid": "674477f1d9ed9699ad582967c5bac290", "text": "We know very little about how neural language models (LM) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled, replaced, or dropped. On two standard datasets, Penn Treebank and WikiText-2, we find that the model is capable of using about 200 tokens of context on average, but sharply distinguishes nearby context (recent 50 tokens) from the distant history. The model is highly sensitive to the order of words within the most recent sentence, but ignores word order in the long-range context (beyond 50 tokens), suggesting the distant past is modeled only as a rough semantic field or topic. We further find that the neural caching model (Grave et al., 2017b) especially helps the LSTM to copy words from within this distant context. Overall, our analysis not only provides a better understanding of how neural LMs use their context, but also sheds light on recent success from cache-based models.", "title": "" }, { "docid": "37b1f275438471b89a226877a1783a6b", "text": "This paper presents the implementation of a wearable wireless sensor network aimed at monitoring harmful gases in industrial environments. The proposed solution is based on a customized wearable sensor node using a low-power low-rate wireless personal area network (LR-WPAN) communications protocol, which as a first approach measures CO₂ concentration, and employs different low power strategies for appropriate energy handling which is essential to achieving long battery life. 
These wearables nodes are connected to a deployed static network and a web-based application allows data storage, remote control and monitoring of the complete network. Therefore, a complete and versatile remote web application with a locally implemented decision-making system is accomplished, which allows early detection of hazardous situations for exposed workers.", "title": "" }, { "docid": "abd84676cb7d0d96f41461c344585a18", "text": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.", "title": "" }, { "docid": "07e54849ceae5e425b106619e760e522", "text": "In this paper, we propose a novel approach to interpret a well-trained classification model through systematically investigating effects of its hidden units on prediction making. We search for the core hidden units responsible for predicting inputs as the class of interest under the generative Bayesian inference framework. We model such a process of unit selection as an Indian Buffet Process, and derive a simplified objective function via the MAP asymptotic technique. The induced binary optimization problem is efficiently solved with a continuous relaxation method by attaching a Switch Gate layer to the hidden layers of interest. The resulted interpreter model is thus end-to-end optimized via standard gradient back-propagation. Experiments are conducted with two popular deep convolutional classifiers, respectively well-trained on the MNIST dataset and the CIFAR10 dataset. The results demonstrate that the proposed interpreter successfully finds the core hidden units most responsible for prediction making. The modified model, only with the selected units activated, can hold correct predictions at a high rate. Besides, this interpreter model is also able to extract the most informative pixels in the images by connecting a Switch Gate layer to the input layer.", "title": "" }, { "docid": "f7239ce387f17b279263e6bdaff612d0", "text": "Purpose – This survey aims to study and analyze current techniques and methods for context-aware web service systems, to discuss future trends and propose further steps on making web services systems context-aware. Design/methodology/approach – The paper analyzes and compares existing context-aware web service-based systems based on techniques they support, such as context information modeling, context sensing, distribution, security and privacy, and adaptation techniques. 
Existing systems are also examined in terms of application domains, system type, mobility support, multi-organization support and level of web services implementation. Findings – Supporting context-aware web service-based systems is increasing. It is hard to find a truly context-aware web service-based system that is interoperable and secure, and operates on multi-organizational environments. Various issues, such as distributed context management, context-aware service modeling and engineering, context reasoning and quality of context, security and privacy issues have not been well addressed. Research limitations/implications – The number of systems analyzed is limited. Furthermore, the survey is based on published papers. Therefore, up-to-date information and development might not be taken into account. Originality/value – Existing surveys do not focus on context-awareness techniques for web services. This paper helps to understand the state of the art in context-aware techniques for web services that can be employed in the future of services which is built around, amongst others, mobile devices, web services, and pervasive environments.", "title": "" }, { "docid": "5fcda05ef200cd326ecb9c2412cf50b3", "text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.", "title": "" }, { "docid": "d8802a7fcdbd306bd474f3144bc688a4", "text": "Shape from defocus (SFD) is one of the most popular techniques in monocular 3D vision. While most SFD approaches require two or more images of the same scene captured at a fixed view point, this paper presents an efficient approach to estimate absolute depth from a single defocused image. Instead of directly measuring defocus level of each pixel, we propose to design a sequence of aperture-shape filters to segment a defocused image by defocus level. A boundary-weighted belief propagation algorithm is employed to obtain a smooth depth map. We also give an estimation of depth error. 
Extensive experiments show that our approach outperforms the state-of-the-art single-image SFD approaches both in precision of the estimated absolute depth and running time.", "title": "" }, { "docid": "274186e87674920bfe98044aa0208320", "text": "Message routing in mobile delay tolerant networks inherently relies on the cooperation between nodes. In most existing routing protocols, the participation of nodes in the routing process is taken as granted. However, in reality, nodes can be unwilling to participate. We first show in this paper the impact of the unwillingness of nodes to participate in existing routing protocols through a set of experiments. Results show that in the presence of even a small proportion of nodes that do not forward messages, performance is heavily degraded. We then analyze two major reasons of the unwillingness of nodes to participate, i.e., their rational behavior (also called selfishness) and their wariness of disclosing private mobility information. Our main contribution in this paper is to survey the existing related research works that overcome these two issues. We provide a classification of the existing approaches for protocols that deal with selfish behavior. We then conduct experiments to compare the performance of these strategies for preventing different types of selfish behavior. For protocols that preserve the privacy of users, we classify the existing approaches and provide an analytical comparison of their security guarantees.", "title": "" }, { "docid": "e8c97daac0301310074698273d813772", "text": "Deep learning-based robotic grasping has made significant progress thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge. In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic procedurally generated objects, and train a deep neural network to perform grasp planning on these objects. Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs of a scene to a probability distribution over possible grasps. This model allows us to sample grasps efficiently at test time (or avoid sampling entirely). We evaluate our model architecture and data generation pipeline in simulation and the real world. We find we can achieve a >90% success rate on previously unseen realistic objects at test time in simulation despite having only been trained on random objects. We also demonstrate an 80% success rate on real-world grasp attempts despite having only been trained on random simulated objects.", "title": "" }, { "docid": "f57d1d12d8a1932610ac4bf9bf5372d6", "text": "The CXXC active-site motif of thiol-disulfide oxidoreductases is thought to act as a redox rheostat, the sequence of which determines its reduction potential and functional properties. We tested this idea by selecting for mutants of the CXXC motif in a reducing oxidoreductase (thioredoxin) that complement null mutants of a very oxidizing oxidoreductase, DsbA. 
We found that altering the CXXC motif affected not only the reduction potential of the protein, but also its ability to function as a disulfide isomerase and also impacted its interaction with folding protein substrates and reoxidants. It is surprising that nearly all of our thioredoxin mutants had increased activity in disulfide isomerization in vitro and in vivo. Our results indicate that the CXXC motif has the remarkable ability to confer a large number of very specific properties on thioredoxin-related proteins.", "title": "" }, { "docid": "361dc8037ebc30cd2f37f4460cf43569", "text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1).", "title": "" } ]
scidocsrr
2b2d2bc749d9a78c6ee815fcccab5239
Visualizing timelines: evolutionary summarization via iterative reinforcement between text and image streams
[ { "docid": "6215c6ca6826001291314405ea936dda", "text": "This paper describes a text mining tool that performs two tasks, namely document clustering and text summarization. These tasks have, of course, their corresponding counterpart in “conventional” data mining. However, the textual, unstructured nature of documents makes these two text mining tasks considerably more difficult than their data mining counterparts. In our system document clustering is performed by using the Autoclass data mining algorithm. Our text summarization algorithm is based on computing the value of a TF-ISF (term frequency – inverse sentence frequency) measure for each word, which is an adaptation of the conventional TF-IDF (term frequency – inverse document frequency) measure of information retrieval. Sentences with high values of TF-ISF are selected to produce a summary of the source text. The system has been evaluated on real-world documents, and the results are satisfactory.", "title": "" }, { "docid": "78976c627fb72db5393837169060a92a", "text": "Although many variants of language models have been proposed for information retrieval, there are two related retrieval heuristics remaining \"external\" to the language modeling approach: (1) proximity heuristic which rewards a document where the matched query terms occur close to each other; (2) passage retrieval which scores a document mainly based on the best matching passage. Existing studies have only attempted to use a standard language model as a \"black box\" to implement these heuristics, making it hard to optimize the combination parameters.\n In this paper, we propose a novel positional language model (PLM) which implements both heuristics in a unified language model. The key idea is to define a language model for each position of a document, and score a document based on the scores of its PLMs. The PLM is estimated based on propagated counts of words within a document through a proximity-based density function, which both captures proximity heuristics and achieves an effect of \"soft\" passage retrieval. We propose and study several representative density functions and several different PLM-based document ranking strategies. Experiment results on standard TREC test collections show that the PLM is effective for passage retrieval and performs better than a state-of-the-art proximity-based retrieval model.", "title": "" }, { "docid": "f2af56bef7ae8c12910d125a3b729e6a", "text": "We investigate an important and challenging problem in summary generation, i.e., Evolutionary Trans-Temporal Summarization (ETTS), which generates news timelines from massive data on the Internet. ETTS greatly facilitates fast news browsing and knowledge comprehension, and hence is a necessity. Given the collection of time-stamped web documents related to the evolving news, ETTS aims to return news evolution along the timeline, consisting of individual but correlated summaries on each date. Existing summarization algorithms fail to utilize trans-temporal characteristics among these component summaries. We propose to model trans-temporal correlations among component summaries for timelines, using inter-date and intra-date sentence dependencies, and present a novel combination. We develop experimental systems to compare 5 rival algorithms on 6 instinctively different datasets which amount to 10251 documents. 
Evaluation results in ROUGE metrics indicate the effectiveness of the proposed approach based on trans-temporal information.", "title": "" }, { "docid": "f0c1bfed4083e6f6e5748fdbe76bd42a", "text": "Multidocument extractive summarization relies on the concept of sentence centrality to identify the most important sentences in a document. Centrality is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We are now considering an approach for computing sentence importance based on the concept of eigenvector centrality (prestige) that we call LexPageRank. In this model, a sentence connectivity matrix is constructed based on cosine similarity. If the cosine similarity between two sentences exceeds a particular predefined threshold, a corresponding edge is added to the connectivity matrix. We provide an evaluation of our method on DUC 2004 data. The results show that our approach outperforms centroid-based summarization and is quite successful compared to other summarization systems.", "title": "" }, { "docid": "2c6d8e232c2d609c5ff1577ae39a9bad", "text": "In this paper, we present a framework and a system that extracts events relevant to a query from a collection C of documents, and places such events along a timeline. Each event is represented by a sentence extracted from C, based on the assumption that \"important\" events are widely cited in many documents for a period of time within which these events are of interest. In our experiments, we used queries that are event types (\"earthquake\") and person names (e.g. \"George Bush\"). Evaluation was performed using G8 leader names as queries: comparison made by human evaluators between manually and system generated timelines showed that although manually generated timelines are on average more preferable, system generated timelines are sometimes judged to be better than manually constructed ones.", "title": "" } ]
[ { "docid": "39430478909e5818b242e0b28db419f0", "text": "BACKGROUND\nA modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined.\n\n\nAIM\nThe purpose of the current study was to evaluate the concurrent and predictive validity of the mBBS for individuals with IVD.\n\n\nMETHOD\nFifty-four individuals with IVD and Gross Motor Functioning Classification System (GMFCS) Levels I and II participated in this study. The mBBS, the Centre of Gravity (COG), the Comfortable Walking Speed (CWS), and the Barthel Index (BI) were assessed during one session in order to determine the concurrent validity. The percentage of explained variance was determined by analyzing the squared multiple correlation between the mBBS and the BI, COG, CWS, GMFCS, and age, gender, level of intellectual disability, presence of epilepsy, level of visual impairment, and presence of hearing impairment. Furthermore, an overview of the degree of dependence between the mBBS, BI, CWS, and COG was obtained by graphic modelling. Predictive validity of mBBS was determined with respect to the number of falling incidents during 26 weeks and evaluated with Zero-inflated regression models using the explanatory variables of mBBS, BI, COG, CWS, and GMFCS.\n\n\nRESULTS\nThe results demonstrated that two significant explanatory variables, the GMFCS Level and the BI, and one non-significant variable, the CWS, explained approximately 60% of the mBBS variance. Graphical modelling revealed that BI was the most important explanatory variable for mBBS moreso than COG and CWS. Zero-inflated regression on the frequency of falling incidents demonstrated that the mBBS was not predictive, however, COG and CWS were.\n\n\nCONCLUSIONS\nThe results indicated that the concurrent validity as well as the predictive validity of mBBS were low for persons with IVD.", "title": "" }, { "docid": "32ae0b0c5b3ca3a7ede687872d631d29", "text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results —Patients with acute MI of,12 hours’ duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P50.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P50.003, at 7 days; 11.2% versus 5.8%, P50.03, at 30 days; and 17.8% versus 11.6%, P50.05, at 6 months). 
Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P=0.047, at 7 days; 5.8% versus 3.2%, P=0.20, at 30 days; and 12.0% versus 6.9%, P=0.07, at 6 months. The need for unplanned, “bail-out” stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P=0.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P=0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected. (Circulation. 1998;98:734-741.)", "title": "" }, { "docid": "b0fcdc52d4a1bc1f8e6c4b8940d7a17f", "text": "Convolutional neural networks (CNNs) are deployed in a wide range of image recognition, scene segmentation and object detection applications. Achieving state-of-the-art accuracy in CNNs often results in large models and complex topologies that require significant compute resources to complete in a timely manner. Binarised neural networks (BNNs) have been proposed as an optimised variant of CNNs, which constrain the weights and activations to +1 or -1 and thus offer compact models and lower computational complexity per operation. This paper presents a high performance BNN accelerator on the Intel®Xeon+FPGA™ platform. The proposed accelerator is designed to take advantage of the Xeon+FPGA system in a way that a specialised FPGA architecture can be targeted for the most compute intensive parts of the BNN whilst other parts of the topology can be handled by the Xeon™ CPU. The implementation is evaluated by comparing the raw compute performance and energy efficiency for key layers in standard CNN topologies against an Nvidia Titan X Pascal GPU and other published FPGA BNN accelerators. The results show that our single-package integrated Arria™ 10 FPGA accelerator coupled with a high-end Xeon CPU can offer comparable performance and better energy efficiency than a high-end discrete Titan X GPU card. In addition, our solution delivers the best performance compared to previous BNN FPGA implementations.", "title": "" }, { "docid": "20f379e3b4f62c4d319433bb76f3a490", "text": "We propose probabilistic generative models, called parametric mixture models (PMMs), for the multiclass, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages.", "title": "" }, { "docid": "770f31265aa7107a0890275a54089bc1", "text": "The analytic hierarchy process (AHP) provides a structure for decision-making processes where there is a limited number of choices but each has a number of attributes. This paper explores the use of AHP for deciding on a car purchase.
In the context of shopping, it is important to include elements that provide attributes that make consumer decision-making easier, comfortable and therefore, lead to a car purchase. As the car market becomes more competitive, there is a greater demand for innovation that provides better customer service and strategic competition in the business management. This paper presents a new methodological extension of the AHP by focusing on two issues. One combines pairwise comparison with a spreadsheet method using a 5-point rating scale. The other applies the group weight to a reciprocal consistency ratio. Three newly formed car models of midsize are used to show how the method allows choice to be prioritized and analyzed statistically. # 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "9902a306ff4c633f30f6d9e56aa8335c", "text": "The bank director was pretty upset noticing Joe, the system administrator, spending his spare time playing Mastermind, an old useless game of the 70ies. He had fought the instinct of telling him how to better spend his life, just limiting to look at him in disgust long enough to be certain to be noticed. No wonder when the next day the director fell on his chair astonished while reading, on the newspaper, about a huge digital fraud on the ATMs of his bank, with millions of Euros stolen by a team of hackers all around the world. The article mentioned how the hackers had ‘played with the bank computers just like playing Mastermind’, being able to disclose thousands of user PINs during the one-hour lunch break. That precise moment, a second before falling senseless, he understood the subtle smile on Joe’s face the day before, while training at his preferred game, Mastermind.", "title": "" }, { "docid": "7cef2fac422d9fc3c3ffbc130831b522", "text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.", "title": "" }, { "docid": "8a5e4a6f418975f352a6b9e3d8958d50", "text": "BACKGROUND\nDysphagia is associated with poor outcome in stroke patients. Studies investigating the association of dysphagia and early dysphagia screening (EDS) with outcomes in patients with acute ischemic stroke (AIS) are rare. The aims of our study are to investigate the association of dysphagia and EDS within 24 h with stroke-related pneumonia and outcomes.\n\n\nMETHODS\nOver a 4.5-year period (starting November 2007), all consecutive AIS patients from 15 hospitals in Schleswig-Holstein, Germany, were prospectively evaluated. 
The primary outcomes were stroke-related pneumonia during hospitalization, mortality, and disability measured on the modified Rankin Scale ≥2-5, in which 2 indicates an independence/slight disability to 5 severe disability.\n\n\nRESULTS\nOf 12,276 patients (mean age 73 ± 13; 49% women), 9,164 patients (74%) underwent dysphagia screening; of these patients, 55, 39, 4.7, and 1.5% of patients had been screened for dysphagia within 3, 3 to <24, 24 to ≤72, and >72 h following admission. Patients who underwent dysphagia screening were likely to be older, more affected on the National Institutes of Health Stroke Scale score, and to have higher rates of neurological symptoms and risk factors than patients who were not screened. A total of 3,083 patients (25.1%; 95% CI 24.4-25.8) had dysphagia. The frequency of dysphagia was higher in patients who had undergone dysphagia screening than in those who had not (30 vs. 11.1%; p < 0.001). During hospitalization (mean 9 days), 1,271 patients (10.2%; 95% CI 9.7-10.8) suffered from stroke-related pneumonia. Patients with dysphagia had a higher rate of pneumonia than those without dysphagia (29.7 vs. 3.7%; p < 0.001). Logistic regression revealed that dysphagia was associated with increased risk of stroke-related pneumonia (OR 3.4; 95% CI 2.8-4.2; p < 0.001), case fatality during hospitalization (OR 2.8; 95% CI 2.1-3.7; p < 0.001) and disability at discharge (OR 2.0; 95% CI 1.6-2.3; p < 0.001). EDS within 24 h of admission appeared to be associated with decreased risk of stroke-related pneumonia (OR 0.68; 95% CI 0.52-0.89; p = 0.006) and disability at discharge (OR 0.60; 95% CI 0.46-0.77; p < 0.001). Furthermore, dysphagia was independently correlated with an increase in mortality (OR 3.2; 95% CI 2.4-4.2; p < 0.001) and disability (OR 2.3; 95% CI 1.8-3.0; p < 0.001) at 3 months after stroke. The rate of 3-month disability was lower in patients who had received EDS (52 vs. 40.7%; p = 0.003), albeit an association in the logistic regression was not found (OR 0.78; 95% CI 0.51-1.2; p = 0.2).\n\n\nCONCLUSIONS\nDysphagia exposes stroke patients to a higher risk of pneumonia, disability, and death, whereas an EDS seems to be associated with reduced risk of stroke-related pneumonia and disability.", "title": "" }, { "docid": "0c4a9ee404cec4176e9d0f41c6d73b15", "text": "A novel envelope detector structure is proposed in this paper that overcomes the traditional trade-off required in these circuits, improving both the tracking and keeping of the signal. The method relies on holding the signal by two capacitors, discharging one when the other is in hold mode and employing the held signals to form the output. Simulation results show a saving greater than 60% of the capacitor area for the same ripple (0.3%) and a release time constant (0.4¿s) much smaller than that obtained by the conventional circuits.", "title": "" }, { "docid": "02605f4044a69b70673121985f1bd913", "text": "A novel class of low-cost, small-footprint and high-gain antenna arrays is presented for W-band applications. A 4 × 4 antenna array is proposed and demonstrated using substrate-integrated waveguide (SIW) technology for the design of its feed network and longitudinal slots in the SIW top metallic surface to drive the array antenna elements. Dielectric cubes of low-permittivity material are placed on top of each 1 × 4 antenna array to increase the gain of the circular patch antenna elements. 
This new design is compared to a second 4 × 4 antenna array which, instead of dielectric cubes, uses vertically stacked Yagi-like parasitic director elements to increase the gain. Measured impedance bandwidths of the two 4 × 4 antenna arrays are about 7.5 GHz (94.2-101.8 GHz) at 18 ± 1 dB gain level, with radiation patterns and gains of the two arrays remaining nearly constant over this bandwidth. While the fabrication effort of the new array involving dielectric cubes is significantly reduced, its measured radiation efficiency of 81 percent is slightly lower compared to 90 percent of the Yagi-like design.", "title": "" }, { "docid": "4494d5b42c8daf6a45608159a748fd7d", "text": "A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory.", "title": "" }, { "docid": "00dbe58bcb7d4415c01a07255ab7f365", "text": "The paper deals with a time varying vehicle-to-vehicle channel measurement in the 60 GHz millimeter wave (MMW) band using a unique time-domain channel sounder built from off-the-shelf components and standard measurement devices and employing Golay complementary sequences as the excitation signal. The aim of this work is to describe the sounder architecture, primary data processing technique, achievable system parameters, and preliminary measurement results. We measured the signal propagation between two passing vehicles and characterized the signal reflected by a car driving on a highway. The proper operation of the channel sounder is verified by a reference measurement performed with an MMW vector network analyzer in a rugged stationary office environment. The goal of the paper is to show the measurement capability of the sounder and its superior features like 8 GHz measuring bandwidth enabling high time resolution or good dynamic range allowing an analysis of weak multipath components.", "title": "" }, { "docid": "283449016e04bcfff09fca91da137dca", "text": "This paper proposes a depth hole filling method for RGBD images obtained from the Microsoft Kinect sensor. First, the proposed method labels depth holes based on 8-connectivity. For each labeled depth hole, the proposed method fills depth hole using the depth distribution of neighboring pixels of the depth hole. Then, we refine the hole filling result with cross-bilateral filtering. 
In experiments, by simply using the depth distribution of neighboring pixels, the proposed method improves the acquired depth map and reduces false filling caused by incorrect depth-color fusion.", "title": "" }, { "docid": "e4f26f4ed55e51fb2a9a55fd0f04ccc0", "text": "Nowadays, the Web has revolutionized our vision as to how deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and looks forward to new aspirations such as MOOCs (Massive Open Online Courses) as a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices, which lead to student success, the massive open online courses, considered as the “linux of education”, are increasingly developed by elite US institutions such MIT, Harvard and Stanford by supplying open/distance learning for large online community without paying any fees, MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often is raised about MOOCs is that a very small proportion of learners complete the course while thousands enrol for courses. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is a Hadoop based one whose main objective is to assure Learning Analytics for MOOCs’ communities as a mean to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal any useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system we developed a method to identify, with low latency, online learners more likely to drop out. Keywords—Cloud Computing; MOOCs; Hadoop; Learning", "title": "" }, { "docid": "dfa5343bbeffc89cdd86afb2e5b3d2ae", "text": "We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GAN). Firstly, we propose a new generator objective that finds it better to tackle mode collapse. And, we apply an independent Autoencoders (AE) to constrain the generator and consider its reconstructed samples as “real” samples to slow down the convergence of discriminator that enables to reduce the gradient vanishing problem and stabilize the model. Secondly, from mappings between latent and data spaces provided by AE, we further regularize AE by the relative distance between the latent and data samples to explicitly prevent the generator falling into mode collapse setting. This idea comes when we find a new way to visualize the mode collapse on MNIST dataset. To the best of our knowledge, our method is the first to propose and apply successfully the relative distance of latent and data samples for stabilizing GAN. Thirdly, our proposed model, namely Generative Adversarial Autoencoder Networks (GAAN), is stable and has suffered from neither gradient vanishing nor mode collapse issues, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method can approximate well multi-modal distribution and achieve better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here.", "title": "" }, { "docid": "a6e18aa7f66355fb8407798a37f53f45", "text": "We review some of the recent advances in level-set methods and their applications. 
In particular, we discuss how to impose boundary conditions at irregular domains and free boundaries, as well as the extension of level-set methods to adaptive Cartesian grids and parallel architectures. Illustrative applications are taken from the physical and life sciences. Fast sweeping methods are briefly discussed.", "title": "" }, { "docid": "6bd3568d195c0cd67e663d69d7ebca0c", "text": "Academic studies offer a generally positive portrait of the effect of customer relationship management (CRM) on firm performance, but practitioners question its value. The authors argue that a firm’s strategic commitments may be an overlooked organizational factor that influences the rewards for a firm’s investments in CRM. Using the context of online retailing, the authors consider the effects of two key strategic commitments of online retailers on the performance effect of CRM: their bricks-and-mortar experience and their online entry timing. They test the proposed model with a multimethod approach that uses manager ratings of firm CRM and strategic commitments and third-party customers’ ratings of satisfaction from 106 online retailers. The findings indicate that firms with moderate bricks-and-mortar experience are better able to leverage CRM for superior customer satisfaction outcomes than firms with either low or high bricks-and-mortar experience. Likewise, firms with moderate online experience are better able to leverage CRM into superior customer satisfaction outcomes than firms with either low or high online experience. These findings help resolve disparate results about the value of CRM, and they establish the importance of examining CRM within the strategic context of the firm.", "title": "" }, { "docid": "c699ede2caeb5953decc55d8e42c2741", "text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.", "title": "" }, { "docid": "7b104b14b4219ecc2d1d141fbf0e707b", "text": "As hospitals throughout Europe are striving exploit advantages of IT and network technologies, electronic medical records systems are starting to replace paper based archives. This paper suggests and describes an add-on service to electronic medical record systems that will help regular patients in getting insight to their diagnoses and medical record. The add-on service is based annotating polysemous and foreign terms with WordNet synsets. By exploiting the way that relationships between synsets are structured and described in WordNet, it is shown how patients can get interactive opportunities to generalize and understand their personal records.", "title": "" } ]
scidocsrr
b254576d52c370fb0664679c0535d81f
Why Total Quality Management Programs Do Not Persist: The Role of Management Quality and Implications for Leading a TQM Transformation
[ { "docid": "8ed122ede076474bdad5c8fa2c8fd290", "text": "Faced with changing markets and tougher competition, more and more companies realize that to compete effectively they must transform how they function. But while senior managers understand the necessity of change, they often misunderstand what it takes to bring it about. They assume that corporate renewal is the product of company-wide change programs and that in order to transform employee behavior, they must alter a company's formal structure and systems. Both these assumptions are wrong, say these authors. Using examples drawn from their four-year study of organizational change at six large corporations, they argue that change programs are, in fact, the greatest obstacle to successful revitalization and that formal structures and systems are the last thing a company should change, not the first. The most successful change efforts begin at the periphery of a corporation, in a single plant or division. Such efforts are led by general managers, not the CEO or corporate staff people. And these general managers concentrate not on changing formal structures and systems but on creating ad hoc organizational arrangements to solve concrete business problems. This focuses energy for change on the work itself, not on abstractions such as \"participation\" or \"culture.\" Once general managers understand the importance of this grass-roots approach to change, they don't have to wait for senior management to start a process of corporate renewal. The authors describe a six-step change process they call the \"critical path.\"", "title": "" } ]
[ { "docid": "c4577ac95efb55a07e0748a10a9d4658", "text": "This paper describes the design of a six-axis microelectromechanical systems (MEMS) force-torque sensor. A movable body is suspended by flexures that allow deflections and rotations along the x-, y-, and z-axes. The orientation of this movable body is sensed by seven capacitors. Transverse sensing is used for all capacitors, resulting in a high sensitivity. A batch fabrication process is described as capable of fabricating these multiaxis sensors with a high yield. The force sensor is experimentally investigated, and a multiaxis calibration method is described. Measurements show that the resolution is on the order of a micro-Newton and nano-Newtonmeter. This is the first six-axis MEMS force sensor that has been successfully developed.", "title": "" }, { "docid": "9b1a7f811d396e634e9cc5e34a18404e", "text": "We introduce a novel colorization framework for old black-and-white cartoons which has been originally produced by a cel or paper based technology. In this case the dynamic part of the scene is represented by a set of outlined homogeneous regions that superimpose static background. To reduce a large amount of manual intervention we combine unsupervised image segmentation, background reconstruction and structural prediction. Our system in addition allows the user to specify the brightness of applied colors unlike the most of previous approaches which operate only with hue and saturation. We also present a simple but effective color modulation, composition and dust spot removal techniques able produce color images in broadcast quality without additional user intervention.", "title": "" }, { "docid": "ab83fb07e4f9f70a3e4f22620ba551fc", "text": "OBJECTIVES:Biliary cannulation is frequently the most difficult component of endoscopic retrograde cholangiopancreatography (ERCP). Techniques employed to improve safety and efficacy include wire-guided access and the use of sphincterotomes. However, a variety of options for these techniques are available and optimum strategies are not defined. We assessed whether the use of endoscopist- vs. assistant-controlled wire guidance and small vs. standard-diameter sphincterotomes improves safety and/or efficacy of bile duct cannulation.METHODS:Patients were randomized using a 2 × 2 factorial design to initial cannulation attempt with endoscopist- vs. assistant-controlled wire systems (1:1 ratio) and small (3.9Fr tip) vs. standard (4.4Fr tip) sphincterotomes (1:1 ratio). The primary efficacy outcome was successful deep bile duct cannulation within 8 attempts. Sample size of 498 was planned to demonstrate a significant increase in cannulation of 10%. Interim analysis was planned after 200 patients–with a stopping rule pre-defined for a significant difference in the composite safety end point (pancreatitis, cholangitis, bleeding, and perforation).RESULTS:The study was stopped after the interim analysis, with 216 patients randomized, due to a significant difference in the safety end point with endoscopist- vs. assistant-controlled wire guidance (3/109 (2.8%) vs. 12/107 (11.2%), P=0.016), primarily due to a lower rate of post-ERCP pancreatitis (3/109 (2.8%) vs. 10/107 (9.3%), P=0.049). The difference in successful biliary cannulation for endoscopist- vs. assistant-controlled wire guidance was −0.5% (95% CI−12.0 to 11.1%) and for small vs. 
standard sphincerotome −0.9% (95% CI–12.5 to 10.6%).CONCLUSIONS:Use of the endoscopist- rather than assistant-controlled wire guidance for bile duct cannulation reduces complications of ERCP such as pancreatitis.", "title": "" }, { "docid": "c0b96de9ee7ab0295d2162338ff4c80f", "text": "PURPOSE\nTo uncover the genetic events leading to transformation of pediatric low-grade glioma (PLGG) to secondary high-grade glioma (sHGG).\n\n\nPATIENTS AND METHODS\nWe retrospectively identified patients with sHGG from a population-based cohort of 886 patients with PLGG with long clinical follow-up. Exome sequencing and array CGH were performed on available samples followed by detailed genetic analysis of the entire sHGG cohort. Clinical and outcome data of genetically distinct subgroups were obtained.\n\n\nRESULTS\nsHGG was observed in 2.9% of PLGGs (26 of 886 patients). Patients with sHGG had a high frequency of nonsilent somatic mutations compared with patients with primary pediatric high-grade glioma (HGG; median, 25 mutations per exome; P = .0042). Alterations in chromatin-modifying genes and telomere-maintenance pathways were commonly observed, whereas no sHGG harbored the BRAF-KIAA1549 fusion. The most recurrent alterations were BRAF V600E and CDKN2A deletion in 39% and 57% of sHGGs, respectively. Importantly, all BRAF V600E and 80% of CDKN2A alterations could be traced back to their PLGG counterparts. BRAF V600E distinguished sHGG from primary HGG (P = .0023), whereas BRAF and CDKN2A alterations were less commonly observed in PLGG that did not transform (P < .001 and P < .001 respectively). PLGGs with BRAF mutations had longer latency to transformation than wild-type PLGG (median, 6.65 years [range, 3.5 to 20.3 years] v 1.59 years [range, 0.32 to 15.9 years], respectively; P = .0389). Furthermore, 5-year overall survival was 75% ± 15% and 29% ± 12% for children with BRAF mutant and wild-type tumors, respectively (P = .024).\n\n\nCONCLUSION\nBRAF V600E mutations and CDKN2A deletions constitute a clinically distinct subtype of sHGG. The prolonged course to transformation for BRAF V600E PLGGs provides an opportunity for surgical interventions, surveillance, and targeted therapies to mitigate the outcome of sHGG.", "title": "" }, { "docid": "f4c718eb952fe6587557304c909494c9", "text": "The neutral theory of molecular evolution has been widely accepted and is the guiding principle for studying evolutionary genomics and the molecular basis of phenotypic evolution. Recent data on genomic evolution are generally consistent with the neutral theory. However, many recently published papers claim the detection of positive Darwinian selection via the use of new statistical methods. Examination of these methods has shown that their theoretical bases are not well established and often result in high rates of false-positive and false-negative results. When the deficiencies of these statistical methods are rectified, the results become largely consistent with the neutral theory. At present, genome-wide analyses of natural selection consist of collections of single-locus analyses. However, because phenotypic evolution is controlled by the interaction of many genes, the study of natural selection ought to take such interactions into account. Experimental studies of evolution will also be crucial.", "title": "" }, { "docid": "5dba3258382d9781287cdcb6b227153c", "text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. 
As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.", "title": "" }, { "docid": "72f7c13f21c047e4dcdf256fbbbe1b74", "text": "Programming by Examples (PBE) has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create small scripts for automating repetitive tasks. However, examples, though often easy to provide, are an ambiguous specification of the user's intent. Because of that, a key impedance in adoption of PBE systems is the lack of user confidence in the correctness of the program that was synthesized by the system. We present two novel user interaction models that communicate actionable information to the user to help resolve ambiguity in the examples. One of these models allows the user to effectively navigate between the huge set of programs that are consistent with the examples provided by the user. The other model uses active learning to ask directed example-based questions to the user on the test input data over which the user intends to run the synthesized program. Our user studies show that each of these models significantly reduces the number of errors in the performed task without any difference in completion time. Moreover, both models are perceived as useful, and the proactive active-learning based model has a slightly higher preference regarding the users' confidence in the result.", "title": "" }, { "docid": "bc8950644ded24618a65c4fcef302044", "text": "Child maltreatment is a pervasive problem in our society that has long-term detrimental consequences to the development of the affected child such as future brain growth and functioning. In this paper, we surveyed empirical evidence on the neuropsychological effects of child maltreatment, with a special emphasis on emotional, behavioral, and cognitive process–response difficulties experienced by maltreated children. The alteration of the biochemical stress response system in the brain that changes an individual’s ability to respond efficiently and efficaciously to future stressors is conceptualized as the traumatic stress response. Vulnerable brain regions include the hypothalamic–pituitary–adrenal axis, the amygdala, the hippocampus, and prefrontal cortex and are linked to children’s compromised ability to process both emotionally-laden and neutral stimuli in the future. It is suggested that information must be garnered from varied literatures to conceptualize a research framework for the traumatic stress response in maltreated children. 
This research framework suggests an altered developmental trajectory of information processing and emotional dysregulation, though much debate still exists surrounding the correlational nature of empirical studies, the potential of resiliency following childhood trauma, and the extent to which early interventions may facilitate recovery.", "title": "" }, { "docid": "a06b76989c5a8df7406ed6c1c89387d2", "text": "Due to their specific characteristics, Unmanned Aeronautical Ad-hoc Networks (UAANETs) can be classified as a special kind of mobile ad-hoc networks. Due to the high mobility of Unmanned Aerial Vehicles (UAVs), designing a good routing protocol for UAANETs is challenging. Recently, a new protocol called Reactive-Greedy-Reactive (RGR) [1] has been proposed as a promising routing protocol in high mobility and density-variable scenarios. Although the RGR protocol improves the packet delivery ratio, the overhead and delay are higher when compared to AODV [1]. In this paper, a scoped flooding and mobility prediction based RGR protocol is proposed to improve the performance of RGR in UAANETs. Simulation results show that the new protocol can effectively enhance the performance of the RGR protocol in terms of packet delivery ratio, overhead, and delay.", "title": "" }, { "docid": "0403bb8e2b96e3ad1ebfbbc0fa9434a7", "text": "Sarcasm detection from text has gained increasing attention. While one thread of research has emphasized the importance of affective content in sarcasm detection, another avenue of research has explored the effectiveness of word representations. In this paper, we introduce a novel model for automated sarcasm detection in text, called Affective Word Embeddings for Sarcasm (AWES), which incorporates affective information into word representations. Extensive evaluation on sarcasm detection on six datasets across three domains of text (tweets, reviews and forum posts) demonstrates the effectiveness of the proposed model. The experimental results indicate that while sentiment affective representations yield best results on datasets comprising of short length text such as tweets, richer representations derived from fine-grained emotions are more suitable for detecting sarcasm from longer length documents such as product reviews and discussion forum posts.", "title": "" }, { "docid": "760403bb332465093386859841a62a5d", "text": "Learning to rank is a new statistical learning technology on creating a ranking model for sorting objects. The technology has been successfully applied to web search, and is becoming one of the key machineries for building search engines. Existing approaches to learning to rank, however, did not consider the cases in which there exists relationship between the objects to be ranked, despite of the fact that such situations are very common in practice. For example, in web search, given a query certain relationships usually exist among the the retrieved documents, e.g., URL hierarchy, similarity, etc., and sometimes it is necessary to utilize the information in ranking of the documents. This paper addresses the issue and formulates it as a novel learning problem, referred to as, 'learning to rank relational objects'. In the new learning task, the ranking model is defined as a function of not only the contents (features) of objects but also the relations between objects. The paper further focuses on one setting of the learning problem in which the way of using relation information is predetermined. 
It formalizes the learning task as an optimization problem in the setting. The paper then proposes a new method to perform the optimization task, particularly an implementation based on SVM. Experimental results show that the proposed method outperforms the baseline methods for two ranking tasks (Pseudo Relevance Feedback and Topic Distillation) in web search, indicating that the proposed method can indeed make effective use of relation information and content information in ranking.", "title": "" }, { "docid": "bd2af30c9bc44b64d91bd4cde32ca45d", "text": "The oneM2M standard is a global initiative led jointly by major standards organizations around the world in order to develop a unique architecture for M2M communications. Prior standards, and also oneM2M, while focusing on achieving interoperability at the communication level, do not achieve interoperability at the semantic level. An expressive ontology for IoT called IoT-O is proposed, making best use of already defined ontologies in specific domains such as sensor, observation, service, quantity kind, units, or time. IoT-O also defines some missing concepts relevant for IoT such as thing, node, actuator, and actuation. The extension of the oneM2M standard to support semantic data interoperability based on IoT-O is discussed. Finally, through comprehensive use cases, benefits of the extended standard are demonstrated, ranging from heterogeneous device interoperability to autonomic behavior achieved by automated reasoning.", "title": "" }, { "docid": "2cff48b7c30c310e0d334e5983ae8f1f", "text": "In this paper we introduce a low-latency monaural source separation framework using a Convolutional Neural Network (CNN). We use a CNN to estimate time-frequency soft masks which are applied for source separation. We evaluate the performance of the neural network on a database comprising of musical mixtures of three instruments: voice, drums, bass as well as other instruments which vary from song to song. The proposed architecture is compared to a Multilayer Perceptron (MLP), achieving on-par results and a significant improvement in processing time. The algorithm was submitted to source separation evaluation campaigns to test efficiency, and achieved competitive results.", "title": "" }, { "docid": "6bda457a005dbb2ff6abf84392d7b197", "text": "One of the major problems in developing media mix models is that the data that is generally available to the modeler lacks sufficient quantity and information content to reliably estimate the parameters in a model of even moderate complexity. Pooling data from different brands within the same product category provides more observations and greater variability in media spend patterns. We either directly use the results from a hierarchical Bayesian model built on the category dataset, or pass the information learned from the category model to a brand-specific media mix model via informative priors within a Bayesian framework, depending on the data sharing restriction across brands. We demonstrate using both simulation and real case studies that our category analysis can improve parameter estimation and reduce uncertainty of model prediction and extrapolation.", "title": "" }, { "docid": "a879b04fa12a7f26f4a9d30f4110183b", "text": "Due to the high volume of information and electronic documents on the Web, it is almost impossible for a human to study, research and analyze this volume of text. 
Summarizing the main idea and the major concept of the context enables the humans to read the summary of a large volume of text quickly and decide whether to further dig into details. Most of the existing summarization approaches have applied probability and statistics based techniques. But these approaches cannot achieve high accuracy. We observe that attention to the concept and the meaning of the context could greatly improve summarization accuracy, and due to the uncertainty that exists in the summarization methods, we simulate human like methods by integrating fuzzy logic with traditional statistical approaches in this study. The results of this study indicate that our approach can deal with uncertainty and achieve better results when compared with existing methods.", "title": "" }, { "docid": "d6a3c58bb07103db982906731ead87a4", "text": "We present a hybrid neural network and rule-based system that generates pop music. Music produced by pure rule-based systems often sounds mechanical. Music produced by machine learning sounds better, but still lacks hierarchical temporal structure. We restore temporal hierarchy by augmenting machine learning with a temporal production grammar, which generates the music’s overall structure and chord progressions. A compatible melody is then generated by a conditional variational recurrent autoencoder. The autoencoder is trained with eight-measure segments from a corpus of 10,000 MIDI files, each of which has had its melody track and chord progressions identified heuristically. The autoencoder maps melody into a multi-dimensional feature space, conditioned by the underlying chord progression. A melody is then generated by feeding a random sample from that space to the autoencoder’s decoder, along with the chord progression generated by the grammar. The autoencoder can make musically plausible variations on an existing melody, suitable for recurring motifs. It can also reharmonize a melody to a new chord progression, keeping the rhythm and contour. The generated music compares favorably with that generated by other academic and commercial software designed for the music-as-a-service industry.", "title": "" }, { "docid": "a17cf9c0d9be4f25b605b986b368445a", "text": "The amyloid-β peptide (Aβ) is a key protein in Alzheimer’s disease (AD) pathology. We previously reported in vitro evidence suggesting that Aβ is an antimicrobial peptide. We present in vivo data showing that Aβ expression protects against fungal and bacterial infections in mouse, nematode, and cell culture models of AD. We show that Aβ oligomerization, a behavior traditionally viewed as intrinsically pathological, may be necessary for the antimicrobial activities of the peptide. Collectively, our data are consistent with a model in which soluble Aβ oligomers first bind to microbial cell wall carbohydrates via a heparin-binding domain. Developing protofibrils inhibited pathogen adhesion to host cells. Propagating β-amyloid fibrils mediate agglutination and eventual entrapment of unatttached microbes. Consistent with our model, Salmonella Typhimurium bacterial infection of the brains of transgenic 5XFAD mice resulted in rapid seeding and accelerated β-amyloid deposition, which closely colocalized with the invading bacteria. Our findings raise the intriguing possibility that β-amyloid may play a protective role in innate immunity and infectious or sterile inflammatory stimuli may drive amyloidosis. 
These data suggest a dual protective/damaging role for Aβ, as has been described for other antimicrobial peptides.", "title": "" }, { "docid": "7bfbcf62f9ff94e80913c73e069ace26", "text": "This paper presents an online highly accurate system for automatic number plate recognition (ANPR) that can be used as a basis for many real-world ITS applications. The system is designed to deal with unclear vehicle plates, variations in weather and lighting conditions, different traffic situations, and high-speed vehicles. This paper addresses various issues by presenting proper hardware platforms along with real-time, robust, and innovative algorithms. We have collected huge and highly inclusive data sets of Persian license plates for evaluations, comparisons, and improvement of various involved algorithms. The data sets include images that were captured from crossroads, streets, and highways, in day and night, various weather conditions, and different plate clarities. Over these data sets, our system achieves 98.7%, 99.2%, and 97.6% accuracies for plate detection, character segmentation, and plate recognition, respectively. The false alarm rate in plate detection is less than 0.5%. The overall accuracy on the dirty plates portion of our data sets is 91.4%. Our ANPR system has been installed in several locations and has been tested extensively for more than a year. The proposed algorithms for each part of the system are highly robust to lighting changes, size variations, plate clarity, and plate skewness. The system is also independent of the number of plates in captured images. This system has been also tested on three other Iranian data sets and has achieved 100% accuracy in both detection and recognition parts. To show that our ANPR is not language dependent, we have tested our system on available English plates data set and achieved 97% overall accuracy.", "title": "" }, { "docid": "a31652c0236fb5da569ffbf326eb29e5", "text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. 
They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesis is supported by other evidence that the perceived risks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6", "title": "" }, { "docid": "6616607ee5a856a391131c5e2745bc79", "text": "Project management (PM) landscaping is continually changing in the IT industry. Working with small teams and often with limited budgets, while facing frequent changes in business requirements, project managers are under continuous pressure to deliver fast turnarounds. Following the demands of IT project management, leaders in this industry are optimizing and adopting new, more effective styles and strategies. This paper proposes a new hybrid way of managing IT projects, flexibly combining the traditional and the Agile method. It also investigates the necessary organizational transition in an IT company, required before converting from the traditional to the proposed new hybrid method.", "title": "" } ]
scidocsrr
b923c883cb2850d80c3c5f014ca43ac3
From TagME to WAT: a new entity annotator
[ { "docid": "9d918a69a2be2b66da6ecf1e2d991258", "text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.", "title": "" }, { "docid": "1aa51d3ef39773eb3250564ae87c6205", "text": "relatedness between terms using the links found within their corresponding Wikipedia articles. Unlike other techniques based on Wikipedia, WLM is able to provide accurate measures efficiently, using only the links between articles rather than their textual content. Before describing the details, we first outline the other systems to which it can be compared. This is followed by a description of the algorithm, and its evaluation against manually-defined ground truth. The paper concludes with a discussion of the strengths and weaknesses of the new approach. Abstract", "title": "" } ]
[ { "docid": "0e142b55b4faa59d424dbdb731b2aa28", "text": "We demonstrate ultrafast transistor-based photodetectors made from single- and few-layer graphene. The photoresponse does not degrade for optical intensity modulations up to 40 GHz, and further analysis suggests that the intrinsic bandwidth may exceed 500 GHz.", "title": "" }, { "docid": "f941c1f5e5acd9865e210b738ff1745a", "text": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.", "title": "" }, { "docid": "902c6c4cc66b827f901648fd3ac2f6a9", "text": "In recent years, multiple neuromorphic architectures have been designed to execute cognitive applications that deal with image and speech analysis. These architectures have followed one of two approaches. One class of architectures is based on machine learning with artificial neural networks. A second class is focused on emulating biology with spiking neuron models, in an attempt to eventually approach the brain's accuracy and energy efficiency. A prominent example of the second class is IBM's TrueNorth processor that can execute large spiking networks on a low-power tiled architecture, and achieve high accuracy on a variety of tasks. However, as we show in this work, there are many inefficiencies in the TrueNorth design. We propose a new architecture, INXS, for spiking neural networks that improves upon the computational efficiency and energy efficiency of the TrueNorth design by 3,129× and 10× respectively. The architecture uses memristor crossbars to compute the effects of input spikes on several neurons in parallel. Digital units are then used to update neuron state. We show that the parallelism offered by crossbars is critical in achieving high throughput and energy efficiency.", "title": "" }, { "docid": "fe59da1f9d7d6d700ee7b3f65462560b", "text": "Sea–land segmentation and ship detection are two prevalent research domains for optical remote sensing harbor images and can find many applications in harbor supervision and management. As the spatial resolution of imaging technology improves, traditional methods struggle to perform well due to the complicated appearance and background distributions. In this paper, we unify the above two tasks into a single framework and apply the deep convolutional neural networks to predict pixelwise label for an input. Specifically, an edge aware convolutional network is proposed to parse a remote sensing harbor image into three typical objects, e.g., sea, land, and ship. Two innovations are made on top of the deep structure. First, we design a multitask model by simultaneously training the segmentation and edge detection networks. Hierarchical semantic features from the segmentation network are extracted to learn the edge network. Second, the outputs of edge pipeline are further employed to refine entire model by adding an edge aware regularization, which helps our method to yield very desirable results that are spatially consistent and well boundary located. It also benefits the segmentation of docked ships that are quite challenging for many previous methods. 
Experimental results on two datasets collected from Google Earth have demonstrated the effectiveness of our approach both in quantitative and qualitative performance compared with state-of-the-art methods.", "title": "" }, { "docid": "f2b7bd06fa849d5fd2fa0984e463acd0", "text": "5 Mona Bustami 1, Abdel-Ellah Al-Shudifat 2, Nagham Hussein 1, Mohannad Yacoub 3, 6 Eiad Atwa 4, Israr Sabri 5, Rania Abu-Hamdah 5, Walid Abu Rayyan 1, Tawfiq Arafat 1, 7 Adnan Badran 1 and Luay Abu-Qatouseh 1,* 8 1 Faculty of Pharmacy, University of Petra, Amman, 11914 Jordan; Mbustami@uop.edu.jo (M.B.); 9 naghamhussein93@gmail.com (N.H.); waburayyan@uop.edu.jo (W.A.R.); tarafat@uop.edu.jo (T.A.), 10 abadran@uop.edu.jo (A.B.) 11 2 Faculty of Medicine, The Hashemite University, Zarqa, 13115 Jordan; ashudaifat@hu.edu.jo 12 3 Medical and Diagnostic Laboratories, specialty Hospital, Amman, 11914 Jordan; m.yacoub@gmail.com 13 4 Al FAIHA/ Life Technologies Business Unit, Middle East Operation, Amman, 11914 Jordan; 14 Eiad_Atwa@yahoo.com 15 5 Faculty of Pharmacy, Nursing and Health Professions), Birzeit University, Birzeit, 627 Palestine; 16 isabri@birzeit.edu (I.S.); dean.health@birzeit.edu (R.A.-H.) 17 * Correspondence: labuqatouseh@uop.edu.jo; Tel.: +962-79-701-2082", "title": "" }, { "docid": "085f6b8b53bd2e7afb5558e5b0b0356a", "text": "Computer graphics applications controlled through natural gestures are gaining increasing popularity these days due to recent developments in low-cost tracking systems and gesture recognition technologies. Although interaction techniques through natural gestures have already demonstrated their benefits in manipulation, navigation and avatar-control tasks, effective selection with pointing gestures remains an open problem. In this paper we survey the state-of-the-art in 3D object selection techniques. We review important findings in human control models, analyze major factors influencing selection performance, and classify existing techniques according to a number of criteria. Unlike other components of the application’s user interface, pointing techniques need a close coupling with the rendering pipeline, introducing new elements to be drawn, and potentially modifying the object layout and the way the scene is rendered. Conversely, selection performance is affected by rendering issues such as visual feedback, depth perception, and occlusion management. We thus review existing literature paying special attention to those aspects in the boundary between computer graphics and human computer interaction.", "title": "" }, { "docid": "8aabafcfbb8a1b23e986fc9f4dbf5b01", "text": "OBJECTIVE\nTo examine the factors associated with the persistence of childhood gender dysphoria (GD), and to assess the feelings of GD, body image, and sexual orientation in adolescence.\n\n\nMETHOD\nThe sample consisted of 127 adolescents (79 boys, 48 girls), who were referred for GD in childhood (<12 years of age) and followed up in adolescence. We examined childhood differences among persisters and desisters in demographics, psychological functioning, quality of peer relations and childhood GD, and adolescent reports of GD, body image, and sexual orientation. We examined contributions of childhood factors on the probability of persistence of GD into adolescence.\n\n\nRESULTS\nWe found a link between the intensity of GD in childhood and persistence of GD, as well as a higher probability of persistence among natal girls. 
Psychological functioning and the quality of peer relations did not predict the persistence of childhood GD. Formerly nonsignificant (age at childhood assessment) and unstudied factors (a cognitive and/or affective cross-gender identification and a social role transition) were associated with the persistence of childhood GD, and varied among natal boys and girls.\n\n\nCONCLUSION\nIntensity of early GD appears to be an important predictor of persistence of GD. Clinical recommendations for the support of children with GD may need to be developed independently for natal boys and for girls, as the presentation of boys and girls with GD is different, and different factors are predictive for the persistence of GD.", "title": "" }, { "docid": "fe753c4be665700ac15509c4b831309c", "text": "Elements of Successful Digital Transformation12 New digital technologies, particularly what we refer to as SMACIT3 (social, mobile, analytics, cloud and Internet of things [IoT]) technologies, present both game-changing opportunities and existential threats to big old companies. GE’s “industrial internet” and Philips’ digital platform for personalized healthcare information represent bets made by big old companies attempting to cash", "title": "" }, { "docid": "7e12737b6c22fdafcace705052a7c45a", "text": "We consider the direct solution of general sparse linear systems baseds on a multifrontal method. The approach combines partial static scheduling of the task dependency graph during the symbolic factorization and distributed dynamic scheduling during the numerical factorization to balance the work among the processes of a distributed memory computer. We show that to address clusters of Symmetric Multi-Processor (SMP) architectures, and more generally non-uniform memory access multiprocessors, our algorithms for both the static and the dynamic scheduling need to be revisited to take account of the non-uniform cost of communication. The performance analysis on an IBM SP3 with 16 processors per SMP node and up to 128 processors shows that we can significantly reduce both the amount of inter-node communication and the solution time. 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0bcec8496b655fffa3591d36fbd5c230", "text": "We propose a novel approach to addressing the adaptation effectiveness issue in parameter adaptation for deep neural network (DNN) based acoustic models for automatic speech recognition by adding one or more small auxiliary output layers modeling broad acoustic units, such as mono-phones or tied-state (often called senone) clusters. In scenarios with a limited amount of available adaptation data, most senones are usually rarely seen or not observed, and consequently the ability to model them in a new condition is often not fully exploited. With the original senone classification task as the primary task, and adding auxiliary mono-phone/senone-cluster classification as the secondary tasks, multi-task learning (MTL) is employed to adapt the DNN parameters. With the proposed MTL adaptation framework, we improve the learning ability of the original DNN structure, then enlarge the coverage of the acoustic space to deal with the unseen senone problem, and thus enhance the discrimination power of the adapted DNN models. 
Experimental results on the 20,000-word open vocabulary WSJ task demonstrate that the proposed framework consistently outperforms the conventional linear hidden layer adaptation schemes without MTL by providing 3.2% relative word error rate reduction (WERR) with only 1 single adaptation utterance, and 10.7% WERR with 40 adaptation utterances against the un-adapted DNN models.", "title": "" }, { "docid": "ff6a2e6b0fbb4e195b095981ab97aae0", "text": "As broadband speeds increase, latency is becoming a bottleneck for many applications—especially for Web downloads. Latency affects many aspects of Web page load time, from DNS lookups to the time to complete a three-way TCP handshake; it also contributes to the time it takes to transfer the Web objects for a page. Previous work has shown that much of this latency can occur in the last mile [2]. Although some performance bottlenecks can be mitigated by increasing downstream throughput (e.g., by purchasing a higher service plan), in many cases, latency introduces performance bottlenecks, particularly for connections with higher throughput. To mitigate latency bottlenecks in the last mile, we have implemented a system that performs DNS prefetching and TCP connection caching to the Web sites that devices inside a home visit most frequently, a technique we call popularity-based prefetching. Many devices and applications already perform DNS prefetching and maintain persistent TCP connections, but most prefetching is predictive based on the content of the page, rather than on past site popularity. We evaluate the optimizations using a simulator that we drive from traffic traces that we collected from five homes in the BISmark testbed [1]. We find that performing DNS prefetching and TCP connection caching for the twenty most popular sites inside the home can double DNS and connection cache hit rates.", "title": "" }, { "docid": "0d119388cedb05317ac6aa5705622520", "text": "Detecting whether a song is favorite for a user is an important but also challenging task in music recommendation. One of critical steps to do this task is to select important features for the detection. This paper presents two methods to evaluate feature importance, in which we compared nine available features based on a large user log in the real world. The set of features includes song metadata, acoustic feature, and user preference used by Collaborative Filtering techniques. The evaluation methods are designed from two views: i) the correlation between the estimated scores by song similarity in respect of a feature and the scores estimated by real play count, ii) feature selection methods over a binary classification problem, i.e., “like” or “dislike”. 
The experimental results show the user preference is the most important feature and artist similarity is of the second importance among these nine features.", "title": "" }, { "docid": "2e8e9401e76bfdb2b121fbc7da29b2c1", "text": "BACKGROUND\nMagnetic resonance (MR) imaging has established its usefulness in diagnosing hamstring muscle strain and identifying features correlating with the duration of rehabilitation in athletes; however, data are currently lacking that may predict which imaging parameters may be predictive of a repeat strain.\n\n\nPURPOSE\nThis study was conducted to identify whether any MR imaging-identifiable parameters are predictive of athletes at risk of sustaining a recurrent hamstring strain in the same playing season.\n\n\nSTUDY DESIGN\nCohort study; Level of evidence, 3.\n\n\nMETHODS\nForty-one players of the Australian Football League who sustained a hamstring injury underwent MR examination within 3 days of injury between February and August 2002. The imaging parameters measured were the length of injury, cross-sectional area, the specific muscle involved, and the location of the injury within the muscle-tendon unit. Players who suffered a repeat injury during the same season were reimaged, and baseline and repeat injury measurements were compared. Comparison was also made between this group and those who sustained a single strain.\n\n\nRESULTS\nForty-one players sustained hamstring strains that were positive on MR imaging, with 31 injured once and 10 suffering a second injury. The mean length of hamstring muscle injury for the isolated group was 83.4 mm, compared with 98.7 mm for the reinjury group (P = .35). In the reinjury group, the second strain was also of greater length than the original (mean, 107.5 mm; P = .07). Ninety percent of players sustaining a repeat injury demonstrated an injury length greater than 60 mm, compared with only 58% in the single strain group (P = .01). Only 7% of players (1 of 14) with a strain <60 mm suffered a repeat injury. Of the 27 players sustaining a hamstring strain >60 mm, 33% (9 of 27) suffered a repeat injury. Of all the parameters assessed, only a history of anterior cruciate ligament sprain was a statistically significant predictor for suffering a second strain during the same season of competition.\n\n\nCONCLUSION\nA history of anterior cruciate ligament injury was the only statistically significant risk factor for a recurrent hamstring strain in our study. Of the imaging parameters, the MR length of a strain had the strongest correlation association with a repeat hamstring strain and therefore may assist in identifying which athletes are more likely to suffer further reinjury.", "title": "" }, { "docid": "413d6b01d62148fa86627f7cede5c53a", "text": "Each day, anti-virus companies receive tens of thousands samples of potentially harmful executables. Many of the malicious samples are variations of previously encountered malware, created by their authors to evade pattern-based detection. Dealing with these large amounts of data requires robust, automatic detection approaches. This paper studies malware classification based on call graph clustering. By representing malware samples as call graphs, it is possible to abstract certain variations away, enabling the detection of structural similarities between samples. The ability to cluster similar samples together will make more generic detection techniques possible, thereby targeting the commonalities of the samples within a cluster. 
To compare call graphs mutually, we compute pairwise graph similarity scores via graph matchings which approximately minimize the graph edit distance. Next, to facilitate the discovery of similar malware samples, we employ several clustering algorithms, including k-medoids and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Clustering experiments are conducted on a collection of real malware samples, and the results are evaluated against manual classifications provided by human malware analysts. Experiments show that it is indeed possible to accurately detect malware families via call graph clustering. We anticipate that in the future, call graphs can be used to analyse the emergence of new malware families, and ultimately to automate implementation of generic detection schemes.", "title": "" }, { "docid": "1406e692dc31cd4f89ea9a5441b84691", "text": "2004 Recent advancements in Field Programmable Gate Array (FPGA) technology have resulted in FPGA devices that support the implementation of a complete computer system on a single FPGA chip. A soft-core processor is a central component of such a system. A soft-core processor is a microprocessor defined in software, which can be synthesized in programmable hardware, such as FPGAs. The Nios soft-core processor from Altera Corporation is studied and a Verilog implementation of the Nios soft-core processor has been developed, called UT Nios. The UT Nios is described, its performance dependence on various architectural parameters is investigated and then compared to the original implementation from Altera. Experiments show that the performance of different types of applications varies significantly depending on the architectural parameters. The performance comparison shows that UT Nios achieves performance comparable to the original implementation. Finally, the design methodology, experiences from the design process and issues encountered are discussed. iii Acknowledgments", "title": "" }, { "docid": "249367e508f61804642ae37e27d70901", "text": "For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. 
On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.", "title": "" }, { "docid": "37f861984ad6aeeb6981835c33db2f7b", "text": "Emergence of resistance among the most important bacterial pathogens is recognized as a major public health threat affecting humans worldwide. Multidrug-resistant organisms have not only emerged in the hospital environment but are now often identified in community settings, suggesting that reservoirs of antibiotic-resistant bacteria are present outside the hospital. The bacterial response to the antibiotic \"attack\" is the prime example of bacterial adaptation and the pinnacle of evolution. \"Survival of the fittest\" is a consequence of an immense genetic plasticity of bacterial pathogens that trigger specific responses that result in mutational adaptations, acquisition of genetic material, or alteration of gene expression producing resistance to virtually all antibiotics currently available in clinical practice. Therefore, understanding the biochemical and genetic basis of resistance is of paramount importance to design strategies to curtail the emergence and spread of resistance and to devise innovative therapeutic approaches against multidrug-resistant organisms. In this chapter, we will describe in detail the major mechanisms of antibiotic resistance encountered in clinical practice, providing specific examples in relevant bacterial pathogens.", "title": "" }, { "docid": "7fed30fd573ec933d59d0bab63a61dcb", "text": "The propagation delay of a comparator and dead time causes the duty-discontinuity region near the boundary of the step-down and step-up regions in a non-inverting buck-boost (NIBB) converter. The duty-discontinuity region leads to an unstable output voltage and an unpredictable output voltage ripple, which might cause the entire power system to shut down. In this paper, a mode-transition technique called duty-lock control is proposed for a digitally controlled NIBB converter. It locks the duty cycle and eliminates the error between the output voltage and the reference signal by using a proposed fixed reference scheme that ensures the stability of the digital controller and output voltage. The experimental results that were applied to a field-programmable gate array-based platform revealed that the output voltage of the NIBB converter is stable throughout the entire transition region, without any efficiency tradeoffs. The input voltage of the converter that was provided by a Li-ion battery was 2.7-4.2 V, and the output voltage was 1.0-3.6 V, which is suitable for radio-frequency power amplifiers. The switching frequency was 500 kHz, and the maximum load current was 450 mA.", "title": "" }, { "docid": "26aa69e5c79a80e2e464049a3f36532c", "text": "Tumor-derived exosomes are emerging mediators of tumorigenesis. We explored the function of melanoma-derived exosomes in the formation of primary tumors and metastases in mice and human subjects. Exosomes from highly metastatic melanomas increased the metastatic behavior of primary tumors by permanently 'educating' bone marrow progenitors through the receptor tyrosine kinase MET. Melanoma-derived exosomes also induced vascular leakiness at pre-metastatic sites and reprogrammed bone marrow progenitors toward a pro-vasculogenic phenotype that was positive for c-Kit, the receptor tyrosine kinase Tie2 and Met. Reducing Met expression in exosomes diminished the pro-metastatic behavior of bone marrow cells. 
Notably, MET expression was elevated in circulating CD45−C-KITlow/+TIE2+ bone marrow progenitors from individuals with metastatic melanoma. RAB1A, RAB5B, RAB7 and RAB27A, regulators of membrane trafficking and exosome formation, were highly expressed in melanoma cells. Rab27A RNA interference decreased exosome production, preventing bone marrow education and reducing, tumor growth and metastasis. In addition, we identified an exosome-specific melanoma signature with prognostic and therapeutic potential comprised of TYRP2, VLA-4, HSP70, an HSP90 isoform and the MET oncoprotein. Our data show that exosome production, transfer and education of bone marrow cells supports tumor growth and metastasis, has prognostic value and offers promise for new therapeutic directions in the metastatic process.", "title": "" }, { "docid": "e757926fbaec4097530b9a00c1278b1c", "text": "Many fish populations have both resident and migratory individuals. Migrants usually grow larger and have higher reproductive potential but lower survival than resident conspecifics. The ‘decision’ about migration versus residence probably depends on the individual growth rate, or a physiological process like metabolic rate which is correlated with growth rate. Fish usually mature as their somatic growth levels off, where energetic costs of maintenance approach energetic intake. After maturation, growth also stagnates because of resource allocation to reproduction. Instead of maturation, however, fish may move to an alternative feeding habitat and their fitness may thereby be increased. When doing so, maturity is usually delayed, either to the new asymptotic length, or sooner, if that gives higher expected fitness. Females often dominate among migrants and males among residents. The reason is probably that females maximize their fitness by growing larger, because their reproductive success generally increases exponentially with body size. Males, on the other hand, may maximize fitness by alternative life histories, e.g. fighting versus sneaking, as in many salmonid species where small residents are the sneakers and large migrants the fighters. Partial migration appears to be partly developmental, depending on environmental conditions, and partly genetic, inherited as a quantitative trait influenced by a number of genes.", "title": "" } ]
scidocsrr
0668118e5df73d81df0ea64474adbe64
Measuring and evaluating the compactness of superpixels
[ { "docid": "bff8ad5f962f501b299a0f69a0a820fd", "text": "Many methods for object recognition, segmentation, etc., rely on tessellation of an image into “superpixels”. A superpixel is an image patch which is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm, however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired. We formulate the superpixel partitioning problem in an energy minimization framework, and optimize with graph cuts. Our energy function explicitly encourages regular superpixels. We explore variations of the basic energy, which allow a trade-off between a less regular tessellation but more accurate boundaries or better efficiency. Our advantage over previous work is computational efficiency, principled optimization, and applicability to 3D “supervoxel” segmentation. We achieve high boundary recall on 2D images and spatial coherence on video. We also show that compact superpixels improve accuracy on a simple application of salient object segmentation.", "title": "" }, { "docid": "7a50e69ef09e01afb39229625221ca3d", "text": "Superpixels are becoming increasingly popular for use in computer vision applications. However, there are few algorithms that output a desired number of regular, compact superpixels with a low computational overhead. We introduce a novel algorithm that clusters pixels in the combined five-dimensional color and image plane space to efficiently generate compact, nearly uniform superpixels. The simplicity of our approach makes it extremely easy to use – a lone parameter specifies the number of superpixels – and the efficiency of the algorithm makes it very practical. Experiments show that our approach produces superpixels at a lower computational cost while achieving a segmentation quality equal to or greater than four state-of-the-art methods, as measured by boundary recall and under-segmentation error. We also demonstrate the benefits of our superpixel approach in contrast to existing methods for two tasks in which superpixels have already been shown to increase performance over pixel-based methods.", "title": "" } ]
[ { "docid": "8604589b2c45d6190fdbc50073dfda23", "text": "Many real world, complex phenomena have an underlying structure of evolving networks where nodes and links are added and removed over time. A central scientific challenge is the description and explanation of network dynamics, with a key test being the prediction of short and long term changes. For the problem of short-term link prediction, existing methods attempt to determine neighborhood metrics that correlate with the appearance of a link in the next observation period. Here, we provide a novel approach to predicting future links by applying an evolutionary algorithm (Covariance Matrix Evolution) to weights which are used in a linear combination of sixteen neighborhood and node similarity indices. We examine reciprocal reply networks of Twitter users constructed at the time scale of weeks, both as a test of our general method and as a problem of scientific interest in itself. Our evolved predictors exhibit a thousand-fold improvement over random link prediction, to our knowledge strongly outperforming all extant methods. Based on our findings, we suggest possible factors which may be driving the evolution of Twitter reciprocal reply networks.", "title": "" }, { "docid": "30a617e3f7e492ba840dfbead690ae39", "text": "Information systems professionals must pay attention to online customer retention. Drawing on the relationship marketing literature, we formulated and tested a model to explain B2C user repurchase intention from the perspective of relationship quality. The model was empirically tested through a survey conducted in Northern Ireland. Results showed that online relationship quality and perceived website usability positively impacted customer repurchase intention. Moreover, online relationship quality was positively influenced by perceived vendor expertise in order fulfillment, perceived vendor reputation, and perceived website usability, whereas distrust in vendor behavior negatively influenced online relationship quality. Implications of these findings are discussed. 2011 Elsevier B.V. All rights reserved. § This work was partially supported by Strategic Research Grant at City University of Hong Kong, China (No. CityU 7002521), and the National Nature Science Foundation of China (No. 70773008). * Corresponding author at: P7722, City University of Hong Kong, Hong Kong, China. Tel.: +852 27887492; fax: +852 34420370. E-mail address: ylfang@cityu.edu.hk (Y. Fang).", "title": "" }, { "docid": "dfdf2581010777e51ff3e29c5b9aee7f", "text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. 
As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.", "title": "" }, { "docid": "ef1d28df2575c2c844ca2fa109893d92", "text": "Measurement of the quantum-mechanical phase in quantum matter provides the most direct manifestation of the underlying abstract physics. We used resonant x-ray scattering to probe the relative phases of constituent atomic orbitals in an electronic wave function, which uncovers the unconventional Mott insulating state induced by relativistic spin-orbit coupling in the layered 5d transition metal oxide Sr2IrO4. A selection rule based on intra-atomic interference effects establishes a complex spin-orbital state represented by an effective total angular momentum = 1/2 quantum number, the phase of which can lead to a quantum topological state of matter.", "title": "" }, { "docid": "49c8cd55ffc5de2fe6064837be2f9816", "text": "L-theanine acid is an amino acid in tea which affects mental state directly. Along with other most popular tea types; white, green, and black tea, Oolong tea also has sufficient L-theanine to relax the human brain. It apparently can reduce the concern, blood pressure, dissolve the fat in the arteries, and especially slow aging by substances against free radicals. Therefore, this research study about the effect of L-theanine in Oolong Tea on human brain's attention focused on meditation during book reading state rely on each person by using electroencephalograph (EEG) and K-means clustering. An electrophysiological monitoring will properly measure the voltage fluctuation of Alpha rhythm for the understanding of higher attention processes of human brain precisely. K-means clustering investigates and defines that the group of converted waves data has a variable effective level rely on each classified group, which female with lower BMI has a higher effect on L-theanine than male apparently. In conclusion, the results promise the L-theanine significantly affects on meditation by increasing in Alpha waves on each person that beneficially supports production proven of Oolong tea in the future.", "title": "" }, { "docid": "38a0f56e760b0e7a2979c90a8fbcca68", "text": "The Rubik’s Cube is perhaps the world’s most famous and iconic puzzle, well-known to have a rich underlying mathematical structure (group theory). In this paper, we show that the Rubik’s Cube also has a rich underlying algorithmic structure. Specifically, we show that the n×n×n Rubik’s Cube, as well as the n×n×1 variant, has a “God’s Number” (diameter of the configuration space) of Θ(n/ logn). The upper bound comes from effectively parallelizing standard Θ(n) solution algorithms, while the lower bound follows from a counting argument. The upper bound gives an asymptotically optimal algorithm for solving a general Rubik’s Cube in the worst case. Given a specific starting state, we show how to find the shortest solution in an n×O(1)×O(1) Rubik’s Cube. Finally, we show that finding this optimal solution becomes NPhard in an n×n×1 Rubik’s Cube when the positions and colors of some cubies are ignored (not used in determining whether the cube is solved).", "title": "" }, { "docid": "6f415236f4a045f62f4e184f4e03258d", "text": "The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. 
They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.", "title": "" }, { "docid": "dd7940191f4b2d63e063bc27a0dcb787", "text": "The Distributed Interactive Virtual Environment (DIVE) is a heterogeneous distributed VR system based on UNIX and Internet networking protocols. Each participating process has a copy of a replicated database and changes are propagated to the other processes with reliable multicast protocols. DIVE provides a dynamic virtual environment where applications and users can enter and leave the environment on demand. Several user-related abstractions have been introduced to ease the task of application and user interface construction.", "title": "" }, { "docid": "3dba5be7cb08ab1466035cfee182991e", "text": "We describe a male patient with lobar holoprosencephaly, ectrodactyly, and cleft lip/palate, a syndrome which has been seen previously in only six patients. In addition, our patient developed hypernatraemia, which has been described in three patients before.", "title": "" }, { "docid": "8a8841e81793f19fe82106fbe5df91d9", "text": "In this paper, we present an O(n log^3 n) time algorithm for finding shortest paths in a planar graph with real weights. This can be compared to the best previous strongly polynomial time algorithm developed by Lipton, Rose, and Tarjan in 1978 which ran in O(n^(3/2)) time, and the best polynomial algorithm developed by Henzinger, Klein, Subramanian, and Rao in 1994 which ran in Õ(n^(4/3)) time. We also present significantly improved algorithms for query and dynamic versions of the shortest path problems.", "title": "" }, { "docid": "6df7df578e98e64314a6f719ef6f8e0a", "text": "The Android ecosystem has witnessed a surge in malware, which not only puts mobile devices at risk but also increases the burden on malware analysts assessing and categorizing threats. In this paper, we show how to use machine learning to automatically classify Android malware samples into families with high accuracy, while observing only their runtime behavior. We focus exclusively on dynamic analysis of runtime behavior to provide a clean point of comparison that is dual to static approaches. Specific challenges in the use of dynamic analysis on Android are the limited information gained from tracking low-level events and the imperfect coverage when testing apps, e.g., due to inactive command and control servers. We observe that on Android, pure system calls do not carry enough semantic content for classification and instead rely on lightweight virtual machine introspection to also reconstruct Android-level inter-process communication.
To address the sparsity of data resulting from low coverage, we introduce a novel classification method that fuses Support Vector Machines with Conformal Prediction to generate high-accuracy prediction sets where the information is insufficient to pinpoint a single family.", "title": "" }, { "docid": "27dfad28a88d47b621abece187428f24", "text": "This paper explores and investigates Deep Convolutional Neural Networks (DCNNs) architectures to increase efficiency and robustness of semantic segmentation tasks. The proposed solutions are based on Up-Convolutional Networks. We introduce three different architectures in this work. The first architecture, called Part-Net, is designed to tackle the specific problem of human body part segmentation and to provide robustness to overfitting and body part oclussion. The second network, called Fast-Net, is a network specifically designed to provide the lowest computation load without loosing representation power. Such architecture is capable of being run on mobile GPUs. The last architecture, called M-Net, aims to maximize the robustness characteristics of deep semantic segmentation approaches through multiresolution fusion. The networks achieve state-of-the-art performance on the PASCAL Parts Dataset and competitive results on the KITTI dataset for road and lane segmentation. Moreover, we introduce a new part segmentation dataset designed to bring semantic segmentation to highly realistic robotics scenarios, called Freiburg City Dataset. Additionally, we present results obtained with a ground robot and an unmanned aerial vehicle and a full system which explore the capabilities of human body part segmentation in the context of human-robot interaction.", "title": "" }, { "docid": "bc807811f3aefdd15ff338bf80c10225", "text": "HOGENBOOM. 1986. Pollen selection in breeding glasshouse tomatoes for low energy conditions, pp. 125-130. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology of Pollen. Springer-Verlag, N.Y. SARI-GORLA, M. C. FROVA, AND R. REDAELLI. 1986. Extent ofgene expression at the gametophytic phase in maize, pp. 27-32. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology of Pollen. Springer-Verlag, N.Y. SEARCY, K., AND D. MULCAHY. 1986. Gametophytic expression of heavy metal tolerance, pp. 159-164. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology of Pollen. Springer-Verlag, N.Y. SHfYANNA, K. R., AND J. HESLOP-HARRISON. 1981. Membrane state and pollen viability. J. Ann. Bot. 47:759-766. SIMON, J., AND J. C. SANFORD. 1986. Induction of gametic selection in situ by stylar application of selective agents, pp. 107-112. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology ofPollen. Springer-Verlag, N.Y. SNOW, A. A. 1986. Pollination dynamics in Epilobium canum (Onagraceae): Consequences for gametophytic selection. Amer. J. Bot. 73:139-151.", "title": "" }, { "docid": "b2e8d42c86b2ee63c36ecc6123736f8b", "text": "The balance between detrimental, pro-aging, often stochastic processes and counteracting homeostatic mechanisms largely determines the progression of aging. There is substantial evidence suggesting that the endocannabinoid system (ECS) is part of the latter system because it modulates the physiological processes underlying aging. 
The activity of the ECS declines during aging, as CB1 receptor expression and coupling to G proteins are reduced in the brain tissues of older animals and the levels of the major endocannabinoid 2-arachidonoylglycerol (2-AG) are lower. However, a direct link between endocannabinoid tone and aging symptoms has not been demonstrated. Here we show that a low dose of Δ9-tetrahydrocannabinol (THC) reversed the age-related decline in cognitive performance of mice aged 12 and 18 months. This behavioral effect was accompanied by enhanced expression of synaptic marker proteins and increased hippocampal spine density. THC treatment restored hippocampal gene transcription patterns such that the expression profiles of THC-treated mice aged 12 months closely resembled those of THC-free animals aged 2 months. The transcriptional effects of THC were critically dependent on glutamatergic CB1 receptors and histone acetylation, as their inhibition blocked the beneficial effects of THC. Thus, restoration of CB1 signaling in old individuals could be an effective strategy to treat age-related cognitive impairments.", "title": "" }, { "docid": "e4e7b1b9ec8f0688d2d10206be59cd99", "text": "Recognizing TimeML events and identifying their attributes, are important tasks in natural language processing (NLP). Several NLP applications like question answering, information retrieval, summarization, and temporal information extraction need to have some knowledge about events of the input documents. Existing methods developed for this task are restricted to limited number of languages, and for many other languages including Persian, there has not been any effort yet. In this paper, we introduce two different approaches for automatic event recognition and classification in Persian. For this purpose, a corpus of events has been built based on a specific version of ISO-TimeML for Persian. We present the specification of this corpus together with the results of applying mentioned approaches to the corpus. Considering these methods are the first effort towards Persian event extraction, the results are comparable to that of successful methods in English. TITLE AND ABSTRACT IN PERSIAN اھداديور جارختسا زا یسراف نوتم فيرعت رب انب ISO-TimeML نتفاي اھداديور یگژيو و اھنآ یاھ ساسا رب TimeML زا یکي لئاسم هزوح رد مھم یعيبط یاھ نابز شزادرپ ی تسا . نابز شزادرپ یاھدربراک زا یرايسب هناماس دننام یعيبط یاھ و یزاس هص2خ ،تاع2طا جارختسا ،خساپ و شسرپ یاھ ات دنراد زاين ینامز تاع2طا جارختسا هرابرد یشناد یاھداديور رد دوجوم نوتم یدورو شور .دنشاب هتشاد هک یياھ نيا دروم رد نونکات هدش داجيا هلئسم نابز دنچ هب دودحم ، صاخ نابز زا یرايسب رد و تسا اھ هلمج زا ،یسراف نابز یراک نونکات هدشن ماجنا هطبار نيا رد یسراف نابز رد اھداديور جارختسا یارب فلتخم شور ود ام ،هلاقم نيا رد .تسا یم هئارا .ميھد یارب هرکيپ ،راک نيا اب قباطم یا ISO-TimeML ، سن هتبلا هخ دش هتخاس ،نآ یسراف صاخ ی ام . ناشن ار ،نآ یور رب لصاح جياتن و هرکيپ نيا تاصخشم یم ميھد شور جياتن . هئارا یاھ هدش هلاقم نيا رد ناونع هب ، شور نيلوا هدايپ یاھ اب ،یسراف نابز یور رب هدش یزاس .تسا هسياقم لباق یسيلگنا نابز رد قفوم یاھ شور", "title": "" }, { "docid": "cec6e899c23dd65881f84cca81205eb0", "text": "A fuzzy graph (f-graph) is a pair G : ( σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. A fuzzy graph H : ( τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ (u) ≤ σ(u) for every u and υ (u, v) ≤ μ(u, v) for every u and v . 
In particular we call a partial fuzzy subgraph H : ( τ, υ) a fuzzy subgraph of G : ( σ, μ ) if τ (u) = σ(u) for every u in τ * and υ (u, v) = μ(u, v) for every arc (u, v) in υ*. A connected f-graph G : ( σ, μ) is a fuzzy tree(f-tree) if it has a fuzzy spannin g subgraph F : (σ, υ), which is a tree, where for all arcs (x, y) not i n F there exists a path from x to y in F whose strength is more than μ(x, y). A path P of length n is a sequence of disti nct nodes u0, u1, ..., un such that μ(ui−1, ui) > 0, i = 1, 2, ..., n and the degree of membershi p of a weakest arc is defined as its strength. If u 0 = un and n≥ 3, then P is called a cycle and a cycle P is called a fuzzy cycle(f-cycle) if it cont ains more than one weakest arc . The strength of connectedness between two nodes x and y is efined as the maximum of the strengths of all paths between x and y and is denot ed by CONNG(x, y). An x − y path P is called a strongest x − y path if its strength equal s CONNG(x, y). An f-graph G : ( σ, μ) is connected if for every x,y in σ ,CONNG(x, y) > 0. In this paper, we offer a survey of selected recent results on fuzzy graphs.", "title": "" }, { "docid": "38f386546b5f866d45ff243599bd8305", "text": "During the last two decades, Structural Equation Modeling (SEM) has evolved from a statistical technique for insiders to an established valuable tool for a broad scientific public. This class of analyses has much to offer, but at what price? This paper provides an overview on SEM, its underlying ideas, potential applications and current software. Furthermore, it discusses avoidable pitfalls as well as built-in drawbacks in order to lend support to researchers in deciding whether or not SEM should be integrated into their research tools. Commented findings of an internet survey give a “State of the Union Address” on SEM users and usage. Which kinds of models are preferred? Which software is favoured in current psychological research? In order to assist the reader on his first steps, a SEM first-aid kit is included. Typical problems and possible solutions are addressed, helping the reader to get the support he needs. Hence, the paper may assist the novice on the first steps and self-critically reminds the advanced reader of the limitations of Structural Equation Modeling", "title": "" }, { "docid": "50dd722cfb32472187e1a73dbe29c4c9", "text": "How to develop slim and accurate deep neural networks has become crucial for realworld applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. By controlling layer-wise errors properly, one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. 
We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods. Codes of our work are released at: https://github.com/csyhhu/L-OBS.", "title": "" }, { "docid": "a4059636cbdc058e3f3a7621155c68b7", "text": "A K-d tree represents a set of N points in K-dimensional space. Operations on a semidynamic tree may delete and undelete points, but may not insert new points. This paper shows that several operations that require O(log N) expected time in general K-d trees may be performed in constant expected time in semidynamic trees. These operations include deletion, undeletion, nearest neighbor searching, and fixed-radius near neighbor searching (the running times of the first two are proved, while the last two are supported by experiments and heuristic arguments). Other new techniques can also be applied to general K-d trees: simple sampling reduces the time to build a tree from O(KN log N) to O(KN + N log N), and more advanced sampling builds a robust tree in the same time. The methods are straightforward to implement, and lead to a data structure that is significantly faster and less vulnerable to pathological inputs than ordinary K-d trees.", "title": "" }, { "docid": "70b6779247f28ddc2e153c7bc159c98d", "text": "Radio-frequency identification (RFID) is a wireless technology for automatic identification using electromagnetic fields in the radio frequency spectrum. In addition to the easy deployment and decreasing prices for tags, this technology has many advantages to bar codes and other common identification methods, such as no required line of sight and the ability to read several tags simultaneously. Therefore it enjoys large popularity among large businesses and continues to spread in the consumer market. Common applications include the fields of electronic article surveillance, access control, tracking, and identification of objects and animals. This paper introduces RFID technology, analyzes modern applications, and tries to point out strengths and weaknesses of RFID systems.", "title": "" } ]
scidocsrr
1a135dbfd5e5664a7f2170e9273f4ccf
A prototype for assessing information security awareness
[ { "docid": "e59379bc46c4fcf85027a1624425949b", "text": "Information Security Culture includes all socio-cultural measures that support technical security methods, so that information security becomes a natural aspect in the daily activity of every employee. To apply these socio-cultural measures in an effective and efficient way, certain management models and tools are needed. In our research we developed a framework analyzing the security culture of an organization which we then applied in a pre-evaluation survey. This paper is based on the results of this survey. We will develop a management model for creating, changing and maintaining Information Security Culture. This model will then be used to define explicit sociocultural measures, based on the concept of internal marketing.", "title": "" } ]
[ { "docid": "73f9c6fc5dfb00cc9b05bdcd54845965", "text": "The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models.", "title": "" }, { "docid": "76d4ed8e7692ca88c6b5a70c9954c0bd", "text": "Custom-tailored products are meant by the products having various sizes and shapes to meet the customer’s different tastes or needs. Thus fabrication of custom-tailored products inherently involves inefficiency. Custom-tailoring shoes are not an exception because corresponding shoe-lasts must be custom-ordered. It would be nice if many template shoe-lasts had been cast in advance, the most similar template was identified automatically from the custom-ordered shoe-last, and only the different portions in the template shoe-last could be machined. To enable this idea, the first step is to derive the geometric models of template shoe-lasts to be cast. Template shoe-lasts can be derived by grouping all the various existing shoe-lasts into manageable number of groups and by uniting all the shoe-lasts in each group such that each template shoe-last for each group barely encloses all the shoe-lasts in the group. For grouping similar shoe-lasts into respective groups, similarity between shoe-lasts should be quantized. Similarity comparison starts with the determination of the closest pose between two shapes in consideration. The closest pose is derived by comparing the ray distances while one shape is virtually rotated with respect to the other. Shape similarity value and overall similarity value calculated from ray distances are also used for grouping. A prototype system based on the proposed methodology has been implemented and applied to grouping of the shoe-lasts of various shapes and sizes and deriving template shoe-lasts. q 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a05d87b064ab71549d373599700cfcbf", "text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.", "title": "" }, { "docid": "87652eb26da0fc222979b1ac0d87370e", "text": "In modern greenhouses, several measurement points are required to trace down the local climate parameters in different parts of the big greenhouse to make the greenhouse automation system work properly. 
Cabling would make the measurement system expensive and vulnerable. Moreover, the cabled measurement points are difficult to relocate once they are installed. Thus, a wireless sensor network (WSN) consisting of small-size wireless sensor nodes equipped with radio and one or several sensors, is an attractive and cost-efficient option to build the required measurement system. In this work, we developed a wireless sensor node for greenhouse monitoring by integrating a sensor platform provided by sensinode Ltd. [1] with three commercial sensors capable to measure four climate variables. The feasibility of the developed node was tested by deploying a simple sensor network into Martens Greenhouse Research Foundation's greenhouse in Narpio town in Western Finland. During a one day experiment, we collected data to evaluate the network reliability and its ability to detect the microclimate layers, which typically exist in the greenhouse between lower and upper flora. We were also able to show that the network can detect the local differences in the greenhouse climate caused by various disturbances, such as direct sunshine near the greenhouse walls. This article is our first step in the area of greenhouse monitoring and control, and it is all about the developed sensor network feasibility and reliability.", "title": "" }, { "docid": "e54e236020d7cf730a5c25a553f08215", "text": "CRISPR–Cas systems that provide defence against mobile genetic elements in bacteria and archaea have evolved a variety of mechanisms to target and cleave RNA or DNA. The well-studied types I, II and III utilize a set of distinct CRISPR-associated (Cas) proteins for production of mature CRISPR RNAs (crRNAs) and interference with invading nucleic acids. In types I and III, Cas6 or Cas5d cleaves precursor crRNA (pre-crRNA) and the mature crRNAs then guide a complex of Cas proteins (Cascade-Cas3, type I; Csm or Cmr, type III) to target and cleave invading DNA or RNA. In type II systems, RNase III cleaves pre-crRNA base-paired with trans-activating crRNA (tracrRNA) in the presence of Cas9 (refs 13, 14). The mature tracrRNA–crRNA duplex then guides Cas9 to cleave target DNA. Here, we demonstrate a novel mechanism in CRISPR–Cas immunity. We show that type V-A Cpf1 from Francisella novicida is a dual-nuclease that is specific to crRNA biogenesis and target DNA interference. Cpf1 cleaves pre-crRNA upstream of a hairpin structure formed within the CRISPR repeats and thereby generates intermediate crRNAs that are processed further, leading to mature crRNAs. After recognition of a 5′-YTN-3′ protospacer adjacent motif on the non-target DNA strand and subsequent probing for an eight-nucleotide seed sequence, Cpf1, guided by the single mature repeat-spacer crRNA, introduces double-stranded breaks in the target DNA to generate a 5′ overhang. The RNase and DNase activities of Cpf1 require sequence- and structure-specific binding to the hairpin of crRNA repeats. Cpf1 uses distinct active domains for both nuclease reactions and cleaves nucleic acids in the presence of magnesium or calcium. This study uncovers a new family of enzymes with specific dual endoribonuclease and endonuclease activities, and demonstrates that type V-A constitutes the most minimalistic of the CRISPR–Cas systems so far described.", "title": "" }, { "docid": "dca74df16e3a90726d51b3222483ac94", "text": "We are concerned with the issue of detecting outliers and change points from time series. 
In the area of data mining, there have been increased interest in these issues since outlier detection is related to fraud detection, rare event discovery, etc., while change-point detection is related to event/trend change detection, activity monitoring, etc. Although, in most previous work, outlier detection and change point detection have not been related explicitly, this paper presents a unifying framework for dealing with both of them. In this framework, a probabilistic model of time series is incrementally learned using an online discounting learning algorithm, which can track a drifting data source adaptively by forgetting out-of-date statistics gradually. A score for any given data is calculated in terms of its deviation from the learned model, with a higher score indicating a high possibility of being an outlier. By taking an average of the scores over a window of a fixed length and sliding the window, we may obtain a new time series consisting of moving-averaged scores. Change point detection is then reduced to the issue of detecting outliers in that time series. We compare the performance of our framework with those of conventional methods to demonstrate its validity through simulation and experimental applications to incidents detection in network security.", "title": "" }, { "docid": "158c535b44fe81ca7194d5a0b386f2b5", "text": "Deep networks are increasingly being applied to problems involving image synthesis, e.g., generating images from textual descriptions and reconstructing an input image from a compact representation. Supervised training of image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the mismatch between a generated image and its corresponding target image. We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM) [1]. Because MS-SSIM is differentiable, it is easily incorporated into gradient-descent learning. We compare the consequences of using MS-SSIM versus PL loss on training autoencoders. Human observers reliably prefer images synthesized by MS-SSIM-optimized models over those synthesized by PL-optimized models, for two distinct PL measures (L1 and L2 distances). We also explore the effect of training objective on image encoding and analyze conditions under which perceptually-optimized representations yield better performance on image classification. Finally, we demonstrate the superiority of perceptually-optimized networks for super-resolution imaging. We argue that significant advances can be made in modeling images through the use of training objectives that are well aligned to characteristics of human perception.", "title": "" }, { "docid": "de6d83fd854d92e83a59191f48921e0b", "text": "The automatic detection of objects that are abandoned or removed in a video scene is an interesting area of computer vision, with key applications in video surveillance. Forgotten or stolen luggage in train and airport stations and irregularly parked vehicles are examples that concern significant issues, such as the fight against terrorism and crime, and public safety. Both issues involve the basic task of detecting static regions in the scene. We address this problem by introducing a model-based framework to segment static foreground objects against moving foreground objects in single view sequences taken from stationary cameras. 
An image sequence model, obtained by learning image sequence variations, seen as trajectories of pixels in time, with a self-organizing neural network, is adopted within the model-based framework. Experimental results on real video sequences and comparisons with existing approaches show the accuracy of the proposed stopped object detection approach.", "title": "" }, { "docid": "2f1dc4a089f88d6f7e39b10f53321e89", "text": "A new technique for summarizing news articles using a neural network is presented. A neural network is trained to learn the relevant characteristics of sentences that should be included in the summary of the article. The neural network is then modified to generalize and combine the relevant characteristics apparent in summary sentences. Finally, the modified neural network is used as a filter to summarize news articles.", "title": "" }, { "docid": "065151713758d05a602b350d31e88dc6", "text": "Previous works have shown that the ear is a promising candidate for biometric identification. However, in prior work, the pre-processing of ear images has had manual steps, and algorithms have not necessarily handled problems caused by hair and earrings. We present a complete system for ear biometrics, including automated segmentation of the ear in a profile view image and 3D shape matching for recognition. We evaluated this system with the largest experimental study to date in ear biometrics, achieving a rank-one recognition rate of 97.8% for an identification scenario, and an equal error rate of 1.2% for a verification scenario on a database of 415 subjects and 1,386 total probes. Keywords: biometrics, ear biometrics, 3-D shape, skin detection, curvature estimation, active contour, iterative closest point.", "title": "" }, { "docid": "b85ca4a4b564fcb61001fd13332ddc65", "text": "Although the archaeological site of Edzná is one of the more accessible Mayan ruins, being located scarcely 60 km to the southeast of the port-city of Campeche, it has until recently escaped the notice which its true significance would seem to merit. Not only does it appear to have been the earliest major Mayan urban center, dating to the middle of the second century before the Christian era and having served as the focus of perhaps as many as 20,000 inhabitants, but there is also a growing body of evidence to suggest that it played a key role in the development of Mayan astronomy and calendrics. Among the innovations that seemingly had their origin in Edzná are the Maya's fixing of their New Year's Day, the concept of \"year bearers\", and what is probably the oldest lunar observatory in the New World.", "title": "" }, { "docid": "d7f4b2b524a5b7b78263881b2ec7a797", "text": "Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting: these semantic parsers can only assign meaning to language that falls within the KB's manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models.
We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task.", "title": "" }, { "docid": "459a3bc8f54b8f7ece09d5800af7c37b", "text": "As companies are increasingly exposed to information security threats, decision makers are constantly forced to pay attention to security issues. Information security risk management provides an approach for measuring security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.", "title": "" }, { "docid": "01a57e4a8bcc91fd5d172280a6b47577", "text": "Recommendation System Using Collaborative Filtering, by Yunkyoung Lee. Collaborative filtering is one of the best-known and most widely used techniques in recommendation systems; its basic idea is to predict which items a user would be interested in based on their preferences. Recommendation systems using collaborative filtering are able to provide an accurate prediction when enough data is provided, because this technique is based on the user's preference. User-based collaborative filtering has been very successful in the past at predicting the customer's behavior, as the most important part of the recommendation system. However, its widespread use has revealed some real challenges, such as data sparsity and data scalability, as the number of users and items gradually increases. To improve the execution time and accuracy of the prediction problem, this paper proposes item-based collaborative filtering applying dimension reduction in a recommendation system. It demonstrates that the proposed approach can achieve better performance and execution time for the recommendation system in terms of existing challenges, according to evaluation metrics using Mean Absolute Error (MAE).", "title": "" }, { "docid": "fe998d6d18b9bab9ee3a011761aaab50", "text": "The use of quartiles for box plots is a well-established convention: boxes or whiskers should never be used to show the mean, s.d. or s.e.m. As with the division of the box by the median, the whiskers are not necessarily symmetrical (Fig. 1b). The 1.5 multiplier corresponds to approximately ±2.7s (where s is s.d.) and 99.3% coverage of the data for a normal distribution.
Outliers beyond the whiskers may be individually plotted. Box plot construction requires a sample of at least n = 5 (preferably larger), although some software does not check for this. For n < 5 we recommend showing the individual data points. Sample size differences can be assessed by scaling the box plot width in proportion to √n (Fig. 1b), the factor by which the precision of the sample's estimate of population statistics improves as sample size is increased. To assist in judging differences between sample medians, a notch (Fig. 1b) can be used to show the 95% confidence interval (CI) for the median, given by m ± 1.58 × IQR/√n (ref. 1). This is an approximation based on the normal distribution and is accurate in large samples for other distributions. If you suspect the population distribution is not close to normal and your sample size is small, avoid interpreting the interval analytically in the way we have described for CI error bars (ref. 2). In general, when notches do not overlap, the medians can be judged to differ significantly, but overlap does not rule out a significant difference. For small samples the notch may span a larger interval than the box (Fig. 2). The exact position of box boundaries will be software dependent. First, there is no universally agreed-upon method to calculate quartile values, which may be based on simple averaging or linear interpolation. Second, some applications, such as R, use hinges instead of quartiles for box boundaries. The lower and upper hinges are the medians of the lower and upper halves of the data, respectively.", "title": "" }, { "docid": "6cad42e549f449c7156b0a07e2e02726", "text": "Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture, extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.", "title": "" }, { "docid": "2421518a0646cb76d2aac6c33ccd06dc", "text": "Modern technologies enable us to record sequences of online user activity at an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating-prediction paradigm, ignoring temporal and contextual aspects of user behavior revealed by temporal, recurrent patterns.
In contrast to explicit ratings, such activity logs can be collected in a non-intrusive way and can offer richer insights into the dynamics of user preferences, which could potentially lead more accurate user models. In this work we advocate studying this ubiquitous form of data and, by combining ideas from latent factor models for collaborative filtering and language modeling, propose a novel, flexible and expressive collaborative sequence model based on recurrent neural networks. The model is designed to capture a user’s contextual state as a personalized hidden vector by summarizing cues from a data-driven, thus variable, number of past time steps, and represents items by a real-valued embedding. We found that, by exploiting the inherent structure in the data, our formulation leads to an efficient and practical method. Furthermore, we demonstrate the versatility of our model by applying it to two different tasks: music recommendation and mobility prediction, and we show empirically that our model consistently outperforms static and non-collaborative methods.", "title": "" }, { "docid": "00b73790bb0bb2b828e1d443d3e13cf4", "text": "Grippers and robotic hands are an important field in robotics. Recently, the combination of grasping devices and haptic feedback has been a promising avenue for many applications such as laparoscopic surgery and spatial telemanipulation. This paper presents the work behind a new selfadaptive, a.k.a. underactuated, gripper with a proprioceptive haptic feedback in which the apparent stiffness of the gripper as seen by its actuator is used to estimate contact location. This system combines many technologies and concepts in an integrated mechatronic tool. Among them, underactuated grasping, haptic feedback, compliant joints and a differential seesaw mechanism are used. Following a theoretical modeling of the gripper based on the virtual work principle, the authors present numerical data used to validate this model. Then, a presentation of the practical prototype is given, discussing the sensors, controllers, and mechanical architecture. Finally, the control law and the experimental validation of the haptic feedback are presented.", "title": "" }, { "docid": "7110e68a420d10fa75a943d1c1f0bd42", "text": "This paper proposes a compact microstrip Yagi-Uda antenna for 2.45 GHz radio frequency identification (RFID) handheld reader applications. The proposed antenna is etched on a piece of FR4 substrate with an overall size of 65 mm × 55 mm ×1.6 mm and consists of a microstrip balun, a dipole, and a director. The ground plane is designed to act as a reflector that contributes to enhancing the antenna gain. The measured 10-dB return loss bandwidth and peak gain achieved by the proposed antenna are 380 MHz and 7.5 dBi, respectively. In addition, a parametric study is conducted to facilitate the design and optimization processes for engineers.", "title": "" }, { "docid": "a69600725f25e0e927f8ddeb1d30f99d", "text": "Island conservation in the longer term Conservation of biodiversity on islands is important globally because islands are home to more than 20% of the terrestrial plant and vertebrate species in the world, within less than 5% of the global terrestrial area. Endemism on islands is a magnitude higher than on continents [1]; ten of the 35 biodiversity hotspots in the world are entirely, or largely consist of, islands [2]. 
Yet this diversity is threatened: over half of all recent extinctions have occurred on islands, which currently harbor over one-third of all terrestrial species facing imminent extinction [3] (Figure 1). In response to the biodiversity crisis, island conservation has been an active field of research and action. Hundreds of invasive species eradications and endangered species translocations have been successfully completed [4–6]. However, despite climate change being an increasing research focus generally, its impacts on island biodiversity are only just beginning to be investigated. For example, invasive species eradications on islands have been prioritized largely by threats to native biodiversity, eradication feasibility, economic cost, and reinvasion potential, but have never considered the threat of sea-level rise. Yet, the probability and extent of island submersion would provide a relevant metric for the longevity of long-term benefits of such eradications.", "title": "" } ]
scidocsrr
30b9c1ec81c160e0a14b365602193cde
Pedestrian Recognition Using Second-Order HOG Feature
[ { "docid": "f923a3a18e8000e4094d4a6d6e69b18f", "text": "We describe the functional and architectural breakdown of a monocular pedestrian detection system. We describe in detail our approach for single-frame classification based on a novel scheme of breaking down the class variability by repeatedly training a set of relatively simple classifiers on clusters of the training set. Single-frame classification performance results and system level performance figures for daytime conditions are presented with a discussion about the remaining gap to meet a daytime normal weather condition production system.", "title": "" } ]
[ { "docid": "703cda264eddc139597b9ef9d4c0e977", "text": "Multi-processor systems are becoming the de-facto standard across different computing domains, ranging from high-end multi-tenant cloud servers to low-power mobile platforms. The denser integration of CPUs creates an opportunity for great economic savings achieved by packing processes of multiple tenants or by bundling all kinds of tasks at various privilege levels to share the same platform. This level of sharing carries with it a serious risk of leaking sensitive information through the shared microarchitectural components. Microarchitectural attacks initially only exploited core-private resources, but were quickly generalized to resources shared within the CPU. We present the first fine grain side channel attack that works across processors. The attack does not require CPU co-location of the attacker and the victim. The novelty of the proposed work is that, for the first time the directory protocol of high efficiency CPU interconnects is targeted. The directory protocol is common to all modern multi-CPU systems. Examples include AMD's HyperTransport, Intel's Quickpath, and ARM's AMBA Coherent Interconnect. The proposed attack does not rely on any specific characteristic of the cache hierarchy, e.g. inclusiveness. Note that inclusiveness was assumed in all earlier works. Furthermore, the viability of the proposed covert channel is demonstrated with two new attacks: by recovering a full AES key in OpenSSL, and a full ElGamal key in libgcrypt within the range of seconds on a shared AMD Opteron server.", "title": "" }, { "docid": "2232d02a700d412c61cab20b98b6a6c2", "text": "Intranasal drug delivery (INDD) systems offer a route to the brain that bypasses problems related to gastrointestinal absorption, first-pass metabolism, and the blood-brain barrier; onset of therapeutic action is rapid, and the inconvenience and discomfort of parenteral administration are avoided. INDD has found several applications in neuropsychiatry, such as to treat migraine, acute and chronic pain, Parkinson disease, disorders of cognition, autism, schizophrenia, social phobia, and depression. INDD has also been used to test experimental drugs, such as peptides, for neuropsychiatric indications; these drugs cannot easily be administered by other routes. This article examines the advantages and applications of INDD in neuropsychiatry; provides examples of test, experimental, and approved INDD treatments; and focuses especially on the potential of intranasal ketamine for the acute and maintenance therapy of refractory depression.", "title": "" }, { "docid": "3b2266e92c8b91a7c1937759ca8c3b8a", "text": "Today’s environments of increasing business change require software development methodologies that are more adaptable. This article examines how complex adaptive systems (CAS) theory can be used to increase our understanding of how agile software development practices can be used to develop this capability. A mapping of agile practices to CAS principles and three dimensions (product, process, and people) results in several recommendations for “best practices” in systems development.", "title": "" }, { "docid": "d704917077795fbe16e52ea2385e19ef", "text": "The objectives of this review were to summarize the evidence from randomized controlled trials (RCTs) on the effects of animal-assisted therapy (AAT). Studies were eligible if they were RCTs. Studies included one treatment group in which AAT was applied. 
We searched the following databases from 1990 up to October 31, 2012: MEDLINE via PubMed, CINAHL, Web of Science, Ichushi Web, GHL, WPRIM, and PsycINFO. We also searched all Cochrane Database up to October 31, 2012. Eleven RCTs were identified, and seven studies were about \"Mental and behavioral disorders\". Types of animal intervention were dog, cat, dolphin, bird, cow, rabbit, ferret, and guinea pig. The RCTs conducted have been of relatively low quality. We could not perform meta-analysis because of heterogeneity. In a study environment limited to the people who like animals, AAT may be an effective treatment for mental and behavioral disorders such as depression, schizophrenia, and alcohol/drug addictions, and is based on a holistic approach through interaction with animals in nature. To most effectively assess the potential benefits for AAT, it will be important for further research to utilize and describe (1) RCT methodology when appropriate, (2) reasons for non-participation, (3) intervention dose, (4) adverse effects and withdrawals, and (5) cost.", "title": "" }, { "docid": "cfec098f84e157a2e12f0ff40551c977", "text": "In this paper, an online news recommender system for the popular social network, Facebook, is described. This system provides daily newsletters for communities on Facebook. The system fetches the news articles and filters them based on the community description to prepare the daily news digest. Explicit survey feedback from the users show that most users found the application useful and easy to use. They also indicated that they could get some community specific articles that they would not have got otherwise.", "title": "" }, { "docid": "fe1d0321b1182c9ecb92ccd95c83cd25", "text": "Cybercriminals have leveraged the popularity of a large user base available on Online Social Networks (OSNs) to spread spam campaigns by propagating phishing URLs, attaching malicious contents, etc. However, another kind of spam attacks using phone numbers has recently become prevalent on OSNs, where spammers advertise phone numbers to attract users’ attention and convince them to make a call to these phone numbers. The dynamics of phone number based spam is different from URL-based spam due to an inherent trust associated with a phone number. While previous work has proposed strategies to mitigate URL-based spam attacks, phone number based spam attacks have received less attention. In this paper, we aim to detect spammers that use phone numbers to promote campaigns on Twitter. To this end, we collected information (tweets, user meta-data, etc.) about 3, 370 campaigns spread by 670, 251 users. We model the Twitter dataset as a heterogeneous network by leveraging various interconnections between different types of nodes present in the dataset. In particular, we make the following contributions – (i) We propose a simple yet effective metric, called Hierarchical Meta-Path Score (HMPS) to measure the proximity of an unknown user to the other known pool of spammers. (ii) We design a feedback-based active learning strategy and show that it significantly outperforms three state-of-the-art baselines for the task of spam detection. Our method achieves 6.9% and 67.3% higher F1-score and AUC, respectively compared to the best baseline method. (iii) To overcome the problem of less training instances for supervised learning, we show that our proposed feedback strategy achieves 25.6% and 46% higher F1-score and AUC respectively than other oversampling strategies. 
Finally, we perform a case study to show how our method is capable of detecting those users as spammers who have not been suspended by Twitter (and other baselines) yet.", "title": "" }, { "docid": "caccc9394a9bc06b60e56615fc3cb46a", "text": "This article describes a project designed to change the climate of whiteness in academic nursing. Using an emancipatory, antiracist perspective from whiteness studies, we describe a project that helped faculty and staff to work together to challenge and begin to change the status quo of unnamed white privilege and racial injustice in nursing education.", "title": "" }, { "docid": "f2377c76df4a2bcf0af063cb86befdda", "text": "Overexpression of ErbB2, a receptor-like tyrosine kinase, is shared by several types of human carcinomas. In breast tumors the extent of overexpression has a prognostic value, thus identifying the oncoprotein as a target for therapeutic strategies. Already, antibodies to ErbB2 are used in combination with chemotherapy in the treatment of metastasizing breast cancer. The mechanisms underlying the oncogenic action of ErbB2 involve a complex network in which ErbB2 acts as a ligand-less signaling subunit of three other receptors that directly bind a large repertoire of stroma-derived growth factors. The major partners of ErbB2 in carcinomas are ErbB1 (also called EGFR) and ErbB3, a kinase-defective receptor whose potent mitogenic action is activated in the context of heterodimeric complexes. Why ErbB2-containing heterodimers are relatively oncopotent is a function of a number of processes. Apparently, these heterodimers evade normal inactivation processes, by decreasing the rate of ligand dissociation, internalizing relatively slowly and avoiding the degradative pathway by returning to the cell surface. On the other hand, the heterodimers strongly recruit survival and mitogenic pathways such as the mitogen-activated protein kinases and the phosphatidylinositol 3-kinase. Hyper-activated signaling through the ErbB-signaling network results in dysregulation of the cell cycle homeostatic machinery, with upregulation of active cyclin-D/CDK complexes. Recent data indicate that cell cycle regulators are also linked to chemoresistance in ErbB2-dependent breast carcinoma. Together with D-type cyclins, it seems that the CDK inhibitor p21Waf1 plays an important role in evasion from apoptosis. These recent findings herald a preliminary understanding of the output layer which connects elevated ErbB-signaling to oncogenesis and chemoresistance.", "title": "" }, { "docid": "c89de16110a66d65f8ae7e3476fe90ef", "text": "In this paper, a new notion which we call private data deduplication protocol, a deduplication technique for private data storage is introduced and formalized. Intuitively, a private data deduplication protocol allows a client who holds a private data proves to a server who holds a summary string of the data that he/she is the owner of that data without revealing further information to the server. Our notion can be viewed as a complement of the state-of-the-art public data deduplication protocols of Halevi et al [7]. The security of private data deduplication protocols is formalized in the simulation-based framework in the context of two-party computations. A construction of private deduplication protocols based on the standard cryptographic assumptions is then presented and analyzed. 
We show that the proposed private data deduplication protocol is provably secure assuming that the underlying hash function is collision-resilient, the discrete logarithm problem is hard, and the erasure coding algorithm can tolerate erasure of up to an α-fraction of the bits in the presence of malicious adversaries. To the best of our knowledge, this is the first deduplication protocol for private data storage.", "title": "" }, { "docid": "fd9717ee3f6fc31918594bd4855c799c", "text": "Aggregating context information from multiple scales has been proved to be effective for improving the accuracy of Single Shot Detectors (SSDs) on object detection. However, existing multi-scale context fusion techniques are computationally expensive, which unfavorably diminishes the advantageous speed of SSD. In this work, we propose a novel network topology, called WeaveNet, that can efficiently fuse multi-scale information and boost the detection accuracy with negligible extra cost. The proposed WeaveNet iteratively weaves context information from adjacent scales together to enable more sophisticated context reasoning while maintaining fast speed. Built by stacking light-weight blocks, WeaveNet is easy to train without requiring batch normalization and can be further accelerated by our proposed architecture simplification. Experimental results on the PASCAL VOC 2007 and PASCAL VOC 2012 benchmarks show a significant performance boost brought by WeaveNet. For 320×320 inputs with batch size = 8, WeaveNet reaches 79.5% mAP on the PASCAL VOC 2007 test set at 101 fps with only 4 fps extra cost, and further improves to 79.7% mAP with more iterations.", "title": "" }, { "docid": "6f6b76d8d9d8e3cf1d8fd0ce16706d68", "text": "During the last decade the analysis of intrusion detection has become very significant, with researchers focusing on various datasets to improve system accuracy and to reduce the false positive rate, first with the DARPA 98 dataset and later with its updated version, the KDD Cup 99 dataset. KDD Cup 99 shows some statistical issues that degrade the evaluation of anomaly detection and affect the performance of the security analysis, which led to its replacement by the NSL-KDD dataset. This paper focuses on a detailed analysis of the NSL-KDD dataset and proposes a new technique combining swarm intelligence (Simplified Swarm Optimization) and a data mining algorithm (Random Forest) for feature selection and reduction. SSO is used to find a more appropriate set of attributes for classifying network intrusions, and Random Forest is used as a classifier. In the preprocessing step, we optimize the dimension of the dataset by the proposed SSO-RF approach and find an optimal set of features. SSO is an optimization method that has a strong global search capability and is used here for dimension optimization. The experimental results show that the proposed approach performs better than the other approaches for the detection of all kinds of attacks present in the dataset.", "title": "" }, { "docid": "64c9a3da19efc8fa29ae648e0cc13138", "text": "Time-sync video tagging aims to automatically generate tags for each video shot. It can improve the user's experience in previewing a video's timeline structure compared to traditional schemes that tag an entire video clip. In this paper, we propose a new application which extracts time-sync video tags by automatically exploiting crowdsourced comments from video websites such as Nico Nico Douga, where videos are commented on by online crowd users in a time-sync manner.
The challenge of the proposed application is that users with bias interact with one another frequently and bring noise into the data, while the comments are too sparse to compensate for the noise. Previous techniques are unable to handle this task well as they consider video semantics independently, which may overfit the sparse comments in each shot and thus fail to provide accurate modeling. To resolve these issues, we propose a novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge. Our proposed model shares knowledge across video shots via users to enrich the short comments, and peels off user interaction and user bias to solve the noisy-comment problem. Log-likelihood analyses and user studies on large datasets show that the proposed model outperforms several state-of-the-art baselines in video tagging quality. Case studies also demonstrate our model's capability of extracting tags from the crowdsourced short and noisy comments.", "title": "" }, { "docid": "17de9469bca5e0b407c0dd90379860f9", "text": "This paper describes our rewrite of Phoenix, a MapReduce framework for shared-memory CMPs and SMPs. Despite successfully demonstrating the applicability of a MapReduce-style pipeline to shared-memory machines, Phoenix has a number of limitations; its uniform intermediate storage of key-value pairs, inefficient combiner implementation, and poor task overhead amortization fail to efficiently support a wide range of MapReduce applications, encouraging users to manually circumvent the framework. We describe an alternative implementation, Phoenix++, that provides a modular, flexible pipeline that can be easily adapted by the user to the characteristics of a particular workload. Compared to Phoenix, this new approach achieves a 4.7-fold performance improvement and increased scalability, while allowing users to write simple, strict MapReduce code.", "title": "" }, { "docid": "47da8530df2160ee29ff05aee4ab0342", "text": "The objective of this review was to update Sobal and Stunkard's exhaustive review of the literature on the relation between socioeconomic status (SES) and obesity (Psychol Bull 1989;105:260-75). Diverse research databases (including CINAHL, ERIC, MEDLINE, and Social Science Abstracts) were comprehensively searched during the years 1988-2004 inclusive, using \"obesity,\" \"socioeconomic status,\" and synonyms as search terms. A total of 333 published studies, representing 1,914 primarily cross-sectional associations, were included in the review. The overall pattern of results, for both men and women, was of an increasing proportion of positive associations and a decreasing proportion of negative associations as one moved from countries with high levels of socioeconomic development to countries with medium and low levels of development. Findings varied by SES indicator; for example, negative associations (lower SES associated with larger body size) for women in highly developed countries were most common with education and occupation, while positive associations for women in medium- and low-development countries were most common with income and material possessions. Patterns for women in higher- versus lower-development countries were generally less striking than those observed by Sobal and Stunkard; this finding is interpreted in light of trends related to globalization. 
Results underscore a view of obesity as a social phenomenon, for which appropriate action includes targeting both economic and sociocultural factors.", "title": "" }, { "docid": "c7e584bca061335c8cd085511f4abb3b", "text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.", "title": "" }, { "docid": "7064b7bf9baf4e59a99f9a4641af8430", "text": "A smart home needs to be human-centric, where it tries to fulfill human needs given the devices it has. Various works are developed to provide homes with reasoning and planning capability to fulfill goals, but most do not support complex sequence of plans or require significant manual effort in devising subplans. This is further aggravated by the need to optimize conflicting personal goals. A solution is to solve the planning problem represented as constraint satisfaction problem (CSP). But CSP uses hard constraints and, thus, cannot handle optimization and partial goal fulfillment efficiently. This paper aims to extend this approach to weighted CSP. Knowledge representation to help in generating planning rules is also proposed, as well as methods to improve performances. Case studies show that the system can provide intelligent and complex plans from activities generated from semantic annotations of the devices, as well as optimization to maximize personal constraints’ fulfillment. Note to Practitioners—Smart home should maximize the fulfillment of personal goals that are often conflicting. For example, it should try to fulfill as much as possible the requests made by both the mother and daughter who wants to watch TV but both having different channel preferences. That said, every person has a set of goals or constraints that they hope the smart home can fulfill. Therefore, human-centric system that automates the loosely coupled devices of the smart home to optimize the goals or constraints of individuals in the home is developed. Automated planning is done using converted services extracted from devices, where conversion is done using existing tools and concepts from Web technologies. Weighted constraint satisfaction that provides the declarative approach to cover large problem domain to realize the automated planner with optimization capability is proposed. Details to speed up planning through search space reduction are also given. Real-time case studies are run in a prototype smart home to demonstrate its applicability and intelligence, where every planning is performed under a maximum of 10 s. 
The vision of this paper is to be able to implement such system in a community, where devices everywhere can cooperate to ensure the well-being of the community.", "title": "" }, { "docid": "bc90b1e4d456ca75b38105cc90d7d51d", "text": "Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions. Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.", "title": "" }, { "docid": "7516f24dad8441f6e13d211047c93f36", "text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. It focusses mainly on an empirical investigation of the effect of key developer factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer factors on the game development process.", "title": "" }, { "docid": "96d1204b05289190635af23942b8c289", "text": "In this paper a social network is extracted from a literary text. The social network shows, how frequent the characters interact and how similar their social behavior is. Two types of similarity measures are used: the first applies co-occurrence statistics, while the second exploits cosine similarity on different types of word embedding vectors. The results are evaluated by a paid micro-task crowdsourcing survey. The experiments suggest that specific types of word embeddings like word2vec are well-suited for the task at hand and the specific circumstances of literary fiction text.", "title": "" } ]
scidocsrr
4bd261b410dd21406e08d17018edf972
Simple Baseline for Visual Question Answering
[ { "docid": "8328b1dd52bcc081548a534dc40167a3", "text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "title": "" } ]
[ { "docid": "db25bafd722f5a491f5e48a133a2cd9c", "text": "Storytelling humankind’s universal choice for content transmission is becoming of great importance in the field of computer graphics, as the human ability to keep track of information in the information society of the 21 century is dependent on the quality of the information providing systems. Basically, the first steps towards storytelling systems have been taken; everyone today has the possibility to step into enfolding 3D worlds and become immersed in extensive loads of data. However, there is still a great backlog on the human-like organization of the associated data. The reason for this is the absence of the basic authoring systems for interactive storytelling. This position paper presents an approach to new authoring methods for interactive storytelling. It considers the author’s view of the tools to be used and introduces a coherent environment that does not restrict the creative process and lets the author feel comfortable, leading him to create well-narrated, interactive non-linear stories.", "title": "" }, { "docid": "9813df16b1852cf6d843ff3e1c67fa88", "text": "Traumatic neuromas are tumors resulting from hyperplasia of axons and nerve sheath cells after section or injury to the nervous tissue. We present a case of this tumor, confirmed by anatomopathological examination, in a male patient with history of circumcision. Knowledge of this entity is very important in achieving the differential diagnosis with other lesions that affect the genital area such as condyloma acuminata, bowenoid papulosis, lichen nitidus, sebaceous gland hyperplasia, achrochordon and pearly penile papules.", "title": "" }, { "docid": "189c27376ac9d6345e3ace59e7030d01", "text": "A probabilistic or weighted grammar implies a posterior probability distribution over possible parses of a given input sentence. One often needs to extract information from this distribution, by computing the expected counts (in the unknown parse) of various grammar rules, constituents, transitions, or states. This requires an algorithm such as inside-outside or forward-backward that is tailored to the grammar formalism. Conveniently, each such algorithm can be obtained by automatically differentiating an “inside” algorithm that merely computes the log-probability of the evidence (the sentence). This mechanical procedure produces correct and efficient code. As for any other instance of back-propagation, it can be carried out manually or by software. This pedagogical paper carefully spells out the construction and relates it to traditional and nontraditional views of these algorithms.", "title": "" }, { "docid": "f462cb7fb501c561dea600ca6e815ff2", "text": "This study assessed the role of rape myth acceptance (RMA) and situational factors in the perception of three different rape scenarios (date rape, marital rape, and stranger rape). One hundred and eighty-two psychology undergraduates were asked to emit four judgements about each rape situation: victim responsibility, perpetrator responsibility, intensity of trauma, and likelihood to report the crime to the police. It was hypothesized that neither RMA nor situational factors alone can explain how rape is perceived; it is the interaction between these two factors that best account for social reactions to sexual aggression. 
The results generally supported the authors' hypothesis: Victim blame, estimation of trauma, and the likelihood of reporting the crime to the police were best explained by the interaction between observer characteristics, such as RMA, and situational clues. That is, the less stereotypic the rape situation was, the greater was the influence of attitudes toward rape on attributions.", "title": "" }, { "docid": "f6e080319e7455fda0695f324941edcb", "text": "The Internet of Things (IoT) is a distributed system of physical objects that requires the seamless integration of hardware (e.g., sensors, actuators, electronics) and network communications in order to collect and exchange data. IoT smart objects need to be somehow identified to determine the origin of the data and to automatically detect the elements around us. One of the best positioned technologies to perform identification is RFID (Radio Frequency Identification), which in the last years has gained a lot of popularity in applications like access control, payment cards or logistics. Despite its popularity, RFID security has not been properly handled in numerous applications. To foster security in such applications, this article includes three main contributions. First, in order to establish the basics, a detailed review of the most common flaws found in RFID-based IoT systems is provided, including the latest attacks described in the literature. Second, a novel methodology that eases the detection and mitigation of such flaws is presented. Third, the latest RFID security tools are analyzed and the methodology proposed is applied through one of them (Proxmark 3) to validate it. Thus, the methodology is tested in different scenarios where tags are commonly used for identification. In such systems it was possible to clone transponders, extract information, and even emulate both tags and readers. Therefore, it is shown that the methodology proposed is useful for auditing security and reverse engineering RFID communications in IoT applications. It must be noted that, although this paper is aimed at fostering RFID communications security in IoT applications, the methodology can be applied to any RFID communications protocol.", "title": "" }, { "docid": "f301f87dee3c13d06e34f533bb69cf01", "text": "Representation of news events as latent feature vectors is essential for several tasks, such as news recommendation, news event linking, etc. However, representations proposed in the past fail to capture the complex network structure of news events. In this paper we propose Event2Vec, a novel way to learn latent feature vectors for news events using a network. We use recently proposed network embedding techniques, which are proven to be very effective for various prediction tasks in networks. As events involve different classes of nodes, such as named entities, temporal information, etc, general purpose network embeddings are agnostic to event semantics. To address this problem, we propose biased random walks that are tailored to capture the neighborhoods of news events in event networks. We then show that these learned embeddings are effective for news event recommendation and news event linking tasks using strong baselines, such as vanilla Node2Vec, and other state-of-the-art graph-based event ranking techniques.", "title": "" }, { "docid": "f31fa4bfc30cc4f0eff4399d16a077dd", "text": "BACKGROUND:Immunohistochemistry allowed recent recognition of a distinct focal gastritis in Crohn's disease. 
Following reports of lymphocytic colitis and small bowel enteropathy in children with regressive autism, we aimed to see whether similar changes were seen in the stomach. We thus studied gastric antral biopsies in 25 affected children, in comparison to 10 with Crohn's disease, 10 with Helicobacter pylori infection, and 10 histologically normal controls. All autistic, Crohn's, and normal patients were H. pylori negative.METHODS:Snap-frozen antral biopsies were stained for CD3, CD4, CD8, γδ T cells, HLA-DR, IgG, heparan sulphate proteoglycan, IgM, IgA, and C1q. Cell proliferation was assessed with Ki67.RESULTS:Distinct patterns of gastritis were seen in the disease states: diffuse, predominantly CD4+ infiltration in H. pylori, and focal-enhanced gastritis in Crohn's disease and autism, the latter distinguished by striking dominance of CD8+ cells, together with increased intraepithelial lymphocytes in surface, foveolar and glandular epithelium. Proliferation of foveolar epithelium was similarly increased in autism, Crohn's disease and H. pylori compared to controls. A striking finding, seen only in 20/25 autistic children, was colocalized deposition of IgG and C1q on the subepithelial basement membrane and the surface epithelium.CONCLUSIONS:These findings demonstrate a focal CD8-dominated gastritis in autistic children, with novel features. The lesion is distinct from the recently recognized focal gastritis of Crohn's disease, which is not CD8-dominated. As in the small intestine, there is epithelial deposition of IgG.", "title": "" }, { "docid": "bb334cad4724e6bd090f68ac3951273c", "text": "Despite extensive evidence for cognitive deficits associated with drug use and multiple publications supporting the efficacy of cognitive rehabilitation treatment (CRT) services for drug addictions, there are a few well-structured tools and organized programs to improve cognitive abilities in substance users. Most published studies on cognitive rehabilitation for drug dependent patients used rehabilitation tools, which have been previously designed for other types of brain injuries such as schizophrenia or traumatic brain injuries and not specifically designed for drug dependent patients. These studies also suffer from small sample size, lack of follow-up period assessments and or comprehensive treatment outcome measures. To address these limitations, we decided to develop and investigate the efficacy of a paper and pencil cognitive rehabilitation package called NECOREDA (Neurocognitive Rehabilitation for Disease of Addiction) to improve neurocognitive deficits associated with drug dependence particularly caused by stimulants (e.g. amphetamine type stimulants and cocaine) and opiates. To evaluate the feasibility of NECOREDA program, we conducted a pilot study with 10 opiate and methamphetamine dependent patients for 3 months in outpatient setting. NECOREDA was revised based on qualitative comments received from clients and treatment providers. Final version of NECOREDA is composed of brain training exercises called \"Brain Gym\" and psychoeducational modules called \"Brain Treasures\" which is implemented in 16 training sessions interleaved with 16 review and practice sessions. NECOREDA will be evaluated as an add-on intervention to methadone maintenance treatment in a randomized clinical trial among opiate dependent patients starting from August 2015. 
We discuss methodological features of NECOREDA development and evaluation in this article.", "title": "" }, { "docid": "73e6f03d67508bd2f04b955fc750c18d", "text": "Interleaving is a key component of many digital communication systems involving error correction schemes. It provides a form of time diversity to guard against bursts of errors. Recently, interleavers have become an even more integral part of the code design itself, if we consider for example turbo and turbo-like codes. In a non-cooperative context, such as passive listening, it is a challenging problem to estimate the interleaver parameters. In this paper we propose an algorithm that allows us to estimate the parameters of the interleaver at the output of a binary symmetric channel and to locate the codewords in the interleaved block. This gives us some clues about the interleaving function used.", "title": "" }, { "docid": "9d9714639d8f5c24bdb3f731f31c88d7", "text": "Controversy surrounds the function of the anterior cingulate cortex. Recent discussions about its role in behavioural control have centred on three main issues: its involvement in motor control, its proposed role in cognition and its relationship with the arousal/drive state of the organism. I argue that the overlap of these three domains is key to distinguishing the anterior cingulate cortex from other frontal regions, placing it in a unique position to translate intentions to actions.", "title": "" }, { "docid": "98689a2f03193a2fb5cc5195ef735483", "text": "Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement start to investigate the darknet markets to study the cybercriminal networks and predict future incidents. However, vendors in these markets often create multiple accounts (\\em i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove the possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series deep neural networks to model the photography styles. We apply transfer learning to the model training, which allows us to accurately fingerprint vendors with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into the coordinated Sybil activities such as price manipulation, buyer scam, and product stocking and reselling.", "title": "" }, { "docid": "976064ba00f4eb2020199f264d29dae2", "text": "Social network analysis is a large and growing body of research on the measurement and analysis of relational structure. Here, we review the fundamental concepts of network analysis, as well as a range of methods currently used in the field. 
Issues pertaining to data collection, analysis of single networks, network comparison, and analysis of individual-level covariates are discussed, and a number of suggestions are made for avoiding common pitfalls in the application of network methods to substantive questions.", "title": "" }, { "docid": "b7f1af8c7850ee68c19cf5a4588aeb57", "text": "The ‘ellipsoidal distribution’, in which angles are assumed to be distributed parallel to the surface of an oblate or prolate ellipsoid, has been widely used to describe the leaf angle distribution (LAD) of plant canopies. This ellipsoidal function is constrained to show a probability density of zero at an inclination angle of zero; however, actual LADs commonly show a peak probability density at zero, a pattern consistent with functional models of plant leaf display. A ‘rotated ellipsoidal distribution’ is described here, which geometrically corresponds to an ellipsoid in which small surface elements are rotated normal to the surface. Empirical LADs from canopy and understory species in an old-growth coniferous forest were used to compare the two models. In every case the rotated ellipsoidal function provided a better description of empirical data than did the non-rotated function, while retaining only a single parameter. The ratio of G-statistics for goodness of fit for the two functions ranged from 1.03 to 3.88. The improved fit is due to the fact that the rotated function always shows a probability density greater than zero at inclination angles of zero, can show a mode at zero, and more accurately characterizes the overall shape of empirical distributions. ©2000 Published by Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "ff40eca4b4a27573e102b40c9f70aea4", "text": "This paper is concerned with the question of how to online combine an ensemble of active learners so as to expedite the learning progress during a pool-based active learning session. We develop a powerful active learning master algorithm, based on a known competitive algorithm for the multi-armed bandit problem and a novel semi-supervised performance evaluation statistic. Taking an ensemble containing two of the best known active learning algorithms and a new algorithm, the resulting new active learning master algorithm is empirically shown to consistently perform almost as well as and sometimes outperform the best algorithm in the ensemble on a range of classification problems.", "title": "" }, { "docid": "99d57cef03e21531be9f9663ec023987", "text": "Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation.
Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.", "title": "" }, { "docid": "81f9a52b6834095cd7be70b39af0e7f0", "text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.", "title": "" }, { "docid": "f7b911eca27efc3b0535f8b48222f993", "text": "Numerous entity linking systems are addressing the entity recognition problem by using off-the-shelf NER systems. It is, however, a difficult task to select which specific model to use for these systems, since it requires to judge the level of similarity between the datasets which have been used to train models and the dataset at hand to be processed in which we aim to properly recognize entities. In this paper, we present the newest version of ADEL, our adaptive entity recognition and linking framework, where we experiment with an hybrid approach mixing a model combination method to improve the recognition level and to increase the efficiency of the linking step by applying a filter over the types. We obtain promising results when performing a 4-fold cross validation experiment on the OKE 2016 challenge training dataset. We also demonstrate that we achieve better results that in our previous participation on the OKE 2015 test set. We finally report the results of ADEL on the OKE 2016 test set and we present an error analysis highlighting the main difficulties of this challenge.", "title": "" }, { "docid": "5e0bcb6cf54879c65e9da7a08d97bc6b", "text": "The present study made an attempt to analyze the existing buying behaviour of Instant Food Products by individual households and to predict the demand for Instant Food Products of Hyderabad city in Andra Padesh .All the respondents were aware of pickles and Sambar masala but only 56.67 per cent of respondents were aware of Dosa/Idli mix. About 96.11 per cent consumers of Dosa/Idli mix and more than half of consumers of pickles and Sambar masala prepared their own. Low cost of home preparation and differences in tastes were the major reasons for non consumption, whereas ready availability and save time of preparation were the reasons for consuming Instant Food Products. Retail shops are the major source of information and source of purchase of Instant Food Products. The average monthly expenditure on Instant Food Products was found to be highest in higher income groups. 
The average per capita purchase and per capita expenditure on Instant Food Products had a positive relationship with income of households. High price and poor taste were the reasons for not purchasing a particular brand, whereas best quality, retailers' influence and ready availability were considered for preferring a particular brand of products by the consumers.", "title": "" }, { "docid": "79e9a4586da238e29d8a9175d9bad827", "text": "We describe generative programming, an approach to generating customized programming components or systems, and active libraries, which are based on this approach. In contrast to conventional libraries, active libraries may contain metaprograms that implement domain-specific code generation, optimizations, debugging, profiling and testing. Several working examples (Blitz++, GMCL, Xroma) are presented to illustrate the potential of active libraries. We discuss relevant implementation technologies.", "title": "" }, { "docid": "e54f649fced7c82b643b9ada2dca6187", "text": "Some 3D computer vision techniques such as structure from motion (SFM) and augmented reality (AR) depend on a specific perspective-n-point (PnP) algorithm to estimate the absolute camera pose. However, existing PnP algorithms are difficult to achieve a good balance between accuracy and efficiency, and most of them do not make full use of the internal camera information such as focal length. In order to attack these drawbacks, we propose a fast and robust PnP (FRPnP) method to calculate the absolute camera pose for 3D computer vision. In the proposed FRPnP method, we firstly formulate the PnP problem as the optimization problem in the null space that can avoid the effects of the depth of each 3D point. Secondly, we can easily get the solution by the direct manner using singular value decomposition. Finally, the accurate information of camera pose can be obtained by optimization strategy. We explore four ways to evaluate the proposed FRPnP algorithm with synthetic dataset, real images, and apply it in the AR and SFM system. Experimental results show that the proposed FRPnP method can obtain the best balance between computational cost and precision, and clearly outperforms the state-of-the-art PnP methods.", "title": "" } ]
scidocsrr
068e4e157ac7017ba36db28dfcb53191
SQL-Injection Security Evolution Analysis in PHP
[ { "docid": "79e1023e1928e95317f584ec92f54ca0", "text": "Identifying code duplication in large multi-platform software systems is a challenging problem. This is due to a variety of reasons including the presence of high-level programming languages and structures interleaved with hardware-dependent low-level resources and assembler code, the use of GUI-based configuration scripts generating commands to compile the system, and the extremely high number of possible different configurations. This paper studies the extent and the evolution of code duplications in the Linux kernel. Linux is a large, multi-platform software system; it is based on the Open Source concept, and so there are no obstacles in discussing its implementation. In addition, it is decidedly too large to be examined manually: the current Linux kernel release (2.4.18) is about three million LOCs. Nineteen releases, from 2.4.0 to 2.4.18, were processed and analyzed, identifying code duplication among Linux subsystems by means of a metric-based approach. The obtained results support the hypothesis that the Linux system does not contain a relevant fraction of code duplication. Furthermore, code duplication tends to remain stable across releases, thus suggesting a fairly stable structure, evolving smoothly without any evidence of degradation. © 2002 Published by Elsevier Science B.V.", "title": "" } ]
[ { "docid": "c7f0a749e38b3b7eba871fca80df9464", "text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.", "title": "" }, { "docid": "64cbc5ec72c81bd44e992076de5edc56", "text": "The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G : R → R. Our main theorem is that, if G is L-Lipschitz, then roughly O(k logL) random Gaussian measurements suffice for an `2/`2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.", "title": "" }, { "docid": "86000fd18e5608ca92a46c9f7fc4a04c", "text": "The objective of consensus clustering is to find a single partitioning which agrees as much as possible with existing basic partitionings. Consensus clustering emerges as a promising solution to find cluster structures from heterogeneous data. As an efficient approach for consensus clustering, the K-means based method has garnered attention in the literature, however the existing research efforts are still preliminary and fragmented. To that end, in this paper, we provide a systematic study of K-means-based consensus clustering (KCC). Specifically, we first reveal a necessary and sufficient condition for utility functions which work for KCC. This helps to establish a unified framework for KCC on both complete and incomplete data sets. Also, we investigate some important factors, such as the quality and diversity of basic partitionings, which may affect the performances of KCC. Experimental results on various realworld data sets demonstrate that KCC is highly efficient and is comparable to the state-of-the-art methods in terms of clustering quality. In addition, KCC shows high robustness to incomplete basic partitionings with many missing values.", "title": "" }, { "docid": "7f390d8dfd98d03ad4e7b56948c8adce", "text": "Recent advances in deep learning have enabled the extraction of high-level features from raw sensor data which has opened up new possibilities in many different fields, including computer generated choreography. In this paper we present a system chorrnn for generating novel choreographic material in the nuanced choreographic language and style of an individual choreographer. 
It also shows promising results in producing a higher level compositional cohesion, rather than just generating sequences of movement. At the core of chor-rnn is a deep recurrent neural network trained on raw motion capture data and that can generate new dance sequences for a solo dancer. Chor-rnn can be used for collaborative human-machine choreography or as a creative catalyst, serving as inspiration for a choreographer.", "title": "" }, { "docid": "de9aa1b5c6e61da518e87a55d02c45e9", "text": "A novel type of dual-mode microstrip bandpass filter using degenerate modes of a meander loop resonator has been developed for miniaturization of high selectivity narrowband microwave bandpass filters. A filter of this type having a 2.5% bandwidth at 1.58 GHz was designed and fabricated. The measured filter performance is presented.", "title": "" }, { "docid": "8c436595c7f453b565ae0c974d86c4fb", "text": "To determine the enhancing effect of a whey protein isolate on the cytotoxicity of a potential anticancer drug, baicalein, the human hepatoma cell line Hep G2 was assigned to grow in different media for four days, and cell growth and apoptosis were investigated. The control group was grown in normal medium; the other three groups were grown in whey protein isolate (Immunocal) medium, baicalein medium, and a combination of Immunocal and baicalein. As indicated by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide assay, survival rate was significantly lower in cells grown in baicalein + Immunocal than in cells grown in baicalein alone. In contrast, there was no significant difference in survival rate of the cells grown in Immunocal. In the investigation of apoptosis, cells grown in baicalein + Immunocal showed a higher phosphatidylserine exposure, lower mitochondrial transmembrane potential, and nearly 13 times more cells undergoing apoptosis than cells grown in baicalein alone. We also demonstrated that Immunocal reduced glutathione (GSH) in Hep G2 cells by 20-40% and regulated the elevation of GSH, which was in response to baicalein. In conclusion, Immunocal seemed to enhance the cytotoxicity of baicalein by inducing more apoptosis; this increase in apoptotic cells may be associated with the depletion of GSH in Hep G2 cells. This is the first study to demonstrate, in vitro, that Immunocal may function as an adjuvant in cancer treatments.", "title": "" }, { "docid": "6a541e92e92385c27ceec1e55a50b46e", "text": "BACKGROUND\nWe retrospectively studied the outcome of Pavlik harness treatment in late-diagnosed hip dislocation in infants between 6 and 24 months of age (Graf type 3 and 4 or dislocated hips on radiographs) treated in our hospital between 1984 and 2004. The Pavlik harness was progressively applied to improve both flexion and abduction of the dislocated hip. In case of persistent adduction contracture, an abduction splint was added temporarily to improve the abduction.\n\n\nMETHODS\nWe included 24 patients (26 hips) between 6 and 24 months of age who presented with a dislocated hip and primarily treated by Pavlik harness in our hospital between 1984 and 2004. The mean age at diagnosis was 9 months (range 6 to 23 mo). The average follow-up was 6 years 6 months (2 to 12 y). Ultrasound images and radiographs were assessed at the time of diagnosis, one year after reposition and at last follow-up.\n\n\nRESULTS\nTwelve of the twenty-six hips (46%) were successfully reduced with Pavlik harness after an average treatment of 14 weeks (4 to 28 wk). 
One patient (9%) needed a secondary procedure 1 year 9 months after reposition because of residual dysplasia (Pelvis osteotomy). Seventeen of the 26 hips were primary diagnosed by Ultrasound according to the Graf classification. Ten had a Graf type 3 hip and 7 hips were classified as Graf type 4. The success rate was 60% for the type 3 hips and 0% for the type 4 hips. (P=0.035). None of the hips that were reduced with the Pavlik harness developed an avascular necrosis (AVN). Of the hips that failed the Pavlik harness treatment, three hips showed signs of AVN, 1 after closed reposition and 2 after open reposition.\n\n\nCONCLUSION\nThe use of a Pavlik harness in the late-diagnosed hip dislocation type Graf 3 can be a successful treatment option in the older infant. We have noticed few complications in these patients maybe due to progressive and gentle increase of abduction and flexion, with or without temporary use of an abduction splint. The treatment should be abandoned if the hips are not reduced after 6 weeks. None of the Graf 4 hips could be reduced successfully by Pavlik harness. This was significantly different from the success rate for the Graf type 3 hips.\n\n\nLEVEL OF EVIDENCE\nTherapeutic study, clinical case series: Level IV.", "title": "" }, { "docid": "d86eb92d0d9b35b68f42b03c6587cfe3", "text": "Introduction The badminton smash is an essential component of a player’s repertoire and a significant stroke in gaining success as it is the most common winning shot, accounting for 53.9% of winning shots (Tsai and Chang, 1998; Tong and Hong, 2000; Rambely et al., 2005). The speed of the shuttlecock exceeds that of any other racket sport projectile with a maximum shuttle speed of 493 km/h (306 mph) reported in 2013 by Tan Boon Heong. If a player is able to cause the shuttle to travel at a higher velocity and give the opponent less reaction time to the shot, it would be expected that the smash would be a more effective weapon (Kollath, 1996; Sakurai and Ohtsuki, 2000).", "title": "" }, { "docid": "17c3e9af0d6bc8cd4e0915df0b9b2bf3", "text": "The focus of the three previous chapters has been on context-free grammars and their use in automatically generating constituent-based representations. Here we present another family of grammar formalisms called dependency grammars that Dependency grammar are quite important in contemporary speech and language processing systems. In these formalisms, phrasal constituents and phrase-structure rules do not play a direct role. Instead, the syntactic structure of a sentence is described solely in terms of the words (or lemmas) in a sentence and an associated set of directed binary grammatical relations that hold among the words. The following diagram illustrates a dependency-style analysis using the standard graphical method favored in the dependency-parsing community. (14.1) I prefer the morning flight through Denver nsubj dobj det nmod nmod case root Relations among the words are illustrated above the sentence with directed, labeled arcs from heads to dependents. We call this a typed dependency structure Typed dependency because the labels are drawn from a fixed inventory of grammatical relations. It also includes a root node that explicitly marks the root of the tree, the head of the entire structure. Figure 14.1 shows the same dependency analysis as a tree alongside its corresponding phrase-structure analysis of the kind given in Chapter 11. 
Note the absence of nodes corresponding to phrasal constituents or lexical categories in the dependency parse; the internal structure of the dependency parse consists solely of directed relations between lexical items in the sentence. These relationships directly encode important information that is often buried in the more complex phrase-structure parses. For example, the arguments to the verb prefer are directly linked to it in the dependency structure, while their connection to the main verb is more distant in the phrase-structure tree. Similarly, morning and Denver, modifiers of flight, are linked to it directly in the dependency structure. A major advantage of dependency grammars is their ability to deal with languages that are morphologically rich and have a relatively free word order. For example, word order in Czech can be much more flexible than in English; a grammatical object might occur before or after a location adverbial. A phrase-structure grammar would need a separate rule for each possible place in the parse tree where such an adverbial phrase could occur. A dependency-based approach would just have one link type representing this particular adverbial relation. Thus, a dependency grammar approach abstracts away from word-order information, …", "title": "" }, { "docid": "df70cb4b1d37680cccb7d79bdea5d13b", "text": "In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision. In recent years, several online support groups have been established which has led to a huge increase in the amount of patient-authored text available. Creating systems which can automatically extract important medical events and create disease progression timelines for users from such text can help in patient health monitoring as well as studying links between medical events and users' participation in support groups. Prior work in this domain has used manually constructed keyword sets to detect medical events. In this work, our aim is to perform medical event detection using minimal supervision in order to develop a more general timeline construction system. Our system achieves an accuracy of 55.17%, which is 92% of the performance achieved by a supervised baseline system.", "title": "" }, { "docid": "e9698e55abb8cee0f3a5663517bd0037", "text": "The definition and modeling of customer loyalty have been central issues in customer relationship management since many years. Recent papers propose solutions to detect customers that are becoming less loyal, also called churners. The churner status is then defined as a function of the volume of commercial transactions. In the context of a Belgian retail financial service company, our first contribution is to redefine the notion of customer loyalty by considering it from a customer-centric viewpoint instead of a product-centric one. We hereby use the customer lifetime value (CLV) defined as the discounted value of future marginal earnings, based on the customer's activity. Hence, a churner is defined as someone whose CLV, thus the related marginal profit, is decreasing. As a second contribution, the loss incurred by the CLV decrease is used to appraise the cost to misclassify a customer by introducing a new loss function.
In the empirical study, we compare the accuracy of various classification techniques commonly used in the domain of churn prediction, including two cost-sensitive classifiers. Our final conclusion is that since profit is what really matters in a commercial environment, standard statistical accuracy measures for prediction need to be revised and a more profit oriented focus may be desirable. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "86b36ae7b039da4b5b195262725a8373", "text": "INTRODUCTION\nNext to existing terminology of the lower urinary tract, due to its increasing complexity, the terminology for pelvic floor dysfunction in women may be better updated by a female-specific approach and clinically based consensus report.\n\n\nMETHODS\nThis report combines the input of members of the Standardization and Terminology Committees of two international organizations, the International Urogynecological Association (IUGA), and the International Continence Society (ICS), assisted at intervals by many external referees. Appropriate core clinical categories and a subclassification were developed to give an alphanumeric coding to each definition. An extensive process of 15 rounds of internal and external review was developed to exhaustively examine each definition, with decision-making by collective opinion (consensus).\n\n\nRESULTS\nA terminology report for female pelvic floor dysfunction, encompassing over 250 separate definitions, has been developed. It is clinically based with the six most common diagnoses defined. Clarity and user-friendliness have been key aims to make it interpretable by practitioners and trainees in all the different specialty groups involved in female pelvic floor dysfunction. Female-specific imaging (ultrasound, radiology, and MRI) has been a major addition while appropriate figures have been included to supplement and help clarify the text. Ongoing review is not only anticipated but will be required to keep the document updated and as widely acceptable as possible.\n\n\nCONCLUSION\nA consensus-based terminology report for female pelvic floor dysfunction has been produced aimed at being a significant aid to clinical practice and a stimulus for research.", "title": "" }, { "docid": "f9bc2b91d31b3aa8ccbdfbfdae363fd8", "text": "Motor control is the study of how organisms make accurate goal-directed movements. Here we consider two problems that the motor system must solve in order to achieve such control. The first problem is that sensory feedback is noisy and delayed, which can make movements inaccurate and unstable. The second problem is that the relationship between a motor command and the movement it produces is variable, as the body and the environment can both change. A solution is to build adaptive internal models of the body and the world. The predictions of these internal models, called forward models because they transform motor commands into sensory consequences, can be used to both produce a lifetime of calibrated movements, and to improve the ability of the sensory system to estimate the state of the body and the world around it. Forward models are only useful if they produce unbiased predictions. Evidence shows that forward models remain calibrated through motor adaptation: learning driven by sensory prediction errors.", "title": "" }, { "docid": "20f3b5b42f33056276c44fe4b2f655d2", "text": "We explore unsupervised representation learning of radio communication signals in raw sampled time series representation. 
We demonstrate that we can learn modulation basis functions using convolutional autoencoders and visually recognize their relationship to the analytic bases used in digital communications. We also propose and evaluate quantitative metrics for quality of encoding using domain relevant performance metrics.", "title": "" }, { "docid": "279ef6239b6e072588c93ed282942a1a", "text": "Recent studies on the neural bases of sensorimotor adaptation demonstrate that the cerebellar and striatal thalamocortical pathways contribute to early learning. Transfer of learning involves a reduction in the contribution of early learning networks and increased reliance on the cerebellum. The neural correlates of learning to learn remain to be determined but likely involve enhanced functioning of the general aspects of early learning.", "title": "" }, { "docid": "c9c9af3680df50d4dd72c73c90a41893", "text": "BACKGROUND\nVideo games provide extensive player involvement for large numbers of children and adults, and thereby provide a channel for delivering health behavior change experiences and messages in an engaging and entertaining format.\n\n\nMETHOD\nTwenty-seven articles were identified on 25 video games that promoted health-related behavior change through December 2006.\n\n\nRESULTS\nMost of the articles demonstrated positive health-related changes from playing the video games. Variability in what was reported about the games and measures employed precluded systematically relating characteristics of the games to outcomes. Many of these games merged the immersive, attention-maintaining properties of stories and fantasy, the engaging properties of interactivity, and behavior-change technology (e.g., tailored messages, goal setting). Stories in video games allow for modeling, vicarious identifying experiences, and learning a story's \"moral,\" among other change possibilities.\n\n\nCONCLUSIONS\nResearch is needed on the optimal use of game-based stories, fantasy, interactivity, and behavior change technology in promoting health-related behavior change.", "title": "" }, { "docid": "3f26885065251a6108072b4c0b4de5df", "text": "We present a Few-Shot Relation Classification Dataset (FewRel), consisting of 70, 000 sentences on 100 relations derived from Wikipedia and annotated by crowdworkers. The relation of each sentence is first recognized by distant supervision methods, and then filtered by crowdworkers. We adapt the most recent state-of-the-art few-shot learning methods for relation classification and conduct thorough evaluation of these methods. Empirical results show that even the most competitive few-shot learning models struggle on this task, especially as compared with humans. We also show that a range of different reasoning skills are needed to solve our task. These results indicate that few-shot relation classification remains an open problem and still requires further research. Our detailed analysis points multiple directions for future research. All details and resources about the dataset and baselines are released on http://zhuhao.me/fewrel.", "title": "" }, { "docid": "81c8b1e9c54d089bc63166866e88bb17", "text": "Performing literature survey for scholarly activities has become a challenging and time consuming task due to the rapid growth in the number of scientific articles. Thus, automatic recommendation of high quality citations for a given scientific query topic is immensely valuable. The state-of-the-art on the problem of citation recommendation suffers with the following three limitations. 
First, most of the existing approaches for citation recommendation require input in the form of either the full article or a seed set of citations, or both. Nevertheless, obtaining the recommendation for citations given a set of keywords is extremely useful for many scientific purposes. Second, the existing techniques for citation recommendation aim at suggesting prestigious and well-cited articles. However, we often need recommendation of diversified citations of the given query topic for many scientific purposes; for instance, it helps authors to write survey papers on a topic and it helps scholars to get a broad view of key problems on a topic. Third, one of the problems in the keyword based citation recommendation is that the search results typically would not include the semantically correlated articles if these articles do not use exactly the same keywords. To the best of our knowledge, there is no known citation recommendation system in the literature that addresses the above three limitations simultaneously. In this paper, we propose a novel citation recommendation system called DiSCern to precisely address the above research gap. DiSCern finds relevant and diversified citations in response to a search query, in terms of keyword(s) to describe the query topic, while using only the citation graph and the keywords associated with the articles, and no latent information. We use a novel keyword expansion step, inspired by community finding in social network analysis, in DiSCern to ensure that the semantically correlated articles are also included in the results. Our proposed approach primarily builds on the Vertex Reinforced Random Walk (VRRW) to balance prestige and diversity in the recommended citations. We demonstrate the efficacy of DiSCern empirically on two datasets: a large publication dataset of more than 1.7 million articles in computer science domain and a dataset of more than 29,000 articles in theoretical high-energy physics domain. The experimental results show that our proposed approach is quite efficient and it outperforms the state-of-the-art algorithms in terms of both relevance and diversity.", "title": "" }, { "docid": "18233af1857390bff51d2e713bc766d9", "text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. 
Using a published dataset, we obtained the highest K-measure, which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.", "title": "" }, { "docid": "678a4872dfe753bac26bff2b29ac26b0", "text": "Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output from learning components lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.", "title": "" } ]
scidocsrr
9976bfd16745a47353d82eb282b0d6d9
More than a neuroanatomical representation in The Creation of Adam by Michelangelo Buonarroti, a representation of the Golden Ratio.
[ { "docid": "b261534c045299c1c3a0e0cc37caa618", "text": "Michelangelo (1475-1564) had a life-long interest in anatomy that began with his participation in public dissections in his early teens, when he joined the court of Lorenzo de' Medici and was exposed to its physician-philosopher members. By the age of 18, he began to perform his own dissections. His early anatomic interests were revived later in life when he aspired to publish a book on anatomy for artists and to collaborate in the illustration of a medical anatomy text that was being prepared by the Paduan anatomist Realdo Colombo (1516-1559). His relationship with Colombo likely began when Colombo diagnosed and treated him for nephrolithiasis in 1549. He seems to have developed gouty arthritis in 1555, making the possibility of uric acid stones a distinct probability. Recurrent urinary stones until the end of his life are well documented in his correspondence, and available documents imply that he may have suffered from nephrolithiasis earlier in life. His terminal illness with symptoms of fluid overload suggests that he may have sustained obstructive nephropathy. That this may account for his interest in kidney function is evident in his poetry and drawings. Most impressive in this regard is the mantle of the Creator in his painting of the Separation of Land and Water in the Sistine Ceiling, which is in the shape of a bisected right kidney. His use of the renal outline in a scene representing the separation of solids (Land) from liquid (Water) suggests that Michelangelo was likely familiar with the anatomy and function of the kidney as it was understood at the time.", "title": "" } ]
[ { "docid": "407561ea1df1544c94e2516d66a40dcc", "text": "This paper reviews current technological developments in polarization engineering and the control of the quantum-confined Stark effect (QCSE) for InxGa1- xN-based quantum-well active regions, which are generally employed in visible LEDs for solid-state lighting applications. First, the origin of the QCSE in III-N wurtzite semiconductors is introduced, and polarization-induced internal fields are discussed in order to provide contextual background. Next, the optical and electrical properties of InxGa1- xN-based quantum wells that are affected by the QCSE are described. Finally, several methods for controlling the QCSE of InxGa1- xN-based quantum wells are discussed in the context of performance metrics of visible light emitters, considering both pros and cons. These strategies include doping control, strain/polarization field/electronic band structure control, growth direction control, and crystalline structure control.", "title": "" }, { "docid": "dc23ec643882393b69adca86c944bef4", "text": "This memo describes a snapshot of the reasoning behind a proposed new namespace, the Host Identity namespace, and a new protocol layer, the Host Identity Protocol (HIP), between the internetworking and transport layers. Herein are presented the basics of the current namespaces, their strengths and weaknesses, and how a new namespace will add completeness to them. The roles of this new namespace in the protocols are defined. The memo describes the thinking of the authors as of Fall 2003. The architecture may have evolved since. This document represents one stable point in that evolution of understanding.", "title": "" }, { "docid": "4e7106a78dcf6995090669b9a25c9551", "text": "In this paper partial discharges (PD) in disc-shaped cavities in polycarbonate are measured at variable frequency (0.01-100 Hz) of the applied voltage. The advantage of PD measurements at variable frequency is that more information about the insulation system may be extracted than from traditional PD measurements at a single frequency (usually 50/60 Hz). The PD activity in the cavity is seen to depend on the applied frequency. Moreover, the PD frequency dependence changes with the applied voltage amplitude, the cavity diameter, and the cavity location (insulated or electrode bounded). It is suggested that the PD frequency dependence is governed by the statistical time lag of PD and the surface charge decay in the cavity. This is the first of two papers addressing the frequency dependence of PD in a cavity. In the second paper a physical model of PD in a cavity at variable applied frequency is presented.", "title": "" }, { "docid": "78fafa0e14685d317ab88361d0a0dc8c", "text": "Industry analysts expect volume production of integrated circuits on 300-mm wafers to start in 2001 or 2002. At that time, appropriate production equipment must be available. To meet this need, the MEDEA Project has supported us at ASM Europe in developing an advanced vertical batch furnace system for 300-mm wafers. Vertical furnaces are widely used for many steps in the production of integrated circuits. In volume production, these batch furnaces achieve a lower cost per production step than single-wafer processing methods. Applications for vertical furnaces are extensive, including the processing of low-pressure chemical vapor deposition (LPCVD) layers such as deposited oxides, polysilicon, and nitride. Furthermore, the furnaces can be used for oxidation and annealing treatments. 
As the complexity of IC technology increases, production equipment must meet the technology guidelines summarized in Table 1 from the Semiconductor Industry Association’s Roadmap. The table shows that the minimal feature size will sharply decrease, and likewise the particle size and level will decrease. The challenge in designing a new generation of furnaces for 300-mm wafers was to improve productivity as measured in throughput (number of wafers processed per hour), clean-room footprint, and capital cost. Therefore, we created a completely new design rather than simply upscaling the existing 200mm equipment.", "title": "" }, { "docid": "950c29856f0afb6d51f94d75a76e6941", "text": "A developmental theory of reckless behavior among adolescents is presented, in which sensation seeking and adolescent egocentrism are especially prominent factors. Findings from studies of automobile driving, sex without contraception, illegal drug use, and minor criminal activity are presented in evidence of this. The influence of peers is then discussed and reinterpreted in the light of sensation seeking and adolescent egocentrism. Socialization influences are considered in interaction with sensation seeking and adolescent egocentrism, and the terms narrow and broad socialization are introduced. Factors that may be responsible for the decline of reckless behavior with age are discussed. © 1992 Academic", "title": "" }, { "docid": "61f9b5b698c847bfb6316fdb5481d529", "text": "We present a feature vector formation technique for documents Sparse Composite Document Vector (SCDV) which overcomes several shortcomings of the current distributional paragraph vector representations that are widely used for text representation. In SCDV, word embeddings are clustered to capture multiple semantic contexts in which words occur. They are then chained together to form document topic-vectors that can express complex, multi-topic documents. Through extensive experiments on multi-class and multi-label classification tasks, we outperform the previous state-of-the-art method, NTSG (Liu et al., 2015a). We also show that SCDV embeddings perform well on heterogeneous tasks like Topic Coherence, context-sensitive Learning and Information Retrieval. Moreover, we achieve significant reduction in training and prediction times compared to other representation methods. SCDV achieves best of both worlds better performance with lower time and space complexity.", "title": "" }, { "docid": "d390ba28e1bb9fdb72b2de8498838806", "text": "Named Entity Disambiguation algorithms typically learn a single model for all target entities. In this paper we present a word expert model and train separate deep learning models for each target entity string, yielding 500K classification tasks. This gives us the opportunity to benchmark popular text representation alternatives on this massive dataset. In order to face scarce training data we propose a simple data-augmentation technique and transfer-learning. We show that bagof-word-embeddings are better than LSTMs for tasks with scarce training data, while the situation is reversed when having larger amounts. Transferring an LSTM which is learned on all datasets is the most effective context representation option for the word experts in all frequency bands. 
The experiments show that our system trained on out-ofdomain Wikipedia data surpasses comparable NED systems which have been trained on indomain training data.", "title": "" }, { "docid": "e0632c0bb393eb567f8bcc21468742b2", "text": "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.", "title": "" }, { "docid": "0016ef3439b78a29c76a14e8db2a09be", "text": "In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior be best evolved? A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called multiagent enforced subpopulations (multiagent ESP), is proposed and demonstrated in a prey-capture task. First, the approach is shown to be more efficient than evolving a single central controller for all agents. Second, cooperation is found to be most efficient through stigmergy, i.e., through role-based responses to the environment, rather than communication between the agents. Together these results suggest that role-based cooperation is an effective strategy in certain multiagent tasks.", "title": "" }, { "docid": "b83e784d3ec4afcf8f6ed49dbe90e157", "text": "In this paper, the impact of an increased number of layers on the performance of axial flux permanent magnet synchronous machines (AFPMSMs) is studied. The studied parameters are the inductance, terminal voltages, PM losses, iron losses, the mean value of torque, and the ripple torque. It is shown that increasing the number of layers reduces the fundamental winding factor. In consequence, the rated torque for the same current reduces. However, the reduction of harmonics associated with a higher number of layers reduces the ripple torque, PM losses, and iron losses. Besides studying the performance of the AFPMSMs for the rated conditions, the study is broadened for the field weakening (FW) region. During the FW region, the flux of the PMs is weakened by an injection of a reversible d-axis current. This keeps the terminal voltage of the machine fixed at the rated value. The inductance plays an important role in the FW study. 
A complete study for the FW shows that the two layer winding has the optimum performance compared to machines with an other number of winding layers.", "title": "" }, { "docid": "0b1db23ae4767d7653e3198919706e99", "text": "Greenhouse cultivation has evolved from simple covered rows of open-fields crops to highly sophisticated controlled environment agriculture (CEA) facilities that projected the image of plant factories for urban agriculture. The advances and improvements in CEA have promoted the scientific solutions for the efficient production of plants in populated cities and multi-story buildings. Successful deployment of CEA for urban agriculture requires many components and subsystems, as well as the understanding of the external influencing factors that should be systematically considered and integrated. This review is an attempt to highlight some of the most recent advances in greenhouse technology and CEA in order to raise the awareness for technology transfer and adaptation, which is necessary for a successful transition to urban agriculture. This study reviewed several aspects of a high-tech CEA system including improvements in the frame and covering materials, environment perception and data sharing, and advanced microclimate control and energy optimization models. This research highlighted urban agriculture and its derivatives, including vertical farming, rooftop greenhouses and plant factories which are the extensions of CEA and have emerged as a response to the growing population, environmental degradation, and urbanization that are threatening food security. Finally, several opportunities and challenges have been identified in implementing the integrated CEA and vertical farming for urban agriculture.", "title": "" }, { "docid": "ad34926aa46429f194a19892732b1e9c", "text": "_____________________________ *This article is based on the authors’ article “Nicchu Anime Sangyo No Shijo-Soudatsu ~ Kokusan Anime Shinkou Wo Hakaru Chugoku To Dou Mukiaunoka~” [Market Competition in the Animation Industry Between Japan and China ~How to Face China’s Rising Interest in Promoting Domestically-Produced Animation~], originally published in the April 2012 issue of “Hoso Kenkyu to Chousa” [the NHK monthly report on Broadcast Research]. Full text in Japanese available below: http://www.nhk.or.jp/bunken/summary/research/report/2012_04/20120404.pdf", "title": "" }, { "docid": "2cd3130e123a440cd91edafc4a6848fa", "text": "The aim of this research is to provide a design of an integrated intelligent system for management and controlling traffic lights based on distributed long range Photoelectric Sensors in distances prior to and after the traffic lights. The appropriate distances for sensors are chosen by the traffic management department so that they can monitor cars that are moving towards a specific traffic and then transfer this data to the intelligent software that are installed in the traffic control cabinet, which can control the traffic lights according to the measures that the sensors have read, and applying a proposed algorithm based on the total calculated relative weight of each road. Accordingly, the system will open the traffic that are overcrowded and give it a longer time larger than the given time for other traffics that their measures proved that their traffic density is less. This system can be programmed with very important criteria that enable it to take decisions for intelligent automatic control of traffic lights. 
Also the proposed system is designed to accept information about any emergency case through an active RFID based technology. Emergency cases such as the passing of presidents, ministries and ambulances vehicles that require immediate opening for the traffic automatically. The system has the ability to open a complete path for such emergency cases from the next traffic until reaching the target destination. (end of the path). As a result the system will guarantee the fluency of traffic for such emergency cases or for the main vital streets and paths that require the fluent traffic all the time, without affecting the fluency of traffic generally at normal streets according to the time of the day and the traffic density. Also the proposed system can be tuned to run automatically without any human intervention or can be tuned to allow human intervention at certain circumstances.", "title": "" }, { "docid": "4a6d48bd0f214a94f2137f424dd401eb", "text": "During the past decade, scientific research has provided new insight into the development from an acute, localised musculoskeletal disorder towards chronic widespread pain/fibromyalgia (FM). Chronic widespread pain/FM is characterised by sensitisation of central pain pathways. An in-depth review of basic and clinical research was performed to design a theoretical framework for manual therapy in these patients. It is explained that manual therapy might be able to influence the process of chronicity in three different ways. (I) In order to prevent chronicity in (sub)acute musculoskeletal disorders, it seems crucial to limit the time course of afferent stimulation of peripheral nociceptors. (II) In the case of chronic widespread pain and established sensitisation of central pain pathways, relatively minor injuries/trauma at any locations are likely to sustain the process of central sensitisation and should be treated appropriately with manual therapy accounting for the decreased sensory threshold. Inappropriate pain beliefs should be addressed and exercise interventions should account for the process of central sensitisation. (III) However, manual therapists ignoring the processes involved in the development and maintenance of chronic widespread pain/FM may cause more harm then benefit to the patient by triggering or sustaining central sensitisation.", "title": "" }, { "docid": "543a0cdc8101c6f253431c8a4d697be6", "text": "While significant progress has been made in the image captioning task, video description is still comparatively in its infancy, due to the complex nature of video data. Generating multi-sentence descriptions for long videos is even more challenging. Among the main issues are the fluency and coherence of the generated descriptions, and their relevance to the video. Recently, reinforcement and adversarial learning based methods have been explored to improve the image captioning models; however, both types of methods suffer from a number of issues, e.g. poor readability and high redundancy for RL and stability issues for GANs. In this work, we instead propose to apply adversarial techniques during inference, designing a discriminator which encourages better multi-sentence video description. In addition, we find that a multi-discriminator “hybrid” design, where each discriminator targets one aspect of a description, leads to the best results. Specifically, we decouple the discriminator to evaluate on three criteria: 1) visual relevance to the video, 2) language diversity and fluency, and 3) coherence across sentences. 
Our approach results in more accurate, diverse and coherent multi-sentence video descriptions, as shown by automatic as well as human evaluation on the popular ActivityNet Captions dataset.", "title": "" }, { "docid": "13f9fd9879c0a08d2e7ba457875f01f5", "text": "We compare two different groups of visual features that can be used in addition to audio to improve automatic speech recognition (ASR), high- and low-level visual features. Facial animation parameters (FAPs), supported by the MPEG-4 standard for the visual representation of speech, are used as high-level visual features. Principal component analysis (PCA) based projection weights of the intensity images of the mouth area are used as low-level visual features. PCA is also applied on the FAPs. We develop an audio-visual ASR (AV-ASR) system and compare its performance for two different visual feature groups, following two approaches. The first approach assumes the same dimensionality for both high- and low-level visual features, while, in the second approach, the percentage of statistical variance described by the visual features used is the same. Multi-stream hidden Markov models (HMMs) and a late integration approach are used to integrate audio and visual information and perform continuous AV-ASR experiments. Experiments were performed at various SNRs (0-30 dB) with additive white Gaussian noise on a relatively large vocabulary database (approximately 1000 words). Conclusions are drawn on the trade off between the dimensionality of the visual features and the amount of speechreading information contained in them and its influence on the AV-ASR performance.", "title": "" }, { "docid": "742dbc3c68771953899228627b1f894e", "text": "This thesis will examine and evaluate different mechanics that could be used in games using augmented reality. Augmented reality, the technology used to integrate computer-generated images with the real world environment, allows developers to enhance a user’s gaming experience. The different mechanics will focus on immersion and on user engagement and examine which of the two is more important in games. This is examined by implementing the different mechanics in an application for a Google Tango tablet. Immersion is created by letting the environment act on virtual objects, via occlusion culling. The virtual agent interacts with the real world to generate engagement. The different methods are surveyed online, and user tests performed with the application. The results showed how the concept of combining the surveyed methods of generating immersion and engagement using augmented reality was successful.", "title": "" }, { "docid": "849f89d0007ec44c45257f07f08ba1d1", "text": "This paper presents Autobank, a prototype tool for constructing a widecoverage Minimalist Grammar (MG) (Stabler, 1997), and semi-automatically converting the Penn Treebank (PTB) into a deep Minimalist treebank. The front end of the tool is a graphical user interface which facilitates the rapid development of a seed set of MG trees via manual reannotation of PTB preterminals with MG lexical categories. The system then extracts various dependency mappings between the source and target trees, and uses these in concert with a non-statistical MG parser to automatically reannotate the rest of the corpus. 
Autobank thus enables deep treebank conversions (and subsequent modifications) without the need for complex transduction algorithms accompanied by cascades of ad hoc rules; instead, the locus of human effort falls directly on the task of grammar construction itself.", "title": "" }, { "docid": "a83fcfc62bdf0f58335e0853c006eaff", "text": "Compressed sensing (CS) in magnetic resonance imaging (MRI) enables the reconstruction of MR images from highly undersampled k-spaces, and thus substantial reduction of data acquisition time. In this context, edge-preserving and sparsity-promoting regularizers are used to exploit the prior knowledge that MR images are sparse or compressible in a given transform domain and thus to regulate the solution space. In this study, we introduce a new regularization scheme by iterative linearization of the non-convex clipped absolute deviation (SCAD) function in an augmented Lagrangian framework. The performance of the proposed regularization, which turned out to be an iteratively weighted total variation (TV) regularization, was evaluated using 2D phantom simulations and 3D retrospective undersampling of clinical MRI data by different sampling trajectories. It was demonstrated that the proposed regularization technique substantially outperforms conventional TV regularization, especially at reduced sampling rates.", "title": "" }, { "docid": "378c3b785db68bd5efdf1ad026c901ea", "text": "Intrinsically switched tunable filters are switched on and off using the tuning elements that tune their center frequencies and/or bandwidths, without requiring an increase in the tuning range of the tuning elements. Because external RF switches are not needed, substantial improvements in insertion loss, linearity, dc power consumption, control complexity, size, and weight are possible compared to conventional approaches. An intrinsically switched varactor-tuned bandstop filter and bandpass filter bank are demonstrated here for the first time. The intrinsically switched bandstop filter prototype has a second-order notch response with more than 50 dB of rejection continuously tunable from 665 to 1000 MHz (50%) with negligible passband ripple in the intrinsic off state. The intrinsically switched tunable bandpass filter bank prototype, comprised of three third-order bandpass filters, has a constant 50-MHz bandwidth response continuously tunable from 740 to 1644 MHz (122%) with less than 5 dB of passband insertion loss and more than 40 dB of isolation between bands.", "title": "" } ]
scidocsrr
28e872c8a018e5768c7a02f8fc6e265f
Crowdsourcing in logistics: concepts and applications using the social crowd
[ { "docid": "8b0ac11c05601e93557fe0d5097b4529", "text": "We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage - the key parameter in our labor supply model. We tested our model by presenting experimental subjects with real-effort work scenarios that varied in the offered payment and difficulty. As predicted, subjects worked less when the pay was lower. However, they did not work less when the task was more time-consuming. Interestingly, at least some subjects appear to be \"target earners,\" contrary to the assumptions of the rational model. The strongest evidence for target earning is an observed preference for earning total amounts evenly divisible by 5, presumably because these amounts make good targets. Despite its predictive failures, we calibrate our model with data pooled from both experiments. We find that the reservation wages of our sample are approximately log normally distributed, with a median wage of $1.38/hour. We discuss how to use our calibrated model in applications.", "title": "" } ]
[ { "docid": "fc62e84fc995deb1932b12821dfc0ada", "text": "As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.", "title": "" }, { "docid": "1a8c09947e4a1505733d7373b4bd2e2b", "text": "A flashover occurs when a fire spreads very rapidly through crevices due to intense heat. Flashovers present one of the most frightening and challenging fire phenomena to those who regularly encounter them: firefighters. Firefighters’ safety and lives often depend on their ability to predict flashovers before they occur. Typical pre-flashover fire characteristics include dark smoke, high heat, and rollover (“angel fingers”) and can be quantified by color, size, and shape. Using a color video stream from a firefighter’s body camera, we applied generative adversarial neural networks for image enhancement. The neural networks were trained to enhance very dark fire and smoke patterns in videos and monitor dynamic changes in smoke and fire areas. Preliminary tests with limited flashover training videos showed that we predicted a flashover as early as 55 seconds before it occurred.", "title": "" }, { "docid": "683bad69cfb2c8980020dd1f8bd8cea4", "text": "BRUTUS is a program that tells stories. The stories are intriguing, they hold a hint of mystery, and—not least impressive—they are written in correct English prose. An example (p. 124) is shown in Figure 1. This remarkable feat is grounded in a complex architecture making use of a number of levels, each of which is parameterized so as to become a locus of possible variation. The specific BRUTUS1 implementation that illustrates the program’s prowess exploits the theme of betrayal, which receives an elaborate analysis, culminating in a set", "title": "" }, { "docid": "7f48835a746d23edbdaa410800d0d322", "text": "Nager syndrome, or acrofacial dysostosis type 1 (AFD1), is a rare multiple malformation syndrome characterized by hypoplasia of first and second branchial arches derivatives and appendicular anomalies with variable involvement of the radial/axial ray. In 2012, AFD1 has been associated with dominant mutations in SF3B4. We report a 22-week-old fetus with AFD1 associated with diaphragmatic hernia due to a previously unreported SF3B4 mutation (c.35-2A>G). Defective diaphragmatic development is a rare manifestation in AFD1 as it is described in only 2 previous cases, with molecular confirmation in 1 of them. Our molecular finding adds a novel pathogenic splicing variant to the SF3B4 mutational spectrum and contributes to defining its prenatal/fetal phenotype.", "title": "" }, { "docid": "0eea594d14beea7be624d9cffc543f12", "text": "BACKGROUND\nLoss of the interproximal dental papilla may cause functional and, especially in the maxillary anterior region, phonetic and severe esthetic problems. The purpose of this study was to investigate whether the distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth could be correlated with the presence of the interproximal papilla in Taiwanese patients.\n\n\nMETHODS\nIn total, 200 interproximal sites of maxillary anterior teeth in 45 randomly selected patients were examined. Selected subjects were adult Taiwanese with fully erupted permanent dentition. The presence of the interproximal papilla was determined visually. 
If there was no visible space apical to the contact area, the papilla was recorded as being present. The distance from the contact point to the crest of bone was measured on standardized periapical radiographs using a paralleling technique with a RinnXCP holder.\n\n\nRESULTS\nData revealed that when the distance from the contact point to the bone crest on standardized periapical radiographs was 5 mm or less, the papillae were almost 100% present. When the distance was 6 mm, 51% of the papillae were present, and when the distance was 7 mm or greater, only 23% of the papillae were present.\n\n\nCONCLUSION\nThe distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth is highly associated with the presence or absence of the interproximal papilla in Taiwanese patients, and is a useful guide for clinical evaluation.", "title": "" }, { "docid": "29071361fbd22ea8be9ff5c522ef5131", "text": "Background. Eclampsia is a reliable indicator of poor birth preparedness and complications readiness. We determined perceptions about eclampsia, birth preparedness, and complications readiness among antenatal clients in Kano, Nigeria. Materials and Method. A cross-sectional design was used to study 250 randomly selected antenatal clients. Data was analyzed using SPSS 16.0. Result. The mean age of the respondents was 26.1 ± 6.4 years. The majority perceived that eclampsia is preventable through good ANC (76.4%) and hospital delivery (70.8%). Overall, 66.8% had good perception about eclampsia. Having at least secondary school education and multigravidity were associated with good perception about eclampsia on multivariate analysis. About a third (39.6%) of the mothers was less prepared. On binary logistic regression, good perception about eclampsia and multigravidity were associated with being very prepared for birth. Up to 37.6% were not ready for complications. Half (50.4%) knew at least three danger signs of pregnancy, and 30.0% donated blood or identified suitable blood donor. On multivariate analysis, having at least secondary school education, being very prepared for birth, and multigravidity emerged as the only predictors of the respondents' readiness for complications. Conclusion and Recommendations. Health workers should emphasize the practicability of birth preparedness and complications readiness during ANC and in the communities, routinely review plans, and support clients meet-up challenging areas. The importance of girl-child education to at least secondary school should be buttressed.", "title": "" }, { "docid": "5a47438a776c3760aafb9fd291e720b0", "text": "In this tutorial paper we present equalization techniques to mitigate inter-symbol interference (ISI) in high-speed communication links. Both transmit and receive equalizers are analyzed and high-speed circuits implementing them are presented. It is shown that a digital transmit equalizer is the simplest to design, while a continuous-time receive equalizer generally provides better performance. Decision feedback equalizer (DFE) is described and the loop latency problem is addressed. Finally, techniques to set the equalizer parameters adaptively are presented.", "title": "" }, { "docid": "84d28257f98ec1d78dcdfbdd7ec17e78", "text": "True gender self child therapy is based on the premise of gender as a web that weaves together nature, nurture, and culture and allows for a myriad of healthy gender outcomes. 
This article presents concepts of true gender self, false gender self, and gender creativity as they operationalize in clinical work with children who need therapeutic supports to establish an authentic gender self while developing strategies for negotiating an environment resistant to that self. Categories of gender nonconforming children are outlined and excerpts of a treatment of a young transgender child are presented to illustrate true gender self child therapy.", "title": "" }, { "docid": "f83a8c7d80085c9428421a69202af206", "text": "The simulation of EM (electromagnetic) wave propagation requires considerable computation time, as it analyzes a large number of propagation paths. To overcome this problem, we propose a GPU (graphics processing unit)-based parallel algorithm for VPL (vertical plane launch)-approximated EM wave propagation. The conventional algorithm computes the gain along propagation paths with irregular memory access, which results in low GPU performance. In our proposed algorithm, a CPU reorders irregular propagation paths to a GPU-suitable linear order on the CPU memory at each receiving point. We hid the reordering time behind CPU-GPU communication and GPU-based computation of gain on the reordered memory. We found that our proposed algorithm with a quad GPU is up to 30 times faster than the conventional algorithm with a 16-threaded dual CPU.", "title": "" }, { "docid": "4b0fcab3e9599f24cae499a4a2cbbd55", "text": "In June 2016, Apple made a bold announcement that it will deploy local differential privacy for some of their user data collection in order to ensure privacy of user data, even from Apple [21, 23]. The details of Apple’s approach remained sparse. Although several patents [17–19] have since appeared hinting at the algorithms that may be used to achieve differential privacy, they did not include a precise explanation of the approach taken to privacy parameter choice. Such choice and the overall approach to privacy budget use and management are key questions for understanding the privacy protections provided by any deployment of differential privacy. In this work, through a combination of experiments, static and dynamic code analysis of macOS Sierra (Version 10.12) implementation, we shed light on the choices Apple made for privacy budget management. We discover and describe Apple’s set-up for differentially private data processing, including the overall data pipeline, the parameters used for differentially private perturbation of each piece of data, and the frequency with which such data is sent to Apple’s servers. We find that although Apple’s deployment ensures that the (differential) privacy loss per each datum submitted to its servers is 1 or 2, the overall privacy loss permitted by the system is significantly higher, as high as 16 per day for the four initially announced applications of Emojis, New words, Deeplinks and Lookup Hints [21]. Furthermore, Apple renews the privacy budget available every day, which leads to a possible privacy loss of 16 times the number of days since user opt-in to differentially private data collection for those four applications. We applaud Apple’s deployment of differential privacy for its bold demonstration of feasibility of innovation while guaranteeing rigorous privacy. 
However, we argue that in order to claim the full benefits of differentially private data collection, Apple must give full transparency of its implementation and privacy loss choices, enable user choice in areas related to privacy loss, and set meaningful defaults on the daily and device lifetime privacy loss permitted. ACM Reference Format: Jun Tang, Aleksandra Korolova, Xiaolong Bai, XueqiangWang, and Xiaofeng Wang. 2017. Privacy Loss in Apple’s Implementation of Differential Privacy", "title": "" }, { "docid": "43398874a34c7346f41ca7a18261e878", "text": "This article investigates transitions at the level of societal functions (e.g., transport, communication, housing). Societal functions are fulfilled by sociotechnical systems, which consist of a cluster of aligned elements, e.g., artifacts, knowledge, markets, regulation, cultural meaning, infrastructure, maintenance networks and supply networks. Transitions are conceptualised as system innovations, i.e., a change from one sociotechnical system to another. The article describes a co-evolutionary multi-level perspective to understand how system innovations come about through the interplay between technology and society. The article makes a new step as it further refines the multi-level perspective by distinguishing characteristic patterns: (a) two transition routes, (b) fit–stretch pattern, and (c) patterns in breakthrough. D 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "2917b7b1453f9e6386d8f47129b605fb", "text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "title": "" }, { "docid": "ad131f6baec15a011252f484f1ef8f18", "text": "Recent studies have shown that Alzheimer's disease (AD) is related to alteration in brain connectivity networks. One type of connectivity, called effective connectivity, defined as the directional relationship between brain regions, is essential to brain function. However, there have been few studies on modeling the effective connectivity of AD and characterizing its difference from normal controls (NC). In this paper, we investigate the sparse Bayesian Network (BN) for effective connectivity modeling. Specifically, we propose a novel formulation for the structure learning of BNs, which involves one L1-norm penalty term to impose sparsity and another penalty to ensure the learned BN to be a directed acyclic graph - a required property of BNs. We show, through both theoretical analysis and extensive experiments on eleven moderate and large benchmark networks with various sample sizes, that the proposed method has much improved learning accuracy and scalability compared with ten competing algorithms. We apply the proposed method to FDG-PET images of 42 AD and 67 NC subjects, and identify the effective connectivity models for AD and NC, respectively. 
Our study reveals that the effective connectivity of AD is different from that of NC in many ways, including the global-scale effective connectivity, intra-lobe, inter-lobe, and inter-hemispheric effective connectivity distributions, as well as the effective connectivity associated with specific brain regions. These findings are consistent with known pathology and clinical progression of AD, and will contribute to AD knowledge discovery.", "title": "" }, { "docid": "34ffa62e6fee34d60d2f06c639e6bf03", "text": "Resonant converters often require accurate load characterization in order to ensure appropriate and safe control. Besides, for systems with a highly variable load, as the induction heating systems, a real-time load estimation is mandatory. This paper presents the development of an FPGA-based test-bench aimed to extract the electrical equivalent of the induction heating loads. The proposed test-bench comprises a resonant power converter, sigma-delta ADCs, and an embedded system implemented in an FPGA. The characterization algorithm is based on the discrete-time Fourier series computed directly from the ΔΣ ADC bit-streams, and the FPGA implementation has been partitioned into hardware and software platforms to optimize the performance and resources utilization. Analytical and simulation results are verified through experimental measurements with the proposed test-bench. As a result, the proposed platform can be used as a load identification tool either for stand-alone or PC-hosted operation.", "title": "" }, { "docid": "4f3177b303b559f341b7917683114257", "text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.", "title": "" }, { "docid": "f36b48faca2ea99cd554d461bc1651e8", "text": "Reverse transcription PCR (RT-PCR) represents a sensitive and powerful tool for analyzing RNA. While it has tremendous potential for quantitative applications, a comprehensive knowledge of its technical aspects is required. Successful quantitative RT-PCR involves correction for experimental variations in individual RT and PCR efficiencies. This review addresses the mathematics of RT-PCR, choice of RNA standards (internal vs. external) and quantification strategies (competitive, noncompetitive and kinetic [real-time] amplification). Finally, the discussion turns to practical considerations in experimental design. 
It is hoped that this review will be appropriate for those undertaking these experiments for the first time or wishing to improve (or validate) a technique in what is frequently a confusing and contradictory field.", "title": "" }, { "docid": "930dc4c82d32906e69ab0a8ddda21e7c", "text": "In this paper, we propose and analyze a trust-region model-based algorithm for solving unconstrained stochastic optimization problems. Our framework utilizes random models of an objective function f(x), obtained from stochastic observations of the function or its gradient. Our method also utilizes estimates of function values to gauge progress that is being made. The convergence analysis relies on requirements that these models and these estimates are sufficiently accurate with sufficiently high, but fixed, probability. Beyond these conditions, no assumptions are made on how these models and estimates are generated. Under these general conditions we show an almost sure global convergence of the method to a first order stationary point. In the second part of the paper, we present examples of generating sufficiently accurate random models under biased or unbiased noise assumptions. Lastly, we present some computational results showing the benefits of the proposed method compared to existing approaches that are based on sample averaging or stochastic gradients.", "title": "" }, { "docid": "e38cbee5c03319d15086e9c39f7f8520", "text": "In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.", "title": "" }, { "docid": "f81261c4a64359778fd3d399ba3fe749", "text": "Credit card frauds are increasing day by day regardless of the various techniques developed for its detection. Fraudsters are so expert that they engender new ways for committing fraudulent transactions each day which demands constant innovation for its detection techniques as well. 
Many techniques based on Artificial Intelligence, Data mining, Fuzzy logic, Machine learning, Sequence Alignment, decision tree, neural network, logistic regression, naïve Bayesian, Bayesian network, metalearning, Genetic Programming etc., have evolved in detecting various credit card fraudulent transactions. A thorough understanding of all these approaches will certainly lead to an efficient credit card fraud detection system. This paper presents a survey of various techniques used in credit card fraud detection mechanisms and describes the Hidden Markov Model (HMM) in detail. HMM categorizes the card holder’s profile as low, medium or high spending based on their spending behavior in terms of amount. A set of probabilities for transaction amounts is assigned to each cardholder. The amount of each incoming transaction is then matched against the card owner’s category; if it satisfies a predefined threshold value, the transaction is deemed legitimate, otherwise it is declared fraudulent. Index Terms — Credit card, fraud detection, Hidden Markov Model, online shopping", "title": "" } ]
scidocsrr
df78397ac9d30233626e47de0f52f3a6
IoT interoperability: A hub-based approach
[ { "docid": "fbb71a8a7630350a7f33f8fb90b57965", "text": "As the Web of Things (WoT) broadens real world interaction via the internet, there is an increasing need for a user centric model for managing and interacting with real world objects. We believe that online social networks can provide that capability and can enhance existing and future WoT platforms leading to a Social WoT. As both social overlays and user interface containers, online social networks (OSNs) will play a significant role in the evolution of the web of things. As user interface containers and social overlays, they can be used by end users and applications as an on-line entry point for interacting with things, both receiving updates from sensors and controlling things. Conversely, access to user identity and profile information, content and social graphs can be useful in physical social settings like cafés. In this paper we describe some of the key features of social networks used by existing social WoT systems. We follow this with a discussion of open research questions related to integration of OSNs and how OSNs may evolve to be more suitable for integration with places and things. Several ongoing projects in our lab leverage OSNs to connect places and things to online communities.", "title": "" }, { "docid": "e6c90edef3273566d919db52f1e8a629", "text": "In this position paper, we discuss our experiences with a lightweight Web of Things (WoT) toolkit and use those experiences to explore what an effective WoT toolkit looks like. We argue that while the WoT community has experimented, like us, with a variety of toolkits, it hasn't yet found one that appeals sufficiently to a broad range of developers. This failure, we believe, is hindering the adoption of the WoT and the growth of the community. We conclude the paper with a set of open questions, which, although not exhaustive, are aimed at opening up a community discussion on the needs of developers and how best the community can meet those needs and so further the adoption of the WoT. In essence, we believe that the time may be right to begin to agree on some basic functionality and approaches to WoT toolkits.", "title": "" } ]
[ { "docid": "6f4e5448f956017c39c1727e0eb5de7b", "text": "Recently, community search over graphs has attracted significant attention and many algorithms have been developed for finding dense subgraphs from large graphs that contain given query nodes. In applications such as analysis of protein protein interaction (PPI) networks, citation graphs, and collaboration networks, nodes tend to have attributes. Unfortunately, most previously developed community search algorithms ignore these attributes and result in communities with poor cohesion w.r.t. their node attributes. In this paper, we study the problem of attribute-driven community search, that is, given an undirected graph G where nodes are associated with attributes, and an input query Q consisting of nodes Vq and attributes Wq , find the communities containing Vq , in which most community members are densely inter-connected and have similar attributes. We formulate our problem of finding attributed truss communities (ATC), as finding all connected and close k-truss subgraphs containing Vq, that are locally maximal and have the largest attribute relevance score among such subgraphs. We design a novel attribute relevance score function and establish its desirable properties. The problem is shown to be NP-hard. However, we develop an efficient greedy algorithmic framework, which finds a maximal k-truss containing Vq, and then iteratively removes the nodes with the least popular attributes and shrinks the graph so as to satisfy community constraints. We also build an elegant index to maintain the known k-truss structure and attribute information, and propose efficient query processing algorithms. Extensive experiments on large real-world networks with ground-truth communities shows the efficiency and effectiveness of our proposed methods.", "title": "" }, { "docid": "38863f217a610af5378c42e03cd3fe3c", "text": "In human movement learning, it is most common to teach constituent elements of complex movements in isolation, before chaining them into complex movements. Segmentation and recognition of observed movement could thus proceed out of this existing knowledge, which is directly compatible with movement generation. In this paper, we address exactly this scenario. We assume that a library of movement primitives has already been taught, and we wish to identify elements of the library in a complex motor act, where the individual elements have been smoothed together, and, occasionally, there might be a movement segment that is not in our library yet. We employ a flexible machine learning representation of movement primitives based on learnable nonlinear attractor system. For the purpose of movement segmentation and recognition, it is possible to reformulate this representation as a controlled linear dynamical system. An Expectation-Maximization algorithm can be developed to estimate the open parameters of a movement primitive from the library, using as input an observed trajectory piece. If no matching primitive from the library can be found, a new primitive is created. This process allows a straightforward sequential segmentation of observed movement into known and new primitives, which are suitable for robot imitation learning. We illustrate our approach with synthetic examples and data collected from human movement. Appearing in Proceedings of the 15 International Conference on Artificial Intelligence and Statistics (AISTATS) 2012, La Palma, Canary Islands. Volume XX of JMLR: W&CP XX. 
Copyright 2012 by the authors.", "title": "" }, { "docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37", "text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.", "title": "" }, { "docid": "84c87c50659d18b130f4aaf8c1b3c7f1", "text": "We describe initial work on an extension of the Kaldi toolkit that supports weighted finite-state transducer (WFST) decoding on Graphics Processing Units (GPUs). We implement token recombination as an atomic GPU operation in order to fully parallelize the Viterbi beam search, and propose a dynamic load balancing strategy for more efficient token passing scheduling among GPU threads. We also redesign the exact lattice generation and lattice pruning algorithms for better utilization of the GPUs. Experiments on the Switchboard corpus show that the proposed method achieves identical 1-best results and lattice quality in recognition and confidence measure tasks, while running 3 to 15 times faster than the single process Kaldi decoder. The above results are reported on different GPU architectures. Additionally we obtain a 46-fold speedup with sequence parallelism and multi-process service (MPS) in GPU.", "title": "" }, { "docid": "43a57d9ad5a4ea7cb446adf8cb91f640", "text": "It is widely acknowledged that the value of a house is the mixture of a large number of characteristics. House price prediction thus presents a unique set of challenges in practice. While a large body of works are dedicated to this task, their performance and applications have been limited by the shortage of long time span of transaction data, the absence of real-world settings and the insufficiency of housing features. To this end, a time-aware latent hierarchical model is introduced to capture underlying spatiotemporal interactions behind the evolution of house prices. The hierarchical perspective obviates the need for historical transaction data of exactly same houses when temporal effects are considered. The proposed framework is examined on a large-scale dataset of the property transaction in Beijing. The whole experimental procedure strictly complies with the real-world scenario. 
The empirical evaluation results demonstrate that our approach outperforms alternative competitive methods.", "title": "" }, { "docid": "44a1c6ebc90e57398ee92a137a5a54f8", "text": "Most human actions consist of complex temporal compositions of simpler actions. Action recognition tasks usually rely on complex handcrafted structures as features to represent the human action model. Convolutional Neural Nets (CNNs) have been shown to be a powerful tool that eliminates the need for designing handcrafted features. Usually, the output of the last layer in a CNN (the layer before the classification layer, known as fc7) is used as a generic feature for images. In this paper, we show that fc7 features, per se, cannot achieve good performance on the task of action recognition when the network is trained only on images. We present a feature structure on top of fc7 features, which can capture the temporal variation in a video. To represent the temporal components, which are needed to capture motion information, we introduce a hierarchical structure. The hierarchical model enables capturing sub-actions from a complex action. At the higher levels of the hierarchy, it represents a coarse capture of the action sequence, while lower levels represent fine action elements. Furthermore, we introduce a method for extracting key-frames using binary coding of each frame in a video, which helps to improve the performance of our hierarchical model. We evaluated our method on several action datasets and show that it achieves superior results compared to other state-of-the-art methods.", "title": "" }, { "docid": "8748fa5adb122919ad490ab3375c8fb6", "text": "Department of Pathology, Stanford University, Stanford, CA; Department of Pathology, Weill Cornell Medical College, New York, NY; Department of Pathology, Massachusetts General Hospital, Boston, MA; Institute of Pathology, University of Cologne, Cologne, Germany; Department of Pathology, Johns Hopkins Medical Institutions, Baltimore, MD; Section of Hematology/Oncology, University of Chicago, Chicago, IL; Comprehensive Cancer Center, James Cancer Hospital and Solove Research Institute, The Ohio State University, Columbus, OH; Department of Molecular Medicine, University of Pavia, and Department of Hematology Oncology, Fondazione IRCCS Policlinico San Matteo, Pavia, Italy; and Department of Pathology, University of Chicago, Chicago, IL", "title": "" }, { "docid": "b682d1da4fd31e470aa96244a47f081a", "text": "With Android being the most widespread mobile platform, protecting it against malicious applications is essential. Android users typically install applications from large remote repositories, which provides ample opportunities for malicious newcomers. In this paper, we propose a simple, and yet highly effective technique for detecting malicious Android applications on a repository level. Our technique performs automatic classification based on tracking system calls while applications are executed in a sandbox environment. We implemented the technique in a tool called MALINE, and performed extensive empirical evaluation on a suite of around 12,000 applications. The evaluation yields an overall detection accuracy of 93% with a 5% benign application classification error, while results are improved to a 96% detection accuracy with up-sampling. This indicates that our technique is viable to be used in practice. 
Finally, we show that even simplistic feature choices are highly effective, suggesting that more heavyweight approaches should be thoroughly (re)evaluated. Android Malware Detection Based on System Calls Marko Dimjašević, Simone Atzeni, Zvonimir Rakamarić University of Utah, USA {marko,simone,zvonimir}@cs.utah.edu Ivo Ugrina University of Zagreb, Croatia", "title": "" }, { "docid": "bcf27c4f750ab74031b8638a9b38fd87", "text": "δ opioid receptor (DOR) was the first opioid receptor of the G protein‑coupled receptor family to be cloned. Our previous studies demonstrated that DOR is involved in regulating the development and progression of human hepatocellular carcinoma (HCC), and is involved in the regulation of the processes of invasion and metastasis of HCC cells. However, whether DOR is involved in the development and progression of drug resistance in HCC has not been reported and requires further elucidation. The aim of the present study was to investigate the expression levels of DOR in the drug‑resistant HCC BEL‑7402/5‑fluorouracil (BEL/FU) cell line, and its effects on drug resistance, in order to preliminarily elucidate the effects of DOR in HCC drug resistance. The results of the present study demonstrated that DOR was expressed at high levels in the BEL/FU cells, and the expression levels were higher, compared with those in normal liver cells. When the expression of DOR was silenced, the proliferation of the drug‑resistant HCC cells were unaffected. However, when the cells were co‑treated with a therapeutic dose of 5‑FU, the proliferation rate of the BEL/FU cells was significantly inhibited, a large number of cells underwent apoptosis, cell cycle progression was arrested and changes in the expression levels of drug‑resistant proteins were observed. Overall, the expression of DOR was upregulated in the drug‑resistant HCC cells, and its functional status was closely associated with drug resistance in HCC. Therefore, DOR may become a recognized target molecule with important roles in the clinical treatment of drug‑resistant HCC.", "title": "" }, { "docid": "de83d02f5f120163ed86050ee6962f50", "text": "Researchers have recently questioned the benefits associated with having high self-esteem. The authors propose that the importance of self-esteem lies more in how people strive for it rather than whether it is high or low. They argue that in domains in which their self-worth is invested, people adopt the goal to validate their abilities and qualities, and hence their self-worth. When people have self-validation goals, they react to threats in these domains in ways that undermine learning; relatedness; autonomy and self-regulation; and over time, mental and physical health. The short-term emotional benefits of pursuing self-esteem are often outweighed by long-term costs. Previous research on self-esteem is reinterpreted in terms of self-esteem striving. Cultural roots of the pursuit of self-esteem are considered. Finally, the alternatives to pursuing self-esteem, and ways of avoiding its costs, are discussed.", "title": "" }, { "docid": "995e00375e52698cf83097fd0cc517ab", "text": "The analysis of continously larger datasets is a task of major importance in a wide variety of scientific fields. In this sense, cluster analysis algorithms are a key element of exploratory data analysis, due to their easiness in the implementation and relatively low computational cost. 
Among these algorithms, the K-means algorithm stands out as the most popular approach, despite its high dependency on the initial conditions, as well as the fact that it might not scale well on massive datasets. In this article, we propose a recursive and parallel approximation to the K-means algorithm that scales well on both the number of instances and dimensionality of the problem, without affecting the quality of the approximation. In order to achieve this, instead of analyzing the entire dataset, we work on small weighted sets of points that mostly intend to extract information from those regions where it is harder to determine the correct cluster assignment of the original instances. In addition to different theoretical properties, which explain the reasoning behind the algorithm, experimental results indicate that our method outperforms the state-of-the-art in terms of the trade-off between the number of distance computations and the quality of the solution obtained.", "title": "" }, { "docid": "ea200dc100d77d8c156743bede4a965b", "text": "We present a contextual spoken language understanding (contextual SLU) method using Recurrent Neural Networks (RNNs). Previous work has shown that context information, specifically the previously estimated domain assignment, is helpful for domain identification. We further show that other context information such as the previously estimated intent and slot labels are useful for both intent classification and slot filling tasks in SLU. We propose a step-n-gram model to extract sentence-level features from RNNs, which extract sequential features. The step-n-gram model is used together with a stack of Convolution Networks for training domain/intent classification. Our method therefore exploits possible correlations among domain/intent classification and slot filling and incorporates context information from the past predictions of domain/intent and slots. The proposed method obtains new state-of-the-art results on ATIS and improved performances over baseline techniques such as conditional random fields (CRFs) on a large context-sensitive SLU dataset.", "title": "" }, { "docid": "fee1419f689259bc5fe7e4bfd8f0242c", "text": "One of the challenges in computer vision is how to learn an accurate classifier for a new domain by using labeled images from an old domain under the condition that there are no labeled images available in the new domain. Domain adaptation is an outstanding solution that tackles this challenge by employing available source-labeled datasets, even with significant differences in distribution and properties. However, most prior methods only reduce the difference in subspace marginal or conditional distributions across domains while completely ignoring the source data label dependence information in a subspace. In this paper, we put forward a novel domain adaptation approach, referred to as Enhanced Subspace Distribution Matching. Specifically, it aims to jointly match the marginal and conditional distributions in a kernel principal dimensionality reduction procedure while maximizing the source label dependence in a subspace, thus raising the subspace distribution matching degree. Extensive experiments verify that it can significantly outperform several state-of-the-art methods for cross-domain image classification problems.", "title": "" }, { "docid": "a697f85ad09699ddb38994bd69b11103", "text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. 
We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.", "title": "" }, { "docid": "7e325afeaaf3cc548bca023e35fbd203", "text": "The short length of the estrous cycle of rats makes them ideal for investigation of changes occurring during the reproductive cycle. The estrous cycle lasts four days and is characterized as: proestrus, estrus, metestrus and diestrus, which may be determined according to the cell types observed in the vaginal smear. Since the collection of vaginal secretion and the use of stained material generally takes some time, the aim of the present work was to provide researchers with some helpful considerations about the determination of the rat estrous cycle phases in a fast and practical way. Vaginal secretion of thirty female rats was collected every morning during a month and unstained native material was observed using the microscope without the aid of the condenser lens. Using the 10 x objective lens, it was easier to analyze the proportion among the three cellular types, which are present in the vaginal smear. Using the 40 x objective lens, it is easier to recognize each one of these cellular types. The collection of vaginal lavage from the animals, the observation of the material, in the microscope, and the determination of the estrous cycle phase of all the thirty female rats took 15-20 minutes.", "title": "" }, { "docid": "b821bbb4e0a0759c3dff401a936461f9", "text": "Context: Numerous open source software projects are based on volunteers’ collaboration and require a continuous influx of newcomers for their continuity. Newcomers face barriers that can lead them to give up. These barriers hinder both developers willing to make a single contribution and those willing to become a project member. Objective: This study aims to identify and classify the barriers that newcomers face when contributing to Open Source Software projects. Method: We conducted a systematic literature review of papers reporting empirical evidence regarding the barriers that newcomers face when contributing to Open Source Software (OSS) projects. We retrieved 291 studies *Corresponding author [Address: Rua das Cerejeiras, 60 CEP 87301-350 Campo Mourao–PR–Brazil – Phone +55(44)88383380 ] Email addresses: igorfs@utfpr.edu.br (Igor Steinmacher), magsilva@utfpr.edu.br (Marco Aurelio Graciotto Silva), gerosa@ime.usp.br (Marco Aurelio Gerosa), redmiles@ics.uci.edu (David F. Redmiles) Preprint submitted to Information and Software Technology November 5, 2014 by querying 4 digital libraries. Twenty studies were identified as primary. We performed a backward snowballing approach, and searched for other papers published by the authors of the selected papers to identify potential studies. Then, we used a coding approach inspired by open coding and axial coding procedures from Grounded Theory to categorize the barriers reported by the selected studies. 
Results: We identified 20 studies providing empirical evidence of barriers faced by newcomers to OSS projects while making a contribution. From the analysis, we identified 15 different barriers, which we grouped into five categories: social interaction, newcomers’ previous knowledge, finding a way to start, documentation, and technical hurdles. We also classified the problems with regard to their origin: newcomers, community, or product. Conclusion: The results are useful to researchers and OSS practitioners willing to investigate or to implement tools to support newcomers. We mapped technical and non-technical barriers that hinder newcomers’ first contributions. The most evidenced barriers are related to socialization, appearing in 75% (15 out of 20) of the studies analyzed, with a high focus on interactions in mailing lists (receiving answers and socialization with other members). There is a lack of in-depth studies on technical issues, such as code issues. We also noticed that the majority of the studies relied on historical data gathered from software repositories and that there was a lack of experiments and qualitative studies in this area.", "title": "" }, { "docid": "7842e5c7ad3dc11d9d53b360e4e2691a", "text": "It is becoming obvious that all cancers have a defe ctiv p53 pathway, either through TP53 mutation or deregulation of the tumor suppressor function of the wild type TP53 . In this study we examined the expression of P53 and Caspase 3 in transperitoneally injected Ehrlich As cite carcinoma cells (EAC) treated with Tetrodotoxin in the liver of adult mice in order to evaluate the po ssible pro apoptotic effect of Tetrodotoxin . Results: Early in the treatment, num erous EAC detected in the large blood vessels & cen tral veins and expressed both of P53 & Caspase 3 in contrast to the late absence of P53 expressing EAC at the 12 th day of Tetrodotoxin treatment. In the same context , predominantly the perivascular hepatocytes expresse d Caspase 3 in contrast to the more diffuse express ion pattern late with Tetrodotoxin treatment. Non of the hepatocytes ever expressed P5 3 neither with early nor late Tetrodotoxin treatmen t. Conclusion: Tetrodotoxin therapy has a proapoptotic effect on Ehrlich Ascites carcin oma Cells (EAC). This may be through enhancing the tumor suppressor function of the wild type TP53 with subsequent Caspase 3 activation .", "title": "" }, { "docid": "a48d3b21d1d1e3e7e069a46aa17df7ef", "text": "The linear step-up multiple testing procedure controls the False Discovery Rate (FDR) at the desired level q for independent and positively dependent test statistics. When all null hypotheses are true, and the test statistics are independent and continuous, the bound is sharp. When some of the null hypotheses are not true, the procedure is conservative by a factor which is the proportion m0/m of the true null hypotheses among the hypotheses. We provide a new two-stage procedure in which the linear step-up procedure is used in stage one to estimate m0, providing a new level q′ which is used in the linear step-up procedure in the second stage. We prove that a general form of the two-stage procedure controls the FDR at the desired level q. This framework enables us to study analytically the properties of other procedures that exist in the literature. A simulation study is presented that shows that two-stage adaptive procedures improve in power over the original procedure, mainly because they provide tighter control of the FDR. 
We further study the performance of the current suggestions, some variations of the procedures, and previous suggestions, in the case where the test statistics are positively dependent, a case for which the original procedure controls", "title": "" }, { "docid": "7b64db09b7f072af236edf92cbca7537", "text": "Political How to Defeat Terrorism with Neuro-Semantics (2) The President Finally Speaks out about Radial Isalism (3) A Call for Responsible Protests (20) Hidden Frames #1 (21) Behind Hidden Frames #2 (22) What’s Behind It All? #3 (23) The Politics within NLP and Neuro-Semantics Models (24) Ferguson Again? What’s the Real Problem? (35) Give me a Non-Politician (36) Is it Time for Trump to Apologize? (41) Religious Terrorism: The Paris Attacks (49) A Meaningful Antidote for Terrorism (51)", "title": "" }, { "docid": "247eebd69a651f6f116f41fdf885ae39", "text": "RFID identification is a new technology that will become ubiquitous as RFID tags will be applied to every-day items in order to yield great productivity gains or “smart” applications for users. However, this pervasive use of RFID tags opens up the possibility for various attacks violating user privacy. In this work we present an RFID authentication protocol that enforces user privacy and protects against tag cloning. We designed our protocol with both tag-to-reader and reader-to-tag authentication in mind; unless both types of authentication are applied, any protocol can be shown to be prone to either cloning or privacy attacks. Our scheme is based on the use of a secret shared between tag and database that is refreshed to avoid tag tracing. However, this is done in such a way so that efficiency of identification is not sacrificed. Additionally, our protocol is very simple and it can be implemented easily with the use of standard cryptographic hash functions. In analyzing our protocol, we identify several attacks that can be applied to RFID protocols and we demonstrate the security of our scheme. Furthermore, we show how forward privacy is guaranteed; messages seen today will still be valid in the future, even after the tag has been compromised.", "title": "" } ]
scidocsrr
7fe329e310dd2fb639443d88d006890a
On the comparison of different kernel functionals and neighborhood geometry for nonlocal means filtering
[ { "docid": "01e064e0f2267de5a26765f945114a6e", "text": "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples", "title": "" }, { "docid": "67e16f36bb6d83c5d6eae959a7223b77", "text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to say that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.", "title": "" } ]
[ { "docid": "6bc31257bfbcc9531a3acf1ec738c790", "text": "BACKGROUND\nThe interaction of depression and anesthesia and surgery may result in significant increases in morbidity and mortality of patients. Major depressive disorder is a frequent complication of surgery, which may lead to further morbidity and mortality.\n\n\nLITERATURE SEARCH\nSeveral electronic data bases, including PubMed, were searched pairing \"depression\" with surgery, postoperative complications, postoperative cognitive impairment, cognition disorder, intensive care unit, mild cognitive impairment and Alzheimer's disease.\n\n\nREVIEW OF THE LITERATURE\nThe suppression of the immune system in depressive disorders may expose the patients to increased rates of postoperative infections and increased mortality from cancer. Depression is commonly associated with cognitive impairment, which may be exacerbated postoperatively. There is evidence that acute postoperative pain causes depression and depression lowers the threshold for pain. Depression is also a strong predictor and correlate of chronic post-surgical pain. Many studies have identified depression as an independent risk factor for development of postoperative delirium, which may be a cause for a long and incomplete recovery after surgery. Depression is also frequent in intensive care unit patients and is associated with a lower health-related quality of life and increased mortality. Depression and anxiety have been widely reported soon after coronary artery bypass surgery and remain evident one year after surgery. They may increase the likelihood for new coronary artery events, further hospitalizations and increased mortality. Morbidly obese patients who undergo bariatric surgery have an increased risk of depression. Postoperative depression may also be associated with less weight loss at one year and longer. The extent of preoperative depression in patients scheduled for lumbar discectomy is a predictor of functional outcome and patient's dissatisfaction, especially after revision surgery. General postoperative mortality is increased.\n\n\nCONCLUSIONS\nDepression is a frequent cause of morbidity in surgery patients suffering from a wide range of conditions. Depression may be identified through the use of Patient Health Questionnaire-9 or similar instruments. Counseling interventions may be useful in ameliorating depression, but should be subject to clinical trials.", "title": "" }, { "docid": "33cc033661cd680d11dfa14d5fe74d31", "text": "Authentication and authorization are essential parts of basic security processes and are sorely needed in the Internet of Things (IoT). The emergence of edge and fog computing creates new opportunities for security and trust management in the IoT. In this article, the authors discuss existing solutions to establish and manage trust in networked systems and argue that these solutions face daunting challenges when scaled to the IoT. They give a vision of efficient and scalable trust management for the IoT based on locally centralized, globally distributed trust management using an open source infrastructure with local authentication and authorization entities to be deployed on edge devices.", "title": "" }, { "docid": "16d7767e9f2216ce0789b8a92d8d65e4", "text": "In the rst genetic programming (GP) book John Koza noticed that tness histograms give a highly informative global view of the evolutionary process (Koza, 1992). The idea is further developed in this paper by discussing GP evolution in analogy to a physical system. 
I focus on three interrelated major goals: (1) Study the problem of search effort allocation in GP; (2) Develop methods in the GA/GP framework that allow adaptive control of diversity; (3) Study ways of adaptation for faster convergence to the optimal solution. An entropy measure based on phenotype classes is introduced which abstracts fitness histograms. In this context, entropy represents a measure of population diversity. An analysis of entropy plots and their correlation with other statistics from the population enables an intelligent adaptation of search control.", "title": "" }, { "docid": "0df2ca944dcdf79369ef5a7424bf3ffe", "text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of 'systemic stress' based in physiology and psychobiology, and the 'psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: trait-oriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are 'repression–sensitization,' 'monitoring-blunting,' and the 'model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.", "title": "" }, { "docid": "14c786d87fc06ab85ad41f6f6c30db21", "text": "When an attacker tries to penetrate the network, there are many defensive systems, including intrusion detection systems (IDSs). Most IDSs are capable of detecting many attacks, but cannot provide a clear idea to the analyst because of the huge number of false alerts generated by these systems. This weakness in the IDS has led to the emergence of many methods to deal with these alerts, minimize them and highlight the real attacks. It has now become necessary to take stock of the research results and form a comprehensive view, so that further research in this area will be motivated objectively to fill the gaps", "title": "" }, { "docid": "0f50b3dd947b9a04d121079e0fa8f10e", "text": "Twitter has undoubtedly caught the attention of both the general public, and academia as a microblogging service worthy of study and attention. Twitter has several features that set it apart from other social media/networking sites, including its 140 character limit on each user's message (tweet), and the unique combination of avenues via which information is shared: directed social network of friends and followers, where messages posted by a user are broadcast to all its followers, and the public timeline, which provides real time access to posts or tweets on specific topics for everyone. While the character limit plays a role in shaping the type of messages that are posted and shared, the dual mode of sharing information (public vs posts to one's followers) provides multiple pathways in which a posting can propagate through the user landscape via forwarding or \"Retweets\", leading us to ask the following questions: How does a message resonate and spread widely among the users on Twitter, and are the resulting cascade dynamics different due to the unique features of Twitter? What role does content of a message play in its popularity? 
Realizing that tweet content would play a major role in the information propagation dynamics (as borne out by the empirical results reported in this paper), we focused on patterns of information propagation on Twitter by observing the sharing and reposting of messages around a specific topic, i.e. the Iranian election.\n We know that during the 2009 post-election protests in Iran, Twitter and its large community of users played an important role in disseminating news, images, and videos worldwide and in documenting the events. We collected tweets of more than 20 million publicly accessible users on Twitter and analyzed over three million tweets related to the Iranian election posted by around 500K users during June and July of 2009. Our results provide several key insights into the dynamics of information propagation that are special to Twitter. For example, the tweet cascade size distribution is a power-law with exponent of -2.51 and more than 99% of the cascades have depth less than 3. The exponent is different from what one expects from a branching process (usually used to model information cascades) and so is the shallow depth, implying that the dynamics underlying the cascades are potentially different on Twitter. Similarly, we are able to show that while Twitter's Friends-Followers network structure plays an important role in information propagation through retweets (re-posting of another user's message), the search bar and trending topics on Twitter's front page offer other significant avenues for the spread of information outside the explicit Friends-Followers network. We found that at most 63.7% of all retweets in this case were reposts of someone the user was following directly. We also found that at least 7% of retweets are from the public posts, and potentially more than 30% of retweets are from the public timeline. In the end, we examined the context and content of the kinds of information that gained the attention of users and spread widely on Twitter. Our data indicates that the retweet probabilities are highly content dependent.", "title": "" }, { "docid": "a9808fef734b205146a2d8edf1171d6a", "text": "The Buss-Perry Aggression Questionnaire (AQ) is a self-report measure of aggressiveness commonly employed in nonforensic and forensic settings and is included in violent offender pre- and posttreatment assessment batteries. The aim of the current study was to assess the fit of the four-factor model of the AQ with violent offenders ( N = 271), a population for which the factor structure of the English version of the AQ has not previously been examined. Confirmatory factor analyses did not yield support for the four-factor model of the original 29-item AQ. Acceptable fit was obtained with the 12-item short form, but careful examination of the relationships between the latent factors revealed that the four subscales of the AQ may not represent distinct aspects of aggressiveness. Our findings call into question whether the AQ optimally measures trait aggressiveness among violent offenders.", "title": "" }, { "docid": "cec9f586803ffc8dc5868f6950967a1f", "text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? 
We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars in a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.", "title": "" }, { "docid": "3ca2d95885303f1ab395bd31d32df0c2", "text": "Curiosity to predict personality and behavior, and the need for this, is not as new as the invention of social media. Personality prediction with better accuracy could be very useful for society. There are many papers and research studies on the usefulness of the data for various purposes like marketing, dating suggestions, organization development, personalized recommendations and health care, to name a few. With the introduction and extreme popularity of Online Social Networking Sites like Facebook, Twitter and LinkedIn, numerous studies were conducted based on public data available, online social networking applications and social behavior towards friends and followers to predict the personality. Structured mining of the social media content can provide us the ability to predict some personality traits. This survey aims at providing researchers with an overview of various strategies used for studies and research concentrating on predicting user personality and behavior using online social networking site content. Their positives and limitations are well summarized as reported in the literature. Finally, a brief discussion including open issues for further research in the area of social networking site based personality prediction precedes the conclusion.", "title": "" }, { "docid": "2f1862591d5f9ee80d7cdcb930f86d8d", "text": "In this research convolutional neural networks are used to recognize whether a car in a given image is damaged or not. Using transfer learning to take advantage of available models that are trained on a more general object recognition task, very satisfactory performances have been achieved, which indicate the great opportunities of this approach. In the end, a promising attempt at classifying car damage into a few different classes is also presented. Along the way, the main focus was on the influence of certain hyper-parameters and on seeking theoretically founded ways to adapt them, all with the objective of progressing to satisfactory results as fast as possible. This research opens doors for future collaborations on image recognition projects in general and for the car insurance field in particular.", "title": "" }, { "docid": "5a4d8576222e8b704baaa1b67815ca01", "text": "In evolutionary robotics, populations of robots are typically trained in simulation before one or more of them are instantiated as physical robots. However, in order to evolve robust behavior, each robot must be evaluated in multiple environments. If an environment is characterized by f free parameters, each of which can take one of np features, each robot must be evaluated in all np^f environments to ensure robustness. Here, we show that if the robots are constrained to have modular morphologies and controllers, they only need to be evaluated in np environments to reach the same level of robustness. 
This becomes possible because the robots evolve such that each module of the morphology allows the controller to independently recognize a familiar percept in the environment, and each percept corresponds to one of the environmental free parameters. When exposed to a new environment, the robot perceives it as a novel combination of familiar percepts which it can solve without requiring further training. A non-modular morphology and controller however perceives the same environment as a completely novel environment, requiring further training. This acceleration in evolvability – the rate of the evolution of adaptive and robust behavior – suggests that evolutionary robotics may become a scalable approach for automatically creating complex autonomous machines, if the evolution of neural and morphological modularity is taken into account.", "title": "" }, { "docid": "8d041241f1a587b234c8784dea9088a4", "text": "Modern intelligent vehicles have electronic control units containing firmware that enables various functions in the vehicle. New firmware versions are constantly developed to remove bugs and improve functionality. Automobile manufacturers have traditionally performed firmware updates over cables but in the near future they are aiming at conducting firmware updates over the air, which would allow faster updates and improved safety for the driver. In this paper, we present a protocol for secure firmware updates over the air. The protocol provides data integrity, data authentication, data confidentiality, and freshness. In our protocol, a hash chain is created of the firmware, and the first packet is signed by a trusted source, thus authenticating the whole chain. Moreover, the packets are encrypted using symmetric keys. We discuss the practical considerations that exist for implementing our protocol and show that the protocol is computationally efficient, has low memory overhead, and is suitable for wireless communication. Therefore, it is well suited to the limited hardware resources in the wireless vehicle environment.", "title": "" }, { "docid": "63405ca71cf052b0011106e5fda6a9ea", "text": "Device-to-Device (D2D) communication has emerged as a promising technology for optimizing spectral efficiency in future cellular networks. D2D takes advantage of the proximity of communicating devices for efficient utilization of available resources, improving data rates, reducing latency, and increasing system capacity. The research community is actively investigating the D2D paradigm to realize its full potential and enable its smooth integration into the future cellular system architecture. Existing surveys on this paradigm largely focus on interference and resource management. We review recently proposed solutions in over explored and under explored areas in D2D. These solutions include protocols, algorithms, and architectures in D2D. Furthermore, we provide new insights on open issues in these areas. Finally, we discuss potential future research directions.", "title": "" }, { "docid": "1c367cad26436a059e56d000ac0db3c4", "text": "We propose a goal-driven web navigation as a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site, which is represented as a graph consisting of web pages as nodes and hyperlinks as directed edges, to find a web page in which a query appears. 
The agent is required to have sophisticated high-level reasoning based on natural languages and efficient sequential decision making capability to succeed. We release a software tool, called WebNav, that automatically transforms a website into this goal-driven web navigation task, and as an example, we make WikiNav, a dataset constructed from the English Wikipedia containing approximately 5 million articles and more than 12 million queries for training. We evaluate two different agents based on neural networks on the WikiNav and provide the human performance. Our results show the difficulty of the task for both humans and machines. With this benchmark, we expect faster progress in developing artificial agents with natural language understanding and planning skills.", "title": "" }, { "docid": "26f2e3918eb624ce346673d10b5d2eb7", "text": "We consider generation and comprehension of natural language referring expressions for objects in an image. Unlike generic image captioning which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receiver's ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for the comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions, as a critic of the referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension model in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.", "title": "" }, { "docid": "e81c74c33b7d00a9482392778f661466", "text": "This paper considers the use of a simple posterior sampling algorithm to balance between exploration and exploitation when learning to optimize actions such as in multi-armed bandit problems. The algorithm, also known as Thompson Sampling, offers significant advantages over the popular upper confidence bound (UCB) approach, and can be applied to problems with finite or infinite action spaces and complicated relationships among action rewards. We make two theoretical contributions. The first establishes a connection between posterior sampling and UCB algorithms. This result lets us convert regret bounds developed for UCB algorithms into Bayes risk bounds for posterior sampling. Our second theoretical contribution is a Bayes risk bound for posterior sampling that applies broadly and can be specialized to many model classes. This bound depends on a new notion we refer to as the margin dimension, which measures the degree of dependence among action rewards. Compared to UCB algorithm Bayes risk bounds for specific model classes, our general bound matches the best available for linear models and is stronger than the best available for generalized linear models. Further, our analysis provides insight into performance advantages of posterior sampling, which are highlighted through simulation results that demonstrate performance surpassing recently proposed UCB algorithms.", "title": "" }, { "docid": "340f64ed182a54ef617d7aa2ffeac138", "text": "Compared with animals, plants generally possess a high degree of developmental plasticity and display various types of tissue or organ regeneration. 
This regenerative capacity can be enhanced by exogenously supplied plant hormones in vitro, wherein the balance between auxin and cytokinin determines the developmental fate of regenerating organs. Accumulating evidence suggests that some forms of plant regeneration involve reprogramming of differentiated somatic cells, whereas others are induced through the activation of relatively undifferentiated cells in somatic tissues. We summarize the current understanding of how plants control various types of regeneration and discuss how developmental and environmental constraints influence these regulatory mechanisms.", "title": "" }, { "docid": "4b5ac7d23ffcfc965f5f54ef227099bc", "text": "In this brief, we propose a fast yet energy-efficient reconfigurable approximate carry look-ahead adder (RAP-CLA). This adder has the ability of switching between the approximate and exact operating modes making it suitable for both error-resilient and exact applications. The structure, which is more area and power efficient than state-of-the-art reconfigurable approximate adders, is achieved by some modifications to the conventional carry look ahead adder (CLA). The efficacy of the proposed RAP-CLA adder is evaluated by comparing its characteristics to those of two state-of-the-art reconfigurable approximate adders as well as the conventional (exact) CLA in a 15 nm FinFET technology. The results reveal that, in the approximate operating mode, the proposed 32-bit adder provides up to 55% and 28% delay and power reductions compared to those of the exact CLA, respectively, at the cost of up to 35.16% error rate. It also provides up to 49% and 19% lower delay and power consumption, respectively, compared to other approximate adders considered in this brief. Finally, the effectiveness of the proposed adder on two image processing applications of smoothing and sharpening is demonstrated.", "title": "" }, { "docid": "ee54c02fb1856ccf4f11fe1778f0883c", "text": "Failure Mode, Mechanism and Effect Analysis (FMMEA) is a reliability analysis method which is used to study possible failure modes, failure mechanisms of each component, and to identify the effects of various failure modes on the components and functions. This paper introduces how to implement FMMEA on the Single Board Computer in detail, including system definition, identification of potential failure modes, analysis of failure cause, failure mechanism, and failure effect analysis. Finite element analysis is carried out for the Single Board Computer, including thermal stress analysis and vibration stress analysis. Temperature distribution and vibration modes are obtained, which are the inputs of physics of failure models. Using a variety of Physics of Failure models, the quantitative calculation of single point failure for the Single Board Computer are carried out. Results showed that the time to failure (TTF) of random access memory chip which is SOP (small outline package) is the shortest and the failure is due to solder joint fatigue failure caused by the temperature cycle. It is the weak point of the entire circuit board. Thus solder joint thermal fatigue failure is the main failure mechanism of the Single Board Computer. In the implementation process of PHM for the Single Board Computer, the failure condition of this position should be monitored.", "title": "" } ]
scidocsrr
0a9890541b2ad9cdd30e9fd33697c366
Prescriptive Control of Business Processes - New Potentials Through Predictive Analytics of Big Data in the Process Manufacturing Industry
[ { "docid": "d170d7cf20b0a848bb0d81c5d163b505", "text": "The organizational and social issues associated with the development, implementation and use of computer-based information systems have increasingly attracted the attention of information systems researchers. Interest in qualitative research methods such as action research, case study research and ethnography, which focus on understanding social phenomena in their natural setting, has consequently grown. Case study research is the most widely used qualitative research method in information systems research, and is well suited to understanding the interactions between information technology-related innovations and organizational contexts. Although case study research is useful as ameans of studying information systems development and use in the field, there can be practical difficulties associated with attempting to undertake case studies as a rigorous and effective method of research. This paper addresses a number of these difficulties and offers some practical guidelines for successfully completing case study research. The paper focuses on the pragmatics of conducting case study research, and draws from the discussion at a panel session conducted by the authors at the 8th Australasian Conference on Information Systems, September 1997 (ACIS 97), from the authors' practical experiences, and from the case study research literature.", "title": "" } ]
[ { "docid": "2fcd5a776f5e57c89806f52d52bd90d1", "text": "This paper investigates a material-efficient axial pole pairing method for torque ripple reduction in a direct-drive outer-rotor surface-mounted permanent-magnet synchronous machine. The effects of the magnet pole arc width on the torque ripple characteristics of the machine are first established by both analytical and 2-D finite element approaches. Furthermore, the effectiveness of the axial pole pairing technique in mitigating the machine cogging torque, back electromotive force harmonics, and overall torque quality is comprehensively examined. Finally, 3-D finite element analysis and experiments are carried out to validate the proposed approach, and the results show that axial pole pairing can be cost efficiently implemented in terms of magnet material usage and assembly.", "title": "" }, { "docid": "f4166e4121dbd6f6ab209e6d99aac63f", "text": "In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects.", "title": "" }, { "docid": "deed7aab7e678b9474e11e05ebfefc04", "text": "Ultrathin films of single-walled carbon nanotubes (SWNTs) represent an attractive, emerging class of material, with properties that can approach the exceptional electrical, mechanical, and optical characteristics of individual SWNTs, in a format that, unlike isolated tubes, is readily suitable for scalable integration into devices. These features suggest the potential for realistic applications as conducting or semiconducting layers in diverse types of electronic, optoelectronic and sensor systems. This article reviews recent advances in assembly techniques for forming such films, modeling and experimental work that reveals their collective properties, and engineering aspects of implementation in sensors and in electronic devices and circuits with various levels of complexity. A concluding discussion provides some perspectives on possibilities for future work in fundamental and applied aspects.", "title": "" }, { "docid": "c50d4072f38c7d73c087c3442aee3113", "text": "In recent years, Semantic Web (SW) research has resulted in significant outcomes. 
Various industries have adopted SW technologies, while the ‘deep web’ is still pursuing the critical transformation point, in which the majority of data found on the deep web will be exploited through SW value layers. In this article we analyse the SW applications from a ‘market’ perspective. We are setting the key requirements for real-world information systems that are SW-enabled and we discuss the major difficulties for the SW uptake that has been delayed. This article contributes to the literature of SW and knowledge management providing a context for discourse towards best practices on SW-based information systems.", "title": "" }, { "docid": "d75763c0b265cbae740583f153928862", "text": "In this paper, we explore the concept of the smart parking system and their categories. The classifications of various existing systems are explained. The parking system handles various technologies, and the categories of those techniques are given. The functions of the nodes in wireless sensor networks are classified.", "title": "" }, { "docid": "225a492370efee6eca39f713026efe12", "text": "Researchers in the social and behavioral sciences routinely rely on quasi-experimental designs to discover knowledge from large data-bases. Quasi-experimental designs (QEDs) exploit fortuitous circumstances in non-experimental data to identify situations (sometimes called \"natural experiments\") that provide the equivalent of experimental control and randomization. QEDs allow researchers in domains as diverse as sociology, medicine, and marketing to draw reliable inferences about causal dependencies from non-experimental data. Unfortunately, identifying and exploiting QEDs has remained a painstaking manual activity, requiring researchers to scour available databases and apply substantial knowledge of statistics. However, recent advances in the expressiveness of databases, and increases in their size and complexity, provide the necessary conditions to automatically identify QEDs. In this paper, we describe the first system to discover knowledge by applying quasi-experimental designs that were identified automatically. We demonstrate that QEDs can be identified in a traditional database schema and that such identification requires only a small number of extensions to that schema, knowledge about quasi-experimental design encoded in first-order logic, and a theorem-proving engine. We describe several key innovations necessary to enable this system, including methods for automatically constructing appropriate experimental units and for creating aggregate variables on those units. We show that applying the resulting designs can identify important causal dependencies in real domains, and we provide examples from academic publishing, movie making and marketing, and peer-production systems. Finally, we discuss the integration of QEDs with other approaches to causal discovery, including joint modeling and directed experimentation.", "title": "" }, { "docid": "c32d61da51308397d889db143c3e6f9d", "text": "Children’s neurological development is influenced by their experiences. Early experiences and the environments in which they occur can alter gene expression and affect long-term neural development. Today, discretionary screen time, often involving multiple devices, is the single main experience and environment of children. Various screen activities are reported to induce structural and functional brain plasticity in adults. 
However, childhood is a time of significantly greater changes in brain anatomical structure and connectivity. There is empirical evidence that extensive exposure to videogame playing during childhood may lead to neuroadaptation and structural changes in neural regions associated with addiction. Digital natives exhibit a higher prevalence of screen-related ‘addictive’ behaviour that reflects impaired neurological reward-processing and impulse-control mechanisms. Associations are emerging between screen dependency disorders such as Internet Addiction Disorder and specific neurogenetic polymorphisms, abnormal neural tissue and neural function. Although abnormal neural structural and functional characteristics may be a precondition rather than a consequence of addiction, there may also be a bidirectional relationship. As is the case with substance addictions, it is possible that intensive routine exposure to certain screen activities during critical stages of neural development may alter gene expression resulting in structural, synaptic and functional changes in the developing brain leading to screen dependency disorders, particularly in children with predisposing neurogenetic profiles. There may also be compound/secondary effects on neural development. Screen dependency disorders, even at subclinical levels, involve high levels of discretionary screen time, inducing greater child sedentary behaviour thereby reducing vital aerobic fitness, which plays an important role in the neurological health of children, particularly in brain structure and function. Child health policy must therefore adhere to the principle of precaution as a prudent approach to protecting child neurological integrity and well-being. This paper explains the basis of current paediatric neurological concerns surrounding screen dependency disorders and proposes preventive strategies for child neurology and allied professions.", "title": "" }, { "docid": "658fbe3164e93515d4222e634b413751", "text": "A prediction market is a place where individuals can wager on the outcomes of future events. Those who forecast the outcome correctly win money, and if they forecast incorrectly, they lose money. People value money, so they are incentivized to forecast such outcomes as accurately as they can. Thus, the price of a prediction market can serve as an excellent indicator of how likely an event is to occur [1, 2]. Augur is a decentralized platform for prediction markets. Our goal here is to provide a blueprint of a decentralized prediction market using Bitcoin’s input/output-style transactions. Many theoretical details of this project, such as its game-theoretic underpinning, are touched on lightly or not at all. This work builds on (and is intended to be read as a companion to) the theoretical foundation established in [3].", "title": "" }, { "docid": "40a88d168ad559c1f68051e710c49d6b", "text": "Modern robotic systems tend to get more complex sensors at their disposal, resulting in complex algorithms to process their data. For example, camera images are being used to map their environment and plan their route. On the other hand, the robotic systems are becoming mobile more often and need to be as energy-efficient as possible; quadcopters are an example of this. These two trends interfere with each other: Data-intensive, complex algorithms require a lot of processing power, which is in general neither energy-friendly nor mobile-friendly. 
In this paper, we describe how to move the complex algorithms to a computing platform that is not part of the mobile part of the setup, i.e. to offload the processing part to a base station. We use the ROS framework for this, as ROS provides a lot of existing computation solutions. On the mobile part of the system, our hard real-time execution framework, called LUNA, is used, to make it possible to run the loop controllers on it. The design of a `bridge node' is explained, which is used to connect the LUNA framework to ROS. The main issue to tackle is to subscribe to an arbitrary ROS topic at run-time, instead of defining the ROS topics at compile-time. Furthermore, it is shown that this principle is working and the requirements of network bandwidth are discussed.", "title": "" }, { "docid": "e2fb4ed617cffabba2f28b95b80a30b3", "text": "The importance of information security education, information security training, and information security awareness in organisations cannot be overemphasised. This paper presents working definitions for information security education, information security training and information security awareness. An investigation to determine if any differences exist between information security education, information security training and information security awareness was conducted. This was done to help institutions understand when they need to train or educate employees and when to introduce information security awareness programmes. A conceptual analysis based on the existing literature was used for proposing working definitions, which can be used as a reference point for future information security researchers. Three important attributes (namely focus, purpose and method) were identified as the distinguishing characteristics of information security education, information security training and information security awareness. It was found that these information security concepts are different in terms of their focus, purpose and methods of delivery.", "title": "" }, { "docid": "f19f4d2c9e05f30e21d09ab41da9ec47", "text": "Multilayered artificial neural networks have found widespread utility in classification and recognition applications. The scale and complexity of such networks together with the inadequacies of general purpose computing platforms have led to a significant interest in the development of efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights, motivated primarily by the observation that the number of synapses is orders of magnitude larger than the number of neurons. Typical digital CMOS implementations of such large-scale networks are power hungry. In order to minimize the power consumption, the digital neurons could be operated reliably at scaled voltages by reducing the clock frequency. On the contrary, the on-chip synaptic storage designed using a conventional 6T SRAM is susceptible to bitcell failures at reduced voltages. However, the intrinsic error resiliency of neural networks to small synaptic weight perturbations enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200 mV from the nominal operating voltage (950 mV) for practically no loss (less than 0.5%) in accuracy (22 nm predictive technology). Scaling beyond that causes substantial performance degradation owing to increased probability of failures in the MSBs of the synaptic weights. 
We, therefore propose a significance driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due to decoupled read and write paths. In an effort to further minimize the area penalty, we present a synaptic-sensitivity driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit to system-level simulation framework shows that the proposed synaptic-sensitivity driven architecture provides a 30.91% reduction in the memory access power with a 10.41% area overhead, for less than 1% loss in the classification accuracy.", "title": "" }, { "docid": "72b3fbd8c7f03a4ad1e36ceb5418cba6", "text": "The risk for multifactorial diseases is determined by risk factors that frequently apply across disorders (universal risk factors). To investigate unresolved issues on etiology of and individual’s susceptibility to multifactorial diseases, research focus should shift from single determinant-outcome relations to effect modification of universal risk factors. We present a model to investigate universal risk factors of multifactorial diseases, based on a single risk factor, a single outcome measure, and several effect modifiers. Outcome measures can be disease overriding, such as clustering of disease, frailty and quality of life. “Life course epidemiology” can be considered as a specific application of the proposed model, since risk factors and effect modifiers of multifactorial diseases typically have a chronic aspect. Risk factors are categorized into genetic, environmental, or complex factors, the latter resulting from interactions between (multiple) genetic and environmental factors (an example of a complex factor is overweight). The proposed research model of multifactorial diseases assumes that determinant-outcome relations differ between individuals because of modifiers, which can be divided into three categories. First, risk-factor modifiers that determine the effect of the determinant (such as factors that modify gene-expression in case of a genetic determinant). Second, outcome modifiers that determine the expression of the studied outcome (such as medication use). Third, generic modifiers that determine the susceptibility for multifactorial diseases (such as age). A study to assess disease risk during life requires phenotype and outcome measurements in multiple generations with a long-term follow up. Multiple generations will also enable to separate genetic and environmental factors. Traditionally, representative individuals (probands) and their first-degree relatives have been included in this type of research. We put forward that a three-generation design is the optimal approach to investigate multifactorial diseases. This design has statistical advantages (precision, multiple-informants, separation of non-genetic and genetic familial transmission, direct haplotype assessment, quantify genetic effects), enables unique possibilities to study social characteristics (socioeconomic mobility, partner preferences, between-generation similarities), and offers practical benefits (efficiency, lower non-response). LifeLines is a study based on these concepts. It will be carried out in a representative sample of 165,000 participants from the northern provinces of the Netherlands. 
LifeLines will contribute to the understanding of how universal risk factors are modified to influence the individual susceptibility to multifactorial diseases, not only at one stage of life but cumulatively over time: the lifeline.", "title": "" }, { "docid": "aff08e12e131749f90dac656c3f40853", "text": "These days, there are more than a million recipes on the Web. When you search for a recipe with one query such as “nikujaga,” the name of a typical Japanese food, you can find thousands of “nikujaga” recipes as the result. Even if you focus on only the top ten results, it is still difficult to find out the characteristic feature of each recipe because cooking is a work-flow including parallel procedures. According to our survey, people place the most importance on the differences of cooking procedures when they compare the recipes. However, such differences are difficult to extract just by comparing the recipe texts, as existing methods do. Therefore, our system extracts (i) a general way to cook as a summary of cooking procedures and (ii) the characteristic features of each recipe by analyzing the work-flows of the top ten results. In the experiments, our method succeeded in extracting 54% of manually extracted features while the previous research addressed 37% of them.", "title": "" }, { "docid": "594c6c6f13c64639f864787cb0ac7a0e", "text": "This paper describes a hybrid routing algorithm, Flare, which could be used for payment routing in the Lightning Network. The design goal for the algorithm is to ensure that routes can be found as quickly as possible. This is accomplished at the cost of each node proactively gathering information about the Lightning Network topology. The collected information includes both payment channels close to the node in terms of hop distance and paths to beacon nodes, which are close to the node in the node address space. The usage of beacon nodes serves to supplement a node’s local view of the network with randomly selected feeler nodes deeper in the network. The combination of local and beacon nodes allows a node to minimize routing state, while finding routes to any given node with high probability. We perform simulations of the routing algorithm and find it to be scalable to at least 100,000 nodes. Bitcoin is the world’s most widely used and valuable digital currency [1], which allows anyone to send value without a trusted intermediary or depository. Bitcoin contains an advanced scripting system allowing users to program instructions for funds [2]. Bitcoin aggregates transactions into blocks with the expected time interval of 10 minutes between blocks. Bitcoin payments are widely regarded as practically irreversible after six confirmations (i.e., five additional blocks built on top of the block containing the transaction in question) [3], or about one hour. Micropayments (i.e., payments less than a few cents) could take a long time to get confirmed, with Bitcoin transaction fees rendering such payments unviable. 
These conditions necessitate the development of solutions like overlays, which could combine the advantages of the Bitcoin blockchain (i.e., its security and censorship resistance) with guarantees of near-instant payment processing and affordable micropayments. The Lightning Network (LN) [4] is one promising overlay solution to the problems mentioned above. LN operates as a network of bidirectional payment channels transferring value out of band, i.e., not recording transactions on the Bitcoin blockchain. LN is designed to be decentralized (i.e., a failure of a single party or a few parties would not render the network inoperable) and trustless (i.e., at no time would the custody of users’ funds be delegated to trusted third parties). Security of the network is enforced by blockchain smart contracts using Bitcoin’s built-in scripting without creating on-blockchain transactions for individual payments. LN could be deployed for other blockchains using a Bitcoin-like data model with unspent transaction outputs; furthermore, LN could be used for trustless inter-ledger payments. Other payment channel network concepts have emerged (e.g., Stroem [5], Impulse [6], Decker–Wattenhofer channels [7]); we focus our attention on LN as a more popular concept, which embraces the spirit of Bitcoin in its trustless and decentralized nature. LN payments would not need block confirmations, thus being near-instant in its normal case of operation. A single payment on LN might involve several payment channels; however, payments are designed to be atomic (either the entire payment succeeds or fails). Thus, LN could be used at retail point-of-sale terminals, with machine-to-machine transactions, or anywhere instant payments are needed. LN could also allow for scalable bitcoin micropayments (e.g., tips on social websites) and micropayment streams (e.g., payment for online videos), therefore contributing to the expansion of the Bitcoin ecosystem. One of the defining features of LN is the ability to route payments between network users with one or more intermediaries without the need to trust them; i.e., it may not be necessary for parties to create a direct payment channel in order to complete a payment. Correspondingly, a problem of particular importance for LN is payment routing, i.e., finding a path of payment channels, which could route the payment from the sender to the recipient and would be optimal according to certain criteria (e.g., time to complete the payment and/or routing expenses). Without a fully automated solution to payment routing, it could be difficult for LN to establish a foothold.", "title": "" }, { "docid": "de9ed927d395f78459e84b1c27f9c746", "text": "JuMP is an open-source modeling language that allows users to express a wide range of optimization problems (linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear) in a high-level, algebraic syntax. JuMP takes advantage of advanced features of the Julia programming language to offer unique functionality while achieving performance on par with commercial modeling tools for standard tasks. In this work we will provide benchmarks, present the novel aspects of the implementation, and discuss how JuMP can be extended to new problem classes and composed with state-of-the-art tools for visualization and interactivity.", "title": "" }, { "docid": "1c6e9cbb9d935cdbe8e2f361b07398d9", "text": "We present a fluid-dynamic model for the simulation of urban traffic networks with road sections of different lengths and capacities. 
The model allows one to efficiently simulate the transitions between free and congested traffic, taking into account congestion-responsive traffic assignment and adaptive traffic control. We observe dynamic traffic patterns which significantly depend on the respective network topology. Synchronization is only one interesting example and implies the emergence of green waves. In this connection, we will discuss adaptive strategies of traffic light control which can considerably improve throughputs and travel times, using self-organization principles based on local interactions between vehicles and traffic lights. Similar adaptive control principles can be applied to other queueing networks such as production systems. In fact, we suggest turning push operation of traffic systems into pull operation: By removing vehicles as fast as possible from the network, queuing effects can be most efficiently avoided. The proposed control concept can utilize the cheap sensor technologies available in the future and leads to reasonable operation modes. It is flexible,", "title": "" }, { "docid": "15195baf3ec186887e4c5ee5d041a5a6", "text": "We show that generating English Wikipedia articles can be approached as a multidocument summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.", "title": "" }, { "docid": "457c0f8f7cde71eb65e83f4d859e6c94", "text": "In the cloud computing environment, a new data management model is now in use that enables data integration and access on a large scale: cloud computing as a service, termed Database-as-a-Service (DAAS). Through this model, the service provider offers customers management functionalities as well as the expensive hardware. Data privacy is the major security determinant in DAAS because data will be shared with a third party; an untrusted server is dangerous and unsafe for the user. This paper addresses the security element in the cloud environment. It suggests a technique to enhance the security of the cloud database. This technique provides flexible multilevel and hybrid security. It uses RSA, Triple DES and Random Number generator algorithms as encryption tools.", "title": "" }, { "docid": "bda04f2eaee74979d7684681041e19bd", "text": "In March of 2016, Google DeepMind's AlphaGo, a computer Go-playing program, defeated the reigning human world champion Go player, 4-1, a feat far more impressive than previous victories by computer programs in chess (IBM's Deep Blue) and Jeopardy (IBM's Watson). The main engine behind the program combines machine learning approaches with a technique called Monte Carlo tree search. Current versions of Monte Carlo tree search used in Go-playing algorithms are based on a version developed for games that traces its roots back to the adaptive multi-stage sampling simulation optimization algorithm for estimating value functions in finite-horizon Markov decision processes (MDPs) introduced by Chang et al. 
(2005), which was the first use of Upper Confidence Bounds (UCBs) for Monte Carlo simulation-based solution of MDPs. We review the main ideas in UCB-based Monte Carlo tree search by connecting it to simulation optimization through the use of two simple examples: decision trees and tic-tac-toe.", "title": "" }, { "docid": "b03ae1c57ed0e5c49fb99a8232d694d6", "text": "Introduction The Neolithic Hongshan Culture flourished between 4500 and 3000 BCE in what is today northeastern China and Inner Mongolia (Figure 1). Village sites are found in the northern part of the region, while the two ceremonial sites of Dongshanzui and Niuheliang are located in the south, where villages are fewer (Guo 1995, Li 2003). The Hongshan inhabitants included agriculturalists who cultivated millet and pigs for subsistence, and accomplished artisans who carved finely crafted jades and made thin black-on-red pottery. Organized labor of a large number of workers is suggested by several impressive constructions, including an artificial hill containing three rings of marble-like stone, several high cairns with elaborate interiors and a 22 meter long building which contained fragments of life-sized statues. One fragment was a face with inset green jade eyes (Figure 2). A ranked society is implied by the burials, which include decorative jades made in specific, possibly iconographic, shapes. It has been argued previously that the sizes and locations of the mounded tombs imply at least three elite ranks (Nelson 1996).", "title": "" } ]
scidocsrr
bacf432b3f231d7be68f71b627cfc327
Machine translation evaluation versus quality estimation
[ { "docid": "4292a60a5f76fd3e794ce67d2ed6bde3", "text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.", "title": "" } ]
[ { "docid": "dfca5783e6ec34d228278f14c5719288", "text": "Generative Adversarial networks (GANs) have obtained remarkable success in many unsupervised learning tasks and unarguably, clustering is an important unsupervised learning problem. While one can potentially exploit the latentspace back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to the latent space) trained jointly with a clustering specific loss, we are able to achieve clustering in the latent space. Our results show a remarkable phenomenon that GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors. We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets.", "title": "" }, { "docid": "88d8783128ea9052f99b7f491d042029", "text": "1.1 UWB antennas in the field of high pulsed power For the last few years, the generation of high-power electromagnetic waves has been one of the major applications of high pulsed power (HPP). It has aroused great interest in the scientific community since it is at the origin of several technological advances. Several kinds of high power radiation sources have been created. There currently appears to be a strong inclination towards compact and autonomous sources of high power microwaves (HPM) (Cadilhon et al., 2010; Pécastaing et al., 2009). The systems discussed here always consist of an electrical high pulsed power generator combined with an antenna. The HPP generator consists of a primary energy source, a power-amplification system and a pulse forming stage. It sends the energy to a suitable antenna. When this radiating element has good electromagnetic characteristics over a wide band of frequency and high dielectric strength, it is possible to generate high power electromagnetic waves in the form of pulses. The frequency band of the wave that is radiated can cover a very broad spectrum of over one decade in frequency. In this case, the technique is of undoubted interest for a wide variety of civil and military applications. Such applications can include, for example, ultra-wideband (UWB) pulse radars to detect buried mines or to rescue buried people, the production of nuclear electromagnetic pulse (NEMP) simulators for electromagnetic compatibility and vulnerability tests on electronic and IT equipment, and UWB communications systems and electromagnetic jamming, the principle of which consists of focusing high-power electromagnetic waves on an identified target to compromise the target’s mission by disrupting or destroying its electronic components. Over the years, the evolution of the R&D program for the development of HPM sources has evidenced the technological difficulties intrinsic to each elementary unit and to each of the physical parameters considered. Depending on the wave form chosen, there is in fact a very wide range of possibilities for the generation of microwave power. The only real question is", "title": "" }, { "docid": "d157e462a13515132e73888101d48ab6", "text": "This paper describes the development of a fuzzy gain scheduling scheme of PID controllers for process control. 
Fuzzy rules and reasoning are utilized on-line to determine the controller parameters based on the error signal and its first difference. Simulation results demonstrate that better control performance can be achieved in comparison with ZieglerNichols controllers and Kitamori’s PID controllers.", "title": "" }, { "docid": "73edcacf75e82f0f83c6f8c7e832854d", "text": "As a key technology of home area networks in smart grids, fine-grained power usage monitoring may help conserve electricity. Several existing systems achieve this goal by exploiting appliances' power usage signatures identified in labor-intensive in situ training processes. Recent work shows that autonomous power usage monitoring can be achieved by supplementing a smart meter with distributed sensors that detect the working states of appliances. However, sensors must be carefully installed for each appliance, resulting in high installation cost. This paper presents Supero - the first ad hoc sensor system that can monitor appliance power usage without supervised training. By exploiting multisensor fusion and unsupervised machine learning algorithms, Supero can classify the appliance events of interest and autonomously associate measured power usage with the respective appliances. Our extensive evaluation in five real homes shows that Supero can estimate the energy consumption with errors less than 7.5%. Moreover, non-professional users can quickly deploy Supero with considerable flexibility.", "title": "" }, { "docid": "28cdd3fafd052941c496d246e0df244b", "text": "Writing Windows NT device drivers can be a daunting task. Device drivers must be fully re-entrant, must use only limited resources and must be created with special development environments. Executing device drivers in user-mode offers significant coding advantages. User-mode device drivers have access to all user-mode libraries and applications. They can be developed using standard development tools and debugged on a single machine. Using the Proxy Driver to retrieve I/O requests from the kernel, user-mode drivers can export full device services to the kernel and applications. User-mode device drivers offer enormous flexibility for emulating devices and experimenting with new file systems. Experimental results show that in many cases, the overhead of moving to user-mode for processing I/O can be masked by the inherent costs of accessing physical devices.", "title": "" }, { "docid": "1725dcaa94fef84f8a29ffea9ea311bd", "text": "Is moral judgment accomplished by intuition or conscious reasoning? An answer demands a detailed account of the moral principles in question. We investigated three principles that guide moral judgments: (a) Harm caused by action is worse than harm caused by omission, (b) harm intended as the means to a goal is worse than harm foreseen as the side effect of a goal, and (c) harm involving physical contact with the victim is worse than harm involving no physical contact. Asking whether these principles are invoked to explain moral judgments, we found that subjects generally appealed to the first and third principles in their justifications, but not to the second. 
This finding has significance for methods and theories of moral psychology: The moral principles used in judgment must be directly compared with those articulated in justification, and doing so shows that some moral principles are available to conscious reasoning whereas others are not.", "title": "" }, { "docid": "f7e19e14c90490e1323e47860d21ec4d", "text": "There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery — including DNA-sequencing technologies and analysis algorithms — need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision.", "title": "" }, { "docid": "99858cd85b95757b3d6c3aa0cd65e0b6", "text": "The geological and ecological (geo-ecological) environment of the coastal coal-mining city is a basic element for human subsistence, and it connects the regional economy with social sustainable development. Using Longkou, a coastal mining city, as an example, a geological and ecological environmental quality assessment was conducted based on spatiotemporal big data. Remote sensing images, a digital elevation model (DEM), and precipitation and interpolation processing were used to generate factor layers. A synthetic evaluation index system was set up, including physical geography, geological conditions, mining intensity, ecological environmental recovery and geological hazards associated with mining. Moreover, an analytical hierarchy process was used to calculate the factor weight of each evaluation factor, and a consistency check was performed to build an assessment model of the geoecological environment of Longkou. The results indicate that multi-factor spatiotemporal big data provide a scientific assessment of the geo-ecological environmental quality with indispensable data and methods. The spatial distribution of geo-ecological environmental quality presented clear specialization of zonality, showing poor quality in the coastal coal mine ore concentration area and good quality in the inland and mountainous areas of Nanshan Mountain. The geo-ecological environmental quality of Longkou was divided into 5 levels as worst, poor, middle, good and better districts. The good and better districts accounted for 76.763% of the total area of the assessment region, indicating that the geoecological environmental quality of the study area was in good condition. The mining intensity and ecological environment recovery were major factors in determining the regional variation of the geoecological environment of Longkou. The possible causes inducing uncertainties and limitations in evaluation of the geo-ecological environmental quality were discussed. The model combining AHP with GIS proposed in this paper is an effective means of evaluating regional geo-ecological environmental quality. © 2016 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "2621777d5f39092295c3f7c548b255f8", "text": "Caller ID (caller identification) is a service provided by telephone operators where the phone number and/or the name of the caller is transmitted to inform the callee who is calling. Today, most people trust the caller ID information and some banks even use Caller ID to authenticate customers. However, with the proliferation of smartphones and VoIP, it is easy to spoof caller ID information by installing a particular application on the smartphone or by using service providers that offer Caller ID spoofing. As the phone network is fragmented between countries and companies and upgrades of old hardware is costly, no mechanism is available today to let end-users easily detect Caller ID spoofing attacks. In this article, we propose a new approach of using end-to-end caller ID verification schemes that leverage features of the existing phone network infrastructure (CallerDec ). We design an SMS-based and a timing-based version of CallerDec that works with existing combinations of landlines, cellular and VoIP networks and can be deployed at the liberty of the users. We implemented both CallerDec schemes as an App for Android-based phones and validated their effectiveness in detecting spoofing attacks in various scenarios.", "title": "" }, { "docid": "b8a681b6c928d8b84fa5f30154d5af85", "text": "Medicine relies on the use of pharmacologically active agents (drugs) to manage and treat disease. However, drugs are not inherently effective; the benefit of a drug is directly related to the manner by which it is administered or delivered. Drug delivery can affect drug pharmacokinetics, absorption, distribution, metabolism, duration of therapeutic effect, excretion, and toxicity. As new therapeutics (e.g., biologics) are being developed, there is an accompanying need for improved chemistries and materials to deliver them to the target site in the body, at a therapeutic concentration, and for the required period of time. In this Perspective, we provide an historical overview of drug delivery and controlled release followed by highlights of four emerging areas in the field of drug delivery: systemic RNA delivery, drug delivery for localized therapy, oral drug delivery systems, and biologic drug delivery systems. In each case, we present the barriers to effective drug delivery as well as chemical and materials advances that are enabling the field to overcome these hurdles for clinical impact.", "title": "" }, { "docid": "dcdb6242febbef358efe5a1461957291", "text": "Neuromorphic Engineering has emerged as an exciting research area, primarily owing to the paradigm shift from conventional computing architectures to data-driven, cognitive computing. There is a diversity of work in the literature pertaining to neuromorphic systems, devices and circuits. This review looks at recent trends in neuromorphic engineering and its sub-domains, with an attempt to identify key research directions that would assume significance in the future. We hope that this review would serve as a handy reference to both beginners and experts, and provide a glimpse into the broad spectrum of applications of neuromorphic hardware and algorithms. 
Our survey indicates that neuromorphic engineering holds a promising future, particularly with growing data volumes, and the imminent need for intelligent, versatile computing.", "title": "" }, { "docid": "bbd378407abb1c2a9a5016afee40c385", "text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.", "title": "" }, { "docid": "4d52f2c0ec2f5f96f2676dfc012bc2d8", "text": "We have expanded the field of \"DNA computers\" to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the \"Knight problem,\" which asks generally what configurations of knights can one place on an n x n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 x 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine \"bits\" or placeholders in a combinatorial RNA library. We recovered a set of \"winning\" molecules that describe solutions to this problem.", "title": "" }, { "docid": "e82df2786524c8a427c8aecfc5ab817a", "text": "This paper presents 2×2 patch array antenna for 2.45 GHz industrial, scientific and medical (ISM) band application. In this design, four array radiating elements interconnected with a transmission line and excited by 50Ω subminiature (SMA). The proposed antenna structure is combined with a reflector in order to investigate the effect of air gap between radiating element and reflector in terms of reflection coefficient (S11) bandwidth and realized gain. The analysis on the effect of air gap has significantly achieved maximum reflection coefficient and realized gain of -16 dB and 19.29 dBi respectively at 2.45 GHz.", "title": "" }, { "docid": "45f709e638c044b077616a225c441f1f", "text": "We study the unsupervised learning of CNNs for optical flow estimation using proxy ground truth data. Supervised CNNs, due to their immense learning capacity, have shown superior performance on a range of computer vision problems including optical flow prediction. They however require the ground truth flow which is usually not accessible except on limited synthetic data. Without the guidance of ground truth optical flow, unsupervised CNNs often perform worse as they are naturally ill-conditioned. We therefore propose a novel framework in which proxy ground truth data generated from classical approaches is used to guide the CNN learning. 
The models are further refined in an unsupervised fashion using an image reconstruction loss. Our guided learning approach is competitive with or superior to state-of-the-art approaches on three standard benchmark datasets yet is completely unsupervised and can run in real time.", "title": "" }, { "docid": "3cc9d3767cbfac13fcb7d363419eccad", "text": "SpeechPy is an open source Python package that contains speech preprocessing techniques, speech features, and important post-processing operations. It provides most frequent used speech features including MFCCs and filterbank energies alongside with the log-energy of filter-banks. The aim of the package is to provide researchers with a simple tool for speech feature extraction and processing purposes in applications such as Automatic Speech Recognition and Speaker Verification.", "title": "" }, { "docid": "3cd706fd5899efabf7efa45631ad7fdb", "text": "This paper describes a pattern-based method to automatically enrich a core ontology with the definitions of a domain glossary. We show an application of our methodology to the cultural heritage domain, using the CIDOC CRM core ontology. To enrich the CIDOC, we use available resources such as the AAT art and architecture glossary, WordNet, the Dmoz taxonomy for named entities, and others.", "title": "" }, { "docid": "564591c62475a2f9ec1eafb8ce95ae32", "text": "IT companies worldwide have started to improve their service management processes based on best practice frameworks, such as IT Infrastructure Library (ITIL). However, many of these companies face difficulties in demonstrating the positive outcomes of IT service management (ITSM) process improvement. This has led us to investigate the research problem: What positive impacts have resulted from IT service management process improvement? The main contributions of this paper are 1) to identify the ITSM process improvement outcomes in two IT service provider organizations and 2) provide advice as lessons learnt.", "title": "" }, { "docid": "8ce46c28f967ef5ab76548630983748a", "text": "Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. 
Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research.", "title": "" }, { "docid": "8e03f4410676fb4285596960880263e9", "text": "Fuzzy computing (FC) has made a great impact in capturing human domain knowledge and modeling non-linear mapping of input-output space. In this paper, we describe the design and implementation of FC systems for detection of money laundering behaviors in financial transactions and monitoring of distributed storage system load. Our objective is to demonstrate the power of FC for real-world applications which are characterized by imprecise, uncertain data, and incomplete domain knowledge. For both applications, we designed fuzzy rules based on experts’ domain knowledge, depending on money laundering scenarios in transactions or the “health” of a distributed storage system. In addition, we developped a generic fuzzy inference engine and contributed to the open source community.", "title": "" } ]
scidocsrr
68a9a4cbab57de7603a6638f2b7c90a9
An Accurate and Fast Current-Biased Voltage-Programmed AMOLED Pixel Circuit With OLED Biased in AC Mode
[ { "docid": "580a420403aba8c8b6bcf06a0aff3b9f", "text": "This paper reviews the mechanisms underlying visible light detection based on phototransistors fabricated using amorphous oxide semiconductor technology. Although this family of materials is perceived to be optically transparent, the presence of oxygen deficiency defects, such as vacancies, located at subgap states, and their ionization under illumination, gives rise to absorption of blue and green photons. At higher energies, we have the usual band-to-band absorption. In particular, the oxygen defects remain ionized even after illumination ceases, leading to persistent photoconductivity, which can limit the frame-rate of active matrix imaging arrays. However, the persistence in photoconductivity can be overcome through deployment of a gate pulsing scheme enabling realistic frame rates for advanced applications such as sensor-embedded display for touch-free interaction.", "title": "" } ]
[ { "docid": "20e87d213dcea9ae58730a1849cc5c9b", "text": "Bitcoin is a peer-to-peer electronic cash system that maintains a public ledger with all transactions. The public availability of this information has implications for the privacy of the users. The public ledger consists of transactions that transfer funds from a set of inputs to a set of outputs. Both inputs and outputs are linked to Bitcoin addresses. In principle, the addresses are pseudonymous. In practice, it is sometimes possible to link Bitcoin addresses to real identities with the consequent privacy leaks. The possibilities of linking addresses to owners are multiplied when addresses are reused to receive funds multiple times. The reuse of addresses also multiplies the amount of private information that is leaked when an address is linked to a real identity. In this work we describe privacy-leaking effects of address reuse and gather statistics of address reuse in the Bitcoin network. We also describe collaborative (CoinJoin) transactions that prevent the privacy attacks that have been published in the literature. Then we analyze the Blockchain to find transactions that could potentially be CoinJoin transactions.", "title": "" }, { "docid": "e98b2cb8bfc56fd2eb75352eec0346a6", "text": "Decreasing magnetic resonance (MR) image acquisition times can potentially reduce procedural cost and make MR examinations more accessible. Compressed sensing (CS)based image reconstruction methods, for example, decrease MR acquisition time by reconstructing high-quality images from data that were originally sampled at rates inferior to the NyquistShannon sampling theorem. Iterative algorithms with data regularization are the standard approach to solving ill-posed, CS inverse problems. These solutions are usually slow, therefore, preventing near-real time image reconstruction. Recently, deeplearning methods have been used to solve the CS MR reconstruction problem. These proposed methods have the advantage of being able to quickly reconstruct images in a single pass using an appropriately trained network. Some recent studies have demonstrated that the quality of their reconstruction equals and sometimes even surpasses the quality of the conventional iterative approaches. A variety of different network architectures (e.g., U-nets and Residual U-nets) have been proposed to tackle the CS reconstruction problem. A drawback of these architectures is that they typically only work on image domain data. For undersampled data, the images computed by applying the inverse Fast Fourier Transform (iFFT) are aliased. In this work we propose a hybrid architecture that works both in the k-space (or frequency-domain) and the image (or spatial) domains. Our network is composed of a complex-valued residual U-net in the k-space domain, an iFFT operation, and a real-valued Unet in the image domain. Our experiments demonstrated, using MR raw k-space data, that the proposed hybrid approach can potentially improve CS reconstruction compared to deep-learning networks that operate only in the image domain. In this study we compare our method with four previously published deep neural networks and examine their ability to reconstruct images that are subsequently used to generate regional volume estimates. We evaluated undersampling ratios of 75% and 80%. Our technique was ranked second in the quantitative analysis, but qualitative analysis indicated that our reconstruction performed the best in hard to reconstruct regions, such as the cerebellum. 
All images reconstructed with our method were successfully post-processed, and showed good volumetry agreement compared with the fully sampled reconstruction measures.", "title": "" }, { "docid": "dda021771ca1b1e3c56d978149fb30c3", "text": "Intelligent interaction between humans and computers has been a dream of artificial intelligence since the beginning of digital era and one of the original motivations behind the creation of artificial intelligence. A key step towards the achievement of such an ambitious goal is to enable the Question Answering systems understand the information need of the user. In this thesis, we attempt to enable the QA system’s ability to understand the user’s information need by three approaches. First, an clarification question generation method is proposed to help the user clarify the information need and bridge information need gap between QA system and the user. Next, a translation based model is obtained from the large archives of Community Question Answering data, to model the information need behind a question and boost the performance of question recommendation. Finally, a fine-grained classification framework is proposed to enable the systems to recommend answered questions based on information need satisfaction.", "title": "" }, { "docid": "6a1411e0ae6477ad2280dcf941a9fa93", "text": "Measurement of human urinary carcinogen metabolites is a practical approach for obtaining important information about tobacco and cancer. This review presents currently available methods and evaluates their utility. Carcinogens and their metabolites and related compounds that have been quantified in the urine of smokers or non-smokers exposed to environmental tobacco smoke (ETS) include trans,trans-muconic acid (tt-MA) and S-phenylmercapturic acid (metabolites of benzene), 1- and 2-naphthol, hydroxyphenanthrenes and phenanthrene dihydrodiols, 1-hydroxypyrene (1-HOP), metabolites of benzo[a]pyrene, aromatic amines and heterocyclic aromatic amines, N-nitrosoproline, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol and its glucuronides (NNAL and NNAL-Gluc), 8-oxodeoxyguanosine, thioethers, mercapturic acids, and alkyladenines. Nitrosamines and their metabolites have also been quantified in the urine of smokeless tobacco users. The utility of these assays to provide information about carcinogen dose, delineation of exposed vs. non-exposed individuals, and carcinogen metabolism in humans is discussed. NNAL and NNAL-Gluc are exceptionally useful biomarkers because they are derived from a carcinogen- 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK)- that is specific to tobacco products. The NNAL assay has high sensitivity and specificity, which are particularly important for studies on ETS exposure. Other useful assays that have been widely applied involve quantitation of 1-HOP and tt-MA. Urinary carcinogen metabolite biomarkers will be critical components of future studies on tobacco and human cancer, particularly with respect to new tobacco products and strategies for harm reduction, the role of metabolic polymorphisms in cancer, and further evaluation of human carcinogen exposure from ETS.", "title": "" }, { "docid": "280e83986138daf0237e7502747b8a50", "text": "E-government adoption is the focus of many research studies. However, few studies have compared the adoption factors to identify the most salient predictors of e-government use. This study compares popular adoption constructs to identify the most influential. 
A survey was administered to elicit citizen perceptions of e-government services. The results of stepwise regression indicate perceived usefulness, trust of the internet, previous use of an e-government service and perceived ease of use all have a significant impact on one’s intention to use an e-government service. The implications for research and practice are discussed below.", "title": "" }, { "docid": "e5e1146fd0704357d865574da45ab2e5", "text": "This paper presents a compact low-loss tunable X-band bandstop filter implemented on a quartz substrate using both miniature RF microelectromechanical systems (RF-MEMS) capacitive switches and GaAs varactors. The two-pole filter is based on capacitively loaded folded-λ/2 resonators that are coupled to a microstrip line, and the filter analysis includes the effects of nonadjacent inter-resonator coupling. The RF-MEMS filter tunes from 11.34 to 8.92 GHz with a −20-dB rejection bandwidth of 1.18%-3.51% and a filter quality factor of 60-135. The GaAs varactor loaded filter tunes from 9.56 to 8.66 GHz with a −20-dB bandwidth of 1.65%-2% and a filter quality factor of 55-90. Nonlinear measurements at the filter null with Δf = 1 MHz show that the RF-MEMS loaded filter results in > 25-dBm higher third-order intermodulation intercept point and P-1 dB compared with the varactor loaded filter. Both filters show high rejection levels (> 24 dB) and low passband insertion loss (< 0.8 dB) from dc to the first spurious response at 19.5 GHz. The filter topology can be extended to higher order designs with an even number of poles.", "title": "" }, { "docid": "1fbe886cbbeb562f2583534a8d6a2b75", "text": "A serious problem in mining industrial data bases is that they are often incomplete, and a significant amount of data is missing, or erroneously entered. This paper explores the use of machine-learning based alternatives to standard statistical data completion (data imputation) methods, for dealing with missing data. We have approached the data completion problem using two well-known machine learning techniques. The first is an unsupervised clustering strategy which uses a Bayesian approach to cluster the data into classes. The classes so obtained are then used to predict multiple choices for the attribute of interest. The second technique involves modeling missing variables by supervised induction of a decision tree-based classifier. This predicts the most likely value for the attribute of interest. Empirical tests using extracts from industrial databases maintained by Honeywell customers have been done in order to compare the two techniques. These tests show both approaches are useful and have advantages and disadvantages. We argue that the choice between unsupervised and supervised classification techniques should be influenced by the motivation for solving the missing data problem, and discuss potential applications for the procedures we are developing.", "title": "" }, { "docid": "89b5d821fcb5f9a91612b4936b52ad83", "text": "We investigate the benefits of evaluating Mel-frequency cepstral coefficients (MFCCs) over several time scales in the context of automatic musical instrument identification for signals that are monophonic but derived from real musical settings. We define several sets of features derived from MFCCs computed using multiple time resolutions, and compare their performance against other features that are computed using a single time resolution, such as MFCCs, and derivatives of MFCCs. 
We find that in each task - pair-wise discrimination, and one vs. all classification - the features involving multiscale decompositions perform significantly better than features computed using a single time-resolution.", "title": "" }, { "docid": "4b28a0cca388452eb8ad5e0296ed73ba", "text": "Animal models are formidable tools to investigate the etiology, the course and the potential treatment of an illness. No convincing animal model of suicide has been produced to date, and despite the intensive study of thousands of animal species naturalists have not identified suicide in nonhuman species in field situations. When modeling suicidal behavior in the animal, the greatest challenge is reproducing the role of will and intention in suicide mechanics. To overcome this limitation, current investigations on animals focus on every single step leading to suicide in humans. The most promising endophenotypes worth investigating in animals are the cortisol social-stress response and the aggression/impulsivity trait, involving the serotonergic system. Astroglia, neurotrophic factors and neurotrophins are implied in suicide, too. The prevention of suicide rests on the identification and treatment of every element increasing the risk.", "title": "" }, { "docid": "2e85640668c9aa53993a3095117b6307", "text": "The article examines the structure of resultative participles in English: participles that denote a state resulting from a prior event, such as The cake is flattened or The metal is hammered.The analysis identifies distinct stative participles that derive from the different heights at which aspectual morphemes attach in a verbalizing structure.The Aspect head involved in resultative participles is shown to attach to a vP that is also found in (a) the formation of deadjectival verbs and (b) verb phrases with resultative secondary predicates, like John hammered the metal flat. These distinct constructions are shown to have a shared structural subcomponent.The analysis proposed here is compared with Lexicalist approaches employing the verbal versus adjectival passive distinction.It is shown that a uniformly syntactic analysis of the participles is superior to the Lexicalist alternative.", "title": "" }, { "docid": "87fa8c6c894208e24328aa9dbb71a889", "text": "In this paper, the design and measurements of a 8-12GHz high-efficiency MMIC high power amplifier (HPA) implemented in a 0.25μm GaAS pHEMT process is described. The 3-stage amplifier has demonstrated from 37% to 54% power-added efficiency (PAE) with 12W of output power and up to 27dB of small signal gain range from 8-12GHz. In particular, over the frequency band of 9-11 GHz, the circuit achieved above 45% PAE. The key to this design is determining and matching the optimum source and load impedance for PAE at the first two harmonics in output stage.", "title": "" }, { "docid": "bde4e8743d2146d3ee9af39f27d14b5a", "text": "For several decades now, there has been sporadic interest in automatically characterizing the speech impairment due to Parkinson's disease (PD). Most early studies were confined to quantifying a few speech features that were easy to compute. More recent studies have adopted a machine learning approach where a large number of potential features are extracted and the models are learned automatically from the data. In the same vein, here we characterize the disease using a relatively large cohort of 168 subjects, collected from multiple (three) clinics. 
We elicited speech using three tasks - the sustained phonation task, the diadochokinetic task and a reading task, all within a time budget of 4 minutes, prompted by a portable device. From these recordings, we extracted 1582 features for each subject using openSMILE, a standard feature extraction tool. We compared the effectiveness of three strategies for learning a regularized regression and find that ridge regression performs better than lasso and support vector regression for our task. We refine the feature extraction to capture pitch-related cues, including jitter and shimmer, more accurately using a time-varying harmonic model of speech. Our results show that the severity of the disease can be inferred from speech with a mean absolute error of about 5.5, explaining 61% of the variance and consistently well-above chance across all clinics. Of the three speech elicitation tasks, we find that the reading task is significantly better at capturing cues than diadochokinetic or sustained phonation task. In all, we have demonstrated that the data collection and inference can be fully automated, and the results show that speech-based assessment has promising practical application in PD. The techniques reported here are more widely applicable to other paralinguistic tasks in clinical domain.", "title": "" }, { "docid": "70ea4bbe03f2f733ff995dc4e8fea920", "text": "The spread of malicious or accidental misinformation in social media, especially in time-sensitive situations, such as real-world emergencies, can have harmful effects on individuals and society. In this work, we developed models for automated verification of rumors (unverified information) that propagate through Twitter. To predict the veracity of rumors, we identified salient features of rumors by examining three aspects of information spread: linguistic style used to express rumors, characteristics of people involved in propagating information, and network propagation dynamics. The predicted veracity of a time series of these features extracted from a rumor (a collection of tweets) is generated using Hidden Markov Models. The verification algorithm was trained and tested on 209 rumors representing 938,806 tweets collected from real-world events, including the 2013 Boston Marathon bombings, the 2014 Ferguson unrest, and the 2014 Ebola epidemic, and many other rumors about various real-world events reported on popular websites that document public rumors. The algorithm was able to correctly predict the veracity of 75% of the rumors faster than any other public source, including journalists and law enforcement officials. The ability to track rumors and predict their outcomes may have practical applications for news consumers, financial markets, journalists, and emergency services, and more generally to help minimize the impact of false information on Twitter.", "title": "" }, { "docid": "1bb5e01e596d09e4ff89d7cb864ff205", "text": "A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. 
First we show that the recently proposed bilinear CNN model [25] is an excellent generalpurpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and obtains better accuracy on the ImageNet dataset without requiring spatial jittering of data compared to corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at http://vis-www.cs.umass.edu/texture.", "title": "" }, { "docid": "a17fba415483bd148887c9f9e4d7a5a5", "text": "Geometry theorem proving forms a major and challenging component in the K-12 mathematics curriculum. A particular difficult task is to add auxiliary constructions (i.e., additional lines or points) to aid proof discovery. Although there exist many intelligent tutoring systems proposed for geometry proofs, few teach students how to find auxiliary constructions. And the few exceptions are all limited by their underlying reasoning processes for supporting auxiliary constructions. This paper tackles these weaknesses of prior systems by introducing an interactive geometry tutor, the Advanced Geometry Proof Tutor (AGPT). It leverages a recent automated geometry prover to provide combined benefits that any geometry theorem prover or intelligent tutoring system alone cannot accomplish. In particular, AGPT not only can automatically process images of geometry problems directly, but also can interactively train and guide students toward discovering auxiliary constructions on their own. We have evaluated AGPT via a pilot study with 78 high school students. The study results show that, on training students how to find auxiliary constructions, there is no significant perceived difference between AGPT and human tutors, and AGPT is significantly more effective than the state-of-the-art geometry solver that produces human-readable proofs.", "title": "" }, { "docid": "f4027b390c838a16e29be35778a44b93", "text": "Objective: our aim is to report the simultaneous occurence of cheilitis glandularis and actinic cheilitis on the lower lip of a middle-aged female patient. Case Report: the patient presented clinical features compatible with these two lesions, confirmed by histopathological exam. Conclusion: the importance of the present case is the rare concomitant occurrence of both conditions, with special concern towards the malignancy potential related to both diseases.", "title": "" }, { "docid": "893dd13c60e3e0366f0cf88368cc5bd2", "text": "Because privacy today is a major concern for mobile applications, network anonymizers are widely available on smartphones, such as Android. However despite the use of such anonymizers, in many cases applications are still able to identify the user and the device by different means than the IP address. The reason is that very often applications require device services and information that go beyond the capabilities of anonymous networks in protecting users’ identity and privacy. In this paper, we propose two solutions that address this problem. 
The first solution is based on an approach that shadows user and application data, device information, and resources that can reveal the user identity. Data shadowing is executed when the smartphone switches to the “anonymous modality”. Once the smartphone returns to work in the normal (i.e. non-anonymous) modality, application data, device information and resources are returned back to the state they had before the anonymous connection. The second solution is based on run-time modifications of Android application permissions. Permissions associated with sensitive information are dynamically revoked at run-time from applications when the smartphone is used under the anonymous modality. They are re-instated back when the smartphone returns to work in the normal modality. In addition, both solutions offer protection from applications that identify their users through traces left in the application’s data storage or through exchanging identifying data messages. We developed IdentiDroid, a customized Android operating system, to deploy these solutions and built IdentiDroid Profile Manager, a profile-based configuration tool that allows one to set different configurations for each installed Android application. With this tool, applications running within the same device are configured to be given different identifications and privileges to limit the uniqueness of device and user information. We analyzed 250 Android applications to determine what information, services, and permissions can identify users and devices. Our experiments show that when IdentiDroid is deployed and properly configured on Android devices, users’ anonymity is better guaranteed by either of the proposed solutions with no significant impact on most device applications.", "title": "" }, { "docid": "7f39974c1eb5dcecf2383ec9cd5abc42", "text": "Edited volumes are an imperfect format for the presentation of ideas, not least because their goals vary. Sometimes they aim simply to survey the field, at other times to synthesize and advance the field. I prefer the former for disciplines that by their nature are not disposed to achieve definitive statements (philosophy, for example). A volume on an empirical topic, however, by my judgment falls short if it closes without firm conclusions, if not on the topic itself, at least on the state of the art of its study. Facial Attractiveness does fall short of this standard, but not for lack of serious effort (especially appreciated are such features as the summary table in Chapter 5). Although by any measure an excellent and thorough review of the major strands of its topic, the volume’s authors are often in such direct conflict that the reader is disappointed that the editors do not, in the end, provide sufficient guidance about where the most productive research avenues lie. Every contribution is persuasive, but as they cannot all be correct, who is to win the day? An obvious place to begin is with the question, What is “attractiveness”? Most writers seem unaware of the problem, and how it might impact their research methodology. What, the reader wants to know, is the most defensible conceptualization of the focal phenomenon? Often an author focuses explicitly on the aesthetic dimension of “attractive,” treating it as a synonym for “beauty.” A recurring phrase in the book is that “beauty is in the eye of the beholder,” with the authors undertaking to argue whether this standard accurately describes social reality. They reach contradictory conclusions. Chapter 1 (by Adam Rubenstein et al.) 
finds the maxim to be a “myth” which, by chapter’s end, is presumably dispelled; Anthony Little and his co-authors in Chapter 3, however, view their contribution as “help[ing] to place beauty back into the eye of the beholder.” Other chapters take intermediate positions. Besides the aesthetic, “attractive” can refer to raw sexual appeal, or to more long-term relationship evaluations. Which kind of attractiveness one intends will determine the proper methodology to use, and thereby impact the likely experimental results. As only one example, if one intends to investigate aesthetic attraction, the sexual orientation of the judges does not matter, whereas it matters a great deal if one intends to investigate sexual or relationship attraction. Yet no study discussed in these", "title": "" }, { "docid": "50b91bfca95a61ffdad552694dc78315", "text": "The use of sensors and actuators as a form of controlling cyber-physical systems in resource networks has been integrated and referred to as the Internet of Things (IoT). However, the connectivity of many stand-alone IoT systems through the Internet introduces numerous cybersecurity challenges as sensitive information is prone to be exposed to malicious users. This paper focuses on the improvement of IoT cybersecurity from an ontological analysis, proposing appropriate security services adapted to the threats. The authors propose an ontology-based cybersecurity framework using knowledge reasoning for IoT, composed of two approaches: (1) design time, which provides a dynamic method to build security services through the application of a model-driven methodology considering the existing enterprise processes; and (2) run time, which involves monitoring the IoT environment, classifying threats and vulnerabilities, and actuating in the environment ensuring the correct adaptation of the existing services. Two validation approaches demonstrate the feasibility of our concept. This entails an ontology assessment and a case study with an industrial implementation.", "title": "" } ]
scidocsrr
18df8d0e591ead43c0f6920edff7f5a1
Event Coreference Resolution by Iteratively Unfolding Inter-dependencies among Events
[ { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" }, { "docid": "20acbae6f76e3591c8b696481baffc90", "text": "A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.", "title": "" } ]
[ { "docid": "acbe30f1834836a0a6fc4a3ccfbd8e51", "text": "Let   H K T n n    : 1 be a countable infinite family of k-strictly Pseudo contractive, uniformly weakly closed and inward mappings on a non empty, closed and strictly convex subset K of a real Hilbert space H in to H with     1 ) ( k k T F F is non empty. Let ) 1 , (k   and for each n ,   K hn : be defined by   K x T x x h n n          ) 1 ( : 0 inf ) ( .Then for each , 1 K x    k x h      , ) ( , max 1 1 1 , we define the Krasnoselskii-Mann type algorithm by n n n n n n x T x x    ) 1 ( 1     , where   ,... 2 , 1 , ) ( , max 1 1 1      n x h n n n n   and we prove the weak and strong convergence of the sequence  n x to a common fixed point of the family  1 n n T . Also we prove the weak and strong convergence theorems for the algorithm to the family of nonexpansive mappings in uniformly convex Banach space, which is more general than Hilbert space.", "title": "" }, { "docid": "41774102456b9ef6ab13f054ad3126e5", "text": "BACKGROUND\nThe current study aimed to explore the correct recognition of mental disorders across dementia, alcohol abuse, obsessive compulsive disorder (OCD), schizophrenia and depression, along with its correlates in a nursing student population. The belief in a continuum of symptoms from mental health to mental illness and its relationship with the non-identification of mental illness was also explored.\n\n\nMETHODS\nFive hundred students from four nursing institutions in Singapore participated in this cross-sectional online study. Respondents were randomly assigned to a vignette describing one of the five mental disorders before being asked to identify what the person in the vignette is suffering from. Continuum belief was assessed by rating their agreeableness with the following statement: \"Sometimes we all behave like X. It is just a question of how severe or obvious this condition is\".\n\n\nRESULTS\nOCD had the highest correct recognition rate (86%), followed by depression (85%), dementia (77%), alcohol abuse (58%) and schizophrenia (46%). For continuum belief, the percentage of respondents who endorsed symptom continuity were 70% for depression, 61% for OCD, 58% for alcohol abuse, 56% for dementia and 46% for schizophrenia. Of concern, we found stronger continuum belief to be associated with the non-identification of mental illness after controlling for covariates.\n\n\nCONCLUSIONS\nThere is a need to improve mental health literacy among nursing students. Almost a quarter of the respondents identified excessive alcohol drinking as depression, even though there was no indication of any mood symptom in the vignette on alcohol abuse. Further education and training in schizophrenia may need to be conducted. Healthcare trainees should also be made aware on the possible influence of belief in symptom continuity on one's tendency to under-attribute mental health symptoms as a mental illness.", "title": "" }, { "docid": "4ac3affdf995c4bb527229da0feb411d", "text": "Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. 
In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.\n Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.\n We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users.", "title": "" }, { "docid": "ca73a14417bbd9e56a155cd2309af20f", "text": "At frequencies below 1 GHz, vegetation is becoming transparent, the more so the lower the frequency. Tree clutter on the other hand tends to be as strong as in the microwave regime as at frequencies above 200 MHz. Below 100 MHz, i.e. in the VHF band, tree clutter levels are significantly smaller. Foliage penetration SAR is feasible at both UHF and VHF but has to overcome significant challenges. For one, resolution must be high, viz. of meter order at VHF and submeter order for UHF. In both cases resolution of wavelength order is thus called for, requiring special processing methods which will be discussed here. Secondly, the signal-to-noise budget is critical due to the severe radio frequency interference below 1 GHz. In fact SAR operation at these frequencies is not feasible unless there are some means to identify and remove the RF1. Thirdly, for SAR surveillance the target detection method is crucial. VHF resolution is too low to make any target recognition scheme effective as a means to reduce clutter false alarms. At UHF, even though resolution can be made high, intense forest clutter level creates a very difficult environment for target discrimination. These concerns and their remedies are discussed in the paper.", "title": "" }, { "docid": "9d1f1766cd8a43e89e6c8a09e37b081c", "text": "INTRODUCTION\nCardiac surgery has been the intervention of choice in many cases of cardiovascular diseases. Susceptibility to postoperative complications, cardiac rehabilitation is indicated. Therapeutic resources, such as virtual reality has been helping the rehabilitational process. The aim of the study was to evaluate the use of virtual reality in the functional rehabilitation of patients in the postoperative period.\n\n\nMETHODS\nPatients were randomized into two groups, Virtual Reality (VRG, n = 30) and Control (CG, n = 30). The response to treatment was assessed through the functional independence measure (FIM), by the 6-minute walk test (6MWT) and the Nottingham Health Profile (NHP). Evaluations were performed preoperatively and postoperatively.\n\n\nRESULTS\nOn the first day after surgery, patients in both groups showed decreased functional performance. 
However, the VRG showed lower reduction (45.7±2.3) when compared to CG (35.06±2.09, P<0.05) in first postoperative day, and no significant difference in performance on discharge day (P>0.05). In evaluating the NHP field, we observed a significant decrease in pain score at third assessment (P<0.05). These patients also had a higher energy level in the first evaluation (P<0.05). There were no differences with statistical significance for emotional reactions, physical ability, and social interaction. The length of stay was significantly shorter in patients of VRG (9.4±0.5 days vs. 12.2±0.9 days, P<0.05), which also had a higher 6MWD (319.9±19.3 meters vs. 263.5±15.4 meters, P<0.02).\n\n\nCONCLUSION\nAdjunctive treatment with virtual reality demonstrated benefits, with better functional performance in patients undergoing cardiac surgery.", "title": "" }, { "docid": "f6679ca9f6c9efcb4093a33af15176d3", "text": "This paper reports our recent finding that a laser that is radiated on a thin light-absorbing elastic medium attached on the skin can elicit a tactile sensation of mechanical tap. Laser radiation to the elastic medium creates inner elastic waves on the basis of thermoelastic effects, which subsequently move the medium and stimulate the skin. We characterize the associated stimulus by measuring its physical properties. In addition, the perceptual identity of the stimulus is confirmed by comparing it to mechanical and electrical stimuli by means of perceptual spaces. All evidence claims that indirect laser radiation conveys a sensation of short mechanical tap with little individual difference. To the best of our knowledge, this is the first study that discovers the possibility of using indirect laser radiation for mid-air tactile rendering.", "title": "" }, { "docid": "5637bed8be75d7e79a2c2adb95d4c28e", "text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). 
Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.", "title": "" }, { "docid": "e42192f9d4d33f92939a04361e1bb706", "text": "Today bone fractures are very common in our country because of road accidents or through other injuries. The X-Ray images are the most common accessibility of peoples during the accidents. But the minute fracture detection in X-Ray image is not possible due to low resolution and quality of the original X-Ray image. The complexity of bone structure and the difference in visual characteristics of fracture by their location. So it is difficult to accurately detect and locate the fractures also determine the severity of the injury. The automatic detection of fractures in X-Ray images is a significant contribution for assisting the physicians in making faster and more accurate patient diagnostic decisions and treatment planning. In this paper, an automatic hierarchical algorithm for detecting bone fracture in X-Ray image is proposed. It uses the Gray level cooccurrence matrix for detecting the fracture. The results are promising, demonstrating that the proposed method is capable of automatically detecting both major and minor fractures accurately, and shows potential for clinical application. Statistical results also indicate the superiority of the proposed methods compared to other techniques. This paper examines the development of such a system, for the detection of long-bone fractures. 
This project fully employed MATLAB 7.8.0 (.r2009a) as the programming tool for loading image, image processing and user interface development. Results obtained demonstrate the performance of the pelvic bone fracture detection system with some limitations.", "title": "" }, { "docid": "dbdbdf3df12ef47c778e0e9f4ddfc7d6", "text": "In the recent years, research on speech recognition has given much diligence to the automatic transcription of speech data such as broadcast news (BN), medical transcription, etc. Large Vocabulary Continuous Speech Recognition (LVCSR) systems have been developed successfully for Englishes (American English (AE), British English (BE), etc.) and other languages but in case of Indian English (IE), it is still at infancy stage. IE is one of the varieties of English spoken in Indian subcontinent and is largely different from the English spoken in other parts of the world. In this paper, we have presented our work on LVCSR of IE video lectures. The speech data contains video lectures on various engineering subjects given by the experts from all over India as part of the NPTEL project which comprises of 23 hours. We have used CMU Sphinx for training and decoding in our large vocabulary continuous speech recognition experiments. The results analysis instantiate that building IE acoustic model for IE speech recognition is essential due to the fact that it has given 34% less average word error rate (WER) than HUB-4 acoustic models. The average WER before and after adaptation of IE acoustic model is 38% and 31% respectively. Even though, our IE acoustic model is trained with limited training data and the corpora used for building the language models do not mimic the spoken language, the results are promising and comparable to the results reported for AE lecture recognition in the literature.", "title": "" }, { "docid": "0186ead8a32677289f73920af5a65d19", "text": "The tall building is the most dominating symbol of the cities and a human-made marvel that defies gravity by reaching to the clouds. It embodies unrelenting human aspirations to build even higher. It conjures a number of valid questions in our minds. The foremost and fundamental question that is often asked: Why tall buildings? This review paper seeks to answer the question by laying out arguments against and for tall buildings. Then, it provides a brief account of the historic and recent developments of tall buildings including their status during the current economic recession. The paper argues that as cities continue to expand horizontally, to safeguard against their reaching an eventual breaking point, the tall building as a building type is a possible solution by way of conquering vertical space through agglomeration and densification. Case studies of some recently built tall buildings are discussed to illustrate the nature of tall building development in their respective cities. The paper attempts to dispel any discernment about tall buildings as mere pieces of art and architecture by emphasizing their truly speculative, technological, sustainable, and evolving nature. It concludes by projecting a vision of tall buildings and their integration into the cities of the 21st century.", "title": "" }, { "docid": "57a2ef4a644f0fc385185a381f309fcd", "text": "Despite recent emergence of adversarial based methods for video prediction, existing algorithms often produce unsatisfied results in image regions with rich structural information (i.e., object boundary) and detailed motion (i.e., articulated body movement). 
To this end, we present a structure preserving video prediction framework to explicitly address above issues and enhance video prediction quality. On one hand, our framework contains a two-stream generation architecture which deals with high frequency video content (i.e., detailed object or articulated motion structure) and low frequency video content (i.e., location or moving directions) in two separate streams. On the other hand, we propose a RNN structure for video prediction, which employs temporal-adaptive convolutional kernels to capture time-varying motion patterns as well as tiny objects within a scene. Extensive experiments on diverse scenes, ranging from human motion to semantic layout prediction, demonstrate the effectiveness of the proposed video prediction approach.", "title": "" }, { "docid": "957e103d533b3013e24aebd3617edd87", "text": "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.", "title": "" }, { "docid": "9735cecc4d8419475c72c4bd52ab556e", "text": "Information diffusion and virus propagation are fundamental processes talking place in networks. While it is often possible to directly observe when nodes become infected, observing individual transmissions (i.e., who infects whom or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and in practice gives provably near-optimal performance. We demonstrate the effectiveness of our approach by tracing information cascades in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. 
These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.", "title": "" }, { "docid": "837662e22fb3bac9389b186d2f0e7e0a", "text": "Machine learning has a long tradition of helping to solve complex information security problems that are difficult to solve manually. Machine learning techniques learn models from data representations to solve a task. These data representations are hand-crafted by domain experts. Deep Learning is a sub-field of machine learning, which uses models that are composed of multiple layers. Consequently, representations that are used to solve a task are learned from the data instead of being manually designed. In this survey, we study the use of DL techniques within the domain of information security. We systematically reviewed 77 papers and presented them from a data-centric perspective. This data-centric perspective reflects one of the most crucial advantages of DL techniques – domain independence. If DL-methods succeed to solve problems on a data type in one domain, they most likely will also succeed on similar data from another domain. Other advantages of DL methods are unrivaled scalability and efficiency, both regarding the number of examples that can be analyzed as well as with respect of dimensionality of the input data. DL methods generally are capable of achieving high-performance and generalize well. However, information security is a domain with unique requirements and challenges. Based on an analysis of our reviewed papers, we point out shortcomings of DL-methods to those requirements and discuss further research opportunities.", "title": "" }, { "docid": "e9cc899155bd5f88ae1a3d5b88de52af", "text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.", "title": "" }, { "docid": "bceee7f893a5cddd464e7e8bc1f8f8dd", "text": "We present an articulatory-based method for real-time accent conversion using deep neural networks (DNN). The approach consists of two steps. First, we train a DNN articulatory synthesizer for the non-native speaker that estimates acoustics from contextualized articulatory gestures. Then we drive the DNN with articulatory gestures from a reference native speaker –mapped to the nonnative articulatory space via a Procrustes transform. We evaluate the accent-conversion performance of the DNN through a series of listening tests of intelligibility, voice identity and nonnative accentedness. Compared to a baseline method based on Gaussian mixture models, the DNN accent conversions were found to be 31% more intelligible, and were perceived more native-like in 68% of the cases. 
The DNN also succeeded in preserving the voice identity of the nonnative speaker.", "title": "" }, { "docid": "f3d3bbc14cfe33ee303c384cb7550f58", "text": "The standard control problem of the pendubot refers to the task of stabilizing its equilibrium configuration with the highest potential energy. Linearization of the dynamics of the pendubot about this equilibrium results in a completely controllable system and allows a linear controller to be designed for local asymptotic stability. For the underactuated pendubot, the important task is, therefore, to design a controller that will swing up both links and bring the configuration variables of the system within the region of attraction of the desired equilibrium. This paper provides a new method for swing-up control based on a series of rest-to-rest maneuvers of the first link about its vertically upright configuration. The rest-to-rest maneuvers are designed such that each maneuver results in a net gain in energy of the second link. This results in swing-up of the second link and the pendubot configuration reaching the region of attraction of the desired equilibrium. A four-step algorithm is provided for swing-up control followed by stabilization. Simulation results are presented to demonstrate the efficacy of the approach.", "title": "" }, { "docid": "1e59c6cc3dcc34ec26b912a5162635ed", "text": "Finding clusters with widely differing sizes, shapes and densities in presence of noise and outliers is a challenging job. The DBSCAN is a versatile clustering algorithm that can find clusters with differing sizes and shapes in databases containing noise and outliers. But it cannot find clusters based on difference in densities. We extend the DBSCAN algorithm so that it can also detect clusters that differ in densities. Local densities within a cluster are reasonably homogeneous. Adjacent regions are separated into different clusters if there is significant change in densities. Thus the algorithm attempts to find density based natural clusters that may not be separated by any sparse region. Computational complexity of the algorithm is O(n log n).", "title": "" }, { "docid": "8e74464613bf1fb43640f2294ce274a2", "text": "MANY developments of recent years indicate the emergence of a general science of human behavior. Of major significance in this movement toward integration in the social sciences was a volume representing the collaboration of nine specialists from the fields of psychology, sociology, and cultural anthropology.^ The more limited objective here is to call attention to some trends in contemporary psychology which are favorable to this synthesis and to application in marketing and economics. Many fields of thought are marked by opposing schools. Psychology was for many years the most sectarian of all the fields laying claim to the name of science. The market analyst or economist admonished to learn more about psychology could scarcely tell where to begin. If he picked two books at random, he might find each author denying that what the other was writing about was psychology at all. The illustrations in one book might be devoted entirely to representations of the brain and nervous system, while the text would imply that the whole subject was merely a branch of physiology. • This paper was first presented on August 26, 1952, at the second Marketing Theory Seminar at the University of Colorado. ^ Toward a General Theory of Action, edited by Talcott Parsons and E. A. Shils (Cambridge: Harvard University Press, 1951), the collaborative product of nine social scientists. Another book might deal with perception as if no other topic mattered in psychology and be profusely illustrated with optical illusions and other strange configurations. Still a third might present case studies in the interpretation of dreams, the interpreter drawing on theoretical principles of breathtaking scope but little in the way of visible support from empirical investigations. It is to be hoped that the student and seeker, though chastened by these perplexities, will not be discouraged. The need for psychological perspective in marketing and economics remains. There is a main line of development in the psychological analysis of human behavior in general which is highly pertinent to the investigation of behavior in the market place. More is needed as to theoretical perspective than the principles of rational choice assumed by the mathematical economist or the kind of instinct theory which is extemporized by an advertising executive in order to meet a speaking engagement. Marketing will be increasingly recognized as a segment of the behavioral sciences. It will offer opportunities for the application of psychological principles …", "title": "" }, { "docid": "de4e2e131a0ceaa47934f4e9209b1cdd", "text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.", "title": "" } ]
scidocsrr
2817f2862b206c3d7af24b7119a48029
Predicting ICU Mortality Risk by Grouping Temporal Trends from a Multivariate Panel of Physiologic Measurements
[ { "docid": "8ca30cd6fd335024690837c137f0d1af", "text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.", "title": "" } ]
[ { "docid": "d805537f8414273cb2211306a8b81935", "text": "Optical Character Recognition which could be defined as the process of isolating textual scripts from a scanned document, is not in its 100% efficiency when it comes to a complex Dravidian language, Malayalam. Here, we present a different approach of combining n-gram segmentation along with geometric feature extraction methodology to train a Support Vector Machine in order to obtain a recognizing accuracy better than the existing methods. N-gram isolation has not been implemented so far for the curvy language Malayalam and thus such an approach gives a competence of 98% which uses Otsu Algorithm as its base. Highly efficient segmentation process gives better accuracy in feature extraction which is being fed as the input of SVM. The proposed tactic gives an adept output of 95.6% efficacy in recognizing Malayalam printed scripts and word snippets.", "title": "" }, { "docid": "552d253f8cce654dd5ea289ab9520a4c", "text": "Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online's data policy on reuse of materials please consult the policies page. This paper is a systematic review of the literature on organizational learning and knowledge with relevance to public service organizations. Organizational learning and knowledge are important to public sector organizations, which share complex external challenges with private organizations, but have different drivers and goals for knowledge. The evidence shows that the concepts of organizational learning and knowledge are under-researched in relation to the public sector and, importantly, this raises wider questions about the extent to which context is taken into consideration in terms of learning and knowledge more generally across all sectors. A dynamic model of organizational learning within and across organizational boundaries is developed that depends on four sets of factors: features of the source organization; features of the recipient organization; the characteristics of the relationship between organizations; and the environmental context. The review concludes, first, that defining 'organization' is an important element of understanding organizational learning and knowledge. Second, public organizations constitute an important, distinctive context for the study of organizational learning and knowledge. Third, there continues to be an over-reliance on the private sector as the principal source of theoretical understanding and empirical research and this is conceptually limiting for the understanding of organizational learning and knowledge. Fourth, differences as well as similarities between organizational sectors require conceptualization and research that acknowledge sector-specific aims, values and structures. Finally, it is concluded that frameworks for explaining processes of organizational learning at different levels need to be sufficiently dynamic and complex to accommodate public organizations.", "title": "" }, { "docid": "c09e5f5592caab9a076d92b4f40df760", "text": "Producing a comprehensive overview of the chemical content of biologically-derived material is a major challenge. Apart from ensuring adequate metabolome coverage and issues of instrument dynamic range, mass resolution and sensitivity, there are major technical difficulties associated with data pre-processing and signal identification when attempting large scale, high-throughput experimentation. 
To address these factors direct infusion or flow infusion electrospray mass spectrometry has been finding utility as a high throughput metabolite fingerprinting tool. With little sample pre-treatment, no chromatography and instrument cycle times of less than 5 min it is feasible to analyse more than 1,000 samples per week. Data pre-processing is limited to aligning extracted mass spectra and mass-intensity matrices are generally ready in a working day for a month’s worth of data mining and hypothesis generation. ESI-MS fingerprinting has remained rather qualitative by nature and as such ion suppression does not generally compromise data information content as originally suggested when the methodology was first introduced. This review will describe how the quality of data has improved through use of nano-flow infusion and mass-windowing approaches, particularly when using high resolution instruments. The increasingly wider availability of robust high accurate mass instruments actually promotes ESI-MS from a merely fingerprinting tool to the ranks of metabolite profiling and combined with MS/MS capabilities of hybrid instruments improved structural information is available concurrently. We summarise current applications in a wide range of fields where ESI-MS fingerprinting has proved to be an excellent tool for “first pass” metabolome analysis of complex biological samples. The final part of the review describes a typical workflow with reference to recently published data to emphasise key aspects of overall experimental design.", "title": "" }, { "docid": "91599bb49aef3e65ee158ced65277d80", "text": "We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.", "title": "" }, { "docid": "863c806d29c15dd9b9160eae25316dfc", "text": "This paper presents new structural statistical matrices which are gray level size zone matrix (SZM) texture descriptor variants. The SZM is based on the cooccurrences of size/intensity of each flat zone (connected pixels with the same gray level). The first improvement increases the information processed by merging multiple gray-level quantizations and reduces the required parameter numbers. New improved descriptors were especially designed for supervised cell texture classification. 
They are illustrated thanks to two different databases built from quantitative cell biology. The second alternative characterizes the DNA organization during the mitosis, according to zone intensities radial distribution. The third variant is a matrix structure generalization for the fibrous texture analysis, by changing the intensity/size pair into the length/orientation pair of each region.", "title": "" }, { "docid": "3e94875b3229fc621ec90915414b9b22", "text": "Inflammation, endothelial dysfunction, and mineral bone disease are critical factors contributing to morbidity and mortality in hemodialysis (HD) patients. Physical exercise alleviates inflammation and increases bone density. Here, we investigated the effects of intradialytic aerobic cycling exercise on HD patients. Forty end-stage renal disease patients undergoing HD were randomly assigned to either an exercise or control group. The patients in the exercise group performed a cycling program consisting of a 5-minute warm-up, 20 minutes of cycling at the desired workload, and a 5-minute cool down during 3 HD sessions per week for 3 months. Biochemical markers, inflammatory cytokines, nutritional status, the serum endothelial progenitor cell (EPC) count, bone mineral density, and functional capacity were analyzed. After 3 months of exercise, the patients in the exercise group showed significant improvements in serum albumin levels, the body mass index, inflammatory cytokine levels, and the number of cells positive for CD133, CD34, and kinase insert domain-conjugating receptor. Compared with the exercise group, the patients in the control group showed a loss of bone density at the femoral neck and no increases in EPCs. The patients in the exercise group also had a significantly greater 6-minute walk distance after completing the exercise program. Furthermore, the number of EPCs significantly correlated with the 6-minute walk distance both before and after the 3-month program. Intradialytic aerobic cycling exercise programs can effectively alleviate inflammation and improve nutrition, bone mineral density, and exercise tolerance in HD patients.", "title": "" }, { "docid": "1394eaac58304e5d6f951ca193e0be40", "text": "We introduce low-cost hardware for performing non-invasive side-channel attacks on Radio Frequency Identi cation Devices (RFID) and develop techniques for facilitating a correlation power analysis (CPA) in the presence of the eld of an RFID reader. We practically verify the e ectiveness of the developed methods by analysing the security of commercial contactless smartcards employing strong cryptography, pinpointing weaknesses in the protocol and revealing a vulnerability towards side-channel attacks. Employing the developed hardware, we present the rst successful key-recovery attack on commercially available contactless smartcards based on the Data Encryption Standard (DES) or TripleDES (3DES) cipher that are widely used for security-sensitive applications, e.g., payment purposes.", "title": "" }, { "docid": "f5b6dba70d19e8327a885c912dac23b6", "text": "Genital warts affect 1% of the sexually active U.S. population and are commonly seen in primary care. Human papillomavirus types 6 and 11 are responsible for most genital warts. Warts vary from small, flat-topped papules to large, cauliflower-like lesions on the anogenital mucosa and surrounding skin. Diagnosis is clinical, but atypical lesions should be confirmed by histology. Treatments may be applied by patients, or by a clinician in the office. 
Patient-applied treatments include topical imiquimod, podofilox, and sinecatechins, whereas clinician-applied treatments include podophyllin, bichloroacetic acid, and trichloroacetic acid. Surgical treatments include excision, cryotherapy, and electrosurgery. The quadrivalent human papillomavirus vaccine is active against virus subtypes that cause genital warts in men and women. Additionally, male circumcision may be effective in decreasing the transmission of human immunodeficiency virus, human papillomavirus, and herpes simplex virus.", "title": "" }, { "docid": "e3bbd0ccc00cd545f11d05ab1421ed01", "text": "The expectation-confirmation model (ECM) of IT continuance is a model for investigating continued information technology (IT) usage behavior. This paper reports on a study that attempts to expand the set of post-adoption beliefs in the ECM, in order to extend the application of the ECM beyond an instrumental focus. The expanded ECM, incorporating the post-adoption beliefs of perceived usefulness, perceived enjoyment and perceived ease of use, was empirically validated with data collected from an on-line survey of 811 existing users of mobile Internet services. The data analysis showed that the expanded ECM has good explanatory power (R² = 57.6% of continued IT usage intention and R² = 67.8% of satisfaction), with all paths supported. Hence, the expanded ECM can provide supplementary information that is relevant for understanding continued IT usage. The significant effects of post-adoption perceived ease of use and perceived enjoyment signify that the nature of the IT can be an important boundary condition in understanding the continued IT usage behavior. At a practical level, the expanded ECM presents IT product/service providers with deeper insights into how to address IT users’ satisfaction and continued patronage. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a65d67cdd3206a99f91774ae983064b4", "text": "BACKGROUND\nIn recent years there has been a progressive rise in the number of asylum seekers and refugees displaced from their country of origin, with significant social, economic, humanitarian and public health implications. In this population, up-to-date information on the rate and characteristics of mental health conditions, and on interventions that can be implemented once mental disorders have been identified, are needed. This umbrella review aims at systematically reviewing existing evidence on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in adult and children asylum seekers and refugees resettled in low, middle and high income countries.\n\n\nMETHODS\nWe conducted an umbrella review of systematic reviews summarizing data on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in asylum seekers and/or refugees. Methodological quality of the included studies was assessed with the AMSTAR checklist.\n\n\nRESULTS\nThirteen reviews reported data on the prevalence of common mental disorders while fourteen reviews reported data on the efficacy of psychological or pharmacological interventions. Although there was substantial variability in prevalence rates, we found that depression and anxiety were at least as frequent as post-traumatic stress disorder, accounting for up to 40% of asylum seekers and refugees. 
In terms of psychosocial interventions, cognitive behavioral interventions, in particular narrative exposure therapy, were the most studied interventions with positive outcomes against inactive but not active comparators.\n\n\nCONCLUSIONS\nCurrent epidemiological data needs to be expanded with more rigorous studies focusing not only on post-traumatic stress disorder but also on depression, anxiety and other mental health conditions. In addition, new studies are urgently needed to assess the efficacy of psychosocial interventions when compared not only with no treatment but also each other. Despite current limitations, existing epidemiological and experimental data should be used to develop specific evidence-based guidelines, possibly by international independent organizations, such as the World Health Organization or the United Nations High Commission for Refugees. Guidelines should be applicable to different organizations of mental health care, including low and middle income countries as well as high income countries.", "title": "" }, { "docid": "9e953bdc98bc87398c37a62b0ec295c9", "text": "● Compare and contrast nursing and non-nursing health promotion theories. ● Examine health promotion theories for consistency with accepted health promotion priorities and values. ● Articulate how health promotion theories move the profession forward. ● Discuss strengths and limitations associated with each health promotion theory or model. ● Describe the difference between a model and a theory. ● Identify theoretical assumptions and concepts within nursing and non-nursing theories. ● Develop his or her own health promotion model.", "title": "" }, { "docid": "ece75610b34e3c5353bceb757bb7d90b", "text": "Biometric system provides a way of automatic verification or identification a person. But nowadays due to lack of secrecy, there is lot of security threat due to spoofing. Spoofing with photograph or video is one of the most common manners to attack a face recognition system. Liveness detection is a technique that can be used for validating whether the data originate is from a valid user or not. Liveness detection can be hardware based or software based or a combination of both. In this paper, we present a non intrusive and real time method to address this problem, based on skin elasticity of human face. In this technique user is asked to do some movement like chewing and forehead movement simultaneously, so that a full movement to face skin can be given and then sequence of face images is captured with a gap of few milliseconds. Then by applying correlation coefficient between images and then discriminate analysis using some method, face skin is discriminate from the other materials like gelatin, rubber, cadaver, clay etc. In comparison to other face liveness detection, this method will be much user friendly. On the other hand, one of the images captured for liveness detection can be used for face recognition. Keywords— Biometrics, Face Recognition, Fake Face Detection, Liveness Detection, Skin Elasticity,", "title": "" }, { "docid": "eeb9eb624d2eaf0d4649d048bbbb20d3", "text": "The well-known distinction between field-based and object-based approaches to spatial information is generalised to arbitrary locational frameworks, including in particular space, time and space-time. We systematically explore the different ways in which these approaches can be combined, and address the relative merits of a fully four-dimensional approach as against a more conventional ‘three-plus-one’-dimensional approach. We single out as especially interesting in this respect a class of phenomena, here called multi-aspect phenomena, which seem to present different aspects when considered from different points of view. Such phenomena (e.g., floods, wildfires, processions) are proposed as the most natural candidates for treatment as fully four-dimensional entities (‘hyperobjects’), but it remains problematic how to model them so as to do justice to their multi-aspectual nature. The paper ends with a range of important researchable questions aimed at clearing up some of the difficulties raised.", "title": "" }, { "docid": "003afe7b4c7a35264a4a6714167e8a68", "text": "Dr. Scalia is an orthodontic postgraduate student, Dr. Perinetti is a Research Fellow, Dr. Locatelli is a Clinical Instructor, and Dr. Contardo is an Assistant Professor, Department of Medical, Surgical and Health Sciences, School of Dentistry, University of Trieste, Piazza Ospitale 1, Trieste, Friuli-Venezia Giulia 34129, Italy. Dr. Locatelli is also in the private practice of orthodontics in Portogruaro, Italy. E-mail Dr. Scalia at dott.alessandro.scalia@gmail.com. ALESSANDRO SCALIA, DDS GIUSEPPE PERINETTI, DDS, MS, PhD RANIERI LOCATELLI, MD, MS LUCA CONTARDO, DDS, MS", "title": "" }, { "docid": "5726125455c629340859ef5b214dc18a", "text": "One of the key challenges in applying reinforcement learning to complex robotic control tasks is the need to gather large amounts of experience in order to find an effective policy for the task at hand. Model-based reinforcement learning can achieve good sample efficiency, but requires the ability to learn a model of the dynamics that is good enough to learn an effective policy. In this work, we develop a model-based reinforcement learning algorithm that combines prior knowledge from previous tasks with online adaptation of the dynamics model. These two ingredients enable highly sample-efficient learning even in regimes where estimating the true dynamics is very difficult, since the online model adaptation allows the method to locally compensate for unmodeled variation in the dynamics. We encode the prior experience into a neural network dynamics model, adapt it online by progressively refitting a local linear model of the dynamics, and use model predictive control to plan under these dynamics. Our experimental results show that this approach can be used to solve a variety of complex robotic manipulation tasks in just a single attempt, using prior data from other manipulation behaviors.", "title": "" }, { "docid": "27fd27cf86b68822b3cfb73cff2e2cb6", "text": "Patients with Liver disease have been continuously increasing because of excessive consumption of alcohol, inhale of harmful gases, intake of contaminated food, pickles and drugs. Automatic classification tools may reduce burden on doctors. This paper evaluates the selected classification algorithms for the classification of some liver patient datasets. The classification algorithms considered here are Naïve Bayes classifier, C4.5, Back propagation Neural Network algorithm, and Support Vector Machines. These algorithms are evaluated based on four criteria: Accuracy, Precision, Sensitivity and Specificity.", "title": "" }, { "docid": "694add359ddb1ba8ebad89e5c9a2c6ce", "text": "Textual-visual cross-modal retrieval has been a hot research topic in both computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for the cross-modal retrieval performance. 
Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that our framework can well match images and sentences with complex content, and achieve the state-of-the-art cross-modal retrieval results on MSCOCO dataset.", "title": "" }, { "docid": "2aa324628b082f1fd6d1e1e0221d1bad", "text": "Recent behavioral investigations have revealed that autistics perform more proficiently on Raven's Standard Progressive Matrices (RSPM) than would be predicted by their Wechsler intelligence scores. A widely-used test of fluid reasoning and intelligence, the RSPM assays abilities to flexibly infer rules, manage goal hierarchies, and perform high-level abstractions. The neural substrates for these abilities are known to encompass a large frontoparietal network, with different processing models placing variable emphasis on the specific roles of the prefrontal or posterior regions. We used functional magnetic resonance imaging to explore the neural bases of autistics' RSPM problem solving. Fifteen autistic and eighteen non-autistic participants, matched on age, sex, manual preference and Wechsler IQ, completed 60 self-paced randomly-ordered RSPM items along with a visually similar 60-item pattern matching comparison task. Accuracy and response times did not differ between groups in the pattern matching task. In the RSPM task, autistics performed with similar accuracy, but with shorter response times, compared to their non-autistic controls. In both the entire sample and a subsample of participants additionally matched on RSPM performance to control for potential response time confounds, neural activity was similar in both groups for the pattern matching task. However, for the RSPM task, autistics displayed relatively increased task-related activity in extrastriate areas (BA18), and decreased activity in the lateral prefrontal cortex (BA9) and the medial posterior parietal cortex (BA7). Visual processing mechanisms may therefore play a more prominent role in reasoning in autistics.", "title": "" }, { "docid": "cd0e7cace1b89af72680f9d8ef38bdf3", "text": "Analyzing stock market trends and sentiment is an interdisciplinary area of research being undertaken by many disciplines such as Finance, Computer Science, Statistics, and Economics. It has been well established that real time news plays a strong role in the movement of stock prices. With the advent of electronic and online news sources, analysts have to deal with enormous amounts of real-time, unstructured streaming data. In this paper, we present an automated text mining based approach to aggregate news stories from diverse sources and create a News Corpus. The Corpus is filtered down to relevant sentences and analyzed using Natural Language Processing (NLP) techniques. A sentiment metric, called NewsSentiment, utilizing the count of positive and negative polarity words is proposed as a measure of the sentiment of the overall news corpus. We have used various open source packages and tools to develop the news collection and aggregation engine as well as the sentiment evaluation engine. Extensive experimentation has been done using news stories about various stocks. 
The time variation of NewsSentiment shows a very strong correlation with the actual stock price movement. Our proposed metric has many applications in analyzing current news stories and predicting stock trends for specific companies and sectors of the economy.", "title": "" }, { "docid": "b4462bf06bac13af9e40023019619a78", "text": "Successful schools ensure that all students master basic skills such as reading and math and have strong backgrounds in other subject areas, including science, history, and foreign language. Recently, however, educators and parents have begun to support a broader educational agenda – one that enhances teachers’ and students’ social and emotional skills. Research indicates that social and emotional skills are associated with success in many areas of life, including effective teaching, student learning, quality relationships, and academic performance. Moreover, a recent meta-analysis of over 300 studies showed that programs designed to enhance social and emotional learning significantly improve students’ social and emotional competencies as well as academic performance. Incorporating social and emotional learning programs into school districts can be challenging, as programs must address a variety of topics in order to be successful. One organization, the Collaborative for Academic, Social, and Emotional Learning (CASEL), provides leadership for researchers, educators, and policy makers to advance the science and practice of school-based social and emotional learning programs. According to CASEL, initiatives to integrate programs into schools should include training on social and emotional skills for both teachers and students, and should receive backing from all levels of the district, including the superintendent, school principals, and teachers. Additionally, programs should be field-tested, evidence-based, and founded on sound", "title": "" } ]
scidocsrr
de4127afaba5147042c2630ba3895938
FacePoseNet: Making a Case for Landmark-Free Face Alignment
[ { "docid": "3726f6ddd4166c431f0847cdf23eb415", "text": "We introduce an approach that leverages surface normal predictions, along with appearance cues, to retrieve 3D models for objects depicted in 2D still images from a large CAD object library. Critical to the success of our approach is the ability to recover accurate surface normals for objects in the depicted scene. We introduce a skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction. Our model achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface normal prediction, and recovers fine object detail compared to previous methods. Furthermore, we develop a two-stream network over the input image and predicted surface normals that jointly learns pose and style for CAD model retrieval. When using the predicted surface normals, our two-stream network matches prior work using surface normals computed from RGB-D images on the task of pose prediction, and achieves state of the art when using RGB-D input. Finally, our two-stream network allows us to retrieve CAD models that better match the style and pose of a depicted object compared with baseline approaches.", "title": "" } ]
[ { "docid": "14c24e919e8ce12a51c1f270aef4aad8", "text": "The present study is a waitlist-controlled investigation of the impact of a Mindfulness-Based Stress Reduction (MBSR) program on mindful attentiveness, rumination and blood pressure (BP) in women with cancer. Female post-treatment cancer patients were recruited from the MBSR program waitlist. Participants completed self-report measures of mindfulness and rumination and measured casual BP at home before and after the 8-week MBSR program or waiting period. MBSR group participants demonstrated higher levels of mindful attentiveness and decreased ruminative thinking following the intervention but no difference in BP, when compared to controls. In the MBSR group, decreases in rumination correlated with decreases in SBP and increases in mindful attention. When participants were assigned to “Higher BP” and “Lower BP” conditions based on mean BP values at week 1, “Higher BP” participants in the MBSR group (n = 19) had lower SBP at week 8 relative to the control group (n = 16). A MBSR program may be efficacious in increasing mindful attention and decreasing rumination in women with cancer. Randomized controlled trials are needed to evaluate an impact on clinically elevated BP.", "title": "" }, { "docid": "f53be608e9a27d5de0a87c03b953ca28", "text": "In this work, we present and analyze an image denoising method, the NL-means algorithm, based on a non local averaging of all pixels in the image. We also introduce the concept of method noise, that is, the difference between the original (always slightly noisy) digital image and its denoised version. Finally, we present some experiences comparing the NL-means results with some classical denoising methods.", "title": "" }, { "docid": "b1313b777c940445eb540b1e12fa559e", "text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.", "title": "" }, { "docid": "cab874a37c348491c85bfacb46d669b8", "text": "Recent advances in meta-learning are providing the foundations to construct meta-learning assistants and task-adaptive learners. The goal of this special issue is to foster an interest in meta-learning by compiling representative work in the field. The contributions to this special issue provide strong insights into the construction of future meta-learning tools. In this introduction we present a common frame of reference to address work in meta-learning through the concept of meta-knowledge. 
We show how meta-learning can be simply defined as the process of exploiting knowledge about learning that enables us to understand and improve the performance of learning algorithms.", "title": "" }, { "docid": "325772543e172b1a5bd08d20092b1069", "text": "Despite considerable research on passwords, empirical studies of password strength have been limited by lack of access to plaintext passwords, small data sets, and password sets specifically collected for a research study or from low-value accounts. Properties of passwords used for high-value accounts thus remain poorly understood.\n We fill this gap by studying the single-sign-on passwords used by over 25,000 faculty, staff, and students at a research university with a complex password policy. Key aspects of our contributions rest on our (indirect) access to plaintext passwords. We describe our data collection methodology, particularly the many precautions we took to minimize risks to users. We then analyze how guessable the collected passwords would be during an offline attack by subjecting them to a state-of-the-art password cracking algorithm. We discover significant correlations between a number of demographic and behavioral factors and password strength. For example, we find that users associated with the computer science school make passwords more than 1.5 times as strong as those of users associated with the business school. while users associated with computer science make strong ones. In addition, we find that stronger passwords are correlated with a higher rate of errors entering them.\n We also compare the guessability and other characteristics of the passwords we analyzed to sets previously collected in controlled experiments or leaked from low-value accounts. We find more consistent similarities between the university passwords and passwords collected for research studies under similar composition policies than we do between the university passwords and subsets of passwords leaked from low-value accounts that happen to comply with the same policies.", "title": "" }, { "docid": "96704e139fd4d72cb64b0acbfb887475", "text": "Project Failure is the major problem undergoing nowadays as seen by software project managers. Imprecision of the estimation is the reason for this problem. As software grew in size and importance it also grew in complexity, making it very difficult to accurately predict the cost of software development. This was the dilemma in past years. The greatest pitfall of software industry was the fast changing nature of software development which has made it difficult to develop parametric models that yield high accuracy for software development in all domains. Development of useful models that accurately predict the cost of developing a software product. It is a very important objective of software industry. In this paper, several existing methods for software cost estimation are illustrated and their aspects will be discussed. This paper summarizes several classes of software cost estimation models and techniques. To achieve all these goals we implement the simulators. No single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" }, { "docid": "337f30f6cf19b09447d740c397b8759a", "text": "BACKGROUND\nChildbirth is regarded as an important life event for women, and growing numbers of them are making the choice to give birth by Caesarean Delivery. 
The aim of this study was to identify the factors influencing the decision that women make on their mode of delivery, underpinned by the Health Belief Model.\n\n\nMETHODS\nThis was a cross-sectional study. Hong Kong Chinese women aged 18-45, who were pregnant or had given birth within the last three years were recruited. The participants were asked to complete a structured self-administered questionnaire consisting of 62 questions.\n\n\nRESULTS\nA total of 319 women were recruited, of whom 73 (22.9%) preferred to have a cesarean section delivery (CD). The results showed that women preferred CD because they were concerned about being pregnant at an advanced age, were worried about labor pain and perineum tearing, wanted to have a better plan for maternity leave, had chosen an auspicious date to deliver, and perceived that CD is a more convenience way to deliver. The perceived benefits and severity of a vaginal birth (VB), and the perceived benefits, severity, and cues to action of CD, affected the decision to undergo either a VB or CD.\n\n\nCONCLUSIONS\nThe data indicated that the constructs of the Health Belief Model--perceived benefits, perceived severity, and cues to action--affect the decision that women make on their mode of delivery. This research indicates that there is value in designing educational programs for pregnant women to educate them on the benefits, risks, and severity of the two different modes of birth based on the constructs of HBM. This will enable women to be active participants in choosing the mode of birth that they believe is right for them.", "title": "" }, { "docid": "dab8aa9867fb842c1e37570924a9d81c", "text": "Ellenberg indicator values (EIV) have been widely used to estimate habitat variables from floristic data and to predict vegetation composition based on habitat properties. Geographical Information Systems (GIS) and Digital Elevation Models (DEM) are valuable tools for studying the relationships between topographic and ecological characters of river systems. A 3-meter resolution DEM was derived for a. 3-km-long break section of the Szum River (SE Poland) from a 1:10,000 topographic map. Data on the diversity and ecological requirements of the local vascular flora were obtained while making floristic charts for 32 sections of the river valley (each 200 m long) and physical and chemical soil measurements; next, the data were translated into EIV. The correlations of the primary and secondary topographic attributes of the valley, species richness, and EIV (adapted for the Polish vascular flora) were assessed for all species recognized in each valley section. The total area and proportion of a flat area, mean slope, slope curvature, solar radiation (SRAD), and topographic wetness index (TWI) are the most important factors influencing local flora richness and diversity. The highest correlations were found for three ecological indicators, namely light, soil moisture, and soil organic content. The DEM seems to be useful in determination of correlations between topographic and ecological attributes along a minor river valley.", "title": "" }, { "docid": "4e85e81ad8da04ff6960b26e7c5def7a", "text": "One of the most challenging issues in learning analytics is the development of techniques and tools that facilitate the evaluation of the learning activities carried out by learners. In this paper, we faced this issue through a process mining-based platform, called Soft Learn, that is able to discover complete, precise and simple learning paths from event logs. 
This platform has a graphical interface that allows teachers to better understand the real learning paths undertaken by learners.", "title": "" }, { "docid": "b4bca1a35fca1cca92b4f2e2f77152e1", "text": "This paper proposed design and development of a flexible UWB wearable antenna using flexible and elastic polymer substrate. Polydimethylsiloxane (PDMS) was chosen to be used as flexible substrate for the proposed antenna which is a kind of silicone elastic, it has attractive mechanical and electrical properties such as flexibility, softness, water resistance low permittivity and transparency. The proposed antenna consists of a rectangular patch with two steps notches in the upper side of the patch, resulting in a more compact and increase in the bandwidth. In addition, the proposed antenna has an elliptical slot for an enhancement of the bandwidth and gain. The bottom side edges of the patch have been truncated to provide an additional surface current path. The proposed UWB wearable antenna functions from 2.5 GHz to 12.4 GHz frequency range and it was successfully designed and the simulated result showed that the return loss was maintained less than -10 dB and VSWR kept less than 2 over the entire desired frequency range (2.5 GHz - 12.4 GHz). The gain of the proposed antenna varies with frequency and the maximum gain recorded is 4.56 dB at 6.5 GHz. Simultaneously, The radiation patterns of the proposed antenna are also presented. The performance of the antenna under bending condition is comparable with the normal condition's performance.", "title": "" }, { "docid": "7458adc935d2b8d265354cf38b8f9f14", "text": "Inflammation contributes to important traits that cancer cells acquire during malignant progression. Gene array data recently identified upregulation of interferon-induced protein with tetratricopeptide repeats 3 (IFIT3) in aggressive pancreatic cancer cells. IFIT3 belongs to the group of interferon stimulated genes (ISG), can be induced by several cellular stress stimuli and by its tetratricopeptide repeats interacts with a multitude of cellular proteins. Upregulation of IFIT3 was confirmed in the aggressive pancreatic cancer cell line L3.6pl compared with its less aggressive cell line of origin, COLO357FG. Transgenic induction of IFIT3 expression in COLO357FG resulted in greater mass of orthotopic tumors and higher prevalence of metastases. Several important traits that mediate malignancy were altered by IFIT3: increased VEGF and IL-6 secretion, chemoresistance and decreased starvation-induced apoptosis. IFIT3 showed binding to JNK and STAT1, the latter being an important inducer of IFIT3 expression. Despite still being alterable by \"classical\" IFN or NFκB signaling, our findings indicate constitutive - possibly auto-regulated - upregulation of IFIT3 in L3.6pl without presence of an adequate inflammatory stimulus. The transcription factor SOX9, which is linked to regulation of hypoxia-related genes, was identified as a key mediator of upregulation of the oncogene IFIT3 and thereby sustaining a \"pseudoinflammatory\" cellular condition.", "title": "" }, { "docid": "f8e5f366b0199373170dbdbcf1c88456", "text": "There is increasing interest in evaluating the use of nonpharmacologic interventions such as music to minimize potential adverse effects of anxiety-reducing medications. 
This study used a quasi-experimental design to evaluate the effects of a perioperative music intervention (provided continuously throughout the preoperative, intraoperative, and postoperative periods) on changes in mean arterial pressure (MAP), heart rate, anxiety, and pain in women with a diagnosis of breast cancer undergoing mastectomy. A total of 30 women were assigned randomly to a control group or to the music intervention group. Findings indicated that women in the intervention group had a greater decrease in MAP and anxiety with less pain from the preoperative period to the time of discharge from the recovery room compared with women in the control group. Music is a noninvasive and low-cost intervention that can be easily implemented in the perioperative setting, and these findings suggest that perioperative music can reduce MAP, anxiety, and pain among women undergoing mastectomy for breast cancer.", "title": "" }, { "docid": "9a656353fe994515036c847c70865959", "text": "Imagine in the future that autonomous vehicles coordinated and guided by signal-free autonomous intersections are able to pass through the intersections immediately after the vehicles in the conflicting direction leave. Meanwhile, with the coordination of the autonomous intersections, autonomous vehicles on any approaching lane are able to turn onto any downstream lane, expecting that driving in such a road system, autonomous vehicles can reach their destinations without any on-road lane changes, and high traffic efficiency will be achieved as well as great traffic safety. To draw the picture in detail, this paper designs a signal-free autonomous intersection with all-direction turn lanes (ADTL) under the environment of autonomous vehicles, and proposes a conflict-avoidance-based approach to coordinate all approaching vehicles in different directions. Communicating with the approaching autonomous vehicles and utilizing the approach, the autonomous ADTL intersection is able to coordinate the approaching vehicles in all directions and guide them to safely and efficiently pass through the intersection. Two simulation scenarios are conducted in a road network with an isolated intersection composed of four three-lane arms. One scenario validates the collision-free design of the system, and the other shows that the designed ADTL intersection outperforms the conventional signal controlled intersection in terms of traffic efficiency, and is potentially better than the autonomous intersection with specific-direction turn lanes. The autonomous ADTL intersection can be an important basis for designing a future autonomous urban road traffic system.", "title": "" }, { "docid": "52ab2c3f6f47d3b9e5ce60fbbe3385a6", "text": "Nosologically, Alzheimer disease may not be considered to be a single disorder in spite of a common clinical phenotype. Only a small proportion of about 5% to 10% of all Alzheimer cases is due to genetic mutations (type I) whereas the great majority of patients was found to be sporadic in origin. It may be assumed that susceptibility genes along with lifestyle risk factors contribute to the causation of the age-related sporadic Alzheimer disease (type II). In this context, the desensitization of the neuronal insulin receptor similar to not-insulin dependent diabetes mellitus may be of pivotal significance. 
This abnormality along with a reduction in brain insulin concentration is assumed to induce a cascade-like process of disturbances including cellular glucose, acetylcholine, cholesterol, and ATP associated with abnormalities in membrane pathology and the formation of both amyloidogenic derivatives and hyperphosphorylated tau protein. Sporadic Alzheimer disease may, thus, be considered to be the brain type of diabetes mellitus II. Experimental evidence is provided and discussed.", "title": "" }, { "docid": "d6379e449f1b7c6d845a004c59c1023c", "text": "Phase-shifted ZVS PWM full-bridge converter realizes ZVS and eliminates the voltage oscillation caused by the reverse recovery of the rectifier diodes by introducing a resonant inductance and two clamping diodes. This paper improves the converter just by exchanging the position of the resonant inductance and the transformer such that the transformer is connected with the lagging leg. The improved converter has several advantages over the original counterpart, e.g., the clamping diodes conduct only once in a switching cycle, and the resonant inductance current is smaller in zero state, leading to a higher efficiency and reduced duty cycle loss. A blocking capacitor is usually introduced to the primary side to prevent the transformer from saturating, this paper analyzes the effects of the blocking capacitor in different positions, and a best scheme is determined. A 2850 W prototype converter is built to verify the effectiveness of the improved converter and the best scheme for the blocking capacitor.", "title": "" }, { "docid": "aecd7a910b52b6e34e10f10a12d0f966", "text": "Language processing is an example of implicit learning of multiple statistical cues that provide probabilistic information regarding word structure and use. Much of the current debate about language embodiment is devoted to how action words are represented in the brain, with motor cortex activity evoked by these words assumed to selectively reflect conceptual content and/or its simulation. We investigated whether motor cortex activity evoked by manual action words (e.g., caress) might reflect sensitivity to probabilistic orthographic–phonological cues to grammatical category embedded within individual words. We first review neuroimaging data demonstrating that nonwords evoke activity much more reliably than action words along the entire motor strip, encompassing regions proposed to be action category specific. Using fMRI, we found that disyllabic words denoting manual actions evoked increased motor cortex activity compared with non-body-part-related words (e.g., canyon), activity which overlaps that evoked by observing and executing hand movements. This result is typically interpreted in support of language embodiment. Crucially, we also found that disyllabic nonwords containing endings with probabilistic cues predictive of verb status (e.g., -eve) evoked increased activity compared with nonwords with endings predictive of noun status (e.g., -age) in the identical motor area. Thus, motor cortex responses to action words cannot be assumed to selectively reflect conceptual content and/or its simulation. Our results clearly demonstrate motor cortex activity reflects implicit processing of ortho-phonological statistical regularities that help to distinguish a word's grammatical class.", "title": "" }, { "docid": "e4b02298a2ff6361c0a914250f956911", "text": "This paper studies efficient means in dealing with intracategory diversity in object detection. 
Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.", "title": "" }, { "docid": "893055dc643e80996e3b195d504b981c", "text": "This article describes the algorithms, features, and implementation of PyDEC, a Python library for computations related to the discretization of exterior calculus. PyDEC facilitates inquiry into both physical problems on manifolds as well as purely topological problems on abstract complexes. We describe efficient algorithms for constructing the operators and objects that arise in discrete exterior calculus, lowest-order finite element exterior calculus, and in related topological problems. Our algorithms are formulated in terms of high-level matrix operations which extend to arbitrary dimension. As a result, our implementations map well to the facilities of numerical libraries such as NumPy and SciPy. The availability of such libraries makes Python suitable for prototyping numerical methods. We demonstrate how PyDEC is used to solve physical and topological problems through several concise examples.", "title": "" }, { "docid": "e90e16629cca6dfe12e5538fd5c93c31", "text": "In this paper, we address two complex issues: 1) Text frame classification and 2) Multi-oriented text detection in video text frame. We first divide a video frame into 16 blocks and propose a combination of wavelet and median-moments with k-means clustering at the block level to identify probable text blocks. For each probable text block, the method applies the same combination of feature with k-means clustering over a sliding window running through the blocks to identify potential text candidates. We introduce a new idea of symmetry on text candidates in each block based on the observation that pixel distribution in text exhibits a symmetric pattern. The method integrates all blocks containing text candidates in the frame and then all text candidates are mapped on to a Sobel edge map of the original frame to obtain text representatives. To tackle the multi-orientation problem, we present a new method called Angle Projection Boundary Growing (APBG) which is an iterative algorithm and works based on a nearest neighbor concept. APBG is then applied on the text representatives to fix the bounding box for multi-oriented text lines in the video frame. Directional information is used to eliminate false positives. Experimental results on a variety of datasets such as non-horizontal, horizontal, publicly available data (Hua’s data) and ICDAR-03 competition data (camera images) show that the proposed method outperforms existing methods proposed for video and the state of the art methods for scene text as well.", "title": "" }, { "docid": "86820c43e63066930120fa5725b5b56d", "text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. 
For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.", "title": "" } ]
scidocsrr
fa53a4ff95d811a1f39fdd8a7bec2ce5
No compromises: distributed transactions with consistency, availability, and performance
[ { "docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0", "text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.", "title": "" } ]
[ { "docid": "625c5c89b9f0001a3eed1ec6fb498c23", "text": "About a 100 years ago, the Drosophila white mutant marked the birth of Drosophila genetics. The white gene turned out to encode the first well studied ABC transporter in arthropods. The ABC gene family is now recognized as one of the largest transporter families in all kingdoms of life. The majority of ABC proteins function as primary-active transporters that bind and hydrolyze ATP while transporting a large diversity of substrates across lipid membranes. Although extremely well studied in vertebrates for their role in drug resistance, less is known about the role of this family in the transport of endogenous and exogenous substances in arthropods. The ABC families of five insect species, a crustacean and a chelicerate have been annotated in some detail. We conducted a thorough phylogenetic analysis of the seven arthropod and human ABC protein subfamilies, to infer orthologous relationships that might suggest conserved function. Most orthologous relationships were found in the ABCB half transporter, ABCD, ABCE and ABCF subfamilies, but specific expansions within species and lineages are frequently observed and discussed. We next surveyed the role of ABC transporters in the transport of xenobiotics/plant allelochemicals and their involvement in insecticide resistance. The involvement of ABC transporters in xenobiotic resistance in arthropods is historically not well documented, but an increasing number of studies using unbiased differential gene expression analysis now points to their importance. We give an overview of methods that can be used to link ABC transporters to resistance. ABC proteins have also recently been implicated in the mode of action and resistance to Bt toxins in Lepidoptera. Given the enormous interest in Bt toxicology in transgenic crops, such findings will provide an impetus to further reveal the role of ABC transporters in arthropods. 2014 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license.", "title": "" }, { "docid": "4182770927ae68e5047906df446bafe9", "text": "In this study, a square-shaped slot antenna is designed for the future fifth generation (5G) wireless applications. The antenna has a compact size of 0.64λg × 0.64λg at 38 GHz, which consists of ellipse shaped radiating patch fed by a 50 Q micro-strip line on the Rogers RT5880 substrates. A rectangle shaped slot is etched in the ground plane to enhance the antenna bandwidth. In order to obtain better impedance matching bandwidth of the antennas, some small circular radiating patches are added to the square-shaped slot. Simulations show that the measured impedance bandwidth of the proposed antenna ranges from 20 to 42 GHz for a reflection coefficient of Su less than −10dB which is cover 5G bands (28/38GHz). The proposed antenna provides almost omni-directional patterns, relatively flat gain, and high radiation efficiency through the frequency band.", "title": "" }, { "docid": "d09d9d9f74079981f8f09e829e2af255", "text": "Determination of sensitive and specific markers of very early AD progression is intended to aid researchers and clinicians to develop new treatments and monitor their effectiveness, as well as to lessen the time and cost of clinical trials. Magnetic Resonance (MR)-related biomarkers have been recently identified by the use of machine learning methods for the in vivo differential diagnosis of AD. 
However, the vast majority of neuroimaging papers investigating this topic are focused on the difference between AD and patients with mild cognitive impairment (MCI), not considering the impact of MCI patients who will (MCIc) or not convert (MCInc) to AD. Morphological T1-weighted MRIs of 137 AD, 76 MCIc, 134 MCInc, and 162 healthy controls (CN) selected from the Alzheimer's disease neuroimaging initiative (ADNI) cohort, were used by an optimized machine learning algorithm. Voxels influencing the classification between these AD-related pre-clinical phases involved hippocampus, entorhinal cortex, basal ganglia, gyrus rectus, precuneus, and cerebellum, all critical regions known to be strongly involved in the pathophysiological mechanisms of AD. Classification accuracy was 76% AD vs. CN, 72% MCIc vs. CN, 66% MCIc vs. MCInc (nested 20-fold cross validation). Our data encourage the application of computer-based diagnosis in clinical practice of AD opening new prospective in the early management of AD patients.", "title": "" }, { "docid": "b505c23c5b3c924242ca6cf65fd4efc7", "text": "Adolescent idiopathic scoliosis is a common disease with an overall prevalence of 0.47-5.2 % in the current literature. The female to male ratio ranges from 1.5:1 to 3:1 and increases substantially with increasing age. In particular, the prevalence of curves with higher Cobb angles is substantially higher in girls than in boys: The female to male ratio rises from 1.4:1 in curves from 10° to 20° up to 7.2:1 in curves >40°. Curve pattern and prevalence of scoliosis is not only influenced by gender, but also by genetic factors and age of onset. These data obtained from school screening programs have to be interpreted with caution, since methods and cohorts of the different studies are not comparable as age groups of the cohorts and diagnostic criteria differ substantially. We do need data from studies with clear standards of diagnostic criteria and study protocols that are comparable to each other.", "title": "" }, { "docid": "22572394c6f522b70e1f14b8156a5601", "text": "A new substrate integrated horn antenna with hard side walls combined with a couple of soft surfaces is introduced. The horn takes advantage of the air medium for propagation inside, while having a thickness of dielectric on the walls to realize hard conditions. The covering layers of the air-filled horn are equipped with strip-via arrays, which act as soft surfaces around the horn aperture to reduce the back radiations. The uniform amplitude distribution of the aperture resulting from the hard conditions and the phase correction combined with the profiled horn walls provided a narrow beamwidth and −13 dB sidelobe levels in the frequency of the hard condition, which is validated by the simulated and measured results.", "title": "" }, { "docid": "af08bf07cc59217f0763275e04b3d62b", "text": "Modern machine learning algorithms are increasingly being used in neuroimaging studies, such as the prediction of Alzheimer's disease (AD) from structural MRI. However, finding a good representation for multivariate brain MRI features in which their essential structure is revealed and easily extractable has been difficult. We report a successful application of a machine learning framework that significantly improved the use of brain MRI for predictions. 
Specifically, we used the unsupervised learning algorithm of local linear embedding (LLE) to transform multivariate MRI data of regional brain volume and cortical thickness to a locally linear space with fewer dimensions, while also utilizing the global nonlinear data structure. The embedded brain features were then used to train a classifier for predicting future conversion to AD based on a baseline MRI. We tested the approach on 413 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who had baseline MRI scans and complete clinical follow-ups over 3 years with the following diagnoses: cognitive normal (CN; n=137), stable mild cognitive impairment (s-MCI; n=93), MCI converters to AD (c-MCI, n=97), and AD (n=86). We found that classifications using embedded MRI features generally outperformed (p<0.05) classifications using the original features directly. Moreover, the improvement from LLE was not limited to a particular classifier but worked equally well for regularized logistic regressions, support vector machines, and linear discriminant analysis. Most strikingly, using LLE significantly improved (p=0.007) predictions of MCI subjects who converted to AD and those who remained stable (accuracy/sensitivity/specificity: =0.68/0.80/0.56). In contrast, predictions using the original features performed not better than by chance (accuracy/sensitivity/specificity: =0.56/0.65/0.46). In conclusion, LLE is a very effective tool for classification studies of AD using multivariate MRI data. The improvement in predicting conversion to AD in MCI could have important implications for health management and for powering therapeutic trials by targeting non-demented subjects who later convert to AD.", "title": "" }, { "docid": "7d86abdf71d6c9dd05fc41e63952d7bf", "text": "Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.", "title": "" }, { "docid": "b9838e512912f4bcaf3c224df3548d95", "text": "In this paper, we develop a system for training human calligraphy skills. For such a development, the so-called dynamic font and augmented reality (AR) are employed. The dynamic font is used to generate a model character, in which the character are formed as the result of 3-dimensional motion of a virtual writing device on a virtual writing plane. 
Using the AR technology, we then produce a visual information consisting of not only static writing path but also dynamic writing process of model character. Such a visual information of model character is given some trainee through a head mounted display. The performance is demonstrated by some experimental studies.", "title": "" }, { "docid": "92377bb2bc4e2daee041c5b78a5fcaf9", "text": "Online discussions forums, known as forums for short, are conversational social cyberspaces constituting rich repositories of content and an important source of collaborative knowledge. However, most of this knowledge is buried inside the forum infrastructure and its extraction is both complex and difficult. The ability to automatically rate postings in online discussion forums, based on the value of their contribution, enhances the ability of users to find knowledge within this content. Several key online discussion forums have utilized collaborative intelligence to rate the value of postings made by users. However, a large percentage of posts go unattended and hence lack appropriate rating.\n In this paper, we focus on automatic rating of postings in online discussion forums. A set of features derived from the posting content and the threaded discussion structure are generated for each posting. These features are grouped into five categories, namely (i) relevance, (ii) originality, (iii) forum-specific features, (iv) surface features, and (v) posting-component features. Using a non-linear SVM classifier, the value of each posting is categorized into one of three levels High, Medium, or Low. This rating represents a seed value for each posting that is leveraged in filtering forum content. Experimental results have shown promising performance on forum data.", "title": "" }, { "docid": "1153287a3a5cde9f6bbacb83dffecdf3", "text": "This communication deals with the design of a <inline-formula> <tex-math notation=\"LaTeX\">$16\\times 16$ </tex-math></inline-formula> slot array antenna fed by inverted microstrip gap waveguide (IMGW). The whole structure designed in this communication consists of radiating slots, a groove gap cavity layer, a distribution feeding network, and a transition from standard WR-15 waveguide to the IMGW. First, a <inline-formula> <tex-math notation=\"LaTeX\">$2\\times 2$ </tex-math></inline-formula> cavity-backed slot subarray is designed with periodic boundary condition to achieve good performances of radiation pattern and directivity. Then, a complete IMGW feeding network with a transition from WR-15 rectangular waveguide to the IMGW has been realized to excite the radiating slots. The complete antenna array is designed at 60-GHz frequency band and fabricated using Electrical Discharging Machining Technology. The measurements show that the antenna has a 16.95% bandwidth covering 54–64-GHz frequency range. The measured gain of the antenna is more than 28 dBi with the efficiency higher than 40% covering 54–64-GHz frequency range.", "title": "" }, { "docid": "d2541bdc0eb9bf65fdeb1e50358c62eb", "text": "Data management is a crucial aspect in the Internet of Things (IoT) on Cloud. Big data is about the processing and analysis of large data repositories on Cloud computing. Big document summarization method is an important technique for data management of IoT. Traditional document summarization methods are restricted to summarize suitable information from the exploding IoT big data on Cloud. 
This paper proposes a big data (i.e., documents, texts) summarization method using the extracted semantic feature which it is extracted by distributed parallel processing of NMF based cloud technique of Hadoop. The proposed method can well represent the inherent structure of big documents set using the semantic feature by the non-negative matrix factorization (NMF). In addition, it can summarize the big data size of document for IoT using the distributed parallel processing based on Hadoop. The experimental results demonstrate that the proposed method can summarize the big data document comparing with the single node of summarization methods. 1096 Yoo-Kang Ji et al.", "title": "" }, { "docid": "8439dbba880179895ab98a521b4c254f", "text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI", "title": "" }, { "docid": "62b8d1ecb04506794f81a47fccb63269", "text": "This paper addresses the mode collapse for generative adversarial networks (GANs). We view modes as a geometric structure of data distribution in a metric space. Under this geometric lens, we embed subsamples of the dataset from an arbitrary metric space into the `2 space, while preserving their pairwise distance distribution. Not only does this metric embedding determine the dimensionality of the latent space automatically, it also enables us to construct a mixture of Gaussians to draw latent space random vectors. We use the Gaussian mixture model in tandem with a simple augmentation of the objective function to train GANs. 
Every major step of our method is supported by theoretical analysis, and our experiments on real and synthetic data confirm that the generator is able to produce samples spreading over most of the modes while avoiding unwanted samples, outperforming several recent GAN variants on a number of metrics and offering new features.", "title": "" }, { "docid": "77af48f5bb5bc77565665944b16d144e", "text": "We examine a protocol πbeacon that outputs unpredictable and publicly verifiable randomness, meaning that the output is unknown at the time that πbeacon starts, yet everyone can verify that the output is close to uniform after πbeacon terminates. We show that πbeacon can be instantiated via Bitcoin under sensible assumptions; in particular we consider an adversary with an arbitrarily large initial budget who may not operate at a loss indefinitely. In case the adversary has an infinite budget, we provide an impossibility result that stems from the similarity between the Bitcoin model and Santha-Vazirani sources. We also give a hybrid protocol that combines trusted parties and a Bitcoin-based beacon.", "title": "" }, { "docid": "e1c927d7fbe826b741433c99fff868d0", "text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography. We then introduce a declarative model—a simple yet expressive JSON grammar associated with visual semantics—that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and support exploring data at scale.", "title": "" }, { "docid": "448d4704991a2bdc086df8f0d7920ec5", "text": "Global progress in the industrial field, which has led to the definition of the Industry 4.0 concept, also affects other spheres of life. One of them is the education. The subject of the article is to summarize the emerging trends in education in relation to the requirements of Industry 4.0 and present possibilities of their use. One option is using augmented reality as part of a modular learning system. The main idea is to combine the elements of the CPS technology concept with modern IT features, with emphasis on simplicity of solution and hardware ease. 
The synthesis of these principles can combine in a single image on a conventional device a realistic view at the technological equipment, complemented with interactive virtual model of the equipment, the technical data and real-time process information.", "title": "" }, { "docid": "0cb6bbe889acb5b54043ba9cedbb4496", "text": "This paper presents a fusion design approach of high-performance filtering balun based on the ringshaped dielectric resonator (DR) for the first time. According to the electromagnetic (EM) field properties of the TE01δ mode of the DR cavity, it can be differentially driven or extracted by reasonably placing the orientations of the feeding probes, which answers for the realization of unbalanced-to-balanced conversion. As a result, the coupling between the resonators can refer to the traditional single-ended design, regardless of the feeding scheme. Based on this, a second-order DR filtering balun is designed by converting a four-port balanced filter to a three-port device. Within the passband, the excellent performance of amplitude balance and 180° phase difference at the balun outputs can be achieved. To improve the stopband rejection by suppressing the spurious responses of the DR cavity, a third-order filtering balun using the hybrid DR and coaxial resonator is designed. It is not rigorously symmetrical, which is different from the traditional designs. The simulated and measured results with good accordance showcase good filter and balun functions at the same time.", "title": "" }, { "docid": "66d5e414e54c657c026fe0e7537c94ee", "text": "A mode-reconfigurable Butterworth bandpass filter, which can be switched between operating as a single-mode-dual-band (SMDB) and a dual-mode-single-band (DMSB) filter is presented. The filter is realized using a substrate integrated waveguide in a square cuboid geometry. Switching is enabled by using empty vias for the SMDB and liquid metal filled vias for the DMSB. The first two modes of the SMDB resonate 3 GHz apart, whereas the first two modes of the DMSB are degenerate and resonate only at the higher frequency. This is due to mode shifting of the first frequency band to the second frequency band. Measurements confirm the liquid-metal reconfiguration between the two operating modes.", "title": "" }, { "docid": "64d3ecaa2f9e850cb26aac0265260aff", "text": "The case of the Frankfurt Airport attack in 2011 in which a 21-year-old man shot several U.S. soldiers, murdering 2 U.S. airmen and severely wounding 2 others, is assessed with the Terrorist Radicalization Assessment Protocol (TRAP-18). The study is based on an extensive qualitative analysis of investigation and court files focusing on the complex interconnection among offender personality, specific opportunity structures, and social contexts. The role of distal psychological factors and proximal warning behaviors in the run up to the deed are discussed. Although in this case the proximal behaviors of fixation on a cause and identification as a “soldier” for the cause developed over years, we observed only a very brief and accelerated pathway toward the violent act. This represents an important change in the demands placed upon threat assessors.", "title": "" }, { "docid": "2f9f21740603b7a84abd57d7c7c02c11", "text": "Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. 
One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC).\n In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory.<sup;>1</sup;> The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical.\n The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps a best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5% for memory-intensive single-threaded benchmarks and 10.8% for multicore workloads. It yields a geometric mean speedup of 5.1% for single-thread applications and 7.6% for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1% for single-thread applications and 7.6% for multicore workloads.", "title": "" } ]
scidocsrr
b748f0b146ddf052bd5f154905e8db12
Flexible Multimodal Tactile Sensing System for Object Identification
[ { "docid": "f8435db6c6ea75944d1c6b521e0f3dd3", "text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "90ca336fa0d6aae07914f03df9bbc2ad", "text": "Planning-based techniques are a very powerful tool for automated story generation. However, as the number of possible actions increases, traditional planning techniques suffer from a combinatorial explosion due to large branching factors. In this work, we apply Monte Carlo Tree Search (MCTS) techniques to generate stories in domains with large numbers of possible actions (100+). Our approach employs a Bayesian story evaluation method to guide the planning towards believable stories that reach a user defined goal. We generate stories in a novel domain with different type of story goals. Our approach shows an order of magnitude improvement in performance over traditional search techniques.", "title": "" }, { "docid": "260c12152d9bd38bd0fde005e0394e17", "text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.", "title": "" }, { "docid": "0e0c1004ad3bf29c5a855531a5185991", "text": "At Facebook, our data systems process huge volumes of data, ranging from hundreds of terabytes in memory to hundreds of petabytes on disk. We categorize our systems as “small data” or “big data” based on the type of queries they run. Small data refers to OLTP-like queries that process and retrieve a small amount of data, for example, the 1000s of objects necessary to render Facebook's personalized News Feed for each person. These objects are requested by their ids; indexes limit the amount of data accessed during a single query, regardless of the total volume of data. Big data refers to queries that process large amounts of data, usually for analysis: trouble-shooting, identifying trends, and making decisions. Big data stores are the workhorses for data analysis at Facebook. They grow by millions of events (inserts) per second and process tens of petabytes and hundreds of thousands of queries per day. In this tutorial, we will describe our data systems and the current challenges we face. We will lead a discussion on these challenges, approaches to solve them, and potential pitfalls. We hope to stimulate interest in solving these problems in the research community.", "title": "" }, { "docid": "213acf777983f4339d6ee25a4467b1be", "text": "RoadGraph is a graph based environmental model for driver assistance systems. It integrates information from different sources like digital maps, onboard sensors and V2X communication into one single model about vehicle's environment. At the moment of information aggregation some function independent situation analysis is done. 
In this paper the concept of the RoadGraph is described in detail and first results are shown.", "title": "" }, { "docid": "5dbc520fbac51f9cc1d13480e7bfb603", "text": "In 1899, Nikola Tesla, who had devised a type of resonant transformer called the Tesla coil, achieved a major breakthrough in his work by transmitting 100 million volts of electric power wirelessly over a distance of 26 miles to light up a bank of 200 light bulbs and run one electric motor. Tesla claimed to have achieved 95% efficiency, but the technology had to be shelved because the effects of transmitting such high voltages in electric arcs would have been disastrous to humans and electrical equipment in the vicinity. This technology has been languishing in obscurity for a number of years, but the advent of portable devices such as mobiles, laptops, smartphones, MP3 players, etc warrants another look at the technology. We propose the use of a new technology, based on strongly coupled magnetic resonance. It consists of a transmitter, a current carrying copper coil, which acts as an electromagnetic resonator and a receiver, another copper coil of similar dimensions to which the device to be powered is attached. The transmitter emits a non-radiative magnetic field resonating at MHz frequencies, and the receiving unit resonates in that field. The resonant nature of the process ensures a strong interaction between the sending and receiving unit, while interaction with rest of the environment is weak.", "title": "" }, { "docid": "ee6906550c2f9d294e411688bae5db71", "text": "This position paper formalises an abstract model for complex negotiation dialogue. This model is to be used for the benchmark of optimisation algorithms ranging from Reinforcement Learning to Stochastic Games, through Transfer Learning, One-Shot Learning or others.", "title": "" }, { "docid": "5eab47907e673449ad73ec6cef30bc07", "text": "Three-dimensional circuits built upon multiple layers of polyimide are required for constructing Si/SiGe monolithic microwave/mm-wave integrated circuits on low resistivity Si wafers. However, the closely spaced transmission lines are susceptible to high levels of cross-coupling, which degrades the overall circuit performance. In this paper, theoretical and experimental results on coupling of Finite Ground Coplanar (FGC) waveguides embedded in polyimide layers are presented for the first time. These results show that FGC lines have approximately 8 dB lower coupling than coupled Coplanar Waveguides. Furthermore, it is shown that the forward and backward coupling characteristics for FGC lines do not resemble the coupling characteristics of other transmission lines such as microstrip.", "title": "" }, { "docid": "a0aa33c4afa58bd4dff7eb209bfb7924", "text": "OBJECTIVE\nTo assess whether frequent marijuana use is associated with residual neuropsychological effects.\n\n\nDESIGN\nSingle-blind comparison of regular users vs infrequent users of marijuana.\n\n\nPARTICIPANTS\nTwo samples of college undergraduates: 65 heavy users, who had smoked marijuana a median of 29 days in the last 30 days (range, 22 to 30 days) and who also displayed cannabinoids in their urine, and 64 light users, who had smoked a median of 1 day in the last 30 days (range, 0 to 9 days) and who displayed no urinary cannabinoids.\n\n\nINTERVENTION\nSubjects arrived at 2 PM on day 1 of their study visit, then remained at our center overnight under supervision. Neuropsychological tests were administered to all subjects starting at 9 AM on day 2. 
Thus, all subjects were abstinent from marijuana and other drugs for a minimum of 19 hours before testing.\n\n\nMAIN OUTCOME MEASURES\nSubjects received a battery of standard neuropsychological tests to assess general intellectual functioning, abstraction ability, sustained attention, verbal fluency, and ability to learn and recall new verbal and visuospatial information.\n\n\nRESULTS\nHeavy users displayed significantly greater impairment than light users on attention/executive functions, as evidenced particularly by greater perseverations on card sorting and reduced learning of word lists. These differences remained after controlling for potential confounding variables, such as estimated levels of premorbid cognitive functioning, and for use of alcohol and other substances in the two groups.\n\n\nCONCLUSIONS\nHeavy marijuana use is associated with residual neuropsychological effects even after a day of supervised abstinence from the drug. However, the question remains open as to whether this impairment is due to a residue of drug in the brain, a withdrawal effect from the drug, or a frank neurotoxic effect of the drug. from marijuana", "title": "" }, { "docid": "c75328d500b9a399ee9f5eeb8a0f979d", "text": "Denial of Service (DoS) attacks continue to grow in magnitude, duration, and frequency increasing the demand for techniques to protect services from disruption, especially at a low cost. We present Denial of Service Elusion (DoSE) as an inexpensive method for mitigating network layer attacks by utilizing cloud infrastructure and content delivery networks to protect services from disruption. DoSE uses these services to create a relay network between the client and the protected service that evades attack by selectively releasing IP address information. DoSE incorporates client reputation as a function of prior behavior to stop attackers along with a feedback controller to limit costs. We evaluate DoSE by modeling relays, clients, and attackers in an agent-based MATLAB simulator. The results show DoSE can mitigate a single-insider attack on 1,000 legitimate clients in 3.9 minutes while satisfying an average of 88.2% of requests during the attack.", "title": "" }, { "docid": "a42b9567dfc9e9fe92bc9aeb38ef5e5a", "text": "This paper presents a physical model for planar spiral inductors on silicon, which accounts for eddy current effect in the conductor, crossover capacitance between the spiral and center-tap, capacitance between the spiral and substrate, substrate ohmic loss, and substrate capacitance. The model has been confirmed with measured results of inductors having a wide range of layout and process parameters. This scalable inductor model enables the prediction and optimization of inductor performance.", "title": "" }, { "docid": "733ddc5a642327364c2bccb6b1258fac", "text": "Human memory is unquestionably a vital cognitive ability but one that can often be unreliable. External memory aids such as diaries, photos, alarms and calendars are often employed to assist in remembering important events in our past and future. The recent trend for lifelogging, continuously documenting ones life through wearable sensors and cameras, presents a clear opportunity to augment human memory beyond simple reminders and actually improve its capacity to remember. 
This article surveys work from the fields of computer science and psychology to understand the potential for such augmentation, the technologies necessary for realising this opportunity and to investigate what the possible benefits and ethical pitfalls of using such technology might be.", "title": "" }, { "docid": "36f73143b6f4d80e8f1d77505fabbfcf", "text": "Progress of IoT and ubiquitous computing technologies has strong anticipation to realize smart services in households such as efficient energy-saving appliance control and elderly monitoring. In order to put those applications into practice, high-accuracy and low-cost in-home living activity recognition is essential. Many researches have tackled living activity recognition so far, but the following problems remain: (i)privacy exposure due to utilization of cameras and microphones; (ii) high deployment and maintenance costs due to many sensors used; (iii) burden to force the user to carry the device and (iv) wire installation to supply power and communication between sensor node and server; (v) few recognizable activities; (vi) low recognition accuracy. In this paper, we propose an in-home living activity recognition method to solve all the problems. To solve the problems (i)--(iv), our method utilizes only energy harvesting PIR and door sensors with a home server for data collection and processing. The energy harvesting sensor has a solar cell to drive the sensor and wireless communication modules. To solve the problems (v) and (vi), we have tackled the following challenges: (a) determining appropriate features for training samples; and (b) determining the best machine learning algorithm to achieve high recognition accuracy; (c) complementing the dead zone of PIR sensor semipermanently. We have conducted experiments with the sensor by five subjects living in a home for 2-3 days each. As a result, the proposed method has achieved F-measure: 62.8% on average.", "title": "" }, { "docid": "c78e0662b9679a70f1ec4416b3abd2b4", "text": "This article offers possibly the first peer-reviewed study on the training routines of elite eathletes, with special focus on the subjects’ physical exercise routines. The study is based on a sample of 115 elite e-athletes. According to their responses, e-athletes train approximately 5.28 hours every day around the year on the elite level. Approximately 1.08 hours of that training is physical exercise. More than half (55.6%) of the elite e-athletes believe that integrating physical exercise in their training programs has a positive effect on esport performance; however, no less than 47.0% of the elite e-athletes do their physical exercise chiefly to maintain overall health. Accordingly, the study indicates that elite e-athletes are active athletes as well, those of age 18 and older exercising physically more than three times the daily 21-minute activity recommendation given by World Health Organization.", "title": "" }, { "docid": "68058500fd6dbbc60104a0985fecd4a8", "text": "Instagram, a popular global mobile photo-sharing platform, involves various user interactions centered on posting images accompanied by hashtags. Participatory hashtagging, one of these diverse tagging practices, has great potential to be a communication channel for various organizations and corporations that would like to interact with users on social media. 
In this paper, we aim to characterize participatory hashtagging behaviors on Instagram by conducting a case study of its representative hashtagging practice, the Weekend Hashtag Project, or #WHP. By conducting a user study using both quantitative and qualitative methods, we analyzed the way Instagram users respond to participation calls and identified factors that motivate users to take part in the project. Based on these findings, we provide design strategies for any interested parties to interact with users on social media.", "title": "" }, { "docid": "8c6ec02821d17fbcf79d1a42ed92a971", "text": "OBJECTIVE\nTo explore whether an association exists between oocyte meiotic spindle morphology visualized by polarized light microscopy at the time of intracytoplasmic sperm injection and the ploidy of the resulting embryo.\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nPrivate IVF clinic.\n\n\nPATIENT(S)\nPatients undergoing preimplantation genetic screening/diagnosis (n = 113 patients).\n\n\nINTERVENTION(S)\nOocyte meiotic spindles were assessed by polarized light microscopy and classified at the time of intracytoplasmic sperm injection as normal, dysmorphic, translucent, telophase, or no visible spindle. Single blastomere biopsy was performed on day 3 of culture for analysis by array comparative genomic hybridization.\n\n\nMAIN OUTCOME MEASURE(S)\nSpindle morphology and embryo ploidy association was evaluated by regression methods accounting for non-independence of data.\n\n\nRESULT(S)\nThe frequency of euploidy in embryos derived from oocytes with normal spindle morphology was significantly higher than all other spindle classifications combined (odds ratio [OR] 1.93, 95% confidence interval [CI] 1.33-2.79). Oocytes with translucent (OR 0.25, 95% CI 0.13-0.46) and no visible spindle morphology (OR 0.35, 95% CI 0.19-0.63) were significantly less likely to result in euploid embryos when compared with oocytes with normal spindle morphology. There was no significant difference between normal and dysmorphic spindle morphology (OR 0.73, 95% CI 0.49-1.08), whereas no telophase spindles resulted in euploid embryos (n = 11). Assessment of spindle morphology was found to be independently associated with embryo euploidy after controlling for embryo quality (OR 1.73, 95% CI 1.16-2.60).\n\n\nCONCLUSION(S)\nOocyte spindle morphology is associated with the resulting embryo's ploidy. Oocytes with normal spindle morphology are significantly more likely to produce euploid embryos compared with oocytes with meiotic spindles that are translucent or not visible.", "title": "" }, { "docid": "134f44bb808d5e873161819ebb175af5", "text": "Like most behavior, consumer behavior too is goal driven. In turn, goals constitute cognitive constructs that can be chronically active as well as primed by features of the environment. Goal systems theory outlines the principles that characterize the dynamics of goal pursuit and explores their implications for consumer behavior. In this vein, we discuss from a common, goal systemic, perspective a variety of well known phenomena in the realm of consumer behavior including brand loyalty, variety seeking, impulsive buying, preferences, choices and regret. The goal systemic perspective affords guidelines for subsequent research on the dynamic aspects of consummatory behavior as well as offering insights into practical matters in the area of marketing. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "f80458241f0a33aebd8044bf85bd25ec", "text": "Brachial–ankle pulse wave velocity (baPWV) is a promising technique to assess arterial stiffness conveniently. However, it is not known whether baPWV is associated with well-established indices of central arterial stiffness. We determined the relation of baPWV with aortic (carotid-femoral) PWV, leg (femoral-ankle) PWV, and carotid augmentation index (AI) by using both cross-sectional and interventional approaches. First, we studied 409 healthy adults aged 18–76 years. baPWV correlated significantly with aortic PWV (r=0.76), leg PWV (r=0.76), and carotid AI (r=0.52). A stepwise regression analysis revealed that aortic PWV was the primary independent correlate of baPWV, explaining 58% of the total variance in baPWV. Additional 23% of the variance was explained by leg PWV. Second, 13 sedentary healthy men were studied before and after a 16-week moderate aerobic exercise intervention (brisk walking to jogging; 30–45 min/day; 4–5 days/week). Reductions in aortic PWV observed with the exercise intervention were significantly and positively associated with the corresponding changes in baPWV (r=0.74). A stepwise regression analysis revealed that changes in aortic PWV were the only independent correlate of changes in baPWV (β=0.74), explaining 55% of the total variance. These results suggest that baPWV may provide qualitatively similar information to those derived from central arterial stiffness although some portions of baPWV may be determined by peripheral arterial stiffness.", "title": "" }, { "docid": "3e28cbfc53f6c42bb0de2baf5c1544aa", "text": "Cloud computing is an emerging paradigm which allows the on-demand delivering of software, hardware, and data as services. As cloud-based services are more numerous and dynamic, the development of efficient service provisioning policies become increasingly challenging. Game theoretic approaches have shown to gain a thorough analytical understanding of the service provisioning problem.\n In this paper we take the perspective of Software as a Service (SaaS) providers which host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality of service requirements, specified in Service Level Agreement (SLA) contracts with the end-users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs, while minimizing the cost of use of resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS wants to maximize the revenues obtained providing virtualized resources. In this paper we model the service provisioning problem as a Generalized Nash game, and we propose an efficient algorithm for the run time management and allocation of IaaS resources to competing SaaSs.", "title": "" }, { "docid": "7f94ebc8ebdde9e337e6dd345c5c529e", "text": "Forms are a standard way of gathering data into a database. Many applications need to support multiple users with evolving data gathering requirements. It is desirable to automatically link dynamic forms to the back-end database. We have developed the FormMapper system, a fully automatic solution that accepts user-created data entry forms, and maps and integrates them into an existing database in the same domain. The solution comprises of two components: tree extraction and form integration. 
The tree extraction component leverages a probabilistic process, Hidden Markov Model (HMM), for automatically extracting a semantic tree structure of a form. In the form integration component, we develop a merging procedure that maps and integrates a tree into an existing database and extends the database with desired properties. We conducted experiments evaluating the performance of the system on several large databases designed from a number of complex forms. Our experimental results show that the FormMapper system is promising: It generated databases that are highly similar (87% overlapped) to those generated by the human experts, given the same set of forms.", "title": "" } ]
scidocsrr
61f4f3a66785a2663084c549494753b1
Theoretical Foundations and Algorithms for Outlier Ensembles
[ { "docid": "1e3d8ab33f0dda81e4f06eb57803852c", "text": "Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Here, we propose and study subsampling as a technique to induce diversity among individual outlier detectors. We show analytically and experimentally that an outlier detector based on a subsample per se, besides inducing diversity, can, under certain conditions, already improve upon the results of the same outlier detector on the complete dataset. Building an ensemble on top of several subsamples is further improving the results. While in the literature so far the intuition that ensembles improve over single outlier detectors has just been transferred from the classification literature, here we also justify analytically why ensembles are also expected to work in the unsupervised area of outlier detection. As a side effect, running an ensemble of several outlier detectors on subsamples of the dataset is more efficient than ensembles based on other means of introducing diversity and, depending on the sample rate and the size of the ensemble, can be even more efficient than just the single outlier detector on the complete data.", "title": "" }, { "docid": "0909789d0f2ad990ec7f530546cf56b1", "text": "The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are high dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms use concepts of proximity in order to find outliers based on their relationship to the rest of the data. However, in high dimensional space, the data is sparse and the notion of proximity fails to retain its meaningfulness. In fact, the sparsity of high dimensional data implies that every point is an almost equally good outlier from the perspective of proximity-based definitions. Consequently, for high dimensional data, the notion of finding meaningful outliers becomes substantially more complex and non-obvious. In this paper, we discuss new techniques for outlier detection which find the outliers by studying the behavior of projections from the data set.", "title": "" } ]
[ { "docid": "cf5d0f7079bd7bc1a197573e28b5569a", "text": "More and more people rely on mobile devices to access the Internet, which also increases the amount of private information that can be gathered from people's devices. Although today's smartphone operating systems are trying to provide a secure environment, they fail to provide users with adequate control over and visibility into how third-party applications use their private data. Whereas there are a few tools that alert users when applications leak private information, these tools are often hard to use by the average user or have other problems. To address these problems, we present PrivacyGuard, an open-source VPN-based platform for intercepting the network traffic of applications. PrivacyGuard requires neither root permissions nor any knowledge about VPN technology from its users. PrivacyGuard does not significantly increase the trusted computing base since PrivacyGuard runs in its entirety on the local device and traffic is not routed through a remote VPN server. We implement PrivacyGuard on the Android platform by taking advantage of the VPNService class provided by the Android SDK.\n PrivacyGuard is configurable, extensible, and useful for many different purposes. We investigate its use for detecting the leakage of multiple types of sensitive data, such as a phone's IMEI number or location data. PrivacyGuard also supports modifying the leaked information and replacing it with crafted data for privacy protection. According to our experiments, PrivacyGuard can detect more leakage incidents by applications and advertisement libraries than TaintDroid. We also demonstrate that PrivacyGuard has reasonable overhead on network performance and almost no overhead on battery consumption.", "title": "" }, { "docid": "5802a9b6f95783d78ceb22410b0d6c18", "text": "Social Internet of Things (SIoT) is a new paradigm where Internet of Things (IoT) merges with social networks, allowing people and devices to interact, and facilitating information sharing. However, security and privacy issues are a great challenge for IoT but they are also enabling factors to create a “trust ecosystem.” In fact, the intrinsic vulnerabilities of IoT devices, with limited resources and heterogeneous technologies, together with the lack of specifically designed IoT standards, represent a fertile ground for the expansion of specific cyber threats. In this paper, we try to bring order on the IoT security panorama providing a taxonomic analysis from the perspective of the three main key layers of the IoT system model: 1) perception; 2) transportation; and 3) application levels. As a result of the analysis, we will highlight the most critical issues with the aim of guiding future research directions.", "title": "" }, { "docid": "233ee357b5785572f50b79d6dd936e7c", "text": "graph is a simple, powerful, elegant abstraction with broad applicability in computer science and many related fields. Algorithms that operate on graphs see heavy use in both theoretical and practical contexts. Graphs have a very natural visual representation as nodes and connecting links arranged in space. Seeing this structure explicitly can aid tasks in many domains. Many people automatically sketch such a picture when thinking about small graphs, often including simple annotations. The pervasiveness of visual representations of small graphs testifies to their usefulness. On the other hand, although many large data sets can be expressed as graphs, few such visual representations exist. 
What causes this discrepancy? For one thing, graph layout poses a hard problem, one that current tools just can't overcome. Conventional systems often falter when handling hundreds of edges, and none can handle more than a few thousand edges. However, nonvisual manipulation of graphs with 50,000 edges is commonplace, and much larger instances exist. We can consider the Web as an extreme example of a graph with many millions of nodes and edges. Although many individual Web sites stay quite small, a significant number have more than 20,000 documents. The Unix file system reachable from a single networked workstation might include more than 100,000 files scattered across dozens of gigabytes worth of remotely mounted disk drives. Computational complexity is not the only reason that software to visually manipulate large graphs has lagged behind software to computationally manipulate them. Many previous graph layout systems have focused on fine-tuning the layout of relatively small graphs in support of polished presentations. A graph drawing system that focuses on the interactive browsing of large graphs can instead target the quite different tasks of browsing and exploration. Many researchers in scientific visualization have recognized the split between explanatory and exploratory goals. This distinction proves equally relevant for graph drawing. Contribution This article briefly describes a software system that explicitly attempts to handle much larger graphs than previous systems and support dynamic exploration rather than final presentation. I'll then discuss the applicability of this system to goals beyond simple exploration. A software system that supports graph exploration should include both a layout and an interactive drawing component. I have developed new algorithms for both layout and drawing—H3 and H3Viewer. A paper from InfoVis 97 contains a more extensive presentation of the H3 layout algorithm. The H3Viewer drawing algorithm remains …", "title": "" }, { "docid": "b1bb5751e409d0fe44754624a4145e70", "text": "Capacity planning determines the optimal product mix based on the available tool sets and allocates production capacity according to the forecasted demands for the next few months. MaxIt is the previous capacity planning system for Intel's Flash Product Group (FPG) Assembly & Test Manufacturing (ATM). It only applied to single product family scenarios with simple process routing. However, new Cellular Handheld Group (CHG) products need to go through flexible and reentrant ATM routes. In this paper, we introduce MaxItPlus, which is an enhanced MaxIt using MILP (mixed integer linear programming) to conduct capacity planning of multiple product families with mixed process routes in a multifactory ATM environment. We also present the detailed mathematical formulation, the system architecture, and implementation results. The project will help Intel global Flash ATM to achieve a single and efficient capacity planning process for all FPG and CHG products and gain $10 M in marginal profit (as determined by the finance department)", "title": "" }, { "docid": "f919a742cda0da2a819f81663d9c594a", "text": "The partially observable Markov decision process (POMDP) model of environments was first explored in the engineering and operations research communities 40 years ago. 
More recently, the model has been embraced by researchers in artificial intelligence and machine learning, leading to a flurry of solution algorithms that can identify optimal or near-optimal behavior in many environments represented as POMDPs. The purpose of this article is to introduce the POMDP model to behavioral scientists who may wish to apply the framework to the problem of understanding normative behavior in experimental settings. The article includes concrete examples using a publicly-available POMDP solution package. © 2009 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "f7c9cf0cef0a24ba199401adc2a7260c", "text": "MOBA (Multiplayer Online Battle Arena) games are currently one of the most popular online video game genres. This paper discusses implementation of a typical MOBA game prototype for Windows platform in a popular game engine Unity 5. The focus is put on using the built-in Unity components in a MOBA setting, developing additional behaviours using Unity's Scripting API for C# and integrating third party components such as the networking engine, 3D models, and particle systems created for use with Unity and available through the Unity Asset Store. A brief overview of useful programming design patterns as well as design patterns already used in Unity is given. Various game state synchronization mechanisms available in the chosen networking engine, Photon Unity Networking, and their usage when synchronizing different types of game information over multiple clients are also discussed. The implemented game retains most of the main features of the modern MOBA games such as heroes with different play styles, skills, team versus team competition, resource collection and consumption, varied maps and defensive structures. The paper concludes with comments on Unity 5 as a MOBA game development environment and execution engine.", "title": "" }, { "docid": "fbac56ecc5d477586707c9bfc1bf8196", "text": "This paper presents implementation of a highly dynamic running gait with a hierarchical controller on the", "title": "" }, { "docid": "b83872038842111e87bbfd7aa64f055d", "text": "Celebrated Sequence to Sequence learning (Seq2Seq) and its fruitful variants are powerful models to achieve excellent performance on the tasks that map sequences to sequences. However, these are many machine learning tasks with inputs naturally represented in a form of graphs, which imposes significant challenges to existing Seq2Seq models for lossless conversion from its graph form to the sequence. In this work, we present a general end-to-end approach to map the input graph to a sequence of vectors, and then another attention-based LSTM to decode the target sequence from these vectors. Specifically, to address inevitable information loss for data conversion, we introduce a novel graph-to-sequence neural network model that follows the encoder-decoder architecture. Our method first uses an improved graph-based neural network to generate the node and graph embeddings by a novel aggregation strategy to incorporate the edge direction information into the node embeddings. We also propose an attention based mechanism that aligns node embeddings and decoding sequence to better cope with large graphs. Experimental results on bAbI task, Shortest Path Task and Natural Language Generation Task demonstrate that our model achieves the state-of-the-art performance and significantly outperforms other baselines. 
We also show that with the proposed aggregation strategy, our proposed model is able to quickly converge to good performance.", "title": "" }, { "docid": "a7e369d89b4203609c8cbcfbb008e427", "text": "Learning to estimate 3D geometry in a single image by watching unlabeled videos via deep convolutional network has made significant process recently. Current state-of-the-art (SOTA) methods, are based on the learning framework of rigid structure-from-motion, where only 3D camera ego motion is modeled for geometry estimation. However, moving objects also exist in many videos, e.g. moving cars in a street scene. In this paper, we tackle such motion by additionally incorporating per-pixel 3D object motion into the learning framework, which provides holistic 3D scene flow understanding and helps single image geometry estimation. Specifically, given two consecutive frames from a video, we adopt a motion network to predict their relative 3D camera pose and a segmentation mask distinguishing moving objects and rigid background. An optical flow network is used to estimate dense 2D per-pixel correspondence. A single image depth network predicts depth maps for both images. The four types of information, i.e. 2D flow, camera pose, segment mask and depth maps, are integrated into a differentiable holistic 3D motion parser (HMP), where per-pixel 3D motion for rigid background and moving objects are recovered. We design various losses w.r.t. the two types of 3D motions for training the depth and motion networks, yielding further error reduction for estimated geometry. Finally, in order to solve the 3D motion confusion from monocular videos, we combine stereo images into joint training. Experiments on KITTI 2015 dataset show that our estimated geometry, 3D motion and moving object masks, not only are constrained to be consistent, but also significantly outperforms other SOTA algorithms, demonstrating the benefits of our approach.", "title": "" }, { "docid": "f496fd06a9c20a6e145e86c0e54d105e", "text": "This paper proposes the combination of a novel modified quasi-Z-source (MqZS) inverter with a single-phase symmetrical hybrid three-level inverter in order to boost the inverter three-level output voltage. The proposed single-phase MqZS hybrid three-level inverter provides a higher boost ability and reduces the number of inductors in the source impedance, compared with both the single-phase three-level neural-point clamped quasi-Z-source inverter and the single-phase quasi-Z-source cascaded multilevel inverter. Additionally, it can be extended to obtain the nine-level output voltage by cascading two three-level pulse width modulation switching cells with a separate MqZS and a dc source, which herein is called a single-phase MqZS cascaded hybrid five-level inverter (MqZS-CHI). A modified modulation technique based on an alternative phase opposition disposition scheme is suggested to effectively control the shoot-through state for boosting the dc-link voltage and balancing the two series capacitor voltages of the MqZS. The performances of both the proposed MqZS-CHI and the modulation techniques are verified through simulation and experimental results.", "title": "" }, { "docid": "a57dec9f4fb85c64e03cebdfd3fea894", "text": "OF THESIS CAPACITOR SWITCHING TRANSIENT MODELING AND ANALYSIS ON AN ELECTRICAL UTILITY DISTRIBUTION SYSTEM USING SIMULINK SOFTWARE The quality of electric power has been a constant topic of study, mainly because inherent problems to it can bring great economic losses in industrial processes. 
Among the factors that affect power quality, those related to transients originated from capacitor bank switching in the primary distribution systems must be highlighted. In this thesis, the characteristics of the transients resulting from the switching of utility capacitor banks are analyzed, as well as factors that influence there intensities. A practical application of synchronous closing to reduce capacitor bank switching transients is presented. A model that represents a real distribution system 12.47kV from Shelbyville sub-station was built and simulated using MATLAB/SIMULINK software for purposes of this study. A spectral analysis of voltage and current waves is made to extract the acceptable capacitor switching times by observing the transient over-voltages and, harmonic components. An algorithm is developed for practical implementation of zero-crossing technique by taking the results obtained from the SIMULINK model.", "title": "" }, { "docid": "5259c661992baa926173348c4e0b0cd2", "text": "A controller assistant system is developed based on the closed-form solution of an offline optimization problem for a four-wheel-drive front-wheel-steerable vehicle. The objective of the controller is to adjust the actual vehicle attitude and motion according to the driver's manipulating commands. The controller takes feedback from acceleration signals, and the imposed conditions and limitations on the controller are studied through the concept of state-derivative feedback control systems. The controller gains are optimized using linear matrix inequality (LMI) and genetic algorithm (GA) techniques. Reference signals are calculated using a driver command interpreter module (DCIM) to accurately interpret the driver's intentions for vehicle motion and to allow the controller to generate proper control actions. It is shown that the controller effectively enhances the handling performance and stability of the vehicle under different road conditions and driving scenarios. Although controller performance is studied for a four-wheel-drive front-wheel-steerable vehicle, the algorithm can also be applied to other vehicle configurations with slight changes.", "title": "" }, { "docid": "95b3c332334b002c8fa086d97a471c17", "text": "Reliability is becoming more and more important as the size and number of installed Wind Turbines (WTs) increases. Very high reliability is especially important for offshore WTs because the maintenance and repair of such WTs in case of failures can be very expensive. WT manufacturers need to consider the reliability aspect when they design new power converters. By designing the power converter considering the reliability aspect the manufacturer can guarantee that the end product will ensure high availability. This paper represents an overview of the various aspects of reliability prediction of high power Insulated Gate Bipolar Transistors (IGBTs) in the context of wind power applications. At first the latest developments and future predictions about wind energy are briefly discussed. Next the dominant failure mechanisms of high power IGBTs are described and the most commonly used lifetime prediction models are reviewed. Also the concept of Accelerated Life Testing (ALT) is briefly reviewed.", "title": "" }, { "docid": "8f0801de787ccea72bb0c61aefbd0ec8", "text": "Recent fMRI studies demonstrated that functional connectivity is altered following cognitive tasks (e.g., learning) or due to various neurological disorders. 
We tested whether real-time fMRI-based neurofeedback can be a tool to voluntarily reconfigure brain network interactions. To disentangle learning-related from regulation-related effects, we first trained participants to voluntarily regulate activity in the auditory cortex (training phase) and subsequently asked participants to exert learned voluntary self-regulation in the absence of feedback (transfer phase without learning). Using independent component analysis (ICA), we found network reconfigurations (increases in functional network connectivity) during the neurofeedback training phase between the auditory target region and (1) the auditory pathway; (2) visual regions related to visual feedback processing; (3) insula related to introspection and self-regulation and (4) working memory and high-level visual attention areas related to cognitive effort. Interestingly, the auditory target region was identified as the hub of the reconfigured functional networks without a-priori assumptions. During the transfer phase, we again found specific functional connectivity reconfiguration between auditory and attention network confirming the specific effect of self-regulation on functional connectivity. Functional connectivity to working memory related networks was no longer altered consistent with the absent demand on working memory. We demonstrate that neurofeedback learning is mediated by widespread changes in functional connectivity. In contrast, applying learned self-regulation involves more limited and specific network changes in an auditory setup intended as a model for tinnitus. Hence, neurofeedback training might be used to promote recovery from neurological disorders that are linked to abnormal patterns of brain connectivity.", "title": "" }, { "docid": "fa44652ecd36d99d18535966727fb3d4", "text": "Spatio-temporal cuboid pyramid (STCP) for action recognition using depth motion sequences [1] is influenced by depth camera error which leads the depth motion sequence (DMS) existing many kinds of noise, especially on the surface. It means that the dimension of DMS is awfully high and the feature for action recognition becomes less apparent. In this paper, we present an effective method to reduce noise, which is to segment foreground. We firstly segment and extract human contour in the color image using convolutional network model. Then, human contour is re-segmented utilizing depth information. Thirdly we project each frame of the segmented depth sequence onto three views. We finally extract features from cuboids and recognize human actions. The proposed approach is evaluated on three public benchmark datasets, i.e., UTKinect-Action Dataset, MSRActionPairs Dataset and 3D Online Action Dataset. Experimental results show that our method achieves state-of-the-art performance.", "title": "" }, { "docid": "1d60437cbd2cec5058957af291ca7cde", "text": "The behavior of users in certain services could be a clue that can be used to infer their preferences and may be used to make recommendations for other services they have never used. However, the cross-domain relationships between items and user consumption patterns are not simple, especially when there are few or no common users and items across domains. To address this problem, we propose a content-based cross-domain recommendation method for cold-start users that does not require user- and item-overlap. We formulate recommendation as extreme multi-class classification where labels (items) corresponding to the users are predicted. 
With this formulation, the problem is reduced to a domain adaptation setting, in which a classifier trained in the source domain is adapted to the target domain. For this, we construct a neural network that combines an architecture for domain adaptation, Domain Separation Network, with a denoising autoencoder for item representation. We assess the performance of our approach in experiments on a pair of data sets collected from movie and news services of Yahoo! JAPAN and show that our approach outperforms several baseline methods including a cross-domain collaborative filtering method.", "title": "" }, { "docid": "d1be704e4d81ab1466482a4924f00474", "text": "Fetus-in-fetu (FIF) is a rare congenital condition in which a fetiform mass is detected in the host abdomen and also in other sites such as the intracranium, thorax, head, and neck. This condition has been rarely reported in the literature. Herein, we report the case of a fetus presenting with abdominal cystic mass and ascites and prenatally diagnosed as meconium pseudocyst. Explorative laparotomy revealed an irregular fetiform mass in the retroperitoneum within a fluid-filled cyst. The mass contained intestinal tract, liver, pancreas, and finger. Fetal abdominal cystic mass has been identified in a broad spectrum of diseases. However, as in our case, FIF is often overlooked during differential diagnosis. FIF should also be differentiated from other conditions associated with fetal abdominal masses.", "title": "" }, { "docid": "1d7b7ea9f0cc284f447c11902bad6685", "text": "In the last few years the efficiency of secure multi-party computation (MPC) increased in several orders of magnitudes. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damgård et al. Applications such as voting and some auctions are perfect use-case for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ i.e. 
our online phase has complexity approximately twice that of SPDZ.", "title": "" }, { "docid": "17a6ac933c6aa864180ba3ae05a99366", "text": "A formal approach to security in the software life cycle is essential to protect corporate resources. However, little thought has been given to this aspect of software development. Traditionally, software security has been treated as an afterthought leading to a cycle of ‘penetrate and patch.’ Due to its criticality, security should be integrated as a formal approach in the software life cycle. Both a software security checklist and assessment tools should be incorporated into this life cycle process. The current research at JPL addresses both of these areas through the development of a Software Security Assessment Instrument (SSAI). This paper focuses on the development of a Software Security Checklist (SSC) for the life cycle. It includes the critical areas of requirements gathering and specification, design and code issues, and maintenance and decommissioning of software and systems.", "title": "" }, { "docid": "0a842427c2c03d08f9950765ee0fb625", "text": "For centuries, several hundred pesticides have been used to control insects. These pesticides differ greatly in their mode of action, uptake by the body, metabolism, elimination from the body, and toxicity to humans. Potential exposure from the environment can be estimated by environmental monitoring. Actual exposure (uptake) is measured by the biological monitoring of human tissues and body fluids. Biomarkers are used to detect the effects of pesticides before adverse clinical health effects occur. Pesticides and their metabolites are measured in biological samples, serum, fat, urine, blood, or breast milk by the usual analytical techniques. Biochemical responses to environmental chemicals provide a measure of toxic effect. A widely used biochemical biomarker, cholinesterase depression, measures exposure to organophosphorus insecticides. Techniques that measure DNA damage (e.g., detection of DNA adducts) provide a powerful tool in measuring environmental effects. Adducts to hemoglobin have been detected with several pesticides. Determination of chromosomal aberration rates in cultured lymphocytes is an established method of monitoring populations occupationally or environmentally exposed to known or suspected mutagenic-carcinogenic agents. There are several studies on the cytogenetic effects of work with pesticide formulations. The majority of these studies report increases in the frequency of chromosomal aberrations and/or sister chromatid exchanges among the exposed workers. Biomarkers will have a major impact on the study of environmental risk factors. The basic aim of scientists exploring these issues is to determine the nature and consequences of genetic change or variation, with the ultimate purpose of predicting or preventing disease.", "title": "" } ]
scidocsrr
86fd4e9b1519bb0b90d0b51b08b66d48
Delivering on the promise of universal memory for spin-transfer torque RAM (STT-RAM)
[ { "docid": "476bb80edf6c54f0b6415d19f027ee19", "text": "Spin-transfer torque (STT) switching demonstrated in submicron sized magnetic tunnel junctions (MTJs) has stimulated considerable interest for developments of STT switched magnetic random access memory (STT-MRAM). Remarkable progress in STT switching with MgO MTJs and increasing interest in STTMRAM in semiconductor industry have been witnessed in recent years. This paper will present a review on the progress in the intrinsic switching current density reduction and STT-MRAM prototype chip demonstration. Challenges to overcome in order for STT-MRAM to be a mainstream memory technology in future technology nodes will be discussed. Finally, potential applications of STT-MRAM in embedded and standalone memory markets will be outlined.", "title": "" }, { "docid": "7413b87b42f71bba294f060c5a7fdfee", "text": "Phase change memory (PCM) is one of the most promising technology among emerging non-volatile random access memory technologies. Implementing a cache memory using PCM provides many benefits such as high density, non-volatility, low leakage power, and high immunity to soft error. However, its disadvantages such as high write latency, high write energy, and limited write endurance prevent it from being used as a drop-in replacement of an SRAM cache. In this paper, we study a set of techniques to design an energy- and endurance-aware PCM cache. We also modeled the timing, energy, endurance, and area of PCM caches and integrated them into a PCM cache simulator to evaluate the techniques. Experiments show that our PCM cache design can achieve 8% of energy saving and 3.8 years of lifetime compared with a baseline PCM cache having less than a hour of lifetime.", "title": "" } ]
[ { "docid": "9bf080ff635459649dd16867f191ed95", "text": "An ad-hoc network is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point or existing infrastructure. In this paper we present Ad-hoc On Demand Distance Vector Routing (AODV), a novel algorithm for the operation of such ad-hoc networks. Each Mobile Host operates as a specialized router, and routes are obtained as needed (i.e., on-demand) with little or no reliance on periodic advertisements. Our new routing algorithm is quite suitable for a dynamic selfstarting network, as required by users wishing to utilize ad-hoc networks. AODV provides loop-free routes even while repairing broken links. Because the protocol does not require global periodic routing advertisements, the demand on the overall bandwidth available to the mobile nodes is substantially less than in those protocols that do necessitate such advertisements. Nevertheless we can still maintain most of the advantages of basic distance-vector routing mechanisms. We show that our algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks. We also include an evaluation methodology and simulation results to verify the operation of our algorithm.", "title": "" }, { "docid": "8f6d8c96c51f210a6711802a2ff32dde", "text": "People are drawn to play different types of videogames and find enjoyment in a range of gameplay experiences. Envisaging a representative game player or persona allows game designers to personalize game content; however, there are many ways to characterize players and little guidance on which approaches best model player behavior and preference. To provide knowledge about how player characteristics contribute to game experience, we investigate how personality traits as well as player styles from the BrianHex model moderate the prediction of player motivation with a social network game. Our results show that several player characteristics impact motivation, expressed in terms of enjoyment and effort. We also show that player enjoyment and effort, as predicted by our models, impact players’ in-game behaviors, illustrating both the predictive power and practical utility of our models for guiding user adaptation.", "title": "" }, { "docid": "0cfda368edafe21e538f2c1d7ed75056", "text": "This paper presents high performance speaker identification and verification systems based on Gaussian mixture speaker models: robust, statistically based representations of speaker identity. The identification system is a maximum likelihood classifier and the verification system is a likelihood ratio hypothesis tester using background speaker normalization. The systems are evaluated on four publically available speech databases: TIMIT, NTIMIT, Switchboard and YOHO. The different levels of degradations and variabilities found in these databases allow the examination of system performance for different task domains. Constraints on the speech range from vocabulary-dependent to extemporaneous and speech quality varies from near-ideal, clean speech to noisy, telephone speech. Closed set identification accuracies on the 630 speaker TIMIT and NTIMIT databases were 99.5% and 60.7%, respectively. On a 113 speaker population from the Switchboard database the identification accuracy was 82.8%. 
Global threshold equal error rates of 0.24%, 7.19%, 5.15% and 0.51% were obtained in verification experiments on the TIMIT, NTIMIT, Switchboard and YOHO databases, respectively.", "title": "" }, { "docid": "01ea2d3c28382459aafa064e70e582d3", "text": "In recent decades, an intriguing view of human cognition has garnered increasing support. According to this view, which I will call 'the hypothesis of extended cognition' ('HEC', hereafter), human cognitive processing literally extends into the environment surrounding the organism, and human cognitive states literally comprise—as wholes do their proper parts—elements in that environment; in consequence, while the skin and scalp may encase the human organism, they do not delimit the thinking subject. The hypothesis of extended cognition should provoke our critical interest. Acceptance of HEC would alter our approach to research and theorizing in cognitive science and, it would seem, significantly change our conception of persons. Thus, if HEC faces substantive difficulties, these should be brought to light; this paper is meant to do just that, exposing some of the problems HEC must overcome if it is to stand among leading views of the nature of human cognition. The essay unfolds as follows: The first section consists of preliminary remarks, mostly about the scope and content of HEC as I will construe it. Sections II and III clarify HEC by situating it with respect to related theses one finds in the literature—the hypothesis of embedded cognition and content-externalism. The remaining sections develop a series of objections to HEC and the arguments that have been offered in its support. The first objection appeals to common sense: HEC implies highly counterintuitive attributions of belief. Of course, HEC-theorists can take, and have taken, a naturalistic stand. They claim that HEC need not be responsive to commonsense objections, for HEC is being offered as a theoretical postulate of cognitive science; whether we should accept HEC depends, they say, on the value of the empirical work premised upon it. Thus, I consider a series of arguments meant to show that HEC is a promising causal-explanatory hypothesis, concluding that these arguments fail and that, ultimately, HEC appears to be of marginal interest as part of a philosophical foundation for cognitive science. If the cases canvassed here are any indication, adopting HEC results in a significant loss of explanatory power or, at the …", "title": "" }, { "docid": "4acfb49be406de472af9080d3cdc6fa4", "text": "Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. 
Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpectedly adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.", "title": "" }, { "docid": "e40228513cb17052c182dd1f421c659a", "text": "This manuscript describes our participation in the International Skin Imaging Collaboration’s 2017 Skin Lesion Analysis Towards Melanoma Detection competition. We participated in Part 3: Lesion Classification. The two stated goals of this binary image classification challenge were to distinguish between (a) melanoma and (b) nevus and seborrheic keratosis, followed by distinguishing between (a) seborrheic keratosis and (b) nevus and melanoma. We chose a deep neural network approach with a transfer learning strategy, using a pre-trained Inception V3 network as both a feature extractor to provide input for a multi-layer perceptron as well as fine-tuning an augmented Inception network. This approach yielded validation set AUC’s of 0.84 on the second task and 0.76 on the first task, for an average AUC of 0.80. We joined the competition unfortunately late, and we look forward to improving on these results. Keywords—transfer learning; melanoma; seborrheic keratosis; nevus;", "title": "" }, { "docid": "a2842352924cbd1deff52976425a0bd6", "text": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. 
Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.", "title": "" }, { "docid": "19672ead8c41fa723099b30d152fb466", "text": "-Fractal dimension is an interesting parameter to characterize roughness in an image. It can be used in texture segmentation, estimation of three-dimensional (3D) shape and other information. A new method is proposed to estimate fractal dimension in a two-dimensional (2D) image which can readily be extended to a 3D image as well. The method has been compared with other existing methods to show that our method is both efficient and accurate. Fractal dimension Texture analysis Image roughness measure Image segmentation Computer vision", "title": "" }, { "docid": "ef3b9dd6b463940bc57cdf7605c24b1e", "text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.", "title": "" }, { "docid": "458a02043f943be7caf655513838fbef", "text": "The traditional apparel product development process is a typical iterative ‘optimization’ process that involves trial-and-error. In order to confirm the design and achieve a satisfactory fit, a number of repeated cycles of sample preparation, trial fitting and pattern alteration must be conducted. The process itself is time-consuming, costly, and dependent on the designer’s skills and experience. In this paper, a novel computer aided design (CAD) solution for virtual try-on, fitting evaluation and style editing is proposed to speed up the clothing design process. A series of new techniques from cross parameterization, geometrical and physical integrated deformation, to novel editing methods are proposed. First, a cross parameterization technique is employed to map clothing pattern pieces on a model surface. The pattern can be precisely positioned to form the initial shapewith low distortion. Next, a new deformationmethod called hybrid pop-up is proposed to approximate the virtual try-on shape. This method is an integration of geometrical reconstruction and physical based simulation. In addition, user interactive operations are introduced for style editing and pattern alteration in both 2D and 3D manners. The standard rules regulating pattern editing in the fashion industry can be incorporated in the system, so that the resulting clothing patterns are suitable for everyday production. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "265bf26646113a56101c594f563cb6dc", "text": "A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. 
Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.", "title": "" }, { "docid": "d38df66fe85b4d12093965e649a70fe1", "text": "We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.", "title": "" }, { "docid": "7c9d35fb9cec2affbe451aed78541cef", "text": "Dental caries, also known as dental cavities, is the most widespread pathology in the world. Up to a very recent period, almost all individuals had the experience of this pathology at least once in their life. Early detection of dental caries can help in a sharp decrease in the dental disease rate. Thanks to the growing accessibility to medical imaging, the clinical applications now have better impact on patient care. Recently, there has been interest in the application of machine learning strategies for classification and analysis of image data. In this paper, we propose a new method to detect and identify dental caries using X-ray images as dataset and deep neural network as technique. This technique is based on stacked sparse auto-encoder and a softmax classifier. Those techniques, sparse auto-encoder and softmax, are used to train a deep neural network. The novelty here is to apply deep neural network to diagnosis of dental caries. This approach was tested on a real dataset and has demonstrated a good performance of detection. Keywords-dental X-ray; classification; Deep Neural Networks; Stacked sparse auto-encoder; Softmax.", "title": "" }, { "docid": "5ca36b7877ebd3d05e48d3230f2dceb0", "text": "BACKGROUND\nThe frontal branch has a defined course along the Pitanguy line from tragus to lateral brow, although its depth along this line is controversial. The high-superficial musculoaponeurotic system (SMAS) face-lift technique divides the SMAS above the arch, which conflicts with previous descriptions of the frontal nerve depth. This anatomical study defines the depth and fascial boundaries of the frontal branch of the facial nerve over the zygomatic arch.\n\n\nMETHODS\nEight fresh cadaver heads were included in the study, with bilateral facial nerves studied (n = 16). The proximal frontal branches were isolated and then sectioned in full-thickness tissue blocks over a 5-cm distance over the zygomatic arch. The tissue blocks were evaluated histologically for the depth and fascial planes surrounding the frontal nerve. 
A dissection video accompanies this article.\n\n\nRESULTS\nThe frontal branch of the facial nerve was identified in each tissue section and its fascial boundaries were easily identified using epidermis and periosteum as reference points. The frontal branch coursed under a separate fascial plane, the parotid-temporal fascia, which was deep to the SMAS as it coursed to the zygomatic arch and remained within this deep fascia over the arch. The frontal branch was intact and protected by the parotid-temporal fascia after a high-SMAS face lift.\n\n\nCONCLUSIONS\nThe frontal branch of the facial nerve is protected by a deep layer of fascia, termed the parotid-temporal fascia, which is separate from the SMAS as it travels over the zygomatic arch. Division of the SMAS above the arch in a high-SMAS face lift is safe using the technique described in this study.", "title": "" }, { "docid": "b60474e6e2fa0f08241819bac709d6fd", "text": "Patriarchy is the prime obstacle to women’s advancement and development. Despite differences in levels of domination the broad principles remain the same, i.e. men are in control. The nature of this control may differ. So it is necessary to understand the system, which keeps women dominated and subordinate, and to unravel its workings in order to work for women’s development in a systematic way. In the modern world where women go ahead by their merit, patriarchy there creates obstacles for women to go forward in society. Because patriarchal institutions and social relations are responsible for the inferior or secondary status of women. Patriarchal society gives absolute priority to men and to some extent limits women’s human rights also. Patriarchy refers to the male domination both in public and private spheres. In this way, feminists use the term ‘patriarchy’ to describe the power relationship between men and women as well as to find out the root cause of women’s subordination. This article, hence, is an attempt to analyse the concept of patriarchy and women’s subordination in a theoretical perspective.", "title": "" }, { "docid": "3850b4ec9b23868f3b67b984d8af026b", "text": "This paper presents a first reported passive-charge-sharing SAR ADC that achieves 16 bit linearity. It is known that on chip passive-charge-sharing suffers from poor linearity due to the unregulated reference voltage during bit trials. The proposed unique ADC architecture and calibration technique addresses the issue of signal dependent reference voltage droop during SAR ADC bit trials and orthogonalize the bit weights to achieve 16bit linearity. In addition, the proposed architecture maximizes SNR by sampling on to the bit cap, the first reported in this type of SAR ADC. Measurement result from a prototype test chip shows +/−0.8 LSB (16-bit level) INL at 1MSPS.", "title": "" }, { "docid": "01bd8fcce2f4b94e206a1ea91898fcff", "text": "With deep learning becoming the dominant approach in computer vision, the use of representations extracted from Convolutional Neural Nets (CNNs) is quickly gaining ground on Fisher Vectors (FVs) as favoured state-of-the-art global image descriptors for image instance retrieval. While the good performance of CNNs for image classification are unambiguously recognised, which of the two has the upper hand in the image retrieval context is not entirely clear yet. In this work, we propose a comprehensive study that systematically evaluates FVs and CNNs for image retrieval. The first part compares the performances of FVs and CNNs on multiple publicly available data sets. 
We investigate a number of details specific to each method. For FVs, we compare sparse descriptors based on interest point detectors with dense single-scale and multi-scale variants. For CNNs, we focus on understanding the impact of depth, architecture and training data on retrieval results. Our study shows that no descriptor is systematically better than the other and that performance gains can usually be obtained by using both types together. The second part of the study focuses on the impact of geometrical transformations such as rotations and scale changes. FVs based on interest point detectors are intrinsically resilient to such transformations while CNNs do not have a built-in mechanism to ensure such invariance. We show that performance of CNNs can quickly degrade in presence of rotations while they are far less affected by changes in scale. We then propose a number of ways to incorporate the required invariances in the CNN pipeline. Overall, our work is intended as a reference guide offering practically useful and simply implementable guidelines to anyone looking for state-of-the-art global descriptors best suited to their specific image instance retrieval problem.", "title": "" }, { "docid": "bca81a5b34376e5a6090e528a583b4f4", "text": "There has been considerable debate in the literature about the relative merits of information processing versus dynamical approaches to understanding cognitive processes. In this article, we explore the relationship between these two styles of explanation using a model agent evolved to solve a relational categorization task. Specifically, we separately analyze the operation of this agent using the mathematical tools of information theory and dynamical systems theory. Information-theoretic analysis reveals how task-relevant information flows through the system to be combined into a categorization decision. Dynamical analysis reveals the key geometrical and temporal interrelationships underlying the categorization decision. Finally, we propose a framework for directly relating these two different styles of explanation and discuss the possible implications of our analysis for some of the ongoing debates in cognitive science.", "title": "" }, { "docid": "bd49abf84e7bbe71fc3d116523065b71", "text": "For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models provide high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions.", "title": "" } ]
scidocsrr
a534f6103e172ec4afde9daebd5edcab
Tensorial Mixture Models
[ { "docid": "9ece98aee7056ff6c686c12bcdd41d31", "text": "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.", "title": "" } ]
[ { "docid": "fcdf96ef1c2798169f05c96ba58c96a9", "text": "This paper develops a novel methodology for using symbolic knowledge in deep learning. We define a semantic loss function that bridges between neural output vectors and logical constraints. This loss function captures how close the neural network is to satisfying the constraints on its output. An experimental evaluation shows that our semantic loss function effectively guides the learner to achieve (near-)state-of-the-art results on semi-supervised multi-class classification. Moreover, it significantly increases the ability of the neural network to predict structured objects under weak supervision, such as rankings and shortest paths.", "title": "" }, { "docid": "a10804fe3d5648a014a164c92ffa0c25", "text": "OBJECTIVES\nThe aim of this study was to compare the long-term outcomes of implants placed in patients treated for periodontitis periodontally compromised patients (PCP) and in periodontally healthy patients (PHP) in relation to adhesion to supportive periodontal therapy (SPT).\n\n\nMATERIAL AND METHODS\nOne hundred and twelve partially edentulous patients were consecutively enrolled in private specialist practice and divided into three groups according to their initial periodontal condition: PHP, moderate PCP and severe PCP. Perio and implant treatment was carried out as needed. Solid screws (S), hollow screws (HS) and hollow cylinders (HC) were installed to support fixed prostheses, after successful completion of initial periodontal therapy (full-mouth plaque score <25% and full-mouth bleeding score <25%). At the end of treatment, patients were asked to follow an individualized SPT program. At 10 years, clinical measures and radiographic bone changes were recorded by two calibrated operators, blinded to the initial patient classification.\n\n\nRESULTS\nEleven patients were lost to follow-up. During the period of observation, 18 implants were removed because of biological complications. The implant survival rate was 96.6%, 92.8% and 90% for all implants and 98%, 94.2% and 90% for S-implants only, respectively, for PHP, moderate PCP and severe PCP. The mean bone loss was 0.75 (+/- 0.88) mm in PHP, 1.14 (+/- 1.11) mm in moderate PCP and 0.98 (+/- 1.22) mm in severe PCP, without any statistically significant difference. The percentage of sites, with bone loss > or =3 mm, was, respectively, 4.7% for PHP, 11.2% for moderate PCP and 15.1% for severe PCP, with a statistically significant difference between PHP and severe PCP (P<0.05). Lack of adhesion to SPT was correlated with a higher incidence of bone loss and implant loss.\n\n\nCONCLUSION\nPatients with a history of periodontitis presented a lower survival rate and a statistically significantly higher number of sites with peri-implant bone loss. Furthermore, PCP, who did not completely adhere to the SPT, were found to present a higher implant failure rate. This underlines the value of the SPT in enhancing the long-term outcomes of implant therapy, particularly in subjects affected by periodontitis, in order to control reinfection and limit biological complications.", "title": "" }, { "docid": "1170077ab8ca8e1f27937a7024014dd0", "text": "BACKGROUND\nOn December 6 and 7, 2017, the US Department of Health and Human Services (HHS) hosted its first Code-a-Thon event aimed at leveraging technology and data-driven solutions to help combat the opioid epidemic. 
The authors—an interdisciplinary team from academia, the private sector, and the US Centers for Disease Control and Prevention—participated in the Code-a-Thon as part of the prevention track.\n\n\nOBJECTIVE\nThe aim of this study was to develop and deploy a methodology using machine learning to accurately detect the marketing and sale of opioids by illicit online sellers via Twitter as part of participation at the HHS Opioid Code-a-Thon event.\n\n\nMETHODS\nTweets were collected from the Twitter public application programming interface stream filtered for common prescription opioid keywords in conjunction with participation in the Code-a-Thon from November 15, 2017 to December 5, 2017. An unsupervised machine learning–based approach was developed and used during the Code-a-Thon competition (24 hours) to obtain a summary of the content of the tweets to isolate those clusters associated with illegal online marketing and sale using a biterm topic model (BTM). After isolating relevant tweets, hyperlinks associated with these tweets were reviewed to assess the characteristics of illegal online sellers.\n\n\nRESULTS\nWe collected and analyzed 213,041 tweets over the course of the Code-a-Thon containing keywords codeine, percocet, vicodin, oxycontin, oxycodone, fentanyl, and hydrocodone. Using BTM, 0.32% (692/213,041) tweets were identified as being associated with illegal online marketing and sale of prescription opioids. After removing duplicates and dead links, we identified 34 unique “live” tweets, with 44% (15/34) directing consumers to illicit online pharmacies, 32% (11/34) linked to individual drug sellers, and 21% (7/34) used by marketing affiliates. In addition to offering the “no prescription” sale of opioids, many of these vendors also sold other controlled substances and illicit drugs.\n\n\nCONCLUSIONS\nThe results of this study are in line with prior studies that have identified social media platforms, including Twitter, as a potential conduit for supply and sale of illicit opioids. To translate these results into action, authors also developed a prototype wireframe for the purposes of detecting, classifying, and reporting illicit online pharmacy tweets selling controlled substances illegally to the US Food and Drug Administration and the US Drug Enforcement Agency. Further development of solutions based on these methods has the potential to proactively alert regulators and law enforcement agencies of illegal opioid sales, while also making the online environment safer for the public.", "title": "" }, { "docid": "9504c6c6286f6bd57e5e443d6fdcced9", "text": "Comparisons of two assessment measures for ADHD: the ADHD Behavior Checklist and the Integrated Visual and Auditory Continuous Performance Test (IVA CPT) were examined using undergraduates (n=44) randomly assigned to a control or a simulated malingerer condition and undergraduates with a valid diagnosis of ADHD (n=16). It was predicted that malingerers would successfully fake ADHD on the rating scale but not on the CPT for which they would overcompensate, scoring lower than all other groups. Analyses indicated that the ADHD Behavior Rating Scale was successfully faked for childhood and current symptoms. IVA CPT could not be faked on 81% of its scales. The CPT's impairment index results revealed: sensitivity 94%, specificity 91%, PPP 88%, NPP 95%. 
Results provide support for the inclusion of a CPT in assessment of adult ADHD.", "title": "" }, { "docid": "44e7ba0be5275047587e9afd22f1de2a", "text": "Dialogue state tracking plays an important role in statistical dialogue management. Domain-independent rule-based approaches are attractive due to their efficiency, portability and interpretability. However, recent rule-based models are still not quite competitive to statistical tracking approaches. In this paper, a novel framework is proposed to formulate rule-based models in a general way. In the framework, a rule is considered as a special kind of polynomial function satisfying certain linear constraints. Under some particular definitions and assumptions, rule-based models can be seen as feasible solutions of an integer linear programming problem. Experiments showed that the proposed approach can not only achieve competitive performance compared to statistical approaches, but also have good generalisation ability. It is one of the only two entries that outperformed all the four baselines in the third Dialog State Tracking Challenge.", "title": "" }, { "docid": "de7adeaded669f10ff63bc36269ca384", "text": "The posterior cruciate ligament (PCL) is recognized as an essential stabilizer of the knee. However, the complexity of the ligament has generated controversy about its definitive role and the recommended treatment after injury. A proper understanding of the functional role of the PCL is necessary to minimize residual instability, osteoarthritic progression, and failure of additional concomitant ligament graft reconstructions or meniscal repairs after treatment. Recent anatomic and biomechanical studies have elucidated the surgically relevant quantitative anatomy and confirmed the codominant role of the anterolateral and posteromedial bundles of the PCL. Although nonoperative treatment has historically been the initial treatment of choice for isolated PCL injury, possibly biased by the historically poorer objective outcomes postoperatively compared with anterior cruciate ligament reconstructions, surgical intervention has been increasingly used for isolated and combined PCL injuries. Recent studies have more clearly elucidated the biomechanical and clinical effects after PCL tears and resultant treatments. This article presents a thorough review of updates on the clinically relevant anatomy, epidemiology, biomechanical function, diagnosis, and current treatments for the PCL, with an emphasis on the emerging clinical and biomechanical evidence regarding each of the treatment choices for PCL reconstruction surgery. It is recommended that future outcomes studies use PCL stress radiographs to determine objective outcomes and that evidence level 1 and 2 studies be performed to assess outcomes between transtibial and tibial inlay reconstructions and also between single- and double-bundle PCL reconstructions.", "title": "" }, { "docid": "e5f995100ed9049da0aa67749b0568c8", "text": "This paper presents a novel convolutional neural network (CNN) -based method for high-accuracy real-time car license plate detection. Many contemporary methods for car license plate detection are reasonably effective under the specific conditions or strong assumptions only. However, they exhibit poor performance when the assessed car license plate images have a degree of rotation, as a result of manual capture by traffic police or deviation of the camera. Therefore, we propose the a CNN-based MD-YOLO framework for multi-directional car license plate detection. 
Using accurate rotation angle prediction and a fast intersection-over-union evaluation strategy, our proposed method can elegantly manage rotational problems in real-time scenarios. A series of experiments have been carried out to establish that the proposed method outperforms over other existing state-of-the-art methods in terms of better accuracy and lower computational cost.", "title": "" }, { "docid": "09132f8695e6f8d32d95a37a2bac46ee", "text": "Social media has become one of the main channels for people to access and consume news, due to the rapidness and low cost of news dissemination on it. However, such properties of social media also make it a hotbed of fake news dissemination, bringing negative impacts on both individuals and society. Therefore, detecting fake news has become a crucial problem attracting tremendous research effort. Most existing methods of fake news detection are supervised, which require an extensive amount of time and labor to build a reliably annotated dataset. In search of an alternative, in this paper, we investigate if we could detect fake news in an unsupervised manner. We treat truths of news and users’ credibility as latent random variables, and exploit users’ engagements on social media to identify their opinions towards the authenticity of news. We leverage a Bayesian network model to capture the conditional dependencies among the truths of news, the users’ opinions, and the users’ credibility. To solve the inference problem, we propose an efficient collapsed Gibbs sampling approach to infer the truths of news and the users’ credibility without any labelled data. Experiment results on two datasets show that the proposed method significantly outperforms the compared unsupervised methods.", "title": "" }, { "docid": "074407b66a0a81f27625edc98dead4fc", "text": "A new method, modified Bagging (mBagging) of Maximal Information Coefficient (mBoMIC), was developed for genome-wide identification. Traditional Bagging is inadequate to meet some requirements of genome-wide identification, in terms of statistical performance and time cost. To improve statistical performance and reduce time cost, an mBagging was developed to introduce Maximal Information Coefficient (MIC) into genomewide identification. The mBoMIC overcame the weakness of original MIC, i.e., the statistical power is inadequate and MIC values are volatile. The three incompatible measures of Bagging, i.e. time cost, statistical power and false positive rate, were significantly improved simultaneously. Compared with traditional Bagging, mBagging reduced time cost by 80%, improved statistical power by 15%, and decreased false positive rate by 31%. The mBoMIC has sensitivity and university in genome-wide identification. The SNPs identified only by mBoMIC have been reported as SNPs associated with cardiac disease.", "title": "" }, { "docid": "0ccaeab89b4acbcadfdfe40a56356383", "text": "This paper is a review of the book Discovering Data Mining: From Concept to Implementation – Peter Cabena, Pablo Hadjinian, Rolf Stadler, Jaap Verhees, and Alessandro Zanasi (New Jersey: Prentice Hall, 195 pp., 1998).", "title": "" }, { "docid": "2975f30e0de5864559a6e391618ff66d", "text": "195 With commercial bank lending to developing economies drying up in the 1980s, most countries eased restrictions on foreign direct investment (FDI) and many aggressively offered tax incentives and subsidies to attract foreign capital (Aitken and Harrison 1999; World Bank 1997a, 1997b). 
Along with these policy changes, a surge of noncommercial bank private capital flows to developing economies in the 1990s occurred. Private capital flows to emerging-market economies exceeded $320 billion in 1996 and reached almost $200 billion in 2000. Even the 2000 figure is almost four times larger than the peak commercial bank lending years of the 1970s and early 1980s. Furthermore, FDI now accounts for over 60 percent of private capital flows. While the explosion of FDI flows is unmistakable, the growth effects remain unclear. Theory provides conflicting predictions concerning the growth effects of FDI. The economic rationale for offering special incentives to attract FDI frequently derives from the belief that foreign investment produces externalities in the form of technology transfers and spillovers. Romer (1993), for example, argues that important “idea gaps” between rich and poor countries exist. He notes that foreign investment can ease the transfer of technological 8", "title": "" }, { "docid": "015449616e6a0526ea3b1f79420bfb26", "text": "Online fraud, described as dubious business transactions and deceit carried out electronically, has reached an alarming rate worldwide and has become a major challenge to organizations and governments. In the Gulf region, particularly Saudi Arabia, where there is high Internet penetration and many online financial transactions, the need to put effective measures to deter, prevent and detect online fraud, has become imperative. This paper examines how online fraud control measures in financial institutions in Saudi Arabia are organized and managed. Through qualitative interviews with experts in Saudi Arabia, the study found that people’s perceptions (from their moral, social, cultural and religious backgrounds) have significant effect on awareness and fraud prevention and detection. It also argues that technological measures alone may not be adequate. Deterrence, prevention, detection and remedy activities, together making General Deterrence Theory (GDT) as an approach for systematically and effectively combatting online fraud in Saudi.", "title": "" }, { "docid": "95727de088955aff88366de2c0f57dfe", "text": "Current software for AI development requires the use of programming languages to develop intelligent agents. This can be disadvantageous for AI designers, as their work needs to be debugged and treated as a generic piece of software code. Moreover, such approaches are designed for experts; often requiring a steep initial learning curve, as they are tailored for programmers. This can be also disadvantageous for implementing transparency to agents, an important ethical consideration [1], [2], as additional work is needed to expose and represent information to end users. We are working towards the development of a new editor, ABOD3. It allows the graphical visualisation of Behaviour Oriented Design based plans [3], including its two major derivatives: Parallel-rooted, Ordered Slip-stack Hierarchical (POSH) and Instinct [4]. The new editor is designed to allow not only the development of reactive plans, but also to debug such plans in real time to reduce the time required to develop an agent. 
This allows the development and testing of plans from a same application.", "title": "" }, { "docid": "a59f82d98f978701d6a4271db1674d2a", "text": "Hyperspectral imagery typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image; however, when used in statistical pattern-classification tasks, the resulting high-dimensional feature spaces often tend to result in ill-conditioned formulations. Popular dimensionality-reduction techniques such as principal component analysis, linear discriminant analysis, and their variants typically assume a Gaussian distribution. The quadratic maximum-likelihood classifier commonly employed for hyperspectral analysis also assumes single-Gaussian class-conditional distributions. Departing from this single-Gaussian assumption, a classification paradigm designed to exploit the rich statistical structure of the data is proposed. The proposed framework employs local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure, while a subsequent Gaussian mixture model or support vector machine provides effective classification of the reduced-dimension multimodal data. Experimental results on several different multiple-class hyperspectral-classification tasks demonstrate that the proposed approach significantly outperforms several traditional alternatives.", "title": "" }, { "docid": "f4b7b9747c0ba994b60326a568aa4173", "text": "Unmanned Aerial Vehicles (UAV) facilitate the development of Internet of Things (IoT) ecosystems for smart city and smart environment applications. This paper proposes the adoption of Edge and Fog computing principles to the UAV based forest fire detection application domain through a hierarchical architecture. This three-layer ecosystem combines the powerful resources of cloud computing, the rich resources of fog computing and the sensing capabilities of the UAVs. These layers efficiently cooperate to address the key challenges imposed by the early forest fire detection use case. Initial experimental evaluations measuring crucial performance metrics indicate that critical resources, such as CPU/RAM, battery life and network resources, can be efficiently managed and dynamically allocated by the proposed approach.", "title": "" }, { "docid": "b5cb64a0a17954310910d69c694ad786", "text": "This paper proposes a hybrid of handcrafted rules and a machine learning method for chunking Korean. In the partially free word-order languages such as Korean and Japanese, a small number of rules dominate the performance due to their well-developed postpositions and endings. Thus, the proposed method is primarily based on the rules, and then the residual errors are corrected by adopting a memory-based machine learning method. Since the memory-based learning is an efficient method to handle exceptions in natural language processing, it is good at checking whether the estimates are exceptional cases of the rules and revising them. An evaluation of the method yields the improvement in F-score over the rules or various machine learning methods alone.", "title": "" }, { "docid": "255de21131ccf74c3269cc5e7c21820b", "text": "This paper discusses the effect of driving current on frequency response of the two types of light emitting diodes (LEDs), namely, phosphor-based LED and single color LED. 
The experiments show that the influence of the change of driving current on frequency response of phosphor-based LED is not obvious compared with the single color LED(blue, red and green). The experiments also find that the bandwidth of the white LED was expanded from 1MHz to 32MHz by the pre-equalization strategy and 26Mbit/s transmission speed was taken under Bit Error Ratio of 7.55×10-6 within 3m by non-return-to-zero on-off-keying modulation. Especially, the frequency response intensity of the phosphor-based LED is little influenced by the fluctuation of the driving current, which meets the requirements that the indoor light source needs to be adjusted in real-time by driving current. As the bandwidth of the single color LED is changed by the driving current obviously, the LED modulation bandwidth should be calculated according to the minimum driving current while we consider the requirement of the VLC transmission speed.", "title": "" }, { "docid": "1a834cb0c5d72c6bc58c4898d318cfc2", "text": "This paper proposes a novel single-stage high-power-factor ac/dc converter with symmetrical topology. The circuit topology is derived from the integration of two buck-boost power-factor-correction (PFC) converters and a full-bridge series resonant dc/dc converter. The switch-utilization factor is improved by using two active switches to serve in the PFC circuits. A high power factor at the input line is assured by operating the buck-boost converters at discontinuous conduction mode. With symmetrical operation and elaborately designed circuit parameters, zero-voltage switching on all the active power switches of the converter can be retained to achieve high circuit efficiency. The operation modes, design equations, and design steps for the circuit parameters are proposed. A prototype circuit designed for a 200-W dc output was built and tested to verify the analytical predictions. Satisfactory performances are obtained from the experimental results.", "title": "" }, { "docid": "8f8f249e7be54e0696cac03aedf25d73", "text": "The recent researches in Deep Convolutional Neural Network have focused their attention on improving accuracy that provide significant advances. However, if they were limited to classification tasks, nowadays with contributions from Scientific Communities who are embarking in this field, they have become very useful in higher level tasks such as object detection and pixel-wise semantic segmentation. Thus, brilliant ideas in the field of semantic segmentation with deep learning have completed the state of the art of accuracy, however this architectures become very difficult to apply in embedded systems as is the case for autonomous driving. We present a new Deep fully Convolutional Neural Network for pixel-wise semantic segmentation which we call Squeeze-SegNet. The architecture is based on Encoder-Decoder style. We use a SqueezeNet-like encoder and a decoder formed by our proposed squeeze-decoder module and upsample layer using downsample indices like in SegNet and we add a deconvolution layer to provide final multi-channel feature map. On datasets like Camvid or City-states, our net gets SegNet-level accuracy with less than 10 times fewer parameters​ ​than​ ​SegNet.", "title": "" }, { "docid": "2fd8725adfb4a2f8d6639c59386159f6", "text": "Collective human knowledge has clearly benefited from the fact that innovations by individuals are taught to others through communication. 
Similar to human social groups, agents in distributed learning systems would likely benefit from communication to share knowledge and teach skills. The problem of teaching to improve agent learning has been investigated by prior works, but these approaches make assumptions that prevent application of teaching to general multiagent problems, or require domain expertise for problems they can apply to. This learning to teach problem has inherent complexities related to measuring long-term impacts of teaching that compound the standard multiagent coordination challenges. In contrast to existing works, this paper presents the first general framework and algorithm for intelligent agents to learn to teach in a multiagent environment. Our algorithm, Learning to Coordinate and Teach Reinforcement (LeCTR), addresses peer-to-peer teaching in cooperative multiagent reinforcement learning. Each agent in our approach learns both when and what to advise, then uses the received advice to improve local learning. Importantly, these roles are not fixed; these agents learn to assume the role of student and/or teacher at the appropriate moments, requesting and providing advice in order to improve teamwide performance and learning. Empirical comparisons against state-of-the-art teaching methods show that our teaching agents not only learn significantly faster, but also learn to coordinate in tasks where existing methods fail.", "title": "" } ]
scidocsrr
217252cdffc61fdc324f353606ae4470
Modeling Local Coherence: An Entity-Based Approach
[ { "docid": "57a5a0039469438f83875b3653176e62", "text": "This note describes a scoring scheme for the coreference task in MUC6 . It improves o n the original approach l by: (1) grounding the scoring scheme in terms of a model ; (2) producing more intuitive recall and precision scores ; and (3) not requiring explici t computation of the transitive closure of coreference . The principal conceptual differenc e is that we have moved from a syntactic scoring model based on following coreferenc e links to an approach defined by the model theory of those links .", "title": "" } ]
[ { "docid": "350f7694198d1b2c0a2c8cc1b75fc3c2", "text": "We present a methodology, called fast repetition rate (FRR) fluorescence, that measures the functional absorption cross-section (sigmaPS II) of Photosystem II (PS II), energy transfer between PS II units (p), photochemical and nonphotochemical quenching of chlorophyll fluorescence, and the kinetics of electron transfer on the acceptor side of PS II. The FRR fluorescence technique applies a sequence of subsaturating excitation pulses ('flashlets') at microsecond intervals to induce fluorescence transients. This approach is extremely flexible and allows the generation of both single-turnover (ST) and multiple-turnover (MT) flashes. Using a combination of ST and MT flashes, we investigated the effect of excitation protocols on the measured fluorescence parameters. The maximum fluorescence yield induced by an ST flash applied shortly (10 &mgr;s to 5 ms) following an MT flash increased to a level comparable to that of an MT flash, while the functional absorption cross-section decreased by about 40%. We interpret this phenomenon as evidence that an MT flash induces an increase in the fluorescence-rate constant, concomitant with a decrease in the photosynthetic-rate constant in PS II reaction centers. The simultaneous measurements of sigmaPS II, p, and the kinetics of Q-A reoxidation, which can be derived only from a combination of ST and MT flash fluorescence transients, permits robust characterization of the processes of photosynthetic energy-conversion.", "title": "" }, { "docid": "d874ab5fd259fbc5e4afd66432ef5497", "text": "Camera tracking for uncalibrated image sequences has now reached a level of maturity where 3D point structure and cameras can be recovered automatically for a significant class of scene types and camera motions. However, problems still occur, and their solution requires a combination of theoretical analysis and good engineering. We describe several such problems including missing data, degeneracy and deviations from the pinhole camera model, and discuss their solutions. We also discuss the incorporation of prior knowledge and the case of multiple rigid motions.", "title": "" }, { "docid": "f384b2db44cc662336096d691cabd80c", "text": "OBJECTIVES\nWe compare positioning with orthotic therapy in 298 consecutive infants referred for correction of head asymmetry.\n\n\nSTUDY DESIGN\nWe evaluated 176 infants treated with repositioning, 159 treated with helmets, and 37 treated with initial repositioning followed by helmet therapy when treatment failed. We compared reductions in diagonal difference (RDD) between repositioning and cranial orthotic therapy. Helmets were routinely used for infants older than 6 months with DD >1 cm.\n\n\nRESULTS\nFor infants treated with repositioning at a mean age of 4.8 months, the mean RDD was 0.55 cm (from an initial mean DD of 1.05 cm). For infants treated with cranial orthotics at a mean age of 6.6 months, the mean RDD was 0.71 cm (from an initial mean DD of 1.13 cm).\n\n\nCONCLUSIONS\nInfants treated with orthotics were older and required a longer length of treatment (4.2 vs 3.5 months). 
Infants treated with orthosis had a mean final DD closer to the DD in unaffected infants (0.3 +/- 0.1 cm), orthotic therapy was more effective than repositioning (61% decrease versus 52% decrease in DD), and early orthosis was significantly more effective than later orthosis (65% decrease versus 51% decrease in DD).", "title": "" }, { "docid": "9b798afbe00a54edcdbe646871060ecf", "text": "Compressed sensing is a novel research area, which was intro duced in 2006, and since then has already become a key concept in various areas of applied m athe atics, computer science, and electrical engineering. It surprisingly predicts that igh-dimensional signals, which allow a sparse representation by a suitable basis or, more general ly, a frame, can be recovered from what was previously considered highly incomplete linear me asurements by using efficient algorithms. This article shall serve as an introduction to a nd survey about compressed sensing.", "title": "" }, { "docid": "cd0ad1783e0ef64300cd59bb2fab27d1", "text": "Game Theory (GT) has been used with excellent results to model and optimize the operation of a huge number of real-world systems, including in communications and networking. Using a tutorial style, this paper surveys and updates the literature contributions that have applied a diverse set of theoretical games to solve a variety of challenging problems, namely in wireless data communication networks. During our literature discussion, the games are initially divided into three groups: classical, evolutionary, and incomplete information. Then, the classical games are further divided into three subgroups: non-cooperative, repeated, and cooperative. This paper reviews applications of games to develop adaptive algorithms and protocols for the efficient operation of some standardized uses cases at the edge of emerging heterogeneous networks. Finally, we highlight the important challenges, open issues, and future research directions where GT can bring beneficial outcomes to emerging wireless data networking applications.", "title": "" }, { "docid": "c45447fd682f730f350bae77c835b63a", "text": "In this paper, we demonstrate a high heat resistant bonding method by Cu/Sn transient liquid phase sintering (TLPS) method can be applied to die-attachment of silicon carbide (SiC)-MOSFET in high temperature operation power module. The die-attachment is made of nano-composite Cu/Sn TLPS paste. The die shear strength was 40 MPa for 3 × 3 mm2 SiC chip after 1,000 cycles of thermal cycle testing between −40 °C and 250 °C. This indicated a high reliability of Cu/Sn die-attachment. The thermal resistance of the Cu/Sn die-attachment was evaluated by transient thermal analysis using a sample in which the SiC-MOSFET (die size: 4.04 × 6.44 mm2) was bonded with Cu/Sn die-attachment. The thermal resistance of Cu/Sn die-attachment was 0.13 K/W, which was comparable to the one of Au/Ge die-attachment (0.12 K/W). The validity of nano-composite Cu/Sn TLPS paste as a die-attachment for high-temperature operation SiC power module is confirmed.", "title": "" }, { "docid": "31330cb0a7a3599049c8b40352f831e8", "text": "Although Facebook was created to help people feel connected with each other, data indicate that regular usage has both negative and positive connections to well-being. To explore these mixed results, we tested the role of social comparison and self-objectification as possible mediators of the link between Facebook use and three facets of psychological well-being: self-esteem, mental health, and body shame. 
Participants were 1,104 undergraduate women and men who completed surveys assessing their Facebook usage (minutes, passive use, and active use), social comparison, self-objectification, and well-being. Data were analyzed using structural equation modeling, testing separate models for women and men. Models for each gender fit the data well. For women and men, Facebook use was associated with greater social comparison and greater self-objectification, which, in turn, was each related to lower self-esteem, poorer mental health, and greater body shame. Mediated models provided better fits to the data than models testing direct pathways to the mediators and well-being variables. Implications are discussed for young people's social media use, and future directions are provided.", "title": "" }, { "docid": "6567ac7db83688e1bf290c7491a16bc7", "text": "In this paper we present our participation to SemEval-2018 Task 8 subtasks 1 & 2 respectively. We developed Convolution Neural Network system for malware sentence classification (subtask 1) and Conditional Random Fields system for malware token label prediction (subtask 2). We experimented with couple of word embedding strategies, feature sets and achieved competitive performance across the two subtasks. Code is made available at https://bitbucket.org/ vishnumani2009/securenlp", "title": "" }, { "docid": "a9dfddc3812be19de67fc4ffbc2cad77", "text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.", "title": "" }, { "docid": "d29eba4f796cb642d64e73b76767e59d", "text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. 
The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noises and achieves an overall performance of 90% in terms of true positive rate.", "title": "" }, { "docid": "2caea7f13980ea4a48fb8e8bb71842f1", "text": "Internet of Things, commonly known as IoT is a promising area in technology that is growing day by day. It is a concept whereby devices connect with each other or to living things. Internet of Things has shown its great benefits in today’s life. Agriculture is one amongst the sectors which contributes a lot to the economy of Mauritius and to get quality products, proper irrigation has to be performed. Hence proper water management is a must because Mauritius is a tropical island that has gone through water crisis since the past few years. With the concept of Internet of Things and the power of the cloud, it is possible to use low cost devices to monitor and be informed about the status of an agricultural area in real time. Thus, this paper provides the design and implementation of a Smart Irrigation and Monitoring System which makes use of Microsoft Azure machine learning to process data received from sensors in the farm and weather forecasting data to better inform the farmers on the appropriate moment to start irrigation. The Smart Irrigation and Monitoring System is made up of sensors which collect data such as air humidity, air temperature, and most importantly soil moisture data. These data are used to monitor the air quality and water content of the soil. The raw data are transmitted to the", "title": "" }, { "docid": "13a777b2c5edcf9cb342b1290ec50a3c", "text": "Call for Book Chapters Introduction The history of robotics and artificial intelligence in many ways is also the history of humanity’s attempts to control such technologies. From the Golem of Prague to the military robots of modernity, the debate continues as to what degree of independence such entities should have and how to make sure that they do not turn on us, its inventors. Numerous recent advancements in all aspects of research, development and deployment of intelligent systems are well publicized but safety and security issues related to AI are rarely addressed. This book is proposed to mitigate this fundamental problem. It will be comprised of chapters from leading AI Safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. The book would be the first textbook to address challenges of constructing safe and secure advanced machine intelligence.", "title": "" }, { "docid": "4423a606fa4dd3093e801160cd72b6b2", "text": "High voltage insulating bushing is an important component of GIS of the high potential and ground potential insulation. The even electric field distribution and reasonable structure of bushing will guarantee the safe operation of GIS. In order to solve the problem of structural design and electric field distribution of 126 (kV) GIS bushing, a mathematical model to calculate the electric field distribution of high voltage insulating bushing was established in this study, and numerical simulation and visualization processing for electric field distribution of bushing was made by ANSYS. Furthermore, the insulation size was determined and verified. 
Consequently, the numerical foundation of insulation structural design and the development of 126 (kV) GIS bushing are provided.", "title": "" }, { "docid": "9ea0612f646228a3da41b7f55c23e825", "text": "It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent’s semantic perturbations (e.g., antonyms), we jointly improve the model’s semantic-relationship learning capabilities in addition to our AddSentDiversebased adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.", "title": "" }, { "docid": "3cc5648cab5d732d3d30bd95d9d06c00", "text": "We are concerned with the utility of social laws in a computational environment laws which guarantee the successful coexistence of multi ple programs and programmers In this paper we are interested in the o line design of social laws where we as designers must decide ahead of time on useful social laws In the rst part of this paper we sug gest the use of social laws in the domain of mobile robots and prove analytic results about the usefulness of this approach in that setting In the second part of this paper we present a general model of social law in a computational system and investigate some of its proper ties This includes a de nition of the basic computational problem involved with the design of multi agent systems and an investigation of the automatic synthesis of useful social laws in the framework of a model which refers explicitly to social laws This work was supported in part by a grant from the US Israel Binational Science Foundation", "title": "" }, { "docid": "ad55391c0bd240cb749a31e3815122cf", "text": "We describe the architecture and implementation of ffLink, a high-performance PCIe Gen3 interface for attaching reconfigurable accelerators on Xilinx Virtex 7 FPGA devices to Linux-based hosts. ffLink encompasses both hardware as well as flexible operating system components that allow a tailoring of the infrastructure to the specific data transfer needs of the application. When configured to use multiple DMA engines to hide transfer latencies, ffLink achieves a throughput of up to 7 GB/s, which is 95% of the maximum throughput of an eight-lane PCIe interface, while requiring just 11% of device area on a mid-size FPGA.", "title": "" }, { "docid": "8b78e568e58b6cd72c76bb11c86125be", "text": "A new scheme of a step-up converter with very high voltage gain is analyzed in this paper. The scheme is based on the combination of the switched-coupled-inductor boost converter and the diode-capacitor Cockoft-Walton multiplier. The scheme provides a soft commutation of the switch and the diodes. 
The paper analyzes the main modes of operation and obtained the formulas for determining the DC voltage gain, the boundary between continuous and discontinuous modes of operation and formulas to calculate the maximal voltage stresses on the transistor and on the diodes. The experimental results proved the theoretical expectations.", "title": "" }, { "docid": "ee87ac81d8e6589a32ce523dbe24bad3", "text": "Problem. Code migration between languages is challenging partly because different languages require developers to use different software libraries and frameworks. For example, in Java, Java Development Kit library (JDK) is a popular toolkit while .NET is the main framework used in C# software development. Code migration requires not only the mappings between the language constructs (e.g., statements, expressions) but also the mappings among the APIs of the libraries/frameworks used in two languages. For example, in Java, to write to a file, one can use FileWriter.write of FileWriter, and in C#, one can achieve the same function with StreamWriter.Write of StreamWriter. Such mapping is called API mapping.", "title": "" }, { "docid": "778d760ce03e559763112d365a3d8444", "text": "The growing market for smart home IoT devices promises new conveniences for consumers while presenting new challenges for preserving privacy within the home. Many smart home devices have always-on sensors that capture users’ offline activities in their living spaces and transmit information about these activities on the Internet. In this paper, we demonstrate that an ISP or other network observer can infer privacy sensitive in-home activities by analyzing Internet traffic from smart homes containing commercially-available IoT devices even when the devices use encryption. We evaluate several strategies for mitigating the privacy risks associated with smart home device traffic, including blocking, tunneling, and rate-shaping. Our experiments show that traffic shaping can effectively and practically mitigate many privacy risks associated with smart home IoT devices. We find that 40KB/s extra bandwidth usage is enough to protect user activities from a passive network adversary. This bandwidth cost is well within the Internet speed limits and data caps for many smart homes.", "title": "" }, { "docid": "83742a3fcaed826877074343232be864", "text": "In this paper we propose a design of the main modulation and demodulation units of a modem compliant with the new DVB-S2 standard (Int. J. Satellite Commun. 2004; 22:249–268). A typical satellite channel model consistent with the targeted applications of the aforementioned standard is assumed. In particular, non-linear pre-compensation as well as synchronization techniques are described in detail and their performance assessed by means of analysis and computer simulations. The proposed algorithms are shown to provide a good trade-off between complexity and performance and they apply to both the broadcast and the unicast profiles, the latter allowing the exploitation of adaptive coding and modulation (ACM) (Proceedings of the 20th AIAA Satellite Communication Systems Conference, Montreal, AIAA-paper 2002-1863, May 2002). Finally, end-to-end system performances in term of BER versus the signal-to-noise ratio are shown as a result of extensive computer simulations. 
The whole communication chain is modelled in these simulations, including the BCH and LDPC coder, the modulator with the pre-distortion techniques, the satellite transponder model with its typical impairments, the downlink chain inclusive of the RF-front-end phase noise, the demodulator with the synchronization sub-system units and finally the LDPC and BCH decoders. Copyright © 2004 John Wiley & Sons, Ltd.", "title": "" } ]
scidocsrr
a19e6f7dc74c8fe186b370aedb83ad9b
Framework for Rumors Detection in Social Media
[ { "docid": "463ef40777aaf14406186d5d4d99ba13", "text": "Social media is already a fixture for reporting for many journalists, especially around breaking news events where non-professionals may already be on the scene to share an eyewitness report, photo, or video of the event. At the same time, the huge amount of content posted in conjunction with such events serves as a challenge to finding interesting and trustworthy sources in the din of the stream. In this paper we develop and investigate new methods for filtering and assessing the verity of sources found through social media by journalists. We take a human centered design approach to developing a system, SRSR (\"Seriously Rapid Source Review\"), informed by journalistic practices and knowledge of information production in events. We then used the system, together with a realistic reporting scenario, to evaluate the filtering and visual cue features that we developed. Our evaluation offers insights into social media information sourcing practices and challenges, and highlights the role technology can play in the solution.", "title": "" }, { "docid": "e4dd72a52d4961f8d4d8ee9b5b40d821", "text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.", "title": "" }, { "docid": "f8c1654abd0ffced4b5dbf3ef0724d36", "text": "The proposed social media crisis mapping platform for natural disasters uses locations from gazetteer, street map, and volunteered geographic information (VGI) sources for areas at risk of disaster and matches them to geoparsed real-time tweet data streams. The authors use statistical analysis to generate real-time crisis maps. Geoparsing results are benchmarked against existing published work and evaluated across multilingual datasets. Two case studies compare five-day tweet crisis maps to official post-event impact assessment from the US National Geospatial Agency (NGA), compiled from verified satellite and aerial imagery sources.", "title": "" }, { "docid": "1f8b3933dc49d87204ba934f82f2f84f", "text": "While journalism is evolving toward a rather open-minded participatory paradigm, social media presents overwhelming streams of data that make it difficult to identify the information of a journalist's interest. Given the increasing interest of journalists in broadening and democratizing news by incorporating social media sources, we have developed TweetGathering, a prototype tool that provides curated and contextualized access to news stories on Twitter. This tool was built with the aim of assisting journalists both with gathering and with researching news stories as users comment on them. 
Five journalism professionals who tested the tool found helpful characteristics that could assist them with gathering additional facts on breaking news, as well as facilitating discovery of potential information sources such as witnesses in the geographical locations of news.", "title": "" } ]
[ { "docid": "e19c448b964c085cf938c288b5951392", "text": "This paper presents the design and simulation of a broadband 3×4 Butler matrix for the IEEE 802.11 b/g/n and ISM bands. The aim of this study is to develop an antenna array feeding networks for Multiple-Input Multiple-Output (MIMO) applications, based on an asymmetric Butler matrix. The asymmetric structure allows to create a further beam on the array's normal axis, in addition to the same beams which are created by the symmetrical version. The proposed circuit presents a high isolation and wideband features. The circuit can be used for both transmission and reception systems to ensure the Multi-User MIMO (MU-MIMO) service.", "title": "" }, { "docid": "de3ba8a5e83dc1fa153b9341ff7cbc76", "text": "The 1990s have seen a rapid growth of research interests in mobile ad hoc networking. The infrastructureless and the dynamic nature of these networks demands new set of networking strategies to be implemented in order to provide efficient end-to-end communication. This, along with the diverse application of these networks in many different scenarios such as battlefield and disaster recovery, have seen MANETs being researched by many different organisations and institutes. MANETs employ the traditional TCP/IP structure to provide end-to-end communication between nodes. However, due to their mobility and the limited resource in wireless networks, each layer in the TCP/IP model require redefinition or modifications to function efficiently in MANETs. One interesting research area in MANET is routing. Routing in the MANETs is a challenging task and has received a tremendous amount of attention from researches. This has led to development of many different routing protocols for MANETs, and each author of each proposed protocol argues that the strategy proposed provides an improvement over a number of different strategies considered in the literature for a given network scenario. Therefore, it is quite difficult to determine which protocols may perform best under a number of different network scenarios, such as increasing node density and traffic. In this paper, we provide an overview of a wide range of routing protocols proposed in the literature. We also provide a performance comparison of all routing protocols and suggest which protocols may perform best in large networks.", "title": "" }, { "docid": "ff93c200156cfe82fbbeccf66055fc54", "text": "According to the property of wavelet transform and fabric texture's Fourier spectrum, a new method for defect detection was presented. The proposed method is based on wavelet lifting transform with one resolution level. By using restoration scheme of the Fourier transform, the normal fabric textures of smooth sub-image in the spatial domain are removed by detecting the high-energy frequency components of sub-image in the Fourier domain, setting them to zero using frequency-domain filter, and back-transforming to a spatial domain sub-image. Then, the smooth and detail sub-images are segmented into many sub-windows, in which standard deviation are calculated as extracted features. The extracted features are compared with normal sub-window's features to determine whether there exists defect. Experimental results show that this method is validity and feasibility.", "title": "" }, { "docid": "f08c6829b353c45b6a9a6473b4f9a201", "text": "In this paper, we study the Symmetric Regularized Long Wave (SRLW) equations by finite difference method. 
We design some numerical schemes which preserve the original conservative properties of the equations. The first scheme is two-level and nonlinear-implicit. Existence of its difference solutions is proved by the Brouwer fixed point theorem. It is proved by the discrete energy method that the scheme is uniquely solvable, unconditionally stable and second-order convergent for U in the L1 norm, and for N in the L2 norm, on the basis of a priori estimates. The second scheme is three-level and linear-implicit. Its stability and second-order convergence are proved. Both schemes are conservative and can therefore be used for long-time computation; however, they are coupled in computation and thus need more CPU time. Thus we propose another three-level linear scheme which is not only conservative but also uncoupled in computation, and give its numerical analysis. Numerical experiments demonstrate that the schemes are accurate and efficient. © 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "39321bc85746dc43736a0435c939c7da", "text": "We use recent network calculus results to study some properties of lossless multiplexing as it may be used in guaranteed service networks. We call network calculus a set of results that apply min-plus algebra to packet networks. We provide a simple proof that shaping a traffic stream to conform to a burstiness constraint preserves the original constraints satisfied by the traffic stream. We show how all rate-based packet schedulers can be modeled with a simple rate latency service curve. Then we define a general form of deterministic effective bandwidth and equivalent capacity. We find that call acceptance regions based on deterministic criteria (loss or delay) are convex, in contrast to statistical cases where it is the complement of the region which is convex. We thus find that, in general, the limit of the call acceptance region based on statistical multiplexing when the loss probability target tends to 0 may be strictly larger than the call acceptance region based on lossless multiplexing. Finally, we consider the problem of determining the optimal parameters of a variable bit rate (VBR) connection when it is used as a trunk, or tunnel, given that the input traffic is known. We find that there is an optimal peak rate for the VBR trunk, essentially insensitive to the optimization criteria. For a linear cost function, we find an explicit algorithm for the optimal remaining parameters of the VBR trunk.", "title": "" }, { "docid": "8dee3ada764a40fce6b5676287496ccd", "text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses.
Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.", "title": "" }, { "docid": "f568c4987b4c318567aa6b6a757d9510", "text": "Privacy preserving mining of distributed data has numerous applications. Each application poses different constraints: What is meant by privacy, what are the desired results, how is the data distributed, what are the constraints on collaboration and cooperative computing, etc. We suggest that the solution to this is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit, and shows how they can be used to solve several privacy-preserving data mining problems.", "title": "" }, { "docid": "4019beb9fa6ec59b4b19c790fe8ff832", "text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.", "title": "" }, { "docid": "6bb318e50887e972cbfe52936c82c26f", "text": "We model the photo cropping problem as a cascade of attention box regression and aesthetic quality classification, based on deep learning. A neural network is designed that has two branches for predicting attention bounding box and analyzing aesthetics, respectively. The predicted attention box is treated as an initial crop window where a set of cropping candidates are generated around it, without missing important information. Then, aesthetics assessment is employed to select the final crop as the one with the best aesthetic quality. With our network, cropping candidates share features within full-image convolutional feature maps, thus avoiding repeated feature computation and leading to higher computation efficiency. Via leveraging rich data for attention prediction and aesthetics assessment, the proposed method produces high-quality cropping results, even with the limited availability of training data for photo cropping. The experimental results demonstrate the competitive results and fast processing speed (5 fps with all steps).", "title": "" }, { "docid": "ea9c8ee7d22c0abc34fcf3ad073e20ac", "text": "Job performance is the most researched concept studied in industrial and organizational psychology, with the emphasis being on organizational citizenship behavior (OCB) and counterproductive work behavior (CWB) as two dimensions of it. 
The relationship between these two dimensions of job performance is unclear; hence, the objective of the current study was to examine the relationship between organizational citizenship behavior and counterproductive work behavior. A total of 267 students studying psychology, most of whom had part-time work experience, were given a questionnaire that measured organizational citizenship behavior and counterproductive work behavior. Correlational analysis found OCB and CWB to have only a moderate negative correlation, which suggests OCB and CWB are two separate but related constructs. It was also found that females and longer-tenured individuals tend to show more OCB, but no difference was found for CWB. The findings showed that individuals can engage in OCB and CWB at the same time, which makes it necessary for organizations to find ways to encourage their employees to engage in OCB and not in CWB.", "title": "" }, { "docid": "20190b5523357be0e7565f84b96fefef", "text": "To accurately mimic the native tissue environment, tissue engineered scaffolds often need to have a highly controlled and varied display of three-dimensional (3D) architecture and geometrical cues. Additive manufacturing in tissue engineering has made possible the development of complex scaffolds that mimic the native tissue architectures. As such, architectural details that were previously unattainable or irreproducible can now be incorporated in an ordered and organized approach, further advancing the structural and chemical cues delivered to cells interacting with the scaffold. This control over the environment has given engineers the ability to unlock cellular machinery that is highly dependent upon the intricate heterogeneous environment of native tissue. Recent research into the incorporation of physical and chemical gradients within scaffolds indicates that integrating these features improves the function of a tissue engineered construct. This review covers recent advances in techniques to incorporate gradients into polymer scaffolds through additive manufacturing and evaluates the success of these techniques. As covered here, to best replicate different tissue types, one must be cognizant of the vastly different types of manufacturing techniques available to create these gradient scaffolds. We review the various types of additive manufacturing techniques that can be leveraged to fabricate scaffolds with heterogeneous properties and discuss methods to successfully characterize them.\n\n\nSTATEMENT OF SIGNIFICANCE\nAdditive manufacturing techniques have given tissue engineers the ability to precisely recapitulate the native architecture present within tissue. In addition, these techniques can be leveraged to create scaffolds with both physical and chemical gradients. This work offers insight into several techniques that can be used to generate graded scaffolds, depending on the desired gradient. Furthermore, it outlines methods to determine if the designed gradient was achieved.
This review will help to condense the abundance of information that has been published on the creation and characterization of gradient scaffolds and to provide a single review discussing both methods for manufacturing gradient scaffolds and evaluating the establishment of a gradient.", "title": "" }, { "docid": "4fe8d749fd978627edb58d76f0e8d090", "text": "The more I study metrology, the more I get persuaded that the measuring activity is an implicit part of our lives, something we are not really aware of, though we do or rely on measurements several times a day. When we check time, put fuel in our cars, buy food, just to mention some everyday activity, either we measure something or we trust measurements done by somebody else. It is quite immediate to conclude that, nowadays, everything is measured and measurement results are the basis of many important decisions. Interestingly enough, measurement has always played an important role in mankind�s evolution and I fully agree with Bryan Kibble�s statement that the measuring stick came before the wheel, otherwise the wheel could not have been built [1]. The measuring stick is also one of the most ancient instruments, and we find it together with time measuring instruments and weighs in almost every civilization of the past, proving that measurement is one of the most important branches of science, and there is no civilization without measurement. It proves also the intimate connection existing between instrumentation and measurement, being the two sides of a single medal: the measurement science, or metrology.", "title": "" }, { "docid": "1007a655557a8e4c99cd9caf904ceb5c", "text": "OBJECTIVE\nTo compare the efficacy of 2 strategies, errorless learning (EL) and self-instruction training (SIT), for remediating emotion perception deficits in individuals with traumatic brain injury (TBI).\n\n\nDESIGN\nRandomized controlled trial comparing groups receiving 25 hours (across 10 weeks) of treatment with either EL or SIT with waitlist control.\n\n\nSETTING AND PARTICIPANTS\nEighteen adult outpatient volunteers with severe TBI who were at least 6 months postinjury.\n\n\nMAIN OUTCOMES MEASURES\nPhotograph-based emotion recognition tasks, The Awareness of Social Inferences Test, and questionnaire measures, for example, the Sydney Psychosocial Reintegration Scale.\n\n\nRESULTS\nBoth treatment groups showed modest improvement in emotion perception ability. Limited evidence suggests that SIT may be a favorable approach for this type of remediation.\n\n\nCONCLUSIONS\nAlthough further research is needed, there are reasons for optimism regarding rehabilitation of emotion perception following TBI.", "title": "" }, { "docid": "a3c0a5a570c9c7d4fda363c6b8f792c5", "text": "How do children identify promising hypotheses worth testing? Many studies have shown that preschoolers can use patterns of covariation together with prior knowledge to learn causal relationships. However, covariation data are not always available and myriad hypotheses may be commensurate with substantive knowledge about content domains. We propose that children can identify high-level abstract features common to effects and their candidate causes and use these to guide their search. 
We investigate children’s sensitivity to two such high-level features — proportion and dynamics, and show that preschoolers can use these to link effects and candidate causes, even in the absence of other disambiguating information.", "title": "" }, { "docid": "83637dc7109acc342d50366f498c141a", "text": "With the further development of computer technology, the software development process has some new goals and requirements. In order to adapt to these changes, people have optimized and improved the previous methods. At the same time, some of the traditional software development methods have been unable to adapt to the requirements of people. Therefore, in recent years some new lightweight software process development methods have appeared, namely agile software development, which is widely used and promoted. In this paper the author first introduces the background and development of agile software development, as well as a comparison with traditional software development. The second chapter then gives the definition of agile software development and its characteristics, principles and values. In the third chapter the author highlights several different agile software development methods and the characteristics of each method. In the fourth chapter the author cites a specific example of how agile software development is applied in specific areas. Finally, the author concludes with his opinion. This article aims to give readers an overview of agile software development and how people use it in practice.", "title": "" }, { "docid": "3d9e279afe4ba8beb1effd4f26550f67", "text": "We propose and demonstrate a scheme for boosting the efficiency of entanglement distribution based on a decoherence-free subspace over lossy quantum channels. By using backward propagation of coherent light, our scheme achieves an entanglement-sharing rate that is proportional to the transmittance T of the quantum channel in spite of encoding qubits in multipartite systems for the decoherence-free subspace. We experimentally show that highly entangled states, which can violate the Clauser-Horne-Shimony-Holt inequality, are distributed at a rate proportional to T.", "title": "" }, { "docid": "03dcfd0b89b7eee84d678371c13e97c2", "text": "Recommender systems often use latent features to explain the behaviors of users and capture the properties of items. As users interact with different items over time, user and item features can influence each other, evolve and co-evolve over time. The compatibility of user and item features further influences the future interaction between users and items. Recently, point process based models have been proposed in the literature aiming to capture the temporally evolving nature of these latent features. However, these models often make strong parametric assumptions about the evolution process of the user and item latent features, which may not reflect the reality, and have limited power in expressing the complex and nonlinear dynamics underlying these processes. To address these limitations, we propose a novel deep coevolutionary network model (DeepCoevolve), for learning user and item features based on their interaction graph. DeepCoevolve uses a recurrent neural network (RNN) over evolving networks to define the intensity function in point processes, which allows the model to capture complex mutual influence between users and items, and the feature evolution over time.
We also develop an efficient procedure for training the model parameters, and show that the learned models lead to significant improvements in recommendation and activity prediction compared to previous state-of-the-art parametric models.", "title": "" }, { "docid": "0701f4d74179857b736ebe2c7cdb78b7", "text": "Modern computer networks generate a significant volume of behavioural system logs on a daily basis. Such networks comprise many computers with Internet connectivity, and many users who access the Web and utilise Cloud services make use of numerous devices connected to the network on an ad-hoc basis. Measuring the risk of cyber attacks and identifying the most recent modus-operandi of cyber criminals on large computer networks can be difficult due to the wide range of services and applications running within the network, the multiple vulnerabilities associated with each application, the severity associated with each vulnerability, and the ever-changing attack vector of cyber criminals. In this paper we propose a framework to represent these features, enabling real-time network enumeration and traffic analysis to be carried out, in order to produce quantified measures of risk at specific points in time. We validate the approach using data from a University network, with a data collection consisting of 462,787 instances representing threats measured over a 144-hour period. Our analysis can be generalised to a variety of other contexts. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).", "title": "" }, { "docid": "b5475fb64673f6be82e430d307b31fa2", "text": "We report a novel technique: a 1-stage transfer of 2 paddles of thoracodorsal artery perforator (TAP) flap with 1 pair of vascular anastomoses for simultaneous restoration of bilateral facial atrophy. A 47-year-old woman with a severe bilateral lipodystrophy of the face (Barraquer-Simons syndrome) was surgically treated using this procedure. Sufficient blood supply to each of the 2 flaps was confirmed with fluorescent angiography using the red-excited indocyanine green method. A good appearance was obtained, and the patient was satisfied with the result. Our procedure has advantages over conventional methods in that bilateral facial atrophy can be augmented simultaneously with only 1 donor site. Furthermore, our procedure requires only 1 pair of vascular anastomoses and the horizontal branch of the thoracodorsal nerve can be spared. To our knowledge, this procedure has not been reported to date. We consider that 2 paddles of TAP flap can be safely elevated if the distal flap is designed on the descending branch, and this technique is useful for the reconstruction of bilateral facial atrophy or deformity.", "title": "" } ]
scidocsrr
dd1d441de80ffb46802bbe2f4f9801b4
Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
[ { "docid": "8d83568ca0c89b1a6e344341bb92c2d0", "text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.", "title": "" }, { "docid": "8d7baff71cb5309fc00465b0d54a7224", "text": "Interactive image segmentation is characterized by multimodality. When the user clicks on a door, do they intend to select the door or the whole house? We present an end-to-end learning approach to interactive image segmentation that tackles this ambiguity. Our architecture couples two convolutional networks. The first is trained to synthesize a diverse set of plausible segmentations that conform to the user's input. The second is trained to select among these. By selecting a single solution, our approach retains compatibility with existing interactive segmentation interfaces. By synthesizing multiple diverse solutions before selecting one, the architecture is given the representational power to explore the multimodal solution space. We show that the proposed approach outperforms existing methods for interactive image segmentation, including prior work that applied convolutional networks to this problem, while being much faster.", "title": "" }, { "docid": "05a4ec72afcf9b724979802b22091fd4", "text": "Convolutional neural networks (CNNs) have greatly improved state-of-the-art performances in a number of fields, notably computer vision and natural language processing. In this work, we are interested in generalizing the formulation of CNNs from low-dimensional regular Euclidean domains, where images (2D), videos (3D) and audios (1D) are represented, to high-dimensional irregular domains such as social networks or biological networks represented by graphs. This paper introduces a formulation of CNNs on graphs in the context of spectral graph theory. We borrow the fundamental tools from the emerging field of signal processing on graphs, which provides the necessary mathematical background and efficient numerical schemes to design localized graph filters efficient to learn and evaluate. As a matter of fact, we introduce the first technique that offers the same computational complexity than standard CNNs, while being universal to any graph structure. 
Numerical experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs, as long as the graph is well-constructed.", "title": "" }, { "docid": "2bb535ff25532ccdbf85a301a872c8bd", "text": "Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a representation of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. The paper serves as a tutorial for the non-expert reader. It is also a position paper: by looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: do robots need SLAM? Is SLAM solved?", "title": "" } ]
[ { "docid": "62bf93deeb73fab74004cb3ced106bac", "text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, objectoriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.", "title": "" }, { "docid": "236dcb6dd7e04c0600c2f0b90f94c5dd", "text": "Main call for Cloud computing is that users only utilize what they required and only pay for what they really use. Mobile Cloud Computing refers to an infrastructure where data processing and storage can happen away from mobile device. Portio research estimates that mobile subscribers worldwide will reach 6.9 billion by the end of 2013 and 8 billion by the end of 2016. Ericsson also forecasts that mobile subscriptions will reach 9 billion by 2017. Due to increasing use of mobile devices the requirement of cloud computing in mobile devices arise, which gave birth to Mobile Cloud Computing. Mobile devices do not need to have large storage capacity and powerful CPU speed. Due to storing data on cloud there is an issue of data security. Because of the risk associated with data storage many IT professionals are not showing their interest towards Mobile Cloud Computing. To ensure the correctness of users' data in the cloud, we propose an effective mechanism with salient feature of data integrity and confidentiality. This paper proposed a mechanism which uses the concept of RSA algorithm, Hash function along with several cryptography tools to provide better security to the data stored on the mobile cloud.", "title": "" }, { "docid": "40e9b22c5efe43517d03ce32fc2a9512", "text": "There have been some pioneering works concerning embedding cryptographic properties in Compressive Sampli ng (CS) but it turns out that the concise linear projection encoding process makes this approach ineffective. Here we introduce a bilevel protection (BLP) model for constructing secure compr essive sampling scheme. Then we propose several techniques to esta blish secret key-related sparsifying basis and deploy them into o ur new CS model. It is demonstrated that the encoding process is simply a random linear projection, which is the same as the traditional model. However, decoding the measurements req uires the knowledge of both the key-related sensing matrix and the key-related sparsifying basis. We apply the proposed model to construct digital image ciphe r under the parallel compressive sampling reconstruction fr amework. The main properties of this cipher, such as low computational complexity, compressibility, robustness and compu tational secrecy under known/chosen plaintext attacks, are thoroug hly studied. It is shown that compressive sampling schemes base d on our BLP model is robust under various attack scenarios although the encoding process is a simple linear projection.", "title": "" }, { "docid": "dfdf2581010777e51ff3e29c5b9aee7f", "text": "This paper proposes a parallel architecture with resistive crosspoint array. 
The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.", "title": "" }, { "docid": "95365d5f04b2cefcca339fbc19464cbb", "text": "Manipulation and re-use of images in scientific publications is a concerning problem that currently lacks a scalable solution. Current tools for detecting image duplication are mostly manual or semi-automated, despite the availability of an overwhelming target dataset for a learning-based approach. This paper addresses the problem of determining if, given two images, one is a manipulated version of the other by means of copy, rotation, translation, scale, perspective transform, histogram adjustment, or partial erasing. We propose a data-driven solution based on a 3-branch Siamese Convolutional Neural Network. The ConvNet model is trained to map images into a 128-dimensional space, where the Euclidean distance between duplicate images is smaller than or equal to 1, and the distance between unique images is greater than 1. Our results suggest that such an approach has the potential to improve surveillance of the published and in-peer-review literature for image manipulation.", "title": "" }, { "docid": "a1c9f24275ce626552602cf068776a3c", "text": "The field of topology optimization seeks to optimize shapes under structural objectives, such as achieving the most rigid shape using a given quantity of material. Besides optimal shape design, these methods are increasingly popular as design tools, since they automatically produce structures having desirable physical properties, a task hard to perform by hand even for skilled designers. However, there is no simple way to control the appearance of the generated objects.\n In this paper, we propose to optimize shapes for both their structural properties and their appearance, the latter being controlled by a user-provided pattern example. These two objectives are challenging to combine, as optimal structural properties fully define the shape, leaving no degrees of freedom for appearance. We propose a new formulation where appearance is optimized as an objective while structural properties serve as constraints. This produces shapes with sufficient rigidity while allowing enough freedom for the appearance of the final structure to resemble the input exemplar.\n Our approach generates rigid shapes using a specified quantity of material while observing optional constraints such as voids, fills, attachment points, and external forces. The appearance is defined by examples, making our technique accessible to casual users. 
We demonstrate its use in the context of fabrication using a laser cutter to manufacture real objects from optimized shapes.", "title": "" }, { "docid": "d3956443e9e1f9dd0c0d995ecd12bfb4", "text": "Point clouds are an efficient data format for 3D data. However, existing 3D segmentation methods for point clouds either do not model local dependencies [21] or require added computations [14, 23]. This work presents a novel 3D segmentation framework, RSNet1, to efficiently model local structures in point clouds. The key component of the RSNet is a lightweight local dependency module. It is a combination of a novel slice pooling layer, Recurrent Neural Network (RNN) layers, and a slice unpooling layer. The slice pooling layer is designed to project features of unordered points onto an ordered sequence of feature vectors so that traditional end-to-end learning algorithms (RNNs) can be applied. The performance of RSNet is validated by comprehensive experiments on the S3DIS[1], ScanNet[3], and ShapeNet [34] datasets. In its simplest form, RSNets surpass all previous state-of-the-art methods on these benchmarks. And comparisons against previous state-of-the-art methods [21, 23] demonstrate the efficiency of RSNets.", "title": "" }, { "docid": "e82918cb388666499767bbd4d59daf84", "text": "The space around us is represented not once but many times in parietal cortex. These multiple representations encode locations and objects of interest in several egocentric reference frames. Stimulus representations are transformed from the coordinates of receptor surfaces, such as the retina or the cochlea, into the coordinates of effectors, such as the eye, head, or hand. The transformation is accomplished by dynamic updating of spatial representations in conjunction with voluntary movements. This direct sensory-to-motor coordinate transformation obviates the need for a single representation of space in environmental coordinates. In addition to representing object locations in motoric coordinates, parietal neurons exhibit strong modulation by attention. Both top-down and bottom-up mechanisms of attention contribute to the enhancement of visual responses. The saliance of a stimulus is the primary factor in determining the neural response to it. Although parietal neurons represent objects in motor coordinates, visual responses are independent of the intention to perform specific motor acts.", "title": "" }, { "docid": "f04d59966483bf7e4053a9d504278a82", "text": "Radio Frequency Identification (RFID) is a promising new technology that is widely deployed for object tracking and monitoring, ticketing, supply-chain management, contactless payment, etc. However, RFID related security problems attract more and more attentions. This paper has studied a novel elliptic curve cryptography (ECC) based RFID security protocol and it shows some great features. Firstly, the high strength of ECC encryption provides convincing security for communication and tag memory data access. Secondly, the public-key cryptography used in the protocol reduces the key storage requirement and the backend system just store the private key. Thirdly, the new protocol just depends on simple calculations, such as XOR, bitwise AND, and so forth, which reduce the tag computation. 
Finally, the computational performance, security features, and the formal proof based on BAN logic are also discussed in detail in the paper.", "title": "" }, { "docid": "096b09f064643cbd2cd80f310981c5a6", "text": "A Ku-band 200-W pulsed solid-state power amplifier has been presented and designed by using a hybrid radial-/rectangular-waveguide spatially power-combining technique. The hybrid radial-/rectangular-waveguide power-dividing/power-combining circuit employed in this design provides not only a high power-combining efficiency over a wide bandwidth but also efficient heat sinking for the active power devices. A simple design approach of the presented power-dividing/power-combining structure has been developed. The measured small-signal gain of the pulsed power amplifier is about 51.3 dB over the operating frequency range, while the measured maximum output power at 1-dB compression is 209 W at 13.9 GHz, with an active power-combining efficiency of about 91%. Furthermore, the active power-combining efficiency is greater than 82% from 13.75 to 14.5 GHz.", "title": "" }, { "docid": "1a8df1f14f66c0ff09679ea5bbfc2c36", "text": "Making strategic decision on new manufacturing technology investments is difficult. New technologies are usually costly, affected by numerous factors, and the potential benefits are often hard to justify prior to implementation. Traditionally, decisions are made based upon intuition and past experience, sometimes with the support of multicriteria decision support tools. However, these approaches do not retain and reuse knowledge, thus managers are not able to make effective use of their knowledge and experience of previously completed projects to help with the prioritisation of future projects. In this paper, a hybrid intelligent system integrating case-based reasoning (CBR) and the fuzzy ARTMAP (FAM) neural network model is proposed to support managers in making timely and optimal manufacturing technology investment decisions. The system comprises a case library that holds the details of past technology investment projects. Each project proposal is characterised by a set of features determined by human experts. The FAM network is then employed to match the features of a new proposal with those from historical cases. Similar cases are retrieved and adapted, and information on these cases can be utilised as an input to prioritisation of new projects. A case study is conducted to illustrate the applicability and effectiveness of the approach, with the results presented and analysed. Implications of the proposed approach are discussed, and suggestions for further work are outlined. r 2005 Published by Elsevier B.V.", "title": "" }, { "docid": "ea926e7245c74d1fde2661434262bf12", "text": "Article history: Received 16 November 2010 Received in revised form 2 June 2011 Accepted 23 July 2011 Available online 6 August 2011", "title": "" }, { "docid": "bfdbc3814d517df9859294bd53885aa2", "text": "The Internet of Things (IoT) is the next big wave in computing characterized by large scale open ended heterogeneous network of things, with varying sensing, actuating, computing and communication capabilities. Compared to the traditional field of autonomic computing, the IoT is characterized by an open ended and highly dynamic ecosystem with variable workload and resource availability. These characteristics make it difficult to implement self-awareness capabilities for IoT to manage and optimize itself. 
In this work, we introduce a methodology to explore and learn the trade-offs of different deployment configurations to autonomously optimize the QoS and other quality attributes of IoT applications. Our experiments demonstrate that our proposed methodology can automate the efficient deployment of IoT applications in the presence of multiple optimization objectives and variable operational circumstances.", "title": "" }, { "docid": "4c941e492c517768cd623ea5d8ad79dc", "text": "Multi-task Learning (MTL) is applied to the problem of predicting next-day health, stress, and happiness using data from wearable sensors and smartphone logs. Three formulations of MTL are compared: i) Multi-task Multi-Kernel learning, which feeds information across tasks through kernel weights on feature types, ii) a Hierarchical Bayes model in which tasks share a common Dirichlet prior, and iii) Deep Neural Networks, which share several hidden layers but have final layers unique to each task. We show that by using MTL to leverage data from across the population while still customizing a model for each person, we can account for individual differences, and obtain state-of-the-art performance on this dataset.", "title": "" }, { "docid": "7fbc3820c259d9ea58ecabaa92f8c875", "text": "The use of digital imaging devices, ranging from professional digital cinema cameras to consumer grade smartphone cameras, has become ubiquitous. The acquired image is a degraded observation of the unknown latent image, while the degradation comes from various factors such as noise corruption, camera shake, object motion, resolution limit, hazing, rain streaks, or a combination of them. Image restoration (IR), as a fundamental problem in image processing and low-level vision, aims to reconstruct the latent high-quality image from its degraded observation. Image degradation is, in general, irreversible, and IR is a typical ill-posed inverse problem. Due to the large space of natural image contents, prior information on image structures is crucial to regularize the solution space and produce a good estimation of the latent image. Image prior modeling and learning then are key issues in IR research. This lecture note describes the development of image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models.", "title": "" }, { "docid": "2f286146b770f4d36426ba039d4fe05b", "text": "In this paper, we have proposed an unified framework for event summarization and rare event detection and presented the graph-structure learning and editing method to solve these problems efficiently. The experimental results demonstrated that the proposed method outperformed conventional algorithms in complex and crowded public scenes by exploiting and utilizing causality, frequency, and significance of relations of events.", "title": "" }, { "docid": "e37cf869a5a92d1a86c622feb477a444", "text": "Visual Context Textual Context Oh my gosh, i’m so buying this shirt. I found a cawaii bird. Stocking up!! Only reason I come to carnival. Question Where did you see this for sale? Are you going to collect some feathers? Ayee! what the prices looking like? Oh my God. How the hell do you even eat that? Response Midwest sports There are so many crows here I’d be surprised if I never found one. Only like 10-20% off..I think I’m gonna wait a little longer. They are the greatest things ever chan. 
I could eat 5!", "title": "" }, { "docid": "0d61a946a8620cab60a5cb6693be64a2", "text": "We give a brief overview of the Mario AI Championship, a series of competitions based on an open source clone of the seminal platform game Super Mario Bros. The competition has four tracks. The gameplay and learning tracks resemble traditional reinforcement learning competitions, the Level generation track focuses on the generation of entertaining game levels, and the Turing Test track focuses on human-like game-playing behaviour. We also outline some lessons learned from the competition and its future. The paper is written by the four organisers of the competition.", "title": "" }, { "docid": "34ba1323c4975a566f53e2873231e6ad", "text": "This paper describes the motivation, the realization, and the experience of incorporating simulation and hardware implementation into teaching computer organization and architecture to computer science students. It demonstrates that learning by doing has helped students to truly understand how a computer is constructed and how it really works in practice. Correlated with textbook material, a set of simulation and implementation projects were created on the basis of the work that students had done in previous homework and laboratory activities. Students can thus use these designs as building blocks for completing more complex projects at a later time. The projects cover a wide range of topics from simple adders up to ALUs and CPUs. These processors operate in a virtual manner on certain short assembly-language programs. Specifically, this paper shares the experience of using simulation tools (Altera® Quartus II) and reconfigurable hardware prototyping platforms (Altera® UP2 development boards).", "title": "" }, { "docid": "cd0e7cace1b89af72680f9d8ef38bdf3", "text": "Analyzing stock market trends and sentiment is an interdisciplinary area of research being undertaken by many disciplines such as Finance, Computer Science, Statistics, and Economics. It has been well established that real-time news plays a strong role in the movement of stock prices. With the advent of electronic and online news sources, analysts have to deal with enormous amounts of real-time, unstructured streaming data. In this paper, we present an automated text mining based approach to aggregate news stories from diverse sources and create a News Corpus. The Corpus is filtered down to relevant sentences and analyzed using Natural Language Processing (NLP) techniques. A sentiment metric, called NewsSentiment, utilizing the count of positive and negative polarity words is proposed as a measure of the sentiment of the overall news corpus. We have used various open source packages and tools to develop the news collection and aggregation engine as well as the sentiment evaluation engine. Extensive experimentation has been done using news stories about various stocks. The time variation of NewsSentiment shows a very strong correlation with the actual stock price movement. Our proposed metric has many applications in analyzing current news stories and predicting stock trends for specific companies and sectors of the economy.", "title": "" } ]
scidocsrr
3e00cf3486170e2ba31220c925a6b526
Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection
[ { "docid": "4fc6ac1b376c965d824b9f8eb52c4b50", "text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.", "title": "" }, { "docid": "b97e58184a94d6827bf294a3b1f91687", "text": "A good and robust sensor data fusion in diverse weather conditions is a quite challenging task. There are several fusion architectures in the literature, e.g. the sensor data can be fused right at the beginning (Early Fusion), or they can be first processed separately and then concatenated later (Late Fusion). In this work, different fusion architectures are compared and evaluated by means of object detection tasks, in which the goal is to recognize and localize predefined objects in a stream of data. Usually, state-of-the-art object detectors based on neural networks are highly optimized for good weather conditions, since the well-known benchmarks only consist of sensor data recorded in optimal weather conditions. Therefore, the performance of these approaches decreases enormously or even fails in adverse weather conditions. In this work, different sensor fusion architectures are compared for good and adverse weather conditions for finding the optimal fusion architecture for diverse weather situations. A new training strategy is also introduced such that the performance of the object detector is greatly enhanced in adverse weather scenarios or if a sensor fails. Furthermore, the paper responds to the question if the detection accuracy can be increased further by providing the neural network with a-priori knowledge such as the spatial calibration of the sensors.", "title": "" } ]
[ { "docid": "639c8142b14f0eed40b63c0fa7580597", "text": "The purpose of this study is to give an overlook and comparison of best known data warehouse architectures. Single-layer, two-layer, and three-layer architectures are structure-oriented one that are depending on the number of layers used by the architecture. In independent data marts architecture, bus, hub-and-spoke, centralized and distributed architectures, the main layers are differently combined. Listed data warehouse architectures are compared based on organizational structures, with its similarities and differences. The second comparison gives a look into information quality (consistency, completeness, accuracy) and system quality (integration, flexibility, scalability). Bus, hub-and-spoke and centralized data warehouse architectures got the highest scores in information and system quality assessment.", "title": "" }, { "docid": "3097273de70077bac4a56b3f7e7b0ed4", "text": "Transverse flux machine (TFM) useful for in-wheel motor applications is presented. This transverse flux permanent magnet motor is designed to achieve high torque-to-weight ratio and is suitable for direct-drive wheel applications. As in conventional TFM, the phases are located under each other, which will increase the axial length of the machine. The idea of this design is to reduce the axial length of TFM, by placing the windings around the stator and by shifting those from each other by electrically 120° or 90°, for three- or two-phase machine, respectively. Therefore, a remarkable reduction on the total axial length of the machine will be achieved while keeping the torque density high. This TFM is compared to another similar TFM, in which the three phases have been divided into two halves and placed opposite each other to ensure the mechanical balance and stability of the stator. The corresponding mechanical phase shifts between the phases have accordingly been taken into account. The motors are modelled in finite-element method (FEM) program, Flux3D, and designed to meet the specifications of an optimisation scheme, subject to certain constraints, such as construction dimensions, electric and magnetic loading. Based on this comparison study, many recommendations have been suggested to achieve optimum results.", "title": "" }, { "docid": "9b2dd28151751477cc46f6c6d5ec475f", "text": "Clinical and experimental data indicate that most acupuncture clinical results are mediated by the central nervous system, but the specific effects of acupuncture on the human brain remain unclear. Even less is known about its effects on the cerebellum. This fMRI study demonstrated that manual acupuncture at ST 36 (Stomach 36, Zusanli), a main acupoint on the leg, modulated neural activity at multiple levels of the cerebro-cerebellar and limbic systems. The pattern of hemodynamic response depended on the psychophysical response to needle manipulation. Acupuncture stimulation typically elicited a composite of sensations termed deqi that is related to clinical efficacy according to traditional Chinese medicine. The limbic and paralimbic structures of cortical and subcortical regions in the telencephalon, diencephalon, brainstem and cerebellum demonstrated a concerted attenuation of signal intensity when the subjects experienced deqi. When deqi was mixed with sharp pain, the hemodynamic response was mixed, showing a predominance of signal increases instead. Tactile stimulation as control also elicited a predominance of signal increase in a subset of these regions. 
The study provides preliminary evidence for an integrated response of the human cerebro-cerebellar and limbic systems to acupuncture stimulation at ST 36 that correlates with the psychophysical response.", "title": "" }, { "docid": "4be9ae4bc6fb01e78d550bedf199d0b0", "text": "Protein timing is a popular dietary strategy designed to optimize the adaptive response to exercise. The strategy involves consuming protein in and around a training session in an effort to facilitate muscular repair and remodeling, and thereby enhance post-exercise strength- and hypertrophy-related adaptations. Despite the apparent biological plausibility of the strategy, however, the effectiveness of protein timing in chronic training studies has been decidedly mixed. The purpose of this paper therefore was to conduct a multi-level meta-regression of randomized controlled trials to determine whether protein timing is a viable strategy for enhancing post-exercise muscular adaptations. The strength analysis comprised 478 subjects and 96 ESs, nested within 41 treatment or control groups and 20 studies. The hypertrophy analysis comprised 525 subjects and 132 ESs, nested with 47 treatment or control groups and 23 studies. A simple pooled analysis of protein timing without controlling for covariates showed a small to moderate effect on muscle hypertrophy with no significant effect found on muscle strength. In the full meta-regression model controlling for all covariates, however, no significant differences were found between treatment and control for strength or hypertrophy. The reduced model was not significantly different from the full model for either strength or hypertrophy. With respect to hypertrophy, total protein intake was the strongest predictor of ES magnitude. These results refute the commonly held belief that the timing of protein intake in and around a training session is critical to muscular adaptations and indicate that consuming adequate protein in combination with resistance exercise is the key factor for maximizing muscle protein accretion.", "title": "" }, { "docid": "153a22e4477a0d6ce98b9a0fba2ab595", "text": "Uninterruptible power supplies (UPSs) have been used in many installations for critical loads that cannot afford power failure or surge during operation. It is often difficult to upgrade the UPS system as the load grows over time. Due to lower cost and maintenance, as well as ease of increasing system capacity, the parallel operation of modularized small-power UPS has attracted much attention in recent years. In this paper, a new scheme for parallel operation of inverters is introduced. A multiple-input-multiple-output state-space model is developed to describe the parallel-connected inverters system, and a model-predictive-control scheme suitable for paralleled inverters control is proposed. In this algorithm, the control objectives of voltage tracking and current sharing are formulated using a weighted cost function. The effectiveness and the hot-swap capability of the proposed parallel-connected inverters system have been verified with experimental results.", "title": "" }, { "docid": "a68244dedee73f87103a1e05a8c33b20", "text": "Given the knowledge that the same or similar objects appear in a set of images, our goal is to simultaneously segment that object from the set of images. To solve this problem, known as the cosegmentation problem, we present a method based upon hierarchical clustering. 
Our framework first eliminates intra-class heterogeneity in a dataset by clustering similar images together into smaller groups. Then, from each image, our method extracts multiple levels of segmentation and creates connections between regions (e.g. superpixels) across levels to establish intra-image multi-scale constraints. Next we take advantage of the information available from other images in our group. We design and present an efficient method to create inter-image relationships, e.g. connections between image regions from one image to all other images in an image cluster. Given the intra- and inter-image connections, we perform a segmentation of the group of images into foreground and background regions. Finally, we compare our segmentation accuracy to several other state-of-the-art segmentation methods on standard datasets, and also demonstrate the robustness of our method on real world data.", "title": "" }, { "docid": "489e4bab8e975d9d82380adcd1692385", "text": "Nonnegative Tucker decomposition (NTD) is a recent multiway extension of nonnegative matrix factorization (NMF), where nonnegativity constraints are incorporated into the Tucker model. In this paper we consider alpha-divergence as a discrepancy measure and derive multiplicative updating algorithms for NTD. The proposed multiplicative algorithm includes some existing NMF and NTD algorithms as its special cases, since alpha-divergence is a one-parameter family of divergences which accommodates KL-divergence, Hellinger divergence, χ2 divergence, and so on. Numerical experiments on face images show how different values of alpha affect the factorization results under different types of noise.", "title": "" }, { "docid": "9f97fffcb1b0a1f92443c9c769438cf5", "text": "A literature review was done within a revision of a guideline concerned with data quality management in registries and cohort studies. The review focused on quality indicators, feedback, and source data verification. Thirty-nine relevant articles were selected in a stepwise selection process. The majority of the papers dealt with indicators. The papers presented concepts or data analyses. The leading indicators were related to case or data completeness, correctness, and accuracy. In the future, data pools as well as research reports from quantitative studies should obligatorily be supplemented by information about their data quality, ideally picking up some indicators presented in this review.", "title": "" }, { "docid": "b1f348ff63eaa97f6eeda5fcd81330a9", "text": "The recent expansion of the cloud computing paradigm has motivated educators to include cloud-related topics in computer science and computer engineering curricula. While programming and algorithm topics have been covered in different undergraduate and graduate courses, cloud architecture/system topics are still not usually studied in academic contexts. But design, deployment and management of datacenters, virtualization technologies for cloud, cloud management tools and similar issues should be addressed in current computer science and computer engineering programs. This work presents our approach and experiences in designing and implementing a curricular module covering all these topics.
In this approach, the use of a simulation tool, CloudSim, is essential to give students practical, hands-on exposure to the course contents.", "title": "" }, { "docid": "37de72b0e9064d09fb6901b40d695c0a", "text": "BACKGROUND AND OBJECTIVES\nVery little is known about the use of probiotics among pregnant women with gestational diabetes mellitus (GDM), especially their effect on oxidative stress and inflammatory indices. The aim of the present study was to measure the effect of a probiotic supplement capsule on inflammation and oxidative stress biomarkers in women with newly-diagnosed GDM.\n\n\nMETHODS AND STUDY DESIGN\n64 pregnant women with GDM were enrolled in a double-blind, placebo-controlled randomized clinical trial in the spring and summer of 2014. They were randomly assigned to receive either a probiotic containing four bacterial strains of Lactobacillus acidophilus LA-5, Bifidobacterium BB-12, Streptococcus thermophilus STY-31 and Lactobacillus delbrueckii bulgaricus LBY-27, or a placebo capsule, for 8 consecutive weeks. Blood samples were taken pre- and post-treatment and serum indices of inflammation and oxidative stress were assayed. The measured mean response scales were then analyzed using a mixed-effects model. All statistical analysis was performed using Statistical Package for Social Sciences (SPSS) software (version 16).\n\n\nRESULTS\nSerum high-sensitivity C-reactive protein and tumor necrosis factor-α levels improved in the probiotic group to a statistically significant level over the placebo group. Serum interleukin-6 levels decreased in both groups after intervention; however, neither the within-group nor the between-group differences in interleukin-6 serum levels were statistically significant. Malondialdehyde, glutathione reductase and erythrocyte glutathione peroxidase levels improved significantly with the use of probiotics when compared with the placebo.\n\n\nCONCLUSIONS\nThe probiotic supplement containing L. acidophilus LA-5, Bifidobacterium BB-12, S. thermophilus STY-31 and L. delbrueckii bulgaricus LBY-27 appears to improve several inflammation and oxidative stress biomarkers in women with GDM.", "title": "" }, { "docid": "bbb1dc09e41e08e095a48e9e2a806356", "text": "The inexpensive Raspberry Pi is used to automate tasks at home, such as switching appliances on and off over Wi-Fi (Wireless Fidelity) or LAN (Local Area Network), from a personal computer, mobile phone or tablet through the browser. This can also be done by using the dedicated Android application. The conventional switchboards will be fitted with, or replaced by, a touch screen to match the taste of the user's home decor. A PIR (Passive Infrared) sensor will be used to detect human presence and automate the on and off functionality.", "title": "" }, { "docid": "6d329c1fa679ac201387c81f59392316", "text": "Mosquitoes represent the major arthropod vectors of human disease worldwide, transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases, and vector control is still the main form of prevention. The limitations of traditional insecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. 
In this review, we outline non-insecticide-based strategies that have been implemented or are currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.", "title": "" }, { "docid": "827ecd05ff323a45bf880a65f34494e9", "text": "BACKGROUND\nSocial support can be a critical component of how a woman adjusts to infertility, yet few studies have investigated its impact on infertility-related coping and stress. We examined relationships between social support contexts and infertility stress domains, and tested if they were mediated by infertility-related coping strategies in a sample of infertile women.\n\n\nMETHODS\nThe Multidimensional Scale of Perceived Social Support, the Copenhagen Multi-centre Psychosocial Infertility coping scales and the Fertility Problem Inventory were completed by 252 women seeking treatment. Structural equation modeling analysis was used to test the hypothesized multiple mediation model.\n\n\nRESULTS\nThe final model revealed negative effects from perceived partner support to relationship concern (β = -0.47), sexual concern (β = -0.20) and rejection of childfree lifestyle through meaning-based coping (β = -0.04). Perceived friend support had a negative effect on social concern through active-confronting coping (β = -0.04). Finally, besides a direct negative association with social concern (β = -0.30), perceived family support was indirectly and negatively related to all infertility stress domains (β from -0.04 to -0.13) through a positive effect of active-avoidance coping. The model explained between 12 and 66% of the variance of outcomes.\n\n\nCONCLUSIONS\nDespite being limited by convenience sampling and a cross-sectional design, the results highlight the importance of social support contexts in helping women deal with infertility treatment. Health professionals should explore the quality of social networks and encourage seeking positive support from family and partners. Findings suggest it might prove useful for counselors to use coping skills training interventions, by retraining active-avoidance coping into meaning-based and active-confronting strategies.", "title": "" }, { "docid": "6d110ceb82878e13014ee9b9ab63a7d1", "text": "The fuzzy control algorithm, which intelligently controls a twelve-phase traffic light at a single crossroads with three traffic lanes, works well under flexible operation in real-time traffic flow. The procedure can be described as follows: first, the number of vehicles in every lane is obtained through the sensors, and the phase with the largest number is assigned the highest priority; when control moves on from the previous phase, it switches to the phase with the highest priority. Then the best green-light delay time is determined by fuzzy-rule reasoning over the current waiting queue length and the overall queue length. The simulation results indicate that, compared with the traditional fixed-time control method, the fuzzy control method greatly improves vehicle delay time.", "title": "" }, { "docid": "d2fb10bdbe745ace3a2512ccfa414d4c", "text": "In the cloud computing environment, especially in the big data era, an adversary may use the data deduplication service supported by the cloud service provider as a side channel to eavesdrop on users' private or sensitive information. In order to tackle this serious issue, in this paper, we propose a secure data deduplication scheme based on differential privacy. 
The highlights of the proposed scheme lie in constructing a hybrid cloud framework, using a convergent encryption algorithm to encrypt original files, and introducing a differential privacy mechanism to resist the side-channel attack. Performance evaluation shows that our scheme is able to effectively save network bandwidth and disk storage space during the processes of data deduplication. Meanwhile, security analysis indicates that our scheme can resist the side-channel attack and the related-files attack, and prevent the disclosure of private information.", "title": "" }, { "docid": "aecacf7d1ba736899f185ee142e32522", "text": "BACKGROUND\nLow rates of handwashing compliance among nurses are still reported in the literature. Handwashing beliefs and attitudes were found to correlate with and predict handwashing practices. However, such an important field is not fully explored in Jordan.\n\n\nOBJECTIVES\nThis study aims at exploring Jordanian nurses' handwashing beliefs, attitudes, and compliance and examining the predictors of their handwashing compliance.\n\n\nMETHODS\nA cross-sectional multicenter survey design was used to collect data from registered nurses and nursing assistants (N = 198) who were providing care to patients in governmental hospitals in Jordan. Data collection took place over 3 months during the period of February 2011 to April 2011 using the Handwashing Assessment Inventory.\n\n\nRESULTS\nParticipants' mean score of handwashing compliance was 74.29%. They showed positive attitudes but seemed to lack knowledge concerning handwashing. Analysis revealed a 5-predictor model, which accounted for 37.5% of the variance in nurses' handwashing compliance. Nurses' beliefs had the highest relative prediction effects (β = .309, P < .01), followed by skin assessment (β = .290, P < .01).\n\n\nCONCLUSION\nJordanian nurses reported moderate handwashing compliance and were found to lack knowledge concerning handwashing protocols, for which education programs are recommended. This study raised awareness regarding the importance of complying with handwashing protocols.", "title": "" }, { "docid": "03dcfd0b89b7eee84d678371c13e97c2", "text": "Recommender systems often use latent features to explain the behaviors of users and capture the properties of items. As users interact with different items over time, user and item features can influence each other, evolve and co-evolve over time. The compatibility of user and item features further influences the future interactions between users and items. Recently, point process based models have been proposed in the literature aiming to capture the temporally evolving nature of these latent features. However, these models often make strong parametric assumptions about the evolution process of the user and item latent features, which may not reflect reality, and have limited power in expressing the complex and nonlinear dynamics underlying these processes. To address these limitations, we propose a novel deep coevolutionary network model (DeepCoevolve), for learning user and item features based on their interaction graph. DeepCoevolve uses a recurrent neural network (RNN) over evolving networks to define the intensity function in point processes, which allows the model to capture complex mutual influence between users and items, and the feature evolution over time. 
We also develop an ecient procedure for training the model parameters, and show that the learned models lead to signi€cant improvements in recommendation and activity prediction compared to previous state-of-the-arts parametric models.", "title": "" }, { "docid": "d1668503d8986884035c8784d1f3f426", "text": "Feature extraction is a classic problem of machine vision and image processing. Edges are often detected using integer-order differential operators. In this paper, a one-dimensional digital fractional-order Charef differentiator (1D-FCD) is introduced and extended to 2D by a multi-directional operator. The obtained 2D-fractional differentiation (2D-FCD) is a new edge detection operation. The computed multi-directional mask coefficients are computed in a way that image details are detected and preserved. Experiments on texture images have demonstrated the efficiency of the proposed filter compared to existing techniques.", "title": "" }, { "docid": "d050730d7a5bd591b805f1b9729b0f2d", "text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.", "title": "" }, { "docid": "65a9813786554ede5e3c36f62b345ad8", "text": "Web search queries provide a surprisingly large amount of information, which can be potentially organized and converted into a knowledgebase. In this paper, we focus on the problem of automatically identifying brand and product entities from a large collection of web queries in online shopping domain. We propose an unsupervised approach based on adaptor grammars that does not require any human annotation efforts nor rely on any external resources. To reduce the noise and normalize the query patterns, we introduce a query standardization step, which groups multiple search patterns and word orderings together into their most frequent ones. We present three different sets of grammar rules used to infer query structures and extract brand and product entities. 
To give an objective assessment of the performance of our approach, we conduct experiments on a large collection of online shopping queries and intrinsically evaluate the knowledgebase generated by our method qualitatively and quantitatively. In addition, we also evaluate our framework on extrinsic tasks on query tagging and chunking. Our empirical studies show that the knowledgebase discovered by our approach is highly accurate, has good coverage and significantly improves the performance on the external tasks.", "title": "" } ]
scidocsrr
74b4282ea94716a805567aa7f44c6e69
net Wireless Fetal Monitoring
[ { "docid": "0da78253d26ddba2b17dd76c4b4c697a", "text": "In this work, a portable real-time wireless health monitoring system is developed. The system is used for remote monitoring of patients' heart rate and oxygen saturation in blood. The system was designed and implemented using ZigBee wireless technologies. All pulse oximetry data are transferred within a wireless personal area network (WPAN) group to a database computer server. The sensor modules were designed for low-power operation, with a program that can adjust power management depending on the power source and the current power-operation scenario. The sensor unit consists of (1) two types of LEDs and a photodiode packed in a Velcro strip that faces the patient's fingertip; (2) a microcontroller unit for interfacing with the ZigBee module, processing pulse oximetry data and storing some data before sending it to the base PC; (3) a ZigBee module for communicating the pulse oximetry data, which receives all commands from the microcontroller unit and has a complete ZigBee stack inside; and (4) a base node for receiving and storing the data before sending it to the PC.", "title": "" } ]
[ { "docid": "ef7b6c2b0254535e9dbf85a4af596080", "text": "African swine fever virus (ASFV) is a highly virulent swine pathogen that has spread across Eastern Europe since 2007 and for which there is no effective vaccine or treatment available. The dynamics of shedding and excretion is not well known for this currently circulating ASFV strain. Therefore, susceptible pigs were exposed to pigs intramuscularly infected with the Georgia 2007/1 ASFV strain to measure those dynamics through within- and between-pen transmission scenarios. Blood, oral, nasal and rectal fluid samples were tested for the presence of ASFV by virus titration (VT) and quantitative real-time polymerase chain reaction (qPCR). Serum was tested for the presence of ASFV-specific antibodies. Both intramuscular inoculation and contact transmission resulted in development of acute disease in all pigs although the experiments indicated that the pathogenesis of the disease might be different, depending on the route of infection. Infectious ASFV was first isolated in blood among the inoculated pigs by day 3, and then chronologically among the direct and indirect contact pigs, by day 10 and 13, respectively. Close to the onset of clinical signs, higher ASFV titres were found in blood compared with nasal and rectal fluid samples among all pigs. No infectious ASFV was isolated in oral fluid samples although ASFV genome copies were detected. Only one animal developed antibodies starting after 12 days post-inoculation. The results provide quantitative data on shedding and excretion of the Georgia 2007/1 ASFV strain among domestic pigs and suggest a limited potential of this isolate to cause persistent infection.", "title": "" }, { "docid": "66c9a05d8ff109696f5c09a70c5f11fc", "text": "How do informal institutions influence the formation and function of formal institutions? Existing typologies focus on the interaction of informal institutions with an established framework of formal rules that is taken for granted. In transitional settings, such typologies are less helpful, since many formal institutions are in a state of flux. Instead, using examples drawn from postcommunist state development, I argue that informal institutions can replace, undermine, and reinforce formal institutions irrespective of the latter’s strength, and that the elite competition generated by informal rules further influences which of these interactions dominate the development of the institutional framework. In transitional settings, the emergence and effectiveness of many formal institutions is endogenous to the informal institutions themselves.", "title": "" }, { "docid": "651e1c0385dd55e04bb2fe90f0e6dd24", "text": "Pollution has been recognized as the major threat to sustainability of river in Malaysia. Some of the limitations of existing methods for river monitoring are cost of deployment, non-real-time monitoring, and low resolution both in time and space. To overcome these limitations, a smart river monitoring solution is proposed for river water quality in Malaysia. The proposed method incorporates unmanned aerial vehicle (UAV), internet of things (IoT), low power wide area (LPWA) and data analytic (DA). A setup of the proposed method and preliminary results are presented. 
The proposed method is expected to deliver an efficient and real-time solution for river monitoring in Malaysia.", "title": "" }, { "docid": "61b6cf4bc86ae9a817f6e809fdf59ad2", "text": "In the last few years, phishing scams have rapidly grown posing huge threat to global Internet security. Today, phishing attack is one of the most common and serious threats over Internet where cyber attackers try to steal user’s personal or financial credentials by using either malwares or social engineering. Detection of phishing attacks with high accuracy has always been an issue of great interest. Recent developments in phishing detection techniques have led to various new techniques, specially designed for phishing detection where accuracy is extremely important. Phishing problem is widely present as there are several ways to carry out such an attack, which implies that one solution is not adequate to address it. Two main issues are addressed in our paper. First, we discuss in detail phishing attacks, history of phishing attacks and motivation of attacker behind performing this attack. In addition, we also provide taxonomy of various types of phishing attacks. Second, we provide taxonomy of various solutions proposed in the literature to detect and defend from phishing attacks. In addition, we also discuss various issues and challenges faced in dealing with phishing attacks and spear phishing and how phishing is now targeting the emerging domain of IoT. We discuss various tools and datasets that are used by the researchers for the evaluation of their approaches. This provides better understanding of the problem, current solution space and future research scope to efficiently deal with such attacks.", "title": "" }, { "docid": "936048690fb043434c3ee0060c5bf7a5", "text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "e87c67ffe98bf90ada3002fe87a9bbdd", "text": "Visually analyzing citation networks poses challenges to many fields of the data mining research. How can we summarize a large citation graph according to the user's interest? In particular, how can we illustrate the impact of a highly influential paper through the summarization? Can we maintain the sensory node-link graph structure while revealing the flow-based influence patterns and preserving a fine readability? The state-of-the-art influence maximization algorithms can detect the most influential node in a citation network, but fail to summarize a graph structure to account for its influence. On the other hand, existing graph summarization methods fold large graphs into clustered views, but can not reveal the hidden influence patterns underneath the citation network. In this paper, we first formally define the Influence Graph Summarization problem on citation networks. Second, we propose a matrix decomposition based algorithm pipeline to solve the IGS problem. Our method can not only highlight the flow-based influence patterns, but also easily extend to support the rich attribute information. 
A prototype system called VEGAS implementing this pipeline is also developed. Third, we present a theoretical analysis of our main algorithm, which is equivalent to kernel k-means clustering. It can be proved that the matrix decomposition based algorithm can approximate the objective of the proposed IGS problem. Last, we conduct comprehensive experiments with real-world citation networks to compare the proposed algorithm with classical graph summarization methods. Evaluation results demonstrate that our method significantly outperforms the previous ones in optimizing both the quantitative IGS objective and the quality of the visual summarizations.", "title": "" }, { "docid": "0b872b1d13c9a96c52046b41272e3a5f", "text": "This dissertation describes experiments conducted to evaluate an algorithm that attempts to automatically recognise emotions (affect) in written language. Examples from several areas of research that can inform affect recognition experiments are reviewed, including sentiment analysis, subjectivity analysis, and the psychology of emotion. An affect annotation exercise was carried out in order to build a suitable set of test data for the experiment. An algorithm to classify according to the emotional content of sentences was derived from an existing technique for sentiment analysis. When compared against the manual annotations, the algorithm achieved an accuracy of 32.78%. Several factors indicate that the method is making slightly informed choices, and could be useful as part of a holistic approach to recognising the affect represented in text.", "title": "" }, { "docid": "5d673f5297919e6307dc2861d10ddfe6", "text": "Given the increased testing of school-aged children in the United States there is a need for a current and valid scale to measure the effects of test anxiety in children. The domain of children's test anxiety was theorized to comprise three dimensions: thoughts, autonomic reactions, and off-task behaviors. Four stages are described in the evolution of the Children's Test Anxiety Scale (CTAS): planning, construction, quantitative evaluation, and validation. A 50-item scale was administered to a development sample (N = 230) of children in grades 3-6 to obtain item analysis and reliability estimates, which resulted in a refined 30-item scale. The reduced scale was administered to a validation sample (N = 261) to obtain construct validity evidence. A three-factor structure fit the data reasonably well. Recommendations for future research with the scale are described.", "title": "" }, { "docid": "32977df591e90db67bf09b0412f56d7b", "text": "In an electronic warfare (EW) battlefield environment, it is highly necessary for a fighter aircraft to intercept and identify the several interleaved radar signals that it receives from the surrounding emitters, so as to prepare itself for countermeasures. The main function of the Electronic Support Measure (ESM) receiver is to receive, measure and deinterleave pulses and then identify alternative threat emitters. Deinterleaving of radar signals is based on time of arrival (TOA) analysis and the use of the sequential difference (SDIF) histogram method for determining the pulse repetition interval (PRI), which is an important pulse parameter. Once the pulse repetition intervals are determined, a check for the existence of staggered PRI (level-2) is carried out, implemented in MATLAB. 
Keywords: pulse deinterleaving, pulse repetition interval, stagger PRI, sequential difference histogram, time of arrival.", "title": "" }, { "docid": "9d5e1ec9444b1113c79c3740f9f773cf", "text": "Intuitionistic Fuzzy Sets (IFS) are a generalization of fuzzy sets where the membership is an interval. That is, membership, instead of being a single value, is an interval. A large number of operations have been defined for this type of fuzzy set, and several applications have been developed in recent years. In this paper we describe hesitant fuzzy sets. They are another generalization of fuzzy sets. Although similar in intention to IFS, some basic differences exist in their interpretation and in their operators. In this paper we review their definition and the main results, and we present an extension principle, which permits generalizing existing operations on fuzzy sets to this new type of fuzzy sets. We also discuss their use in decision making.", "title": "" }, { "docid": "57602f5e2f64514926ab96551f2b4fb6", "text": "Landscape genetics has seen rapid growth in the number of publications since the term was coined in 2003. An extensive literature search from 1998 to 2008 using keywords associated with landscape genetics yielded 655 articles encompassing a vast array of study organisms, study designs and methodology. These publications were screened to identify 174 studies that explicitly incorporated at least one landscape variable with genetic data. We systematically reviewed this set of papers to assess taxonomic and temporal trends in: (i) geographic regions studied; (ii) types of questions addressed; (iii) molecular markers used; (iv) statistical analyses used; and (v) types and nature of spatial data used. Overall, studies have occurred in geographic regions proximal to developed countries and more commonly in terrestrial vs. aquatic habitats. Questions most often focused on effects of barriers and/or landscape variables on gene flow. The most commonly used molecular markers were microsatellites and amplified fragment length polymorphisms (AFLPs), with AFLPs used more frequently in plants than in animals. Analysis methods were dominated by Mantel and assignment tests. We also assessed differences among journals to evaluate the uniformity of reporting and publication standards. Few studies presented an explicit study design or explicit descriptions of spatial extent. While some landscape variables such as topographic relief affected most species studied, effects were not universal, and some species appeared unaffected by the landscape. Effects of habitat fragmentation were mixed, with some species altering movement paths and others unaffected. Taken together, although some generalities emerged regarding effects of specific landscape variables, results varied, thereby reinforcing the need for species-specific work. We conclude by: highlighting gaps in knowledge and methodology, providing guidelines to authors and reviewers of landscape genetics studies, and suggesting promising future directions of inquiry.", "title": "" }, { "docid": "7f6a45292aeca83bebb9556c938e0782", "text": "Many methods of text summarization combining sentence selection and sentence compression have recently been proposed. Although the dependency between words has been used in most of these methods, the dependency between sentences, i.e., rhetorical structures, has not been exploited in such joint methods. 
We used both dependency between words and dependency between sentences by constructing a nested tree, in which nodes in the document tree representing dependency between sentences were replaced by a sentence tree representing dependency between words. We formulated a summarization task as a combinatorial optimization problem, in which the nested tree was trimmed without losing important content in the source document. The results from an empirical evaluation revealed that our method based on the trimming of the nested tree significantly improved the summarization of texts.", "title": "" }, { "docid": "08d5c83c7effa92659ea705ad51317e2", "text": "This article examines cognitive, affective, and behavioral measures of motivation and reviews their use throughout the discipline of experimental social psychology. We distinguish between two dimensions of motivation (outcome-focused motivation and process-focused motivation). We discuss circumstances under which measures may help distinguish between different dimensions of motivation, as well as circumstances under which measures may capture different dimensions of motivation in similar ways. Furthermore, we examine situations in which various measures may capture fluctuations in nonmotivational factors, such as learning or physiological depletion. This analysis seeks to advance research in experimental social psychology by highlighting the need for caution when selecting measures of motivation and when interpreting fluctuations captured by these measures. Motivation – the psychological force that enables action – has long been the object of scientific inquiry (Carver & Scheier, 1998; Festinger, 1957; Fishbein & Ajzen, 1974; Hull, 1932; Kruglanski, 1996; Lewin, 1935; Miller, Galanter, & Pribram, 1960; Mischel, Shoda, & Rodriguez, 1989; Zeigarnik, 1927). Because motivation is a psychological construct that cannot be observed or recorded directly, studying it raises an important question: how to measure motivation? Researchers measure motivation in terms of observable cognitive (e.g., recall, perception), affective (e.g., subjective experience), behavioral (e.g., performance), and physiological (e.g., brain activation) responses and using self-reports. Furthermore, motivation is measured in relative terms: compared to previous or subsequent levels of motivation or to motivation in a different goal state (e.g., salient versus non-salient goal). For example, following exposure to a health-goal prime (e.g., gym membership card), an individual might be more motivated to exercise now than she was 20 minutes ago (before exposure to the prime), or than another person who was not exposed to the same prime. An important aspect of determining how to measure motivation is understanding what type of motivation one is attempting to capture. Thus, in exploring the measures of motivation, the present article takes into account different dimensions of motivation. In particular, we highlight the distinction between the outcome-focused motivation to complete a goal (Brehm & Self, 1989; Locke & Latham, 1990; Powers, 1973) and the process-focused motivation to attend to elements related to the process of goal pursuit – with less emphasis on the outcome. 
Process-related elements may include using "proper" means during goal pursuit (means-focused motivation; Higgins, Idson, Freitas, Spiegel, & Molden, 2003; Touré-Tillery & Fishbach, 2012) and enjoying the experience of goal pursuit (intrinsic motivation; Deci & Ryan, 1985; Fishbach & Choi, 2012; Sansone & Harackiewicz, 1996; Shah & Kruglanski, 2000). In some cases, particular measures of motivation may help distinguish between these different dimensions of motivation, whereas other measures may not. For example, the measured speed at which a person works on a task can have several interpretations. Working slowly could mean (a) that the individual's motivation to complete the task is low (outcome-focused motivation); or (b) that her motivation to engage in the task is high such that she is "savoring" the task (intrinsic motivation); or (c) that her motivation to "do it right" and use proper means is high such that she is applying herself (means-focused motivation); or even (d) that she is tired (diminished physiological resources). In this case, additional measures (e.g., accuracy in performance) and manipulations (e.g., task difficulty) may help tease apart these various potential interpretations. Thus, experimental researchers must exercise caution when selecting measures of motivation and when interpreting the fluctuations captured by these measures. This review provides a guide for how to measure fluctuations in motivation in experimental settings. One approach is to ask people to rate their motivation (i.e., "how motivated are you?"). However, such an approach is limited to people's conscious understanding of their own psychological states and can further be biased by social desirability concerns; hence, research in experimental social psychology developed a variety of cognitive and behavioral paradigms to assess motivation without relying on self-reports. We focus on these objective measures of situational fluctuations in motivation. We note that other fields of psychological research commonly use physiological measures (e.g., brain activation, skin conductance), self-report measures (i.e., motivation scales), or measure motivation as a stable trait. These physiological, self-report, and trait measures of motivation are beyond the scope of our review. In the sections that follow, we start with a discussion of measures researchers commonly use to capture motivation. We review cognitive measures such as memory accessibility, evaluations, and perceptions of goal-relevant objects, as well as affective measures such as subjective experience. Next, we examine the use of behavioral measures such as speed, performance, and choice to capture fluctuations in motivational strength. In the third section, we discuss the outcome- and process-focused dimensions of motivation and examine specific measures of process-focused motivation, including measures of intrinsic motivation and means-focused motivation. We then discuss how different measures may help distinguish between the outcome- and process-focused dimensions. In the final section, we explore circumstances under which measures may capture fluctuations in learning and physiological resources, rather than changes in motivation. We conclude with some implications of this analysis for the measurement and study of motivation. 
Cognitive and Affective Measures of Motivation Experimental social psychologists conceptualize a goal as the cognitive representation of a desired end state (Fishbach & Ferguson, 2007; Kruglanski, 1996). According to this view, goals are organized in associative memory networks connecting each goal to corresponding constructs. Goal-relevant constructs could be activities or objects that contribute to goal attainment (i.e., means; Kruglanski et al., 2002), as well as activities or objects that hinder goal attainment (i.e., temptations; Fishbach, Friedman, & Kruglanski, 2003). For example, the goal to eat healthily may be associated with constructs such as apple, doctor (facilitating means), or French fries (hindering temptation). Cognitive and affective measures of motivation include the activation, evaluation, and perception of these goal-related constructs and the subjective experience they evoke. Goal activation: Memory, accessibility, and inhibition of goal-related constructs Constructs related to a goal can activate or prime the pursuit of that goal. For example, the presence of one's study partner or the word "exam" in a game of scrabble can activate a student's academic goal and hence increase her motivation to study. Once a goal is active, the motivational system prepares the individual for action by activating goal-relevant information (Bargh & Barndollar, 1996; Gollwitzer, 1996; Kruglanski, 1996). Thus, motivation manifests itself in terms of how easily goal-related constructs are brought to mind (i.e., accessibility; Aarts, Dijksterhuis, & De Vries, 2001; Higgins & King, 1981; Wyer & Srull, 1986). The activation and subsequent pursuit of a goal can be conscious, such that one is aware of the cues that led to goal-related judgments and behaviors. This activation can also be non-conscious, such that one is unaware of the goal prime or that one is even exhibiting goal-related judgments and behaviors. Whether goals are conscious or non-conscious, a fundamental characteristic of goal-driven processes is the persistence of the accessibility of goal-related constructs for as long as the goal is active or until an individual disengages from the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Goschke & Kuhl, 1993). Upon goal completion, motivation diminishes and accessibility is inhibited (Liberman & Förster, 2000; Marsh, Hicks, & Bink, 1998). This active reduction in accessibility allows individuals to direct their cognitive resources to other tasks at hand without being distracted by thoughts of a completed goal. Thus, motivation can be measured by the degree to which goal-related concepts are accessible in memory. Specifically, the greater the motivation to pursue/achieve a goal, the more likely individuals are to remember, notice, or recognize concepts, objects, or persons related to that goal. For example, in a classic study, Zeigarnik (1927) instructed participants to perform 20 short tasks, ten of which they did not get a chance to finish because the experimenter interrupted them. At the end of the study, Zeigarnik inferred the strength of motivation by asking participants to recall as many of the tasks as possible. 
Consistent with the notion that unfulfilled goals are associated with heightened motivational states, whereas fulfilled goals inhibit motivation, the results show that participants recalled more uncompleted tasks (i.e., unfulfilled goals) than completed tasks (i.e., fulfilled goals; the Zeigarnik effect). More recently, Förster, Liberman, and Higgins (2005) replicated these findings, inferring motivation from performance on a lexical decision task. Their study assessed the speed of recognizing – i.e., identifying as words versus non-words – words related to a focal goal prior to (versus after) completing that goal. A related measure of motivation is the inhibition of conflicting constructs. In ", "title": "" }, { "docid": "1557db582fbcf5e17c2b021b6d37b03a", "text": "Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.", "title": "" }, { "docid": "2e976aa51bc5550ad14083d5df7252a8", "text": "This paper presents a 60-dB gain bulk-driven Miller OTA operating at 0.25-V power supply in the 130-nm digital CMOS process. The amplifier operates in the weak-inversion region with an input bulk-driven differential pair sporting positive-feedback source degeneration for transconductance enhancement. In addition, the distributed layout configuration is used for all the transistors to mitigate the effect of halo implants for higher output impedance. Combining these two approaches, we experimentally demonstrate a high gain of over 60 dB with just 18-nW power consumption from a 0.25-V power supply. The use of an enhanced bulk-driven differential pair and a distributed layout can help overcome some of the constraints imposed by nanometer CMOS processes for high-performance analog circuits in the weak-inversion region.", "title": "" }, { "docid": "36e8ecc13c1f92ca3b056359e2d803f0", "text": "We propose a novel module, the reviewer module, to improve the encoder-decoder learning framework. The reviewer module is generic, and can be plugged into an existing encoder-decoder model. The reviewer module performs a number of review steps with an attention mechanism on the encoder hidden states, and outputs a fact vector after each review step; the fact vectors are used as the input of the attention mechanism in the decoder. We show that the conventional encoder-decoders are a special case of our framework. 
Empirically, we show that our framework can improve over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.", "title": "" }, { "docid": "b9cf32ef9364f55c5f59b4c6a9626656", "text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.", "title": "" }, { "docid": "c01fe3b589479fb14568ce1e00a08125", "text": "Purpose – The purpose of this paper is to propose a model to examine the impact of organizational support on behavioral intention (BI) regarding enterprise resource planning (ERP) implementation based on the technology acceptance model (TAM). Design/methodology/approach – A research model is proposed which describes the effects of organizational support, both formal and informal, on factors of TAM. A survey questionnaire is developed to test the proposed model. A total of 700 of questionnaires are distributed to users in small and medium enterprises that have implemented ERP systems in Korea and 209 responses are used for analyses. Structural equation modeling is employed to test the research hypotheses. Findings – The results indicate that the organizational support is an important factor for perceived usefulness (PU) and perceived ease of use (PEOU). PU and PEOU seem to lead to a higher level of interest in the ERP system and BI to use the system. The most notable finding of our study is that organizational support is positively associated with factors of TAM. Research limitations/implications – The survey data used in this paper are collected from smalland medium-sized companies in South Korea. Thus, the respondents in these firms might have been trained at different levels or on different modules of ERP, which would yield diversity in subject experience with different ERP systems. Originality/value – To improve the efficiency and effectiveness of ERP implementation in a real world environment, organizations need to better understand user satisfaction. The TAM model provides a theoretical construct to explain how user satisfaction is affected.", "title": "" }, { "docid": "b4abab79e652bb4d6d3ea31df81ebd40", "text": "Humor is an integral part of our day-to-day communication making it interesting and plausible. The growing demand in robotics and the constant urge for machines to pass the Turing test to eliminate the thin line difference between human and machine behavior make it essential for machines to be able to use humor in communication. Moreover, Learning is a continuous process and very important at every stage of life. However sometimes merely reading from a book induces lassitude and lack of concentration may hamper strong foundation building. 
Children suffering from Autism Spectrum Disorder (ASD) suffer from slow learning and grasping issues. English being a funny language, a particular word has multiple meanings, making it difficult for children with ASD to cognize it. Solving riddles induces fast learning and sharpness in children including those affected by ASD. The existing systems however, are too far from being used in any practical application. This paper proposes a system that uses core ideas of JAPE to create puns for entertainment and vocabulary building purpose for children. General Terms Homophone: Two or more words having the same pronunciation but different meanings, origins, or spelling (e.g. new and knew) [4]. Homonym: Two or more words having the same spelling or pronunciation but different meanings and origins (e.g. pole. and pole) [5]. Rhyming words: Words that have the same ending sounds. E.g. are cat, hat, bat, mat, fat and rat [6]. Punning words: A form of word play that suggests two or more meanings, by exploiting multiple meanings of words, or of similar-sounding words, for an intended humorous or rhetorical effect [7]. Pun generator: A system that uses punning words to generate riddles/jokes with an intention of making it humorous.", "title": "" }, { "docid": "ae97effd4e999ccf580d32c8522b6f59", "text": "Eight isolates of cellulose-degrading bacteria (CDB) were isolated from four different invertebrates (termite, snail, caterpillar, and bookworm) by enriching the basal culture medium with filter paper as substrate for cellulose degradation. To indicate the cellulase activity of the organisms, diameter of clear zone around the colony and hydrolytic value on cellulose Congo Red agar media were measured. CDB 8 and CDB 10 exhibited the maximum zone of clearance around the colony with diameter of 45 and 50 mm and with the hydrolytic value of 9 and 9.8, respectively. The enzyme assays for two enzymes, filter paper cellulase (FPC), and cellulase (endoglucanase), were examined by methods recommended by the International Union of Pure and Applied Chemistry (IUPAC). The extracellular cellulase activities ranged from 0.012 to 0.196 IU/mL for FPC and 0.162 to 0.400 IU/mL for endoglucanase assay. All the cultures were also further tested for their capacity to degrade filter paper by gravimetric method. The maximum filter paper degradation percentage was estimated to be 65.7 for CDB 8. Selected bacterial isolates CDB 2, 7, 8, and 10 were co-cultured with Saccharomyces cerevisiae for simultaneous saccharification and fermentation. Ethanol production was positively tested after five days of incubation with acidified potassium dichromate.", "title": "" } ]
scidocsrr
301014a83104b64509cc46f44c22443b
Dynamic user profiles for web personalisation
[ { "docid": "97561632e9d87093a5de4f1e4b096df7", "text": "Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer that wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the property that they evaluate. Guy Shani Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: guyshani@microsoft.com Asela Gunawardana Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: aselag@microsoft.com", "title": "" }, { "docid": "1f8128a4a525f32099d4fefe4bea1212", "text": "Information overload on the Web has created enormous challenges to customers selecting products for online purchases and to online businesses attempting to identify customers’ preferences efficiently. Various recommender systems employing different data representations and recommendation methods are currently used to address these challenges. In this research, we developed a graph model that provides a generic data representation and can support different recommendation methods. To demonstrate its usefulness and flexibility, we developed three recommendation methods: direct retrieval, association mining, and high-degree association retrieval. We used a data set from an online bookstore as our research test-bed. Evaluation results showed that combining product content information and historical customer transaction information achieved more accurate predictions and relevant recommendations than using only collaborative information. However, comparisons among different methods showed that high-degree association retrieval did not perform significantly better than the association mining method or the direct retrieval method in our test-bed.", "title": "" } ]
[ { "docid": "3a81f0fc24dd90f6c35c47e60db3daa4", "text": "Advances in information and Web technologies have open numerous opportunities for online retailing. The pervasiveness of the Internet coupled with the keenness in competition among online retailers has led to virtual experiential marketing (VEM). This study examines the relationship of five VEM elements on customer browse and purchase intentions and loyalty, and the moderating effects of shopping orientation and Internet experience on these relationships. A survey was conducted of customers who frequently visited two online game stores to play two popular games in Taiwan. The results suggest that of the five VEM elements, three have positive effects on browse intention, and two on purchase intentions. Both browse and purchase intentions have positive effects on customer loyalty. Economic orientation was found to moderate that relationships between the VEM elements and browse and purchase intentions. However, convenience orientation moderated only the relationships between the VEM elements and browse intention.", "title": "" }, { "docid": "121f1baeaba51ebfdfc69dde5cd06ce3", "text": "Mobile operators are facing an exponential traffic growth due to the proliferation of portable devices that require a high-capacity connectivity. This, in turn, leads to a tremendous increase of the energy consumption of wireless access networks. A promising solution to this problem is the concept of heterogeneous networks, which is based on the dense deployment of low-cost and low-power base stations, in addition to the traditional macro cells. However, in such a scenario the energy consumed by the backhaul, which aggregates the traffic from each base station towards the metro/core segment, becomes significant and may limit the advantages of heterogeneous network deployments. This paper aims at assessing the impact of backhaul on the energy consumption of wireless access networks, taking into consideration different data traffic requirements (i.e., from todays to 2020 traffic levels). Three backhaul architectures combining different technologies (i.e., copper, fiber, and microwave) are considered. Results show that backhaul can amount to up to 50% of the power consumption of a wireless access network. On the other hand, hybrid backhaul architectures that combines fiber and microwave performs relatively well in scenarios where the wireless network is characterized by a high small-base-stations penetration rate.", "title": "" }, { "docid": "7a6ae2e12dbd9f4a0a3355caec648ca7", "text": "Near Field Communication (NFC) is an emerging wireless short-range communication technology that is based on existing standards of the Radio Frequency Identification (RFID) infrastructure. In combination with NFC-capable smartphones it enables intuitive application scenarios for contactless transactions, in particular services for mobile payment and over-theair ticketing. The intention of this paper is to describe basic characteristics and benefits of the underlaying technology, to classify modes of operation and to present various use cases. Both existing NFC applications and possible future scenarios will be analyzed in this context. Furthermore, security concerns, challenges and present conflicts will be discussed eventually.", "title": "" }, { "docid": "04c2024b53a0939ee24878cfa2397f49", "text": "We describe an algorithm for automatically segmenting flowers in colour photographs. 
This is a challenging problem because of the sheer variety of flower classes, the variability within a class and within a particular flower, and the variability of the imaging conditions – lighting, pose, foreshortening, etc. The method couples two models – a colour model for foreground and background, and a light generic shape model for the petal structure. This shape model is tolerant to viewpoint changes and petal deformations, and applicable across many different flower classes. The segmentations are produced using a MRF cost function optimized using graph cuts. We show how the components of the algorithm can be tuned to overcome common segmentation errors, and how performance can be optimized by learning parameters on a training set. The algorithm is evaluated on 13 flower classes and more than 750 examples. Performance is assessed against ground truth trimap segmentations. The algorithms is also compared to several previous approaches for flower segmentation. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "da129ff6527c7b8af0f34a910051e5ef", "text": "A compact ultra-wideband (UWB) bandpass filter is proposed based on the coplanar-waveguide (CPW) split-mode resonator. By suitably introducing a short-circuited stub to implement the shunt inductance between two quarter wavelength CPW stepped-impedance resonators, a strong magnetic coupling may be realized so that a CPW split-mode resonator may be constructed. Moreover, by properly designing the dual-metal-plane structure, one may accomplish a microstrip-to-CPW feeding mechanism to provide strong enough capacitive coupling for bandwidth enhancement and also introduce an extra electric coupling between input and output ports so that two transmission zeros may be created for selectivity improvement. The implemented UWB filter shows a fractional bandwidth of 116% and two transmission zeros at 1.705 and 11.39 GHz. Good agreement between simulated and measured responses is observed.", "title": "" }, { "docid": "37057dff785d5d373f3c4d7b60441276", "text": "We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.", "title": "" }, { "docid": "f9c6f688bc93df9966ed425720045aea", "text": "The main contribution of this work is a new paradigm for image representation and image compression. We describe a new multilayered representation technique for images. An image is parsed into a superposition of coherent layers: piecewise smooth regions layer, textures layer, etc. The multilayered decomposition algorithm consists in a cascade of compressions applied successively to the image itself and to the residuals that resulted from the previous compressions. 
During each iteration of the algorithm, we code the residual part in a lossy way: we only retain the most significant structures of the residual part, which results in a sparse representation. Each layer is encoded independently with a different transform, or basis, at a different bitrate, and the combination of the compressed layers can always be reconstructed in a meaningful way. The strength of the multilayer approach comes from the fact that different sets of basis functions complement each others: some of the basis functions will give reasonable account of the large trend of the data, while others will catch the local transients, or the oscillatory patterns. This multilayered representation has a lot of beautiful applications in image understanding, and image and video coding. We have implemented the algorithm and we have studied its capabilities.", "title": "" }, { "docid": "9a6a724f8aa0ae4fa9de1367f8661583", "text": "In this paper, we develop a simple algorithm to determine the required number of generating units of wind-turbine generator and photovoltaic array, and the associated storage capacity for stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of battery should be periodically invariant. The optimal sizing of hybrid microgrid is given in the sense that the life cycle cost of system is minimized while the given load power demand can be satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.", "title": "" }, { "docid": "ec1120018899c6c9fe16240b8e35efac", "text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. 
An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.", "title": "" }, { "docid": "4f760928083b9b4c574c6d6e1cc4f4b1", "text": "Finding matching images across large datasets plays a key role in many computer vision applications such as structure-from-motion (SfM), multi-view 3D reconstruction, image retrieval, and image-based localisation. In this paper, we propose finding matching and non-matching pairs of images by representing them with neural network based feature vectors, whose similarity is measured by Euclidean distance. The feature vectors are obtained with convolutional neural networks which are learnt from labeled examples of matching and non-matching image pairs by using a contrastive loss function in a Siamese network architecture. Previously Siamese architecture has been utilised in facial image verification and in matching local image patches, but not yet in generic image retrieval or whole-image matching. Our experimental results show that the proposed features improve matching performance compared to baseline features obtained with networks which are trained for image classification task. The features generalize well and improve matching of images of new landmarks which are not seen at training time. This is despite the fact that the labeling of matching and non-matching pairs is imperfect in our training data. The results are promising considering image retrieval applications, and there is potential for further improvement by utilising more training image pairs with more accurate ground truth labels.", "title": "" }, { "docid": "ed097b44837a57ad0053ae06a95f1543", "text": "For underwater videos, the performance of object tracking is greatly affected by illumination changes, background disturbances and occlusion. Hence, there is a need to have a robust function that computes image similarity, to accurately track the moving object. In this work, a hybrid model that incorporates the Kalman Filter, a Siamese neural network and a miniature neural network has been developed for object tracking. It was observed that the usage of the Siamese network to compute image similarity significantly improved the robustness of the tracker. Although the model was developed for underwater videos, it was found that it performs well for both underwater and human surveillance videos. A metric has been defined for analyzing detections-to-tracks mapping accuracy. Tracking results have been analyzed using Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP)metrics.", "title": "" }, { "docid": "d84179bb22103150f3eae95e6ea7b3ab", "text": "Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the \"multiple segment Viterbi\" (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. 
MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call \"sparse rescaling\". These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches.", "title": "" }, { "docid": "8440dc3177b112ffe796c64125f9e242", "text": "Acquired flatfoot deformity after injury is usually due to partial or complete tearing of the tendon of tibialis posterior, with secondary failure of the other structures which maintain the medial longitudinal arch. We describe a patient in whom the rupture of the plantar calcaneonavicular (spring) ligament resulted in a clinical picture similar to that of rupture of the tendon of tibialis posterior. Operative repair of the ligament and transfer of the tendon of flexor digitorum gave an excellent result at four years with the patient returning to full sporting activities.", "title": "" }, { "docid": "eeda67ba0bc36bd1984789be93d8ce9c", "text": "Using modified constructivist grounded theory, the purpose of the present study was to explore positive body image experiences in people with spinal cord injury. Nine participants (five women, four men) varying in age (21-63 years), type of injury (C3-T7; complete and incomplete), and years post-injury (4-36 years) were recruited. The following main categories were found: body acceptance, body appreciation and gratitude, social support, functional gains, independence, media literacy, broadly conceptualizing beauty, inner positivity influencing outer demeanour, finding others who have a positive body image, unconditional acceptance from others, religion/spirituality, listening to and taking care of the body, managing secondary complications, minimizing pain, and respect. Interestingly, there was consistency in positive body image characteristics reported in this study with those found in previous research, demonstrating universality of positive body image. However, unique characteristics (e.g., resilience, functional gains, independence) were also reported demonstrating the importance of exploring positive body image in diverse groups.", "title": "" }, { "docid": "badfe178923af250baa80c2871aae5bc", "text": "We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. 
Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.", "title": "" }, { "docid": "c9be394df8b4827c57c5413fc28b47e8", "text": "An important prerequisite for successful usage of computer systems and other interactive technology is a basic understanding of the symbols and interaction patterns used in them. This aspect of the broader construct “computer literacy” is used as indicator in the computer literacy scale, which proved to be an economical, reliable and valid instrument for the assessment of computer literacy in older adults.", "title": "" }, { "docid": "d613dd269de4e2616fa7278a02dea2bf", "text": "Computer Forensics is mainly about investigating crime where computers have been involved. There are many tools available to aid the investigator with this task. We have created a prototype of a new type of tool called CyberForensic TimeLab where all evidence is indexed by their time variables and plotted on a timeline. We believed that this way of visualizing the evidence allows the investigators to find coherent evidence faster and more intuitively. We have performed a user test where a group of people has evaluated our prototype tool against a modern commercial computer forensic tool and the results of this preliminary test are very promising. The results show that users completed the task in shorter time, with greater accuracy and with less errors using CyberForensic TimeLab. The subjects also experienced that the prototype were more intuitive to use and that it allowed them to easier locate evidence that was coherent in time. a 2009 Digital Forensic Research Workshop. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7f4b27422520ad678dd2f5f658ffebc3", "text": "We present a generic framework to make wrapper induction algorithms tolerant to noise in the training data. This enables us to learn wrappers in a completely unsupervised manner from automatically and cheaply obtained noisy training data, e.g., using dictionaries and regular expressions. By removing the site-level supervision that wrapper-based techniques require, we are able to perform information extraction at web-scale, with accuracy unattained with existing unsupervised extraction techniques. Our system is used in production at Yahoo! and powers live applications.", "title": "" }, { "docid": "90d1d78d3d624d3cb1ecc07e8acaefd4", "text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. 
Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.", "title": "" }, { "docid": "e6d5781d32e76d9c5f7c4ea985568986", "text": "We present a baseline convolutional neural network (CNN) structure and image preprocessing methodology to improve facial expression recognition algorithm using CNN. To analyze the most efficient network structure, we investigated four network structures that are known to show good performance in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussian) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experiment result showed that a three-layer structure consisting of a simple convolutional and a max pooling layer with histogram equalization image input was the most efficient. We describe the detailed training procedure and analyze the result of the test accuracy based on considerable observation.", "title": "" } ]
scidocsrr
9765578b50fc821f8d90b55e6d8aced4
Block arrivals in the Bitcoin blockchain
[ { "docid": "6ab1bc5fced659803724f2f7916be355", "text": "Statistical Analysis of a Telephone Call Center Lawrence Brown, Noah Gans, Avishai Mandelbaum, Anat Sakov, Haipeng Shen, Sergey Zeltyn and Linda Zhao Lawrence Brown is Professor, Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . Noah Gans is Associate Professor, Department of Operations and Information Management, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . Avishai Mandelbaum is Professor, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel . Anat Sakov is Postdoctoral Fellow, Tel-Aviv University, Tel-Aviv, Israel . Haipeng Shen is Assistant Professor, Department of Statistics, University of North Carolina, Durham, NC 27599 . Sergey Zeltyn is Ph.D. Candidate, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel . Linda Zhao is Associate Professor, Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . This work was supported by National Science Foundation DMS-99-71751 and DMS-99-71848, the Sloane Foundation, Israeli Science Foundation grants 388/99 and 126/02, the Wharton Financial Institutions Center, and Technion funds for the promotion of research and sponsored research. Version of record first published: 31 Dec 2011.", "title": "" }, { "docid": "f181c3fe17392239e5feaef02c37dd11", "text": "We present a formal model of synchronous processes without distinct identifiers (i.e., anonymous processes) that communicate using one-way public broadcasts. Our main contribution is a proof that the Bitcoin protocol achieves consensus in this model, except for a negligible probability, when Byzantine faults make up less than half the network. The protocol is scalable, since the running time and message complexity are all independent of the size of the network, instead depending only on the relative computing power of the faulty processes. We also introduce a requirement that the protocol must tolerate an arbitrary number of passive clients that receive broadcasts but can not send. This leads to a tight 2f + 1 resilience bound.", "title": "" } ]
[ { "docid": "b2911f3df2793066dde1af35f5a09d62", "text": "Cloud computing is drawing attention from both practitioners and researchers, and its adoption among organizations is on the rise. The focus has mainly been on minimizing fixed IT costs and using the IT resource flexibility offered by the cloud. However, the promise of cloud computing is much greater. As a disruptive technology, it enables innovative new services and business models that decrease time to market, create operational efficiencies and engage customers and citizens in new ways. However, we are still in the early days of cloud computing, and, for organizations to exploit the full potential, we need knowledge of the potential applications and pitfalls of cloud computing. Maturity models provide effective methods for organizations to assess, evaluate, and benchmark their capabilities as bases for developing roadmaps for improving weaknesses. Adopting the business-IT maturity model by Pearlson & Saunders (2007) as analytical framework, we synthesize the existing literature, identify levels of cloud computing benefits, and establish propositions for practice in terms of how to realize these benefits.", "title": "" }, { "docid": "8b63800da2019180d266297647e3dbc0", "text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.", "title": "" }, { "docid": "e900bd24f24f5b6c4ec1cab2fac5ce45", "text": "The recent emergence of lab-on-a-chip (LoC) technology has led to a paradigm shift in many healthcare-related application areas, e.g., point-of-care clinical diagnostics, high-throughput sequencing, and proteomics. A promising category of LoCs is digital microfluidic (DMF)-based biochips, in which nanoliter-volume fluid droplets are manipulated on a 2-D electrode array. A key challenge in designing such chips and mapping lab-bench protocols to a LoC is to carry out the dilution process of biochemical samples efficiently. 
As an optimization and automation technique, we present a dilution/mixing algorithm that significantly reduces the production of waste droplets. This algorithm takes O(n) time to compute at most n sequential mix/split operations required to achieve any given target concentration with an error in concentration factor less than [1/(2n)]. To implement the algorithm, we design an architectural layout of a DMF-based LoC consisting of two O(n)-size rotary mixers and O(n) storage electrodes. Simulation results show that the proposed technique always yields nonnegative savings in the number of waste droplets and also in the total number of input droplets compared to earlier methods.", "title": "" }, { "docid": "34bd41f7384d6ee4d882a39aec167b3e", "text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.", "title": "" }, { "docid": "e4007c7e6a80006238e1211a213e391b", "text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a diierent parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantiied. Relative rankings of the policies are obtained, depending on the speciic work-load characteristics. A trade-oo is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.", "title": "" }, { "docid": "94b061285a0ca52aa0e82adcca392416", "text": "Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which updates according to the direction of the gradients, rather than the gradients themselves. 
In this paper we analyze a stochastic version of NGD and prove its convergence to a global minimum for a wider class of functions: we require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens the concept of unimodality to multidimensions and allows for certain types of saddle points, which are a known hurdle for first-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be Lipschitz in a small region around the optimum. This assumption circumvents gradient explosion, which is another known hurdle for gradient descent variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.", "title": "" }, { "docid": "e71d55a573426068fab2212a55bc3682", "text": "In this article we present a theoretical approach to cognitive control and attention modulation, as well as review studies related to such a view, using an auditory task based on dichotic presentations of simple consonant-vowel syllables. The reviewed work comes out of joint research efforts by the 'Attention-node' at the 'Nordic Center of Excellence in Cognitive Control'. We suggest a new way of defining degrees of cognitive control based on systematically varying the stimulus intensity of the right or left ear dichotic stimulus, thus parametrically varying the degree of stimulus interference and conflict when assessing the amount of cognitive control necessary to resolve the interference. We first present an overview and review of previous studies using the so-called \"forced-attention\" dichotic listening paradigm. We then present behavioral and neuroimaging data to explore the suggested cognitive control model, with examples from normal adults, clinical and special ability groups.", "title": "" }, { "docid": "0c60255bd78597a6389852fc34bab1c4", "text": "The interaction between indomethacin and human serum albumin (HSA) was investigated by fluorescence quenching technique and UV-vis absorption spectroscopy. The results of fluorescence titration revealed that indomethacin, strongly quench the intrinsic fluorescence of HSA by static quenching and nonradiative energy transfer. The binding site number n and the apparent binding constant K(A), were calculated using linear and nonlinear fit to the experimental data. The distance r between donor (HSA) and acceptor (indomethacin) was obtained according to fluorescence resonance energy transfer (FRET). The study suggests that the donor and the acceptor are bound at different locations but within the quenching distance.", "title": "" }, { "docid": "a1a2c3f62bd2923fc317fcda8c907196", "text": "Hardware intellectual-property (IP) cores have emerged as an integral part of modern system-on-chip (SoC) designs. However, IP vendors are facing major challenges to protect hardware IPs from IP piracy. This paper proposes a novel design methodology for hardware IP protection using netlist-level obfuscation. The proposed methodology can be integrated in the SoC design and manufacturing flow to simultaneously obfuscate and authenticate the design. 
Simulation results for a set of ISCAS-89 benchmark circuits and the advanced-encryption-standard IP core show that high levels of security can be achieved at less than 5% area and power overhead under delay constraint.", "title": "" }, { "docid": "481f4a4b14d4594d8b023f9df074dfeb", "text": "We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analyses that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used to an important and novel application SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks if a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.", "title": "" }, { "docid": "ac1018fb262f38faf50071603292c3c0", "text": "This paper provides an overview and an evaluation of the Cetus source-to-source compiler infrastructure. The original goal of the Cetus project was to create an easy-to-use compiler for research in automatic parallelization of C programs. In meantime, Cetus has been used for many additional program transformation tasks. It serves as a compiler infrastructure for many projects in the US and internationally. Recently, Cetus has been supported by the National Science Foundation to build a community resource. The compiler has gone through several iterations of benchmark studies and implementations of those techniques that could improve the parallel performance of these programs. These efforts have resulted in a system that favorably compares with state-of-the-art parallelizers, such as Intel’s ICC. A key limitation of advanced optimizing compilers is their lack of runtime information, such as the program input data. We will discuss and evaluate several techniques that support dynamic optimization decisions. Finally, as there is an extensive body of proposed compiler analyses and transformations for parallelization, the question of the importance of the techniques arises. This paper evaluates the impact of the individual Cetus techniques on overall program performance.", "title": "" }, { "docid": "066d22c1c5554bf32118baa331c64a88", "text": "A center-fed, single-layer, planar antenna with unilateral radiation patterns is investigated. The antenna consists of a turnstile-shaped patch and a slotted ground plane, which function as a vertical magnetic dipole and a horizontal electric dipole, respectively. By combining the two orthogonal dipoles with the same radiation intensities and antiphases, unilateral patterns with wide beamwidth and high front-to-back (F/B) ratio are achieved. As the unilateral radiation pattern can be easily steered in the horizontal plane by changing the slot location, a pattern reconfigurable antenna is further designed by using p-i-n diodes to control the connection states of the radial slots on the ground plane. Four steerable beams are obtained, capable of covering the entire azimuthal plane. 
For demonstration, both the unilateral and pattern reconfigurable antennas operating at 2.4 GHz WLAN band (2.40–2.48 GHz) were fabricated and measured. The measured overlapping bandwidths, with $\\vert S_{11}\\vert <-10$ dB and F/B ratio >15 dB, are given by 7.0% (2.33–2.5 GHz) and 6.3% (2.32–2.47 GHz), respectively.", "title": "" }, { "docid": "01594ac29e66b229dbfacd0e1a967e3c", "text": "This article describes two approaches for computing the line-of-sight between objects in real terrain data. Our purpose is to find an efficient algorithm for combat elements in warfare simulation such as soldiers, troops, vehicles, ships, and aircrafts, thus allowing a simulated combat theater.", "title": "" }, { "docid": "5a71d766ecd60b8973b965e53ef8ddfd", "text": "An m-polar fuzzy model is useful for multi-polar information, multi-agent, multi-attribute and multiobject network models which gives more precision, flexibility, and comparability to the system as compared to the classical, fuzzy and bipolar fuzzy models. In this paper, m-polar fuzzy sets are used to introduce the notion of m-polar psi-morphism on product m-polar fuzzy graph (mFG). The action of this morphism is studied and established some results on weak and co-weak isomorphism. d2-degree and total d2-degree of a vertex in product mFG are defined and studied their properties. A real life situation has been modeled as an application of product mFG. c ©2018 World Academic Press, UK. All rights reserved.", "title": "" }, { "docid": "907d5aa059ee85629ba0b2b131a9324a", "text": "Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.", "title": "" }, { "docid": "ab9a65fda5a628b1042d1a31f3cf6188", "text": "Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance compared to the state-of-theart methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, when the contexts are provided by an adaptive adversary. This is among the most important and widely studied version of the contextual bandits problem. We prove a high probability regret bound of Õ( 2 √ T 1+ ) in time T for any 0 < < 1, where d is the dimension of each context vector and is a parameter used by the algorithm. Our results provide the first theoretical guarantees for the contextual version of Thompson Sampling, and are close to the lower bound of Ω(d √ T ) for this problem. This essentially solves a COLT open problem of Chapelle and Li [COLT 2012]. 
Proceedings of the 30 th International Conference on Machine Learning, Atlanta, Georgia, USA, 2013. JMLR: W&CP volume 28. Copyright 2013 by the author(s).", "title": "" }, { "docid": "a21f04b6c8af0b38b3b41f79f2661fa6", "text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.", "title": "" }, { "docid": "abb06d560266ca1695f72e4d908cf6ea", "text": "A simple photovoltaic (PV) system capable of operating in grid-connected mode and using multilevel boost converter (MBC) and line commutated inverter (LCI) has been developed for extracting the maximum power and feeding it to a single phase utility grid with harmonic reduction. Theoretical analysis of the proposed system is done and the duty ratio of the MBC is estimated for extracting maximum power from PV array. For a fixed firing angle of LCI, the proposed system is able to track the maximum power with the determined duty ratio which remains the same for all irradiations. This is the major advantage of the proposed system which eliminates the use of a separate maximum power point tracking (MPPT) Experiments have been conducted for feeding a single phase voltage to the grid. So by proper and simplified technique we are reducing the harmonics in the grid for unbalanced loads.", "title": "" }, { "docid": "20be8363ae04659061a56a1c7d3ee4d5", "text": "The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience is so far not known from level set methods when applied to images with more than two regions. This communication introduces a comparatively simple way how to extend active contours to multiple regions keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours", "title": "" } ]
scidocsrr
799dfda3ad6aabc09bd000a234545c7b
Learning to Represent Knowledge Graphs with Gaussian Embedding
[ { "docid": "c4df97f3db23c91f0ce02411d2e1e999", "text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.", "title": "" }, { "docid": "a2f46b51b65c56acf6768f8e0d3feb79", "text": "In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. Learning Distributed Representations of Concepts using Linear Relational Embedding Alberto Paccanaro Geoffrey Hinton Gatsby Unit", "title": "" }, { "docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0", "text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. 
In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.", "title": "" } ]
[ { "docid": "0574e5c8cf24cd2f72a01223c54cec09", "text": "In the wake of the Mexican and Asian currency turmoil, the subject of financial crises has come to the forefront of academic and policy discussions. This paper analyzes the links between banking and currency crises. We find that: problems in the banking sector typically precede a currency crisis--the currency crisis deepens the banking crisis, activating a vicious spiral; financial liberalization often precedes banking crises. The anatomy of these episodes suggests that crises occur as the economy enters a recession, following a prolonged boom in economic activity that was fueled by credit, capital inflows and accompanied by an overvalued currency. (JEL F30, F41) * Graciela L. Kaminsky, George Washington University, Washington, D.C. 20552. Carmen M. Reinhart, University of Maryland, College Park, Maryland 20742. We thank two anonymous referees for very helpful suggestions. We also thank Guillermo Calvo, Rudiger Dornbusch, Peter Montiel, Vincent Reinhart, John Rogers, Andrew Rose and seminar participants at Banco de México, the Board of Governors of the Federal Reserve System, Florida State University, Harvard, the IMF, Johns Hopkins University, Massachusetts Institute of Technology, Stanford University, SUNY at Albany, University of California, Berkeley, UCLA, University of California, Santa Cruz, University of Maryland, University of Washington, The World Bank, and the conference on “Speculative Attacks in the Era of the Global Economy: Theory, Evidence, and Policy Implications,” (Washington, DC, December 1995), for very helpful comments and Greg Belzer, Kris Dickson, and Noah Williams for superb research assistance. 1 Pervasive currency turmoil, particularly in Latin America in the late 1970s and early 1980s, gave impetus to a flourishing literature on balance-of-payments crises. As stressed in Paul Krugman’s (1979) seminal paper, in this literature crises occur because a country finances its fiscal deficit by printing money to the extent that excessive credit growth leads to the eventual collapse of the fixed exchange rate regime. With calmer currency markets in the midand late 1980s, interest in this literature languished. The collapse of the European Exchange Rate Mechanism, the Mexican peso crisis, and the wave of currency crises sweeping through Asia have, however, rekindled interest in the topic. Yet, the focus of this recent literature has shifted. While the earlier literature emphasized the inconsistency between fiscal and monetary policies and the exchange rate commitment, the new one stresses self-fulfilling expectations and herding behavior in international capital markets. In this view, as Guillermo A.Calvo (1995, page 1) summarizes “If investors deem you unworthy, no funds will be forthcoming and, thus, unworthy you will be.” Whatever the causes of currency crises, neither the old literature nor the new models of self-fulfilling crises have paid much attention to the interaction between banking and currency problems, despite the fact that many of the countries that have had currency crises have also had full-fledged domestic banking crises around the same time. Notable exceptions are: Carlos Diaz-Alejandro (1985), Andres Velasco (1987), Calvo (1995), Ilan Goldfajn and Rodrigo Valdés (1995), and Victoria Miller (1995). As to the empirical evidence on the potential links between what we dub the twin crises, the literature has been entirely silent. 
The Thai, Indonesian, and Korean crises are not the first examples of dual currency and banking woes, they are only the recent additions to a long list of casualties which includes Chile, Finland, Mexico, Norway, and Sweden. In this paper, we aim to fill this void in the literature and examine currency and banking crises episodes for a number of industrial and developing countries. The former include: Denmark, Finland, Norway, Spain, and Sweden. The latter focus on: Argentina, Bolivia, Brazil, Chile, Colombia, Indonesia,", "title": "" }, { "docid": "8c0c7d6554f21b4cb5e155cf1e33a165", "text": "Despite progress, early childhood development (ECD) remains a neglected issue, particularly in resource-poor countries. We analyse the challenges and opportunities that ECD proponents face in advancing global priority for the issue. We triangulated among several data sources, including 19 semi-structured interviews with individuals involved in global ECD leadership, practice, and advocacy, as well as peer-reviewed research, organisation reports, and grey literature. We undertook a thematic analysis of the collected data, drawing on social science scholarship on collective action and a policy framework that elucidates why some global initiatives are more successful in generating political priority than others. The analysis indicates that the ECD community faces two primary challenges in advancing global political priority. The first pertains to framing: generation of internal consensus on the definition of the problem and solutions, agreement that could facilitate the discovery of a public positioning of the issue that could generate political support. The second concerns governance: building of effective institutions to achieve collective goals. However, there are multiple opportunities to advance political priority for ECD, including an increasingly favourable political environment, advances in ECD metrics, and the existence of compelling arguments for investment in ECD. To advance global priority for ECD, proponents will need to surmount the framing and governance challenges and leverage these opportunities.", "title": "" }, { "docid": "08a6297a0959e0c12801b603d585e12c", "text": "The national exchequer, the banking industry and regular citizens all incur a high overhead in using physical cash. Electronic cash and cell phone-based payment in particular is a viable alternative to physical cash since it incurs much lower overheads and offers more convenience. Because security is of paramount importance in financial transactions, it is imperative that attack vectors in this application be identified and analyzed. In this paper, we investigate vulnerabilities in several dimensions – in choice of hardware/software platform, in technology and in cell phone operating system. We examine how existing and future mobile worms can severely compromise the security of transacting payments through a cell phone.", "title": "" }, { "docid": "42bd08ed5a65d2b16e6a94708e88f0ed", "text": "Designers of distributed embedded systems face many challenges in determining the tradeoffs when defining a system architecture or retargeting an existing design. Communication synthesis, the automatic generation of the necessary software and hardware for system components to exchange data, is required to more effectively explore the design space and automate very error prone tasks. 
The paper examines the problem of mapping a high level specification to an arbitrary architecture that uses specific, common bus protocols for interprocessor communication. The communication model presented allows for easy retargeting to different bus topologies, protocols, and illustrates that global considerations are required to achieve a correct implementation. An algorithm is presented that partitions multihop communication timing constraints to effectively utilize the bus bandwidth along a message path. The communication synthesis tool is integrated with a system co-simulator to provide performance data for a given mapping.", "title": "" }, { "docid": "661a5c7f49d4232f61a4a2ee0c1ddbff", "text": "Power is now a first-order design constraint in large-scale parallel computing. Used carefully, dynamic voltage scaling can execute parts of a program at a slower CPU speed to achieve energy savings with a relatively small (possibly zero) time delay. However, the problem of when to change frequencies in order to optimize energy savings is NP-complete, which has led to many heuristic energy-saving algorithms. To determine how closely these algorithms approach optimal savings, we developed a system that determines a bound on the energy savings for an application. Our system uses a linear programming solver that takes as inputs the application communication trace and the cluster power characteristics and then outputs a schedule that realizes this bound. We apply our system to three scientific programs, two of which exhibit load imbalance---particle simulation and UMT2K. Results from our bounding technique show particle simulation is more amenable to energy savings than UMT2K.", "title": "" }, { "docid": "4a96980dc1ba12b1ea822699a6505aed", "text": "Estimating the 6D pose of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over stateof-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.", "title": "" }, { "docid": "8680472a2562c3877ed46ecc960168f6", "text": "Javier Garcia-Bernardo, ∗ Hong Qi, James M. Shultz, Alyssa M. Cohen, Neil F. 
Johnson, † and Peter Sheridan Dodds ‡ Department of Computer Science, University of Vermont, Burlington VT 05405, USA§ Department of Physics, University of Miami, Coral Gables, FL 33124, USA Center for Disaster & Extreme Event Preparedness (DEEP Center), University of Miami, Miller School of Medicine, FL 33124, USA 10858 Limeberry Drive, Cooper City, FL 33026, USA Department of Mathematics & Statistics, Vermont Complex Systems Center, Computational Story Lab, & the Vermont Advanced Computing Core, The University of Vermont, Burlington, VT 05401, USA§ (Dated: June 23, 2015)", "title": "" }, { "docid": "8abcf3e56e272c06da26a40d66afcfb0", "text": "As internet use becomes increasingly integral to modern life, the hazards of excessive use are also becoming apparent. Prior research suggests that socially anxious individuals are particularly susceptible to problematic internet use. This vulnerability may relate to the perception of online communication as a safer means of interacting, due to greater control over self-presentation, decreased risk of negative evaluation, and improved relationship quality. To investigate these hypotheses, a general sample of 338 completed an online survey. Social anxiety was confirmed as a significant predictor of problematic internet use when controlling for depression and general anxiety. Social anxiety was associated with perceptions of greater control and decreased risk of negative evaluation when communicating online, however perceived relationship quality did not differ. Negative expectations during face-to-face interactions partially accounted for the relationship between social anxiety and problematic internet use. There was also preliminary evidence that preference for online communication exacerbates face-to-face avoidance.", "title": "" }, { "docid": "04e094e8f1e0466248df9c1263285f0b", "text": "We propose a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for iteratively computing projective clusters. We prove that the computed clusters are good with high probability. We implemented a modified version of the algorithm, using heuristics to speed up computation. Our extensive experiments show that our method is significantly more accurate than previous approaches. In particular, we use our techniques to build a classifier for detecting rotated human faces in cluttered images.", "title": "" }, { "docid": "e9750bf1287847b6587ad28b19e78751", "text": "Biomedical engineering handles the organization and functioning of medical devices in the hospital. This is a strategic function of the hospital for its balance, development, and growth. This is a major focus in internal and external reports of the hospital. It's based on piloting of medical devices needs and the procedures of biomedical teams’ intervention. Multi-year projects of capital and operating expenditure in medical devices are planned as coherently as possible with the hospital's financial budgets. An information system is an essential tool for monitoring medical devices engineering and relationship with medical services.", "title": "" }, { "docid": "cdfcc894d32c9a6a3a076d3e978d400f", "text": "The earliest Convolution Neural Network (CNN) model is leNet-5 model proposed by LeCun in 1998. 
However, in the next few years, the development of CNN had been almost stopped until the article ‘Reducing the dimensionality of data with neural networks’ presented by Hinton in 2006. CNN started entering a period of rapid development. AlexNet won the championship in the image classification contest of ImageNet with the huge superiority of 11% beyond the second place in 2012, and the proposal of DeepFace and DeepID, as two relatively successful models for high-performance face recognition and authentication in 2014, marking the important position of CNN. Convolution Neural Network (CNN) is an efficient recognition algorithm widely used in image recognition and other fields in recent years. That the core features of CNN include local field, shared weights and pooling greatly reducing the parameters, as well as simple structure, make CNN become an academic focus. In this paper, the Convolution Neural Network’s history and structure are summarized. And then several areas of Convolutional Neural Network applications are enumerated. At last, some new insights for the future research of CNN are presented.", "title": "" }, { "docid": "9e6bfc7b5cc87f687a699c62da013083", "text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.", "title": "" }, { "docid": "eed8fd39830e8058d55427623bb655df", "text": "In this paper, we present a solution for main content identification in web pages. Our solution is language-independent; Web pages may be written in different languages. It is topic-independent; no domain knowledge or dictionary is applied. And it is unsupervised; no training phase is necessary. The solution exploits the tree structure of web pages and the frequencies of text tokens to attribute scores of content density to the areas of the page and by the way identify the most important one. We tested this solution over representative examples of web pages to show how efficient and accurate it is. The results were satisfying.", "title": "" }, { "docid": "59acfbc9bd96073956f4d3d2f2db7946", "text": "A resource allocation framework is presented for spectrum underlay in cognitive wireless networks. We consider both interference constraints for primary users and quality of service (QoS) constraints for secondary users. Specifically, interference from secondary users to primary users is constrained to be below a tolerable limit. Also, signal to interference plus noise ratio (SINR) of each secondary user is maintained higher than a desired level for QoS insurance. 
We propose admission control algorithms to be used during high network load conditions which are performed jointly with power control so that QoS requirements of all admitted secondary users are satisfied while keeping the interference to primary users below the tolerable limit. If all secondary users can be supported at minimum rates, we allow them to increase their transmission rates and share the spectrum in a fair manner. We formulate the joint power/rate allocation with proportional and max-min fairness criteria as optimization problems. We show how to transform these optimization problems into a convex form so that their globally optimal solutions can be obtained. Numerical results show that the proposed admission control algorithms achieve performance very close to that of the optimal solution. Also, impacts of different system and QoS parameters on the network performance are investigated for the admission control, and rate/power allocation algorithms under different fairness criteria.", "title": "" }, { "docid": "aded7e5301d40faf52942cd61a1b54ba", "text": "In this paper, a lower limb rehabilitation robot in sitting position is developed for patients with muscle weakness. The robot is a stationary based type which is able to perform various types of therapeutic exercises. For safe operation, the robot's joint is driven by two-stage cable transmission while the balance mechanism is used to reduce actuator size and transmission ratio. Control algorithms for passive, assistive and resistive exercises are designed to match characteristics of each therapeutic exercises and patients with different muscle strength. Preliminary experiments conducted with a healthy subject have demonstrated that the robot and the control algorithms are promising for lower limb rehabilitation task.", "title": "" }, { "docid": "44e5c86afbe3814ad718aa27880941c4", "text": "This paper introduces genetic algorithms (GA) as a complete entity, in which knowledge of this emerging technology can be integrated together to form the framework of a design tool for industrial engineers. An attempt has also been made to explain “why’’ and “when” GA should be used as an optimization tool.", "title": "" }, { "docid": "f018db7f20245180d74e4eb07b99e8d3", "text": "Particle filters can become quite inefficient when being applied to a high-dimensional state space since a prohibitively large number of samples may be required to approximate the underlying density functions with desired accuracy. In this paper, by proposing an adaptive Rao-Blackwellized particle filter for tracking in surveillance, we show how to exploit the analytical relationship among state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, the distributions of the linear variables are updated analytically using a Kalman filter which is associated with each particle in a particle filtering framework. Experiments and detailed performance analysis using both simulated data and real video sequences reveal that the proposed method results in more accurate tracking than a regular particle filter", "title": "" }, { "docid": "aa54c82efcb94caf8fd224f362631167", "text": "A current-reused quadrature voltage-controlled oscillator (CR-QVCO) is proposed with the cross-coupled transformer-feedback technology for the quadrature signal generation. This CR-QVCO has the advantages of low-voltage/low-power operation with an adequate phase noise performance. 
A compact differential three-port transformer, in which two half-circle secondary coils are carefully designed to optimize the effective turn ratio and the coupling factor, is newly constructed to satisfy the need of signal coupling and to save the area consumption simultaneously. The quadrature oscillator providing a center frequency of 7.128 GHz for the ultrawideband (UWB) frequency synthesizer use is demonstrated in a 0.18 mum RF CMOS technology. The oscillator core dissipates 2.2 mW from a 1 V supply and occupies an area of 0.48 mm2. A tuning range of 330 MHz (with a maximum control voltage of 1.8 V) can be achieved to stand the frequency shift caused by the process variation. The measured phase noise is -111.2 dBc/Hz at 1 MHz offset from the center frequency. The IQ phase error shown is less than 2deg. The calculated figure-of-merit (FOM) is 184.8 dB.", "title": "" }, { "docid": "9718921e6546abd13e8f08698ba10423", "text": "LawStats provides quantitative insights into court decisions from the Bundesgerichtshof – Federal Court of Justice (BGH), the Federal Court of Justice in Germany. Using Watson Web Services and approaches from Sentiment Analysis (SA), we can automatically classify the revision outcome and offer statistics on judges, senates, previous instances etc. via faceted search. These statistics are accessible through a open web interface to aid law professionals. With a clear focus on interpretability, users can not only explore statistics, but can also understand, which sentences in the decision are responsible for the machine’s decision; links to the original texts provide more context. This is the first largescale application of Machine Learning (ML) based Natural Language Processing (NLP) for German in the analysis of ordinary court decisions in Germany that we are aware of. We have analyzed over 50,000 court decisions and extracted the outcomes and relevant entities. The modular architecture of the application allows continuous improvements of the ML model as more annotations become available over time. The tool can provide a critical foundation for further quantitative research in the legal domain and can be used as a proof-of-concept for similar efforts.", "title": "" }, { "docid": "265e7a149c152cb96503c66d377f5bf0", "text": "Applied psychologists have long been interested in examining expert performance in complex cognitive domains. In the present article, we report the results from a study of expert cognitive skill in which elements from two historically distinct research paradigms are incorporated -- the individual differences tradition and the expert-performance approach. Forty tournament-rated SCRABBLE players (20 elite, 20 average) and 40 unrated novice players completed a battery of domain-representative laboratory tasks and standardized verbal ability tests. The analyses revealed that elite- and average-level rated players only significantly differed from each other on tasks representative of SCRABBLE performance. Furthermore, domain-relevant practice mediated the effects of SCRABBLE tournament ratings on representative task performance, suggesting that SCRABBLE players can acquire some of the knowledge necessary for success at the highest levels of competition by engaging in activities deliberately designed to maximize adaptation to SCRABBLE-specific task constraints. We discuss the potential importance of our results in the context of continuing efforts to capture and explain superior performance across intellectual domains.", "title": "" } ]
scidocsrr
4efa496051930c634b840681314d7edb
Temporal Attention as a Scaffold for Language Development
[ { "docid": "68982ce5d5a61584f125856b10e0653f", "text": "The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward 'segregation' (a general decrease in correlation strength) between regions close in anatomical space and 'integration' (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to large-scale graph theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more \"distributed\" architecture in young adults. We argue that this \"local to distributed\" developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing \"small-world\"-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.", "title": "" }, { "docid": "6ccf6a6db765b15bca6a12b5fb35619f", "text": "PURPOSE\nInformation-processing limitations have been associated with language problems in children with specific language impairment (SLI). These processing limitations may be associated with limitations in attentional capacity, even in the absence of clinically significant attention deficits. In this study, the authors examined the performance of 4- to 6-year-old children with SLI and their typically developing (TD) peers on a visual sustained attention task. It was predicted that the children with SLI would demonstrate lower levels of performance in the absence of clinically significant attention deficits.\n\n\nMETHOD\nA visual continuous performance task (CPT) was used to assess sustained attention in 13 children with SLI (M = 62.07 months) and 13 TD age-matched controls (M = 62.92 months). All children were screened for normal vision, hearing, and attention. 
Accuracy (d') and response time were analyzed to see if this sustained attention task could differentiate between the 2 groups.\n\n\nRESULTS\nThe children with SLI were significantly less accurate but not significantly slower than the TD children on this test of visual sustained attention.\n\n\nCONCLUSION\nChildren with SLI may have reduced capacity for sustained attention in the absence of clinically significant attention deficits that, over time, could contribute to language learning difficulties.", "title": "" } ]
[ { "docid": "d1114f1ced731a700d40dd97fe62b82b", "text": "Agricultural sector is playing vital role in Indian economy, in which irrigation mechanism is of key concern. Our paper aims to find the exact field condition and to control the wastage of water in the field and to provide exact controlling of field by using the drip irrigation, atomizing the agricultural environment by using the components and building the necessary hardware. For the precisely monitoring and controlling of the agriculture filed, different types of sensors were used. To implement the proposed system ARM LPC2148 Microcontroller is used. The irrigation mechanism is monitored and controlled more efficiently by the proposed system, which is a real time feedback control system. GSM technology is used to inform the end user about the exact field condition. Actually this method of irrigation system has been proposed primarily to save resources, yield of crops and farm profitability.", "title": "" }, { "docid": "5ad26d4135cc2ce1638046ead24351df", "text": "A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described,", "title": "" }, { "docid": "fb44e3c2624d92c9ed408ebd00bdb793", "text": "A novel method for online data acquisition of cursive handwriting is described. A video camera is used to record the handwriting of a user. From the acquired sequence of images, the movement of the tip of the pen is reconstructed. A prototype of the system has been implemented and tested. In one series of tests, the performance of the system was visually assessed. In another series of experiments, the system was combined with an existing online handwriting recognizer. Good results have been obtained in both sets of experiments.", "title": "" }, { "docid": "ff6c60d341ba05daa38a2f173eb03b19", "text": "Despite the importance of online product recommendations (OPR) in e-Commerce transactions, there is still very little understanding about how different recommendation sources affect consumers' beliefs and behavior, and whether these effects are additive, complementary or rivals for different types of products. This study investigates the differential effects of provider recommendations (PR) and consumer reviews (CR) on the instrumental, affective and trusting dimensions of consumer beliefs, and show how these beliefs ultimately influence continued OPR usage and product purchase intentions. This study tests a conceptual model linking PR and CR to four consumer beliefs (perceived usefulness, perceived ease of use, perceived affective quality, and trust) in two different product settings (search products vs. experience products). Results of an experimental study (N = 396) show that users of PR express significantly higher perceived usefulness and perceived ease of use than users of CR, while users of CR express higher trusting beliefs and perceived affective quality than users of PR, resulting in different effect mechanisms towards OPR reuse and purchase intentions in e-Commerce transactions. 
Further, CR were found to elicit higher perceived usefulness, trusting beliefs and perceived affective quality for experience goods, while PR were found to unfold higher effects on all of these variables for search goods.", "title": "" }, { "docid": "d6fbe041eb639e18c3bb9c1ed59d4194", "text": "Based on discrete event-triggered communication scheme (DETCS), this paper is concerned with the satisfactory H ! / H 2 event-triggered fault-tolerant control problem for networked control system (NCS) with α -safety degree and actuator saturation constraint from the perspective of improving satisfaction of fault-tolerant control and saving network resource. Firstly, the closed-loop NCS model with actuator failures and actuator saturation is built based on DETCS; Secondly, based on Lyapunov-Krasovskii function and the definition of α -safety degree given in the paper, a sufficient condition is presented for NCS with the generalized H2 and H! performance, which is the contractively invariant set of fault-tolerance with α -safety degree, and the co-design method for event-triggered parameter and satisfactory faulttolerant controller is also given in this paper. Moreover, the simulation example verifies the feasibility of improving system satisfaction and the effectiveness of saving network resource for the method. Finally, the compatibility analysis of the related indexes is also discussed and analyzed.", "title": "" }, { "docid": "4cf77462459efa81f6ed856655ae7454", "text": "Antibody response to the influenza immunization was investigated in 83 1st-semester healthy university freshmen. Elevated levels of loneliness throughout the semester and small social networks were independently associated with poorer antibody response to 1 component of the vaccine. Those with both high levels of loneliness and a small social network had the lowest antibody response. Loneliness was also associated with greater psychological stress and negative affect, less positive affect, poorer sleep efficiency and quality, and elevations in circulating levels of cortisol. However, only the stress data were consistent with mediation of the loneliness-antibody response relation. None of these variables were associated with social network size, and hence none were potential mediators of the relation between network size and immunization response.", "title": "" }, { "docid": "dc885cc855ea14a8be90aa9f7d3efbeb", "text": "Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. 
One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.", "title": "" }, { "docid": "1e69c1aef1b194a27d150e45607abd5a", "text": "Methods of semantic relatedness are essential for wide range of tasks such as information retrieval and text mining. This paper, concerned with these automated methods, attempts to improve Gloss Vector semantic relatedness measure for more reliable estimation of relatedness between two input concepts. Generally, this measure by considering frequency cut-off for big rams tries to remove low and high frequency words which usually do not end up being significant features. However, this naive cutting approach can lead to loss of valuable information. By employing point wise mutual information (PMI) as a measure of association between features, we will try to enforce the foregoing elimination step in a statistical fashion. Applying both approaches to the biomedical domain, using MEDLINE as corpus, MeSH as thesaurus, and available reference standard of 311 concept pairs manually rated for semantic relatedness, we will show that PMI for removing insignificant features is more effective approach than frequency cut-off.", "title": "" }, { "docid": "aa706cec270778124c7ba4ee16f8be0e", "text": "Current face recognition usually faces problems with the training dataset due to the insufficient size and potential manual labelling errors. The project introduces a dataset construction and filtering process to deal the problem with less cost. FaceNet[35] and Sphereface[29] are harnessed for the purpose of filtering the dataset scratched from Google. Results show the impressive effectiveness of automatic filtering and purity enhancement after filtering with considerable attention on labeling errors in the view of web search. Except exclusively self-constructed dataset, filtered and merged dataset from CASIA-WebFace[54] and VGG Face [32] were also tested and analyzed. Subsequent research and experiment can target at the further improvement of filtering process with lower false negative rate as well as getting rid of labeling errors due to web search. And those further improvements are expected to contribute more to the unsupervised learning in the general fine-grained object recognition.", "title": "" }, { "docid": "51eb99d08c5bc715d36469afb77e6b75", "text": "OBJECTIVE\nSpecific CT angiography (CTA) signs of vascular injury can be readily detected, and additional information regarding osseous and soft-tissue injuries can also be routinely obtained. In this article, we illustrate the important CTA signs of lower extremity vascular injury.\n\n\nCONCLUSION\nCTA is efficient and accurate in the evaluation of clinically significant lower extremity arterial injuries after trauma.", "title": "" }, { "docid": "978b6dfa805b214d95827af6b1d030f9", "text": "LambdaMART is the boosted tree version of LambdaRank, which is based on RankNet. 
RankNet, LambdaRank, and LambdaMART have proven to be very successful algorithms for solving real world ranking problems: for example an ensemble of LambdaMART rankers won Track 1 of the 2010 Yahoo! Learning To Rank Challenge. The details of these algorithms are spread across several papers and reports, and so here we give a self-contained, detailed and complete description of them.", "title": "" }, { "docid": "5c96222feacb0454d353dcaa1f70fb83", "text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1", "title": "" }, { "docid": "0e54be77f69c6afbc83dfabc0b8b4178", "text": "Spinal muscular atrophy (SMA) is a neurodegenerative disease characterized by loss of motor neurons in the anterior horn of the spinal cord and resultant weakness. The most common form of SMA, accounting for 95% of cases, is autosomal recessive proximal SMA associated with mutations in the survival of motor neurons (SMN1) gene. Relentless progress during the past 15 years in the understanding of the molecular genetics and pathophysiology of SMA has resulted in a unique opportunity for rational, effective therapeutic trials. The goal of SMA therapy is to increase the expression levels of the SMN protein in the correct cells at the right time. With this target in sight, investigators can now effectively screen potential therapies in vitro, test them in accurate, reliable animal models, move promising agents forward to clinical trials, and accurately diagnose patients at an early or presymptomatic stage of disease. A major challenge for the SMA community will be to prioritize and develop the most promising therapies in an efficient, timely, and safe manner with the guidance of the appropriate regulatory agencies. This review will take a historical perspective to highlight important milestones on the road to developing effective therapies for SMA.", "title": "" }, { "docid": "2419e2750787b1ba2f00d1629e3bbdad", "text": "Resilient transportation systems enable quick evacuation, rescue, distribution of relief supplies, and other activities for reducing the impact of natural disasters and for accelerating the recovery from them. The resilience of a transportation system largely relies on the decisions made during a natural disaster. We developed an agent-based traffic simulator for predicting the results of potential actions taken with respect to the transportation system to quickly make appropriate decisions. For realistic simulation, we govern the behavior of individual drivers of vehicles with foundational principles learned from probe-car data. 
For example, we used the probe-car data to estimate the personality of individual drivers of vehicles in selecting their routes, taking into account various metrics of routes such as travel time, travel distance, and the number of turns. This behavioral model, which was constructed from actual data, constitutes a special feature of our simulator. We built this simulator using the X10 language, which enables massively parallel execution for simulating traffic in a large metropolitan area. We report the use cases of the simulator in three major cities in the context of disaster recovery and resilient transportation.", "title": "" }, { "docid": "3fb85f6f093b4a47dafd830c4b99f4e3", "text": "New applications of evolutionary biology are transforming our understanding of cancer. The articles in this special issue provide many specific examples, such as microorganisms inducing cancers, the significance of within-tumor heterogeneity, and the possibility that lower dose chemotherapy may sometimes promote longer survival. Underlying these specific advances is a large-scale transformation, as cancer research incorporates evolutionary methods into its toolkit, and asks new evolutionary questions about why we are vulnerable to cancer. Evolution explains why cancer exists at all, how neoplasms grow, why cancer is remarkably rare, and why it occurs despite powerful cancer suppression mechanisms. Cancer exists because of somatic selection; mutations in somatic cells result in some dividing faster than others, in some cases generating neoplasms. Neoplasms grow, or do not, in complex cellular ecosystems. Cancer is relatively rare because of natural selection; our genomes were derived disproportionally from individuals with effective mechanisms for suppressing cancer. Cancer occurs nonetheless for the same six evolutionary reasons that explain why we remain vulnerable to other diseases. These four principles-cancers evolve by somatic selection, neoplasms grow in complex ecosystems, natural selection has shaped powerful cancer defenses, and the limitations of those defenses have evolutionary explanations-provide a foundation for understanding, preventing, and treating cancer.", "title": "" }, { "docid": "1b2991f84433c96c6f0d61378baebbea", "text": "This article analyzes the topic of leadership from an evolutionary perspective and proposes three conclusions that are not part of mainstream theory. First, leading and following are strategies that evolved for solving social coordination problems in ancestral environments, including in particular the problems of group movement, intragroup peacekeeping, and intergroup competition. Second, the relationship between leaders and followers is inherently ambivalent because of the potential for exploitation of followers by leaders. Third, modern organizational structures are sometimes inconsistent with aspects of our evolved leadership psychology, which might explain the alienation and frustration of many citizens and employees. The authors draw several implications of this evolutionary analysis for leadership theory, research, and practice.", "title": "" }, { "docid": "db4b6a75db968868630720f7955d9211", "text": "Bots have been playing a crucial role in online platform ecosystems, as efficient and automatic tools to generate content and diffuse information to the social media human population. In this chapter, we will discuss the role of social bots in content spreading dynamics in social media. 
In particular, we will first investigate some differences between diffusion dynamics of content generated by bots, as opposed to humans, in the context of political communication, then study the characteristics of bots behind the diffusion dynamics of social media spam campaigns.", "title": "" }, { "docid": "b125649628d46871b2212c61e355ec43", "text": "A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131,000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about 10^31.", "title": "" }, { "docid": "34f37a1378f55c1b62f03f2e80c40bb3", "text": "People use social media to express their opinions. Often linguistic devices such as irony are used. From the sentiment analysis perspective, such utterances represent a challenge because irony acts as a polarity reversor (usually from positive to negative). This paper presents an approach to address irony detection from a machine learning perspective. Our model considers structural features as well as, for the first time, sentiment analysis features such as the overall sentiment of a tweet and a score of its polarity. The approach has been evaluated over a set of classifiers such as Naïve Bayes, Decision Tree, Maximum Entropy, Support Vector Machine, and, for the first time in an irony detection task, Multilayer Perceptron. The results obtained showed the ability of our model to distinguish between potentially ironic and non-ironic sentences.", "title": "" } ]
scidocsrr
d3b0e6ff365479c257e492276271a03b
Self-calibration for a 3D laser
[ { "docid": "bb6c42de5906f0f1d83f2be31c6c07e3", "text": "Correlation is a very effective way to align intensity images. We extend the correlation technique to point set registration using a method we call kernel correlation. Kernel correlation is an affinity measure, and it is also a function of the point set entropy. We define the point set registration problem as finding the maximum kernel correlation configuration of the the two point sets to be registered. The new registration method has intuitive interpretations, simple to implement algorithm and easy to prove convergence property. Our method shows favorable performance when compared with the iterative closest point (ICP) and EM-ICP methods.", "title": "" } ]
[ { "docid": "c37e41dd09a9c676e6e6b18f3f518915", "text": "Malicious URLs have been widely used to mount various cyber attacks including spamming, phishing and malware. Detection of malicious URLs and identification of threat types are critical to thwart these attacks. Knowing the type of a threat enables estimation of severity of the attack and helps adopt an effective countermeasure. Existing methods typically detect malicious URLs of a single attack type. In this paper, we propose method using machine learning to detect malicious URLs of all the popular attack types and identify the nature of attack a malicious URL attempts to launch. Our method uses a variety of discriminative features including textual properties, link structures, webpage contents, DNS information, and network traffic. Many of these features are novel and highly effective. Our experimental studies with 40,000 benign URLs and 32,000 malicious URLs obtained from real-life Internet sources show that our method delivers a superior performance: the accuracy was over 98% in detecting malicious URLs and over 93% in identifying attack types. We also report our studies on the effectiveness of each group of discriminative features, and discuss their evadability.", "title": "" }, { "docid": "f1dd866b1cdd79716f2bbc969c77132a", "text": "Fiber optic sensor technology offers the possibility of sensing different parameters like strain, temperature, pressure in harsh environment and remote locations. these kinds of sensors modulates some features of the light wave in an optical fiber such an intensity and phase or use optical fiber as a medium for transmitting the measurement information. The advantages of fiber optic sensors in contrast to conventional electrical ones make them popular in different applications and now a day they consider as a key component in improving industrial processes, quality control systems, medical diagnostics, and preventing and controlling general process abnormalities. This paper is an introduction to fiber optic sensor technology and some of the applications that make this branch of optic technology, which is still in its early infancy, an interesting field. Keywords—Fiber optic sensors, distributed sensors, sensor application, crack sensor.", "title": "" }, { "docid": "2f20bca0134eb1bd9d65c4791f94ddcc", "text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "title": "" }, { "docid": "7535a7351849c5a6dd65611037d06678", "text": "In this paper, we present an optimistic concurrency control solution. The proposed solution represents an excellent blossom in the concurrency control field. It deals with the concurrency control anomalies, and, simultaneously, assures the reliability of the data before read-write transactions and after successfully committed. It can be used within the distributed database to track data logs and roll back processes to overcome distributed database anomalies. 
The method is based on commit timestamps for validation and an integer flag that is incremented each time a successful update on the record is committed.", "title": "" }, { "docid": "79bf7df517aa859d3820e45cf131ea92", "text": "Steep switching Tunnel FETs (TFET) can extend the supply voltage scaling with improved energy efficiency for both digital and analog/RF application. In this paper, recent approaches on III-V Tunnel FET device design, prototype device demonstration, modeling techniques and performance evaluations for digital and analog/RF application are discussed and compared to CMOS technology. The impact of steep switching, uni-directional conduction and negative differential resistance characteristics are explored from circuit design perspective. Circuit-level implementation such as III-V TFET based Adder and SRAM design shows significant improvement on energy efficiency and power reduction below 0.3V for digital application. The analog/RF metric evaluation is presented including gm/Ids metric, temperature sensitivity, parasitic impact and noise performance. TFETs exhibit promising performance for high frequency, high sensitivity and ultra-low power RF rectifier application.", "title": "" }, { "docid": "25c412af8e072bf592ebfa1aa0168aa1", "text": "One of the most promising strategies to improve the bioavailability of active pharmaceutical ingredients is based on the association of the drug with colloidal carriers, for example, polymeric nanoparticles, which are stable in biological environment, protective for encapsulated substances and able to modulate physicochemical characteristics, drug release and biological behaviour. The synthetic polymers possess unique properties due to their chemical structure. Some of them are characterized with mucoadhesiveness; another can facilitate the penetration through mucous layers; or to be stimuli responsive, providing controlled drug release at the target organ, tissues or cells; and all of them are biocompatible and versatile. These are suitable vehicles of nucleic acids, oligonucleotides, DNA, peptides and proteins. This chapter aims to look at the ‘hot spots’ in the design of synthetic polymer nanoparticles as an intelligent drug delivery system in terms of biopharmaceutical challenges and in relation to the route of their administration: the non-invasive—oral, transdermal, transmucosal (nasal, buccal/sublingual, vaginal, rectal and ocular) and inhalation routes—and the invasive parenteral route.", "title": "" }, { "docid": "7941642359c725a96847c012aa11a84e", "text": "We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain non-asymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants, showing (theoretical) linear speedup due to minibatching in parallel settings.", "title": "" }, { "docid": "7c9e89cb3384a34195fd6035cd2e75a0", "text": "Manual analysis of pedestrians and crowds is often impractical for massive datasets of surveillance videos. 
Automatic tracking of humans is one of the essential abilities for computerized analysis of such videos. In this keynote paper, we present two state of the art methods for automatic pedestrian tracking in videos with low and high crowd density. For videos with low density, first we detect each person using a part-based human detector. Then, we employ a global data association method based on Generalized Graphs for tracking each individual in the whole video. In videos with high crowd-density, we track individuals using a scene structured force model and crowd flow modeling. Additionally, we present an alternative approach which utilizes contextual information without the need to learn the structure of the scene. Performed evaluations show the presented methods outperform the currently available algorithms on several benchmarks.", "title": "" }, { "docid": "7a8619e3adf03c8b00a3e830c3f1170b", "text": "We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.", "title": "" }, { "docid": "5eab47907e673449ad73ec6cef30bc07", "text": "Three-dimensional circuits built upon multiple layers of polyimide are required for constructing Si/SiGe monolithic microwave/mm-wave integrated circuits on low resistivity Si wafers. However, the closely spaced transmission lines are susceptible to high levels of cross-coupling, which degrades the overall circuit performance. In this paper, theoretical and experimental results on coupling of Finite Ground Coplanar (FGC) waveguides embedded in polyimide layers are presented for the first time. These results show that FGC lines have approximately 8 dB lower coupling than coupled Coplanar Waveguides. Furthermore, it is shown that the forward and backward coupling characteristics for FGC lines do not resemble the coupling characteristics of other transmission lines such as microstrip.", "title": "" }, { "docid": "19648747906741138f57e7ad8df3b99a", "text": "We propose a novel despeckling algorithm for synthetic aperture radar (SAR) images based on the concepts of nonlocal filtering and wavelet-domain shrinkage. 
It follows the structure of the block-matching 3-D algorithm, recently proposed for additive white Gaussian noise denoising, but modifies its major processing steps in order to take into account the peculiarities of SAR images. A probabilistic similarity measure is used for the block-matching step, while the wavelet shrinkage is developed using an additive signal-dependent noise model and looking for the optimum local linear minimum-mean-square-error estimator in the wavelet domain. The proposed technique compares favorably w.r.t. several state-of-the-art reference techniques, with better results both in terms of signal-to-noise ratio (on simulated speckled images) and of perceived image quality.", "title": "" }, { "docid": "2d4e89df9c3e54add8a9d54a963c9910", "text": "The tremendous amount of the data obtained from the study of complex biological systems changes our view on the pathogenesis of human diseases. Instead of looking at individual components of biological processes, we focus our attention more on the interaction and dynamics of biological systems. A network representation and analysis of the physiology and pathophysiology of biological systems is an effective way to study their complex behavior. Specific perturbations can trigger cascades of failures, which lead to the malfunctioning of cellular networks and as a result to the development of specific diseases. In this review we discuss recent developments in the field of disease network analysis and highlight some of the topics and views that we think are important for understanding network-based disease mechanisms.", "title": "" }, { "docid": "7c09cb7f935e2fb20a4d2e56a5471e61", "text": "This paper proposes and evaluates an approach to the parallelization, deployment and management of bioinformatics applications that integrates several emerging technologies for distributed computing. The proposed approach uses the MapReduce paradigm to parallelize tools and manage their execution, machine virtualization to encapsulate their execution environments and commonly used data sets into flexibly deployable virtual machines, and network virtualization to connect resources behind firewalls/NATs while preserving the necessary performance and the communication environment. An implementation of this approach is described and used to demonstrate and evaluate the proposed approach. The implementation integrates Hadoop, Virtual Workspaces, and ViNe as the MapReduce, virtual machine and virtual network technologies, respectively, to deploy the commonly used bioinformatics tool NCBI BLAST on a WAN-based test bed consisting of clusters at two distinct locations, the University of Florida and the University of Chicago. This WAN-based implementation, called CloudBLAST, was evaluated against both non-virtualized and LAN-based implementations in order to assess the overheads of machine and network virtualization, which were shown to be insignificant. To compare the proposed approach against an MPI-based solution, CloudBLAST performance was experimentally contrasted against the publicly available mpiBLAST on the same WAN-based test bed. Both versions demonstrated performance gains as the number of available processors increased, with CloudBLAST delivering speedups of 57 against 52.4 of MPI version, when 64 processors on 2 sites were used. 
The results encourage the use of the proposed approach for the execution of large-scale bioinformatics applications on emerging distributed environments that provide access to computing resources as a service.", "title": "" }, { "docid": "294cf53076b35d95b1a57ba8ff6202c5", "text": "Cloud computing services are widely used nowadays and need to be more secured for an effective exploitation by the users. One of the most challenging issues in these environments is the security of the hosted data. Many cloud computing providers offer web applications for their clients, this is why the most handling attacks in cloud computing are Distributed Denial of Service (DDoS). In this paper, we provide a comparative performance analysis of intrusion detection systems (IDSs) in a real world lab. The aim is to provide an up to date study for researchers and practitioners to understand the issues related to intrusion detection and to deal with DDoS attacks. This analysis includes intrusion detection rates, time running, etc. In the experiments, we configured a cloud platform using OpenStack and an IDS monitoring the whole network traffic of the web server configured. The results show that Suricata drops fewer packets than Bro and Snort successively when a DDoS attack is happening and detect more malicious packets.", "title": "" }, { "docid": "f1699e1e87ef2e95357c834384f77931", "text": "Catastrophic forgetting is a problem of neural networks that loses the information of the first task after training the second task. Here, we propose a method, i.e. incremental moment matching (IMM), to resolve this problem. IMM incrementally matches the moment of the posterior distribution of the neural network which is trained on the first and the second task, respectively. To make the search space of posterior parameter smooth, the IMM procedure is complemented by various transfer learning techniques including weight transfer, L2-norm of the old and the new parameter, and a variant of dropout with the old parameter. We analyze our approach on a variety of datasets including the MNIST, CIFAR-10, Caltech-UCSDBirds, and Lifelog datasets. The experimental results show that IMM achieves state-of-the-art performance by balancing the information between an old and a new network.", "title": "" }, { "docid": "3fd46b96983b317973a62c8d3e458bdf", "text": "There are lots of big companies that would love to switch from their big legacy systems to avoid compromises in functionality, make them more agile, lower IT costs, and help them to become faster to market. This article describes how they can make the move.", "title": "" }, { "docid": "fec764b69df58c44d9740ce231e77cb9", "text": "Ontologies are the backbone of the Semantic Web, a semantic-aware version of the World Wide Web. The availability of large-scale high quality domain ontologies depends on effective and usable methodologies aimed at supporting the crucial process of ontology building. Ontology building exhibits a structural and logical complexity that is building methodology that capitalizes the large experience drawn from a widely used standard in software engineering: the Unified Software Development Process or Unified Process (UP). In particular, we propose UP for ONtology (UPON) building, a methodology for ontology building derived from the UP. UPON is presented with the support of a practical example in the eBusiness domain. 
A comparative evaluation with other methodologies and the results of its adoption in the context of the Athena EU Integrated Project are also discussed. © 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0d398b38f0767560c433384541cf4941", "text": "One of the weaknesses of current supervised word sense disambiguation (WSD) systems is that they only treat a word as a discrete entity. However, a continuous-space representation of words (word embeddings) can provide valuable information and thus improve generalization accuracy. Since word embeddings are typically obtained from unlabeled data using unsupervised methods, this method can be seen as a semi-supervised word sense disambiguation approach. This paper investigates two ways of incorporating word embeddings in a word sense disambiguation setting and evaluates these two methods on some SensEval/SemEval lexical sample and all-words tasks and also a domain-specific lexical sample task. The obtained results show that such representations consistently improve the accuracy of the selected supervised WSD system. Moreover, our experiments on a domain-specific dataset show that our supervised baseline system beats the best knowledge-based systems by a large margin.", "title": "" }, { "docid": "a9ebd89c2f9c9b33ed9c69b4a9da221a", "text": "Continuum robots, which have continuous mechanical structures comparable to the flexibility in elephant trunks and octopus arms, have been primarily geared toward the medical and defense communities. In space, however, NASA projects these robots to have a place in irregular inspection routines. The inherent compliance and bending of these continuum arms are especially suitable for inspection in obstructed spaces to ensure proper equipment functionality. In this paper, we propose a new solution that improves on the functionality of previous continuum robots, via a novel mechanical scaly layer-jamming design. Layer-jamming assisted continuum arms have previously required pneumatic sources for actuation, which limit their portability and usage in aerospace applications. This paper combines the compliance of continuum arms and stiffness modulation of the layer jamming mechanism to design a new hybrid layer jamming continuum arm. The novel design uses electromechanical actuation, which eliminates the pneumatic actuation and therefore makes the arm compact and portable.", "title": "" } ]
scidocsrr
b6b59a1ad377524229b35f1c7f4d32f0
Embedded Binarized Neural Networks
[ { "docid": "27ad413fa5833094fb2e557308fa761d", "text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.", "title": "" } ]
[ { "docid": "a86c79f52fc8399ab00430459d4f0737", "text": "Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithmsperform, andwhich are their possible biases thatmay impair their effectiveness. Many popular ranking algorithms (such as Google’s PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks.We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7b5803f0bae7ee210fceae680d9d3840", "text": "The aim of this work is to extract the road network from aerial images. What makes the problem challenging is the complex structure of the prior: roads form a connected network of smooth, thin segments which meet at junctions and crossings. This type of a-priori knowledge is more difficult to turn into a tractable model than standard smoothness or co-occurrence assumptions. We develop a novel CRF formulation for road labeling, in which the prior is represented by higher-order cliques that connect sets of super pixels along straight line segments. These long-range cliques have asymmetric PN-potentials, which express a preference to assign all rather than just some of their constituent super pixels to the road class. Thus, the road likelihood is amplified for thin chains of super pixels, while the CRF is still amenable to optimization with graph cuts. Since the number of such cliques of arbitrary length is huge, we furthermore propose a sampling scheme which concentrates on those cliques which are most relevant for the optimization. In experiments on two different databases the model significantly improves both the per-pixel accuracy and the topological correctness of the extracted roads, and outperforms both a simple smoothness prior and heuristic rule-based road completion.", "title": "" }, { "docid": "b713da979bc3d01153eaae8827779b7b", "text": "Chronic lower leg pain results from various conditions, most commonly, medial tibial stress syndrome, stress fracture, chronic exertional compartment syndrome, nerve entrapment, and popliteal artery entrapment syndrome. Symptoms associated with these conditions often overlap, making a definitive diagnosis difficult. As a result, an algorithmic approach was created to aid in the evaluation of patients with complaints of lower leg pain and to assist in defining a diagnosis by providing recommended diagnostic studies for each condition. 
A comprehensive physical examination is imperative to confirm a diagnosis and should begin with an inquiry regarding the location and onset of the patient's pain and tenderness. Confirmation of the diagnosis requires performing the appropriate diagnostic studies, including radiographs, bone scans, magnetic resonance imaging, magnetic resonance angiography, compartmental pressure measurements, and arteriograms. Although most conditions causing lower leg pain are treated successfully with nonsurgical management, some syndromes, such as popliteal artery entrapment syndrome, may require surgical intervention. Regardless of the form of treatment, return to activity must be gradual and individualized for each patient to prevent future athletic injury.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "53267e7e574dce749bb3d5877640e017", "text": "After a decline in enthusiasm for national community health worker (CHW) programmes in the 1980s, these have re-emerged globally, particularly in the context of HIV. This paper examines the case of South Africa, where there has been rapid growth of a range of lay workers (home-based carers, lay counsellors, DOT supporters etc.) principally in response to an expansion in budgets and programmes for HIV, most recently the rollout of antiretroviral therapy (ART). In 2004, the term community health worker was introduced as the umbrella concept for all the community/lay workers in the health sector, and a national CHW Policy Framework was adopted. We summarize the key features of the emerging national CHW programme in South Africa, which include amongst others, their integration into a national public works programme and the use of non-governmental organizations as intermediaries. We then report on experiences in one Province, Free State. Over a period of 2 years (2004--06), we made serial visits on three occasions to the first 16 primary health care facilities in this Province providing comprehensive HIV services, including ART. At each of these visits, we did inventories of CHW numbers and training, and on two occasions conducted facility-based group interviews with CHWs (involving a total of 231 and 182 participants, respectively). We also interviewed clinic nurses tasked with supervising CHWs. From this evaluation we concluded that there is a significant CHW presence in the South African health system. This infrastructure, however, shares many of the managerial challenges (stability, recognition, volunteer vs. 
worker, relationships with professionals) associated with previous national CHW programmes, and we discuss prospects for sustainability in the light of the new policy context.", "title": "" }, { "docid": "07cfc30244cb9269861a7db9ad594ad4", "text": "In this paper we report on results from a cross-sectional survey with manufacturers in four typical Chinese industries, i.e., power generating, chemical/petroleum, electrical/electronic and automobile, to evaluate their perceived green supply chain management (GSCM) practices and relate them to closing the supply chain loop. Our findings provide insights into the capabilities of Chinese organizations on the adoption of GSCM practices in different industrial contexts and that these practices are not considered equitably across the four industries. Academic and managerial implications of our findings are discussed. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "895f167ee92d959c6919ff5181d50326", "text": "Deep learning methods achieve great success recently on many computer vision problems. In spite of these practical successes, optimization of deep networks remains an active topic in deep learning research. In this work, we focus on investigation of the network solution properties that can potentially lead to good performance. Our research is inspired by theoretical and empirical results that use orthogonal matrices to initialize networks, but we are interested in investigating how orthogonal weight matrices perform when network training converges. To this end, we propose to constrain the solutions of weight matrices in the orthogonal feasible set during the whole process of network training, and achieve this by a simple yet effective method called Singular Value Bounding (SVB). In SVB, all singular values of each weight matrix are simply bounded in a narrow band around the value of 1. Based on the same motivation, we also propose Bounded Batch Normalization (BBN), which improves Batch Normalization by removing its potential risk of ill-conditioned layer transform. We present both theoretical and empirical results to justify our proposed methods. Experiments on benchmark image classification datasets show the efficacy of our proposed SVB and BBN. In particular, we achieve the state-of-the-art results of 3.06% error rate on CIFAR10 and 16.90% on CIFAR100, using off-the-shelf network architectures (Wide ResNets). Our preliminary results on ImageNet also show the promise in large-scale learning. We release the implementation code of our methods at www.aperture-lab.net/research/svb.", "title": "" }, { "docid": "51fe6376956593cb8a2e4de3b37cb8fe", "text": "The human musculoskeletal system is supposed to play an important role in doing various static and dynamic tasks. From this standpoint, some musculoskeletal humanoid robots have been developed in recent years. However, existing musculoskeletal robots did not have upper body with several DOFs to balance their bodies statically or did not have enough power to perform dynamic tasks. We think the musculoskeletal structure has two significant properties: whole-body flexibility and whole-body coordination. Using these two properties can enable us to make robots' performance better than before. In this study, we developed a humanoid robot with a musculoskeletal system that is driven by pneumatic artificial muscles. To demonstrate the robot's capability in static and dynamic tasks, we conducted two experiments. 
As a static task, we conducted a standing experiment using a simple feedback control and evaluated the stability by applying an impulse to the robot. As a dynamic task, we conducted a walking experiment using a feedforward controller with human muscle activation patterns and confirmed that the robot was able to perform the dynamic task.", "title": "" }, { "docid": "209203c297898a2251cfd62bdfc37296", "text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.", "title": "" }, { "docid": "09f36704e0bbd914f7ce6b5c7e0da228", "text": "Studies have repeatedly shown that users are increasingly concerned about their privacy when they go online. In response to both public interest and regulatory pressures, privacy policies have become almost ubiquitous. An estimated 77% of websites now post a privacy policy. These policies differ greatly from site to site, and often address issues that are different from those that users care about. They are in most cases the users' only source of information.This paper evaluates the usability of online privacy policies, as well as the practice of posting them. We analyze 64 current privacy policies, their accessibility, writing, content and evolution over time. We examine how well these policies meet user needs and how they can be improved. We determine that significant changes need to be made to current practice to meet regulatory and usability requirements.", "title": "" }, { "docid": "ca7380c0b194aa5308f3329205b6e211", "text": "Endopolyploidy was observed in the protocorms of diploid Phalaenopsis aphrodite subsp. formosana with ploidy doubling achieved by in vitro regeneration of excised protocorms, or protocorm-like bodies (PLBs). Thirty-four per cent of the PLBs regenerated from the first cycle of sectioned protocorms were found to be polyploids with ploidy doubled once or twice as determined by flow-cytometry. The frequency of ploidy doubling increased as the sectioning cycles increased and was highest in diploid followed by the triploid and tetraploid. Regeneration of the endopolyploid cells in the tissue of the protocorms or PLBs is proposed as the source of the development of ploidy doubled plantlets. The frequency of ploidy doubling was similar in seven other Phalaenopsis species, although the rate of increase within cycles was genotype specific. In two species, a comparison of five parameters between 5-month-old diploid and tetraploid potted plants showed only the stomata density differed significantly. The flowers of the tetraploid plant were larger and heavier than those of the diploids. This ploidy doubling method is a simple and effective means to produce large number of polyploid Phalaenopsis species plants as well as their hybrids. 
The method will be beneficial to orchid breeding programs especially for the interspecific hybridization between varieties having different chromosome sizes and ploidy levels.", "title": "" }, { "docid": "3e7d307c8510b23faa576b1e29532bb1", "text": "This thesis describes how multimodal sensor data from a 3D sensor and microphone array can be processed with deep neural networks such that its fusion, the trained neural network, is a) more robust to noise, b) outperforms unimodal recognition and c) enhances unimodal recognition in absence of multimodal data. We built a framework for a complete workflow to experiment with multimodal sensor data ranging from recording (with Kinect 3D sensor), labeling, 3D signal processing, analysing and replaying. We also built three custom recognizers (automatic speech recognizer, 3D object recognizer and 3D gesture recognizer) to convert the raw sensor streams to decisions and feed this to the neural network using a late fusion strategy. We recorded 25 participants performing 27 unique verbal and gestural interactions (intents) with objects and trained the neural network using a supervised strategy. We proved that the framework works by building a deep neural networks assisted speech recognizer that performs approximately 5% better with multimodal data at 20 dB SNR up to 61% better with multimodal data at -5 dB SNR while performing identical to the individual recognizer when fed a unimodal datastream. Analysis shows that performance gain in low acoustic noise is due to true fusion of classifier results while gain at high acoustic noise is due to absence of speech results as it cannot detect speech events anymore, while the gesture recognizer is not affected. The impact of this thesis is significant for computational linguists and computer vision researchers as it describes how practical issues with (real and) real-time data can be solved such as dealing with sensor noise, GPU offloading for computational performance, 3D object and hand tracking. The speech-, object- and gesture recognizers are not state-of-the-art and the small vocabulary with 27 unique phrases and 9 objects can be considered a preliminary experiment. The main contributions of this thesis project are a) validated multimodal fusion framework and workflow for embodied natural language understanding named MASU, b) 600GB, 2,5 hour labelled multimodal database with synchronous multi channel audio and 3D video, c) algorithm for 3D hand-object detection and tracking, d) recipe to train a deep neural network model for multimodal fusion and e) demonstrate MASU in practical real-time scenario. Faculty: Faculty of Electrical Engineering, Mathematics and Computer Science Department: Intelligent Systems Committee members: Prof. M.A. Larson TU Delft Dr. Ir. E.A. Hendriks TU Delft Dr. M.J. Tax TU Delft", "title": "" }, { "docid": "e243677212e628d84d5e207fe451ce43", "text": "Based on analysis of the structure and control requirements of ice-storage air conditioning system, a distributed control system design was introduced. The hardware environment was mainly based on Programmable Logic Controller (PLC), and a touching screen was also applied as the local platforms of SCADA (Supervisory Control and Data Acquisition); the software were CX-Programmer 7.1 and EV5000 configuration software respectively. Test results show that the PLC based control system is not only capable of running stably and reliably, but also has higher control accuracy.
The touching screen can communicate precisely with PLC, and monitor and control the statuses of ice-storage air conditioning system promptly via MPI(Multi-Point Interface) protocol.", "title": "" }, { "docid": "3a3759cdfa7523de2bf99cfd1c82ba1f", "text": "In remote-sensing classification, there are situations when users are only interested in classifying one specific land-cover type, without considering other classes. These situations are referred to as one-class classification. Traditional supervised learning is inefficient for one-class classification because it requires all classes that occur in the image to be exhaustively assigned labels. In this paper, we investigate a new positive and unlabeled learning (PUL) algorithm, applying it to one-class classifications of two scenes of a high-spatial-resolution aerial photograph. The PUL algorithm trains a classifier on positive and unlabeled data, estimates the probability that a positive training sample has been labeled, and generates binary predictions for test samples using an adjusted threshold. Experimental results indicate that the new algorithm provides high classification accuracy, outperforming the biased support-vector machine (SVM), one-class SVM, and Gaussian domain descriptor methods. The advantages of the new algorithm are that it can use unlabeled data to help build classifiers, and it requires only a small set of positive data to be labeled by hand. Therefore, it can significantly reduce the effort of assigning labels to training data without losing predictive accuracy.", "title": "" }, { "docid": "478f0ac1084fb9b0eb1354d9627d8507", "text": "BACKGROUND\nFemale genital tract anomalies including imperforate hymen affect sexual life and fertility.\n\n\nCASE PRESENTATION\nIn the present case, we describe a pregnant woman diagnosed with imperforate hymen which never had penetrative vaginal sex. A 27-year-old married patient with 2 months of amenorrhea presented in a clinic without any other complications. Her history of difficult intercourse and prolonged menstrual flow were reported, and subsequent vaginal examination confirmed the diagnosis of imperforate hymen even though she claims to made pinhole surgery in hymen during puberty. Her urine pregnancy test was positive, and an ultrasound examination revealed 8.3 weeks pregnant. The pregnancy was followed up to 39.5 weeks when she entered in cesarean delivery in urgency. Due to perioperative complications in our study, a concomitant hymenotomy was successfully performed. The patient was discharged with the baby, and vaginal anatomy was restored.\n\n\nCONCLUSIONS\nThis case study suggests that even though as microperforated hymen surgery in puberty can permit pregnancy and intervention with cesarean section and hymenotomy is a good option to reduce the resulting perioperative complications which indirectly affect the increase of the fertilisation and improvement of later sexual life.", "title": "" }, { "docid": "e2df843bd6b491e904cc98f746c3314a", "text": "Cryonic suspension is a relatively new technology that offers those who can afford it the chance to be 'frozen' for future revival when they reach the ends of their lives. This paper will examine the ethical status of this technology and whether its use can be justified. 
Among the arguments against using this technology are: it is 'against nature', and would change the very concept of death; no friends or family of the 'freezee' will be left alive when he is revived; the considerable expense involved for the freezee and the future society that will revive him; the environmental cost of maintaining suspension; those who wish to use cryonics might not live life to the full because they would economize in order to afford suspension; and cryonics could lead to premature euthanasia in order to maximize chances of success. Furthermore, science might not advance enough to ever permit revival, and reanimation might not take place due to socio-political or catastrophic reasons. Arguments advanced by proponents of cryonics include: the potential benefit to society; the ability to cheat death for at least a few more years; the prospect of immortality if revival is successful; and all the associated benefits that delaying or avoiding dying would bring. It emerges that it might be imprudent not to use the technology, given the relatively minor expense involved and the potential payoff. An adapted and more persuasive version of Pascal's Wager is presented and offered as a conclusive argument in favour of utilizing cryonic suspension.", "title": "" }, { "docid": "7457c09c1068ba1397f468879bc3b0d1", "text": "Genome editing has potential for the targeted correction of germline mutations. Here we describe the correction of the heterozygous MYBPC3 mutation in human preimplantation embryos with precise CRISPR–Cas9-based targeting accuracy and high homology-directed repair efficiency by activating an endogenous, germline-specific DNA repair response. Induced double-strand breaks (DSBs) at the mutant paternal allele were predominantly repaired using the homologous wild-type maternal gene instead of a synthetic DNA template. By modulating the cell cycle stage at which the DSB was induced, we were able to avoid mosaicism in cleaving embryos and achieve a high yield of homozygous embryos carrying the wild-type MYBPC3 gene without evidence of off-target mutations. The efficiency, accuracy and safety of the approach presented suggest that it has potential to be used for the correction of heritable mutations in human embryos by complementing preimplantation genetic diagnosis. However, much remains to be considered before clinical applications, including the reproducibility of the technique with other heterozygous mutations.", "title": "" }, { "docid": "dfa5343bbeffc89cdd86afb2e5b3d2ae", "text": "We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GAN). Firstly, we propose a new generator objective that finds it better to tackle mode collapse. And, we apply an independent Autoencoders (AE) to constrain the generator and consider its reconstructed samples as “real” samples to slow down the convergence of discriminator that enables to reduce the gradient vanishing problem and stabilize the model. Secondly, from mappings between latent and data spaces provided by AE, we further regularize AE by the relative distance between the latent and data samples to explicitly prevent the generator falling into mode collapse setting. This idea comes when we find a new way to visualize the mode collapse on MNIST dataset. To the best of our knowledge, our method is the first to propose and apply successfully the relative distance of latent and data samples for stabilizing GAN. 
Thirdly, our proposed model, namely Generative Adversarial Autoencoder Networks (GAAN), is stable and has suffered from neither gradient vanishing nor mode collapse issues, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method can approximate well multi-modal distribution and achieve better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here.", "title": "" }, { "docid": "8d0f80611b751565311ef84d5655802c", "text": "We present a computational model for periodic pattern perception based on the mathematical theory of crystallographic groups. In each N-dimensional Euclidean space, a finite number of symmetry groups can characterize the structures of an infinite variety of periodic patterns. In 2D space, there are seven frieze groups describing monochrome patterns that repeat along one direction and 17 wallpaper groups for patterns that repeat along two linearly independent directions to tile the plane. We develop a set of computer algorithms that \"understand\" a given periodic pattern by automatically finding its underlying lattice, identifying its symmetry group, and extracting its representative motifs. We also extend this computational model for near-periodic patterns using geometric AIC. Applications of such a computational model include pattern indexing, texture synthesis, image compression, and gait analysis.", "title": "" }, { "docid": "3d7e03b79ffb49f61a97a6a95264a60e", "text": "Hacktivism is the biggest challenge being faced by the Cyber world. Many digital forensic tools are being developed to deal with this challenge but at the same pace hackers are developing the counter techniques. This paper includes the digital forensics basics along with the recent trends of hacktivism in social networking sites, cloud computing, websites and phishing. The various tools of forensics with the platform supported, the recent versions and licensing details are discussed. The paper extends with the current challenges being faced by digital forensics.", "title": "" } ]
scidocsrr
d275b25164009fb1f1d379cb7501a2cc
Keyphrase Extraction Using Deep Recurrent Neural Networks on Twitter
[ { "docid": "a88cf59f9ca2b3181b7311f6fc2db159", "text": "Summarizing and analyzing Twitter content is an important and challenging task. In this paper, we propose to extract topical keyphrases as one way to summarize Twitter. We propose a context-sensitive topical PageRank method for keyword ranking and a probabilistic scoring function that considers both relevance and interestingness of keyphrases for keyphrase ranking. We evaluate our proposed methods on a large Twitter data set. Experiments show that these methods are very effective for topical keyphrase extraction.", "title": "" }, { "docid": "fb2028ca0e836452862a2cb1fa707d28", "text": "State-of-the-art approaches for unsupervised keyphrase extraction are typically evaluated on a single dataset with a single parameter setting. Consequently, it is unclear how effective these approaches are on a new dataset from a different domain, and how sensitive they are to changes in parameter settings. To gain a better understanding of state-of-the-art unsupervised keyphrase extraction algorithms, we conduct a systematic evaluation and analysis of these algorithms on a variety of standard evaluation datasets.", "title": "" } ]
[ { "docid": "d53db1dc155c983399a812bbfffa1fb1", "text": "We present a framework combining hierarchical and multi-agent deep reinforcement learning approaches to solve coordination problems among a multitude of agents using a semi-decentralized model. The framework extends the multi-agent learning setup by introducing a meta-controller that guides the communication between agent pairs, enabling agents to focus on communicating with only one other agent at any step. This hierarchical decomposition of the task allows for efficient exploration to learn policies that identify globally optimal solutions even as the number of collaborating agents increases. We show promising initial experimental results on a simulated distributed scheduling problem.", "title": "" }, { "docid": "471c52fca57c672267ef69e3e3db9cd9", "text": "This paper presents the approach of extending cellular networks with millimeter-wave backhaul and access links. Introducing a logical split between control and user plane will permit full coverage while seamlessly achieving very high data rates in the vicinity of mm-wave small cells.", "title": "" }, { "docid": "af4e6b3f6e5326872d308d3b7e8c4c7d", "text": "New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the new methods to be competitive with available schemes.", "title": "" }, { "docid": "e6a6d7d4304fe14798597fbd5eae7ba5", "text": "BACKGROUND\nA significant proportion of trauma survivors experience an additional critical life event in the aftermath. These renewed experiences of traumatic and stressful life events may lead to an increase in trauma-related mental health symptoms.\n\n\nMETHOD\nIn a longitudinal study, the effects of renewed experiences of a trauma or stressful life event were examined. For this purpose, refugees seeking asylum in Germany were assessed for posttraumatic stress symptoms (PTS), Posttraumatic Stress Diagnostic Scale (PDS), anxiety, and depression (Hopkins Symptom Checklist [HSCL-25]) before treatment start as well as after 6 and 12 months during treatment (N=46). Stressful life events and traumatic events were recorded monthly. If a new event happened, PDS and HSCL were additionally assessed directly afterwards. Mann-Whitney U-tests were performed to calculate the differences between the group that experienced an additional critical event (stressful vs. trauma) during treatment (n=23) and the group that did not (n=23), as well as differences within the critical event group between the stressful life event group (n=13) and the trauma group (n=10).\n\n\nRESULTS\nRefugees improved significantly during the 12-month period of our study, but remained severely distressed. In a comparison of refugees with a new stressful life event or trauma, significant increases in PTS, anxiety, and depressive symptoms were found directly after the experience, compared to the group without a renewed event during the 12 months of treatment. With regard to the different critical life events (stressful vs. trauma), no significant differences were found regarding overall PTS, anxiety, and depression symptoms. 
Only avoidance symptoms increased significantly in the group experiencing a stressful life event.\n\n\nCONCLUSION\nAlthough all clinicians should be aware of possible PTS symptom reactivation, especially those working with refugees and asylum seekers, who often experience new critical life events, should understand symptom fluctuation and address it in treatment.", "title": "" }, { "docid": "02bd814b19eacf70339218f910c9a644", "text": "BACKGROUND\nAlthough \"traditional\" face-lifting techniques can achieve excellent improvement along the jawline and neck, they often have little impact on the midface area. Thus, many different types of procedures have been developed to provide rejuvenation in this region, usually contemplating various dissection planes, incisions, and suspension vectors.\n\n\nMETHODS\nA 7-year observational study of 350 patients undergoing midface lift was analyzed. The authors suspended the midface flap, anchoring to the deep temporal aponeurosis with a suspender-like suture (superolateral vector), or directly to the lower orbital rim with a belt-like suture (superomedial vector). Subjective and objective methods were used to evaluate the results. The subjective methods included a questionnaire completed by the patients. The objective method involved the evaluation of preoperative and postoperative photographs by a three-member jury instructed to compare the \"critical\" anatomical areas of the midface region: malar eminence, nasojugal groove, nasolabial fold, and jowls in the lower portion of the cheeks. The average follow-up period was 24 months.\n\n\nRESULTS\nHigh satisfaction was noticeable from the perceptions of both the jury and the patients. Objective evaluation evidenced that midface lift with temporal anchoring was more efficient for the treatment of malar eminence, whereas midface lift with transosseous periorbital anchoring was more efficient for the treatment of nasojugal groove.\n\n\nCONCLUSIONS\nThe most satisfying aspect of the adopted techniques is a dramatic facial rejuvenation and preservation of the patient's original youthful identity. Furthermore, choosing the most suitable technique respects the patient's needs and enables correction of the specific defects.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.", "title": "" }, { "docid": "177b020fd9cd0fec6d6f01bdb6114b97", "text": "A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first-and zero-order phase corrections each by the inverse multiplication of estimated phase error. The first-order error is estimated by the phase of autocorrelation calculated from the complex valued phase distorted image while the zero-order correction factor is extracted from the histogram of phase distribution of the first-order corrected image. Since all the correction procedures are performed on the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm can be applicable to most of the phase-involved NMR imaging techniques including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging, etc. 
Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.", "title": "" }, { "docid": "87a256b5e67b97cf4a11b5664a150295", "text": "This paper presents a method for speech emotion recognition using spectrograms and deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model consisting of three convolutional layers and three fully connected layers extract discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotions recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.", "title": "" }, { "docid": "569ea63f1a523c4040e195b7eb9323e9", "text": "Doubt about the role of stretch reflexes in movement and posture control has remained in part because the questions of reflex “usefulness” and the postural “set” have not been adequately considered in the design of experimental paradigms. The intent of this study was to discover the stabilizing role of stretch reflexes acting upon the ankle musculature while human subjects performed stance tasks requiring several different postural “sets”. Task specific differences of reflex function were investigated by experiments in which the role of stretch reflexes to stabilize sway during stance could be altered to be useful, of no use, or inappropriate. Because the system has available a number of alternate inputs to posture (e.g., vestibular and visual), stretch reflex responses were in themselves not necessary to prevent a loss of balance. Nevertheless, 5 out of 12 subjects in this study used long-latency (120 msec) stretch reflexes to help reduce postural sway. Following an unexpected change in the usefulness of stretch reflexes, the 5 subjects progressively altered reflex gain during the succeeding 3–5 trials. Adaptive changes in gain were always in the sense to reduce sway, and therefore could be attenuating or facilitating the reflex response. Comparing subjects using the reflex with those not doing so, stretch reflex control resulted in less swaying when the task conditions were unchanging. However, the 5 subjects using reflex controls oftentimes swayed more during the first 3–5 trials after a change, when inappropriate responses were elicited. Four patients with clinically diagnosed cerebellar deficits were studied briefly. Among the stance tasks, their performance was similar to normal in some and significantly poorer in others. Their most significant deficit appeared to be the inability to adapt long-latency reflex gain following changes in the stance task. The study concludes with a discussion of the role of stretch reflexes within a hierarchy of controls ranging from muscle stiffness up to centrally initiated responses.", "title": "" }, { "docid": "8df98bd1576f3de19c1626322b3c66ef", "text": "Image segmentation is the most important part in digital image processing. Segmentation is nothing but a portion of any image and object. In image segmentation, digital image is divided into multiple set of pixels. 
Image segmentation is generally required to cut out region of interest (ROI) from an image. Currently there are many different algorithms available for image segmentation. Each have their own advantages and purpose. In this paper, different image segmentation algorithms with their prospects are reviewed.", "title": "" }, { "docid": "a5100088eb2e5cdc66fcf135bbe6e336", "text": "In the context of protein structure prediction, there are two principle reasons for comparing and aligning protein sequences: (a) To obtain an accurate alignment. This may be for protein modelling by comparison to proteins of known three-dimensional structure. (b) To scan a database with a newly determined protein sequence and identify possible functions for the protein by analogy with well-characterized proteins. In this chapter I review the underlying principles and techniques for sequence comparison as applied to proteins and used to satisfy these two aims. 2, Amino acid scoring schemes All algorithms to compare protein sequences rely on some scheme to score the equivalencing of each of the 2L0 possible pairs of amino acids, (i.e. 190 pairs of different amino acids plus 20 pairs of identical amino acids). Most scoring schemes represent the 210 pairs of scores as a 20 x 20 matrix of similarities where identical amino acids and those of similar character (e.g. I, L) give higher scores compared to those of different character (e.g. I, D). Since the first protein sequences were obtained, many different types of scoring scheme have been devised. The most commonly used are those based on observed substitution and of these, the t976 Dayhoff matrix for 250 PAMS (1) has until recently dominated. This and other schemes are discussed in the following sections. 2.1 Identity scoring This is the simplest scoring scheme: amino acid pairs are classified into two types; identical and non-identical. Non-identical pairs are scored zero and", "title": "" }, { "docid": "f0a22a060fe9df0c2ea46f8d9639a093", "text": "Discourse structure is the hidden link between surface features and document-level properties, such as sentiment polarity. We show that the discourse analyses produced by Rhetorical Structure Theory (RST) parsers can improve document-level sentiment analysis, via composition of local information up the discourse tree. First, we show that reweighting discourse units according to their position in a dependency representation of the rhetorical structure can yield substantial improvements on lexicon-based sentiment analysis. Next, we present a recursive neural network over the RST structure, which offers significant improvements over classificationbased methods.", "title": "" }, { "docid": "767bb7977550e61e584f9140e72d242b", "text": "Two common Fourier imaging algorithms used in ground penetrating radar (GPR), synthetic aperture radar (SAR), and frequency-wavenumber (F-K) migration, are reviewed and compared from a theoretical perspective. The two algorithms, while arising from seemingly different physical models: a point-scatterer model for SAR and the exploding source model for F-K migration, result in similar imaging equations. Both algorithms are derived from an integral equation formulation of the inverse scalar wave problem, which allows a clear understanding of the approximations being made in each algorithm and allows a direct comparison. This derivation brings out the similarities of the two techniques which are hidden by the traditional formulations based on physical scattering models. 
The comparison shows that the approximations required to derive each technique from the integral equation formulation of the inverse problem are nearly identical, and hence the two imaging algorithms and physical models are making similar assumptions about the solution to the inverse problem, thus clarifying why the imaging equations are so similar. Sample images of landmine-like targets buried in sand are obtained from experimental GPR data using both algorithms.", "title": "" }, { "docid": "be4d9686e2730b67a383d730c1761e8b", "text": "Many factors have been cited for poor performance of students in CS1. To investigate how assessment mechanisms may impact student performance, nine experienced CS1 instructors reviewed final examinations from a variety of North American institutions. The majority of the exams reviewed were composed predominantly of high-value, integrative code-writing questions, and the reviewers regularly underestimated the number of CS1 concepts required to answer these questions. An evaluation of the content and cognitive requirements of individual questions suggests that in order to succeed, students must internalize a large amount of CS1 content. This emphasizes the need for focused assessment techniques to provide students with the opportunity to demonstrate their knowledge.", "title": "" }, { "docid": "1e56ff2af1b76571823d54d1f7523b49", "text": "Open-source intelligence offers value in information security decision making through knowledge of threats and malicious activities that potentially impact business. Open-source intelligence using the internet is common, however, using the darknet is less common for the typical cybersecurity analyst. The challenges to using the darknet for open-source intelligence includes using specialized collection, processing, and analysis tools. While researchers share techniques, there are few publicly shared tools; therefore, this paper explores an open-source intelligence automation toolset that scans across the darknet connecting, collecting, processing, and analyzing. It describes and shares the tools and processes to build a secure darknet connection, and then how to collect, process, store, and analyze data. Providing tools and processes serves as an on-ramp for cybersecurity intelligence analysts to search for threats. Future studies may refine, expand, and deepen this paper's toolset framework. © 2 01 7 T he SA NS In sti tut e, Au tho r R eta ins Fu ll R igh ts © 2017 The SANS Institute Author retains full rights. Data Mining in the Dark 2 Nafziger, Brian", "title": "" }, { "docid": "34cd47ff49e316f26e5596bc9717fd6d", "text": "In this paper, a BGA package having a ARM SoC chip is introduced, which has component-type embedded decoupling capacitors (decaps) for good power integrity performance of core power. To evaluate and confirm the impact of embedded decap on core PDN (power distribution network), two different packages were manufactured with and without the embedded decaps. The self impedances of system-level core PDN were simulated in frequency-domain and On-chip DvD (Dynamic Voltage Drop) simulations were performed in time-domain in order to verify the system-level impact of package embedded decap. There was clear improvement of system-level core PDN performance in middle frequency range when package embedded decaps were employed. 
In conclusion, the overall system-level core PDN for ARM SoC could meet the target impedance in frequency-domain as well as the target On-chip DvD level by having package embedded decaps.", "title": "" }, { "docid": "01b5d4015aa3c34d9090a4bfe45fda9f", "text": "The negotiation-based routing paradigm has been used successfully in a number of FPGA routers. In this paper, we report several new findings related to the negotiation-based routing paradigm. We examine in-depth the convergence of the negotiation-based routing algorithm. We illustrate that the negotiation-based algorithm can be parallelized. Finally, we demonstrate that a negotiation-based parallel FPGA router can perform well in terms of delay and speedup with practical FPGA circuits.", "title": "" }, { "docid": "9dc80bb779837f615a7f379ab2bbec99", "text": "Twitter, as a social media is a very popular way of expressing opinions and interacting with other people in the online world. When taken in aggregation tweets can provide a reflection of public sentiment towards events. In this paper, we provide a positive or negative sentiment on Twitter posts using a well-known machine learning method for text categorization. In addition, we use manually labeled (positive/negative) tweets to build a trained method to accomplish a task. The task is looking for a correlation between twitter sentiment and events that have occurred. The trained model is based on the Bayesian Logistic Regression (BLR) classification method. We used external lexicons to detect subjective or objective tweets, added Unigram and Bigram features and used TF-IDF (Term Frequency-Inverse Document Frequency) to filter out the features. Using the FIFA World Cup 2014 as our case study, we used Twitter Streaming API and some of the official world cup hashtags to mine, filter and process tweets, in order to analyze the reflection of public sentiment towards unexpected events. The same approach, can be used as a basis for predicting future events.", "title": "" }, { "docid": "7d7c596d334153f11098d9562753a1ee", "text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. 
The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.", "title": "" }, { "docid": "4ede3f2caa829e60e4f87a9b516e28bd", "text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.", "title": "" }, { "docid": "4ef29432e034ec3634b47b20b0ee950e", "text": "Part-of-speech or morphological tags are important means of annotation in a vast number of corpora. However, different sets of tags are used in different corpora, even for the same language. Tagset conversion is difficult, and solutions tend to be tailored to a particular pair of tagsets. We propose a universal approach that makes the conversion tools reusable. We also provide an indirect evaluation in the context of a parsing task.", "title": "" } ]
scidocsrr
206f4e07982ee47acf7c278792615cd4
EXPLICATING DYNAMIC CAPABILITIES : THE NATURE AND MICROFOUNDATIONS OF ( SUSTAINABLE ) ENTERPRISE PERFORMANCE
[ { "docid": "983ec9cdd75d0860c96f89f3c9b2f752", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "adcaa15fd8f1e7887a05d3cb1cd47183", "text": "The dynamic capabilities framework analyzes the sources and methods of wealth creation and capture by private enterprise firms operating in environments of rapid technological change. The competitive advantage of firms is seen as resting on distinctive processes (ways of coordinating and combining), shaped by the firm's (specific) asset positions (such as the firm's portfolio of difftcult-to-trade knowledge assets and complementary assets), and the evolution path(s) it has aflopted or inherited. The importance of path dependencies is amplified where conditions of increasing retums exist. Whether and how a firm's competitive advantage is eroded depends on the stability of market demand, and the ease of replicability (expanding intemally) and imitatability (replication by competitors). If correct, the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing intemal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival's costs, and excludes new entrants. © 1997 by John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "9296cf518b1b28862299e4a06d895761", "text": "Introduction:\nEtiology of dental crowding may be related to arch constriction in diverse dimensions, and an appropriate manipulation of arch perimeter by intervening in basal bone discrepancies cases, may be a key for crowding relief, especially when incisors movement is limited due to underlying pathology, periodontal issues or restrictions related to soft tissue profile.\n\n\nObjectives: \nThis case report illustrates a 24-year old woman, with maxillary transverse deficiency, upper and lower arches crowding, Class II, division 1, subdivision right relationship, previous upper incisors traumatic episode and straight profile. A non-surgical and non-extraction treatment approach was feasible due to the miniscrew-assisted rapid palatal expansion technique (MARPE).\n\n\nMethods: \nThe MARPE appliance consisted of a conventional Hyrax expander supported by four orthodontic miniscrews. A slow expansion protocol was adopted, with an overall of 40 days of activation and a 3-month retention period. Intrusive traction miniscrew-anchored mechanics were used for correcting the Class II subdivision relationship, managing lower arch perimeter and midline deviation before including the upper central incisors.\n\n\nResults: \nPost-treatment records show an intermolar width increase of 5 mm, bilateral Class I molar and canine relationships, upper and lower crowding resolution, coincident dental midlines and proper intercuspation.\n\n\nConclusions: \nThe MARPE is an effective treatment approach for managing arch-perimeter deficiencies related to maxillary transverse discrepancies in adult patients.", "title": "" }, { "docid": "eb7d44f475d2e6d78c249edc42420708", "text": "This paper proposes a novel composite kernel for relation extraction. The composite kernel consists of two individual kernels: an entity kernel that allows for entity-related features and a convolution parse tree kernel that models syntactic information of relation examples. The motivation of our method is to fully utilize the nice properties of kernel methods to explore diverse knowledge for relation extraction. Our study illustrates that the composite kernel can effectively capture both flat and structured features without the need for extensive feature engineering, and can also easily scale to include more features. Evaluation on the ACE corpus shows that our method outperforms the previous best-reported methods and significantly outperforms previous two dependency tree kernels for relation extraction.", "title": "" }, { "docid": "df7922bcf3a0ecac69b2ac283505c312", "text": "With the growing use of distributed information networks, there is an increasing need for algorithmic and system solutions for data-driven knowledge acquisition using distributed, heterogeneous and autonomous data repositories. In many applications, practical constraints require such systems to provide support for data analysis where the data and the computational resources are available. This presents us with distributed learning problems. We precisely formulate a class of distributed learning problems; present a general strategy for transforming traditional machine learning algorithms into distributed learning algorithms; and demonstrate the application of this strategy to devise algorithms for decision tree induction (using a variety of splitting criteria) from distributed data. 
The resulting algorithms are provably exact in that the decision tree constructed from distributed data is identical to that obtained by the corresponding algorithm when in the batch setting. The distributed decision tree induction algorithms have been implemented as part of INDUS, an agent-based system for data-driven knowledge acquisition from heterogeneous, distributed, autonomous data sources.", "title": "" }, { "docid": "ba6c9242c6f8992b916d97a4c307a239", "text": "We describe an unsupervised learning technique to facilitate automated creation of jazz melodic improvisation over chord sequences. Specifically we demonstrate training an artificial improvisation algorithm based on unsupervised learning using deep belief nets, a form of probabilistic neural network based on restricted Boltzmann machines. We present a musical encoding scheme and specifics of a learning and creational method. Our approach creates novel jazz licks, albeit not yet in real-time. The present work should be regarded as a feasibility study to determine whether such networks could be used at all. We do not claim superiority of this approach for pragmatically creating jazz.", "title": "" }, { "docid": "102bec350390b46415ae07128cb4e77f", "text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "title": "" }, { "docid": "e38f29a603fb23544ea2fcae04eb1b5d", "text": "Provenance refers to the entire amount of information, comprising all the elements and their relationships, that contribute to the existence of a piece of data. The knowledge of provenance data allows a great number of benefits such as verifying a product, result reproductivity, sharing and reuse of knowledge, or assessing data quality and validity. With such tangible benefits, it is no wonder that in recent years, research on provenance has grown exponentially, and has been applied to a wide range of different scientific disciplines. Some years ago, managing and recording provenance information were performed manually. Given the huge volume of information available nowadays, the manual performance of such tasks is no longer an option. The problem of systematically performing tasks such as the understanding, capture and management of provenance has gained significant attention by the research community and industry over the past decades. As a consequence, there has been a huge amount of contributions and proposed provenance systems as solutions for performing such kinds of tasks. The overall objective of this paper is to plot the landscape of published systems in the field of provenance, with two main purposes. First, we seek to evaluate the desired characteristics that provenance systems are expected to have. 
Second, we aim at identifying a set of representative systems (both early and recent use) to be exhaustively analyzed according to such characteristics. In particular, we have performed a systematic literature review of studies, identifying a comprehensive set of 105 relevant resources in all. The results show that there are common aspects or characteristics of provenance systems thoroughly renowned throughout the literature on the topic. Based on these results, we have defined a six-dimensional taxonomy of provenance characteristics attending to: general aspects, data capture, data access, subject, storage, and non-functional aspects. Additionally, the study has found that there are 25 most referenced provenance systems within the provenance context. This study exhaustively analyzes and compares such systems attending to our taxonomy and pinpoints future directions.", "title": "" }, { "docid": "80947cea68851bc522d5ebf8a74e28ab", "text": "Advertising is key to the business model of many online services. Personalization aims to make ads more relevant for users and more effective for advertisers. However, relatively few studies into user attitudes towards personalized ads are available. We present a San Francisco Bay Area survey (N=296) and in-depth interviews (N=24) with teens and adults. People are divided and often either (strongly) agreed or disagreed about utility or invasiveness of personalized ads and associated data collection. Mobile ads were reported to be less relevant than those on desktop. Participants explained ad personalization based on their personal previous behaviors and guesses about demographic targeting. We describe both metrics improvements as well as opportunities for improving online advertising by focusing on positive ad interactions reported by our participants, such as personalization focused not just on product categories but specific brands and styles, awareness of life events, and situations in which ads were useful or even inspirational.", "title": "" }, { "docid": "df3b0590054fb3056ed82247d01bc951", "text": "Understanding nonverbal behaviors in human machine interaction is a complex and challenging task. One of the key aspects is to recognize human emotion states accurately. This paper presents our effort to the Audio/Visual Emotion Challenge (AVEC'14), whose goal is to predict the continuous values of the emotion dimensions arousal, valence and dominance at each moment in time. The proposed method utilizes deep belief network based models to recognize emotion states from audio and visual modalities. Firstly, we employ temporal pooling functions in the deep neural network to encode dynamic information in the features, which achieves the first time scale temporal modeling. Secondly, we combine the predicted results from different modalities and emotion temporal context information simultaneously. The proposed multimodal-temporal fusion achieves temporal modeling for the emotion states in the second time scale. Experimental results show the efficiency of each key point of the proposed method and competitive results are obtained.", "title": "" }, { "docid": "aef8b4098ade89a3218e01d15de01063", "text": "This paper studies multidimensional matching between workers and jobs. Workers differ in manual and cognitive skills and sort into jobs that demand different combinations of these two skills. To study this multidimensional sorting, I develop a theoretical framework that generalizes the unidimensional notion of assortative matching.
I derive the equilibrium in closed form and use this explicit solution to study biased technological change. The key finding is that an increase of worker-job complementarities in cognitive relative to manual inputs leads to more pronounced sorting and wage inequality across cognitive relative to manual skills. This can trigger wage polarization and boost aggregate wage dispersion. I then estimate the model for the US and identify sizeable technology shifts: During the 90s, worker-job complementarities in cognitive inputs increased by 15% whereas complementarities in manual inputs decreased by 41%. Besides this bias in complementarities, there has also been a strong cognitive skill-bias in production. Counterfactual exercises suggest that these technology shifts can account for observed changes in worker-job sorting, wage polarization and a significant part of the increase in US wage dispersion.", "title": "" }, { "docid": "911341dd579c7d16aa918497a23afc31", "text": "We discuss a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. These environments can be non-Markov, nonergodic, and partially observable. We show that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean and (2) given a recoverability assumption regret is sublinear.", "title": "" }, { "docid": "1b0a8696b0bf79c118c5b02a7a2f4d7c", "text": "Mechanical properties of living cells are commonly described in terms of the laws of continuum mechanics. The purpose of this report is to consider the implications of an alternative approach that emphasizes the discrete nature of stress bearing elements in the cell and is based on the known structural properties of the cytoskeleton. We have noted previously that tensegrity architecture seems to capture essential qualitative features of cytoskeletal shape distortion in adherent cells (Ingber, 1993a; Wang et al., 1993). Here we extend those qualitative notions into a formal microstructural analysis. On the basis of that analysis we attempt to identify unifying principles that might underlie the shape stability of the cytoskeleton. For simplicity, we focus on a tensegrity structure containing six rigid struts interconnected by 24 linearly elastic cables. Cables carry initial tension (“prestress”) counterbalanced by compression of struts. Two cases of interconnectedness between cables and struts are considered: one where they are connected by pin-joints, and the other where the cables run through frictionless loops at the junctions. At the molecular level, the pinned structure may represent the case in which different cytoskeletal filaments are cross-linked whereas the looped structure represents the case where they are free to slip past one another. The system is then subjected to uniaxial stretching. Using the principle of virtual work, stretching force vs. extension and structural stiffness vs. stretching force relationships are calculated for different prestresses. The stiffness is found to increase with increasing prestress and, at a given prestress, to increase approximately linearly with increasing stretching force. This behavior is consistent with observations in living endothelial cells exposed to shear stresses (Wang & Ingber, 1994).
At a given prestress, the pinned structure is found to be stiffer than the looped one, a result consistent with data on mechanical behavior of isolated, cross-linked and uncross-linked actin networks (Wachsstock et al., 1993). On the basis of our analysis we concluded that architecture and the prestress of the cytoskeleton might be key features that underlie a cell’s ability to regulate its shape. © 1996 Academic Press Limited", "title": "" }, { "docid": "7ad19baa334b9389d58d6a9948cfacc9", "text": "Energy harvesting computing has been gaining increasing traction over the past decade, fueled by technological developments and rising demand for autonomous and battery-free systems. Energy harvesting introduces numerous challenges to embedded systems but, arguably the greatest, is the required transition from an energy source that typically provides virtually unlimited power for a reasonable period of time until it becomes exhausted, to a power source that is highly unpredictable and dynamic (both spatially and temporally, and with a range spanning many orders of magnitude). The typical approach to overcome this is the addition of intermediate energy storage/buffering to smooth out the temporal dynamics of both power supply and consumption. This has the advantage that, if correctly sized, the system ‘looks like’ a battery-powered system; however, it also adds volume, mass, cost and complexity and, if not sized correctly, unreliability. In this paper, we consider energy-driven computing, where systems are designed from the outset to operate from an energy harvesting source. Such systems typically contain little or no additional energy storage (instead relying on tiny parasitic and decoupling capacitance), alleviating the aforementioned issues. Examples of energy-driven computing include transient systems (which power down when the supply disappears and efficiently continue execution when it returns) and power-neutral systems (which operate directly from the instantaneous power harvested, gracefully modulating their consumption and performance to match the supply). In this paper, we introduce a taxonomy of energy-driven computing, articulating how power-neutral, transient, and energy-driven systems present a different class of computing to conventional approaches.", "title": "" }, { "docid": "5cc3bf535efe6b2b4b018afe99a9380c", "text": "Doctor of Philosophy Trinity Term 1995 This thesis examines the theoretical and computational problems associated with map building and localization for autonomous vehicles. In particular, components of a system are described for performing terrain-aided navigation in real time for high speed vehicles or aircraft. Such a system would be able to dynamically construct a map of distinctive naturally-occurring environmental features while simultaneously using those features as landmarks to estimate the position of the vehicle. In order to develop such a system, a variety of challenges are addressed. Specifically: 1. A new approach for nonlinear filtering is described that is not only easier to implement, but substantially more accurate than the conventional methods. 2. A new approach is developed for avoiding problems associated with correlations among the position estimates of mapped features. Such correlations prevent the application of standard real time filtering methods and constitute the key challenge in the area of large scale map building. A byproduct of this development is a new general-purpose filtering and data fusion technique. 3.
A new data structure is developed for storing the map so that sensor observations can be associated with candidate features in the map in real time. This data structure is shown to be capable of supporting real time performance for maps having many thousands of features. 4. A new combinatorial result is derived that facilitates the decision process for determining which mapped feature is most likely to have produced a given sensor observation. Applications of the above results to other more general engineering problems are also discussed.", "title": "" }, { "docid": "239cc0a260c43f8bafc5dbb5ae123488", "text": "This paper presents the application of Neural Network Bottleneck (BN) features in Language Identification (LID). BN features are generally used for Large Vocabulary Speech Recognition in conjunction with conventional acoustic features, such as MFCC or PLP. We compare the BN features to several common types of acoustic features used in the state-of-the-art LID systems. The test set is from DARPA RATS (Robust Automatic Transcription of Speech) program, which seeks to advance state-of-the-art detection capabilities on audio from highly degraded radio communication channels. On this type of noisy data, we show that on average, the BN features provide a 45% relative improvement in the Cavg or Equal Error Rate (EER) metrics across several test duration conditions, with respect to our single best acoustic features.", "title": "" }, { "docid": "9fc47eca91c72afbc6875ef71f22de30", "text": "We propose a principled probabilistic formulation of object saliency as a sampling problem. This novel formulation allows us to learn, from a large corpus of unlabelled images, which patches of an image are of the greatest interest and most likely to correspond to an object. We then sample the object saliency map to propose object locations. We show that using only a single object location proposal per image, we are able to correctly select an object in over 42% of the images in the Pascal VOC 2007 dataset, substantially outperforming existing approaches. Furthermore, we show that our object proposal can be used as a simple unsupervised approach to the weakly supervised annotation problem. Our simple unsupervised approach to annotating objects of interest in images achieves a higher annotation accuracy than most weakly supervised approaches.", "title": "" }, { "docid": "8d6f8087b96d9bf935a94934de2139df", "text": "Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system. Multiple deep learning based models have demonstrated good results on these tasks. The most effective algorithms are based on the structures of sequence to sequence models (or “encoder-decoder” models), and generate the intents and semantic tags either using separate models (Yao et al., 2014; Mesnil et al., 2015; Peng and Yao, 2015; Kurata et al., 2016; Hahn et al., 2011) or a joint model (Liu and Lane, 2016a; Hakkani-Tür et al., 2016; Guo et al., 2014). Most of the previous studies, however, either treat the intent detection and slot filling as two separate parallel tasks, or use a sequence to sequence model to generate both semantic tags and intent. Most of these approaches use one (joint) NN based model (including encoder-decoder structure) to model two tasks, hence may not fully take advantage of the cross-impact between them.
In this paper, new Bi-model based RNN semantic frame parsing network structures are designed to perform the intent detection and slot filling tasks jointly, by considering their cross-impact to each other using two correlated bidirectional LSTMs (BLSTM). Our Bi-model structure with a decoder achieves state-of-the-art result on the benchmark ATIS data (Hemphill et al., 1990; Tur et al., 2010), with about 0.5% intent accuracy improvement and 0.9 % slot filling improvement.", "title": "" }, { "docid": "7ab7a2270c364bfad24ea155f003a032", "text": "In this letter, we present a method of two-dimensional canonical correlation analysis (2D-CCA) where we extend the standard CCA in such a way that relations between two different sets of image data are directly sought without reshaping images into vectors. We stress that 2D-CCA dramatically reduces the computational complexity, compared to the standard CCA. We show the useful behavior of 2D-CCA through numerical examples of correspondence learning between face images in different poses and illumination conditions.", "title": "" }, { "docid": "6afb6140edbfdabb2f2c1a0cbee23665", "text": "The advent of Web 2.0 has led to an increase in the amount of sentimental content available in the Web. Such content is often found in social media web sites in the form of movie or product reviews, user comments, testimonials, messages in discussion forums etc. Timely discovery of the sentimental or opinionated web content has a number of advantages, the most important of all being monetization. Understanding of the sentiments of human masses towards different entities and products enables better services for contextual advertisements, recommendation systems and analysis of market trends. The focus of our project is sentiment focussed web crawling framework to facilitate the quick discovery of sentimental contents of movie reviews and hotel reviews and analysis of the same. We use statistical methods to capture elements of subjective style and the sentence polarity. The paper elaborately discusses two supervised machine learning algorithms: K-Nearest Neighbour(KNN) and Naïve Bayes‘ and compares their overall accuracy, precisions as well as recall values. It was seen that in case of movie reviews Naïve Bayes‘ gave far better results than K-NN but for hotel reviews these algorithms gave lesser, almost same accuracies.", "title": "" }, { "docid": "4b30695ba1989cb6770a38afca685aaa", "text": "Prior literature on search advertising primarily assumes that search engines know advertisers’ click-through rates, the probability that a consumer clicks on an advertiser’s ad. This information, however, is not available when a new advertiser starts search advertising for the first time. In particular, a new advertiser’s click-through rate can be learned only if the advertiser’s ad is shown to enough consumers, i.e., the advertiser wins enough auctions. Since search engines use advertisers’ expected click-through rates when calculating payments and allocations, the lack of information about a new advertiser can affect new and existing advertisers’ bidding strategies. In this paper, we use a game theory model to analyze advertisers’ strategies, their payoffs, and the search engine’s revenue when a new advertiser joins the market. Our results indicate that a new advertiser should always bid higher (sometimes above its valuation) when it starts search advertising. However, the strategy of an existing advertiser, i.e., an incumbent, depends on its valuation and click-through rate. 
A strong incumbent increases its bid to prevent the search engine from learning the new advertiser’s clickthrough rate, whereas a weak incumbent decreases its bid to facilitate the learning process. Interestingly, we find that, under certain conditions, the search engine benefits from not knowing the new advertiser’s click-through rate because its ignorance could induce the advertisers to bid more aggressively. Nonetheless, the search engine’s revenue sometimes decreases because of this lack of information, particularly, when the incumbent is sufficiently strong. We show that the search engine can mitigate this loss, and improve its total profit, by offering free advertising credit to new advertisers.", "title": "" }, { "docid": "beb1145302ead9a515267dc6500a9b3c", "text": "Quantitative evaluation of the ability of soccer players to contribute to team offensive performance is typically based on goals scored, assists made, and shots taken. In this paper, we describe a novel player ranking system based entirely on the value of passes completed. This value is derived based on the relationship of pass locations in a possession and shot opportunities generated. This relationship is learned by applying a supervised machine learning model to pass locations in event data from the 2012-2013 La Liga season. Interestingly, though this metric is based entirely on passes, the derived player rankings are largely consistent with general perceptions of offensive ability, e.g., Messi and Ronaldo are near the top. Additionally, when used to rank midfielders, it separates the more offensively-minded players from others.", "title": "" } ]
scidocsrr
787bc6f784b967f19ca06eda174da845
Successive Randomization of Inception v3 Weights Gradient Gradient-SG Gradient -
[ { "docid": "93a3b7bdcbd0d15f67043a876b69b5f4", "text": "Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SMOOTHGRAD, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.", "title": "" }, { "docid": "b2c05f820195154dbbb76ee68740b5d9", "text": "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.", "title": "" }, { "docid": "5d3893a22635a977760cde03d3542d2a", "text": "We propose a technique for making Convolutional Neural Network (CNN)-based models more transparent by visualizing input regions that are ‘important’ for predictions – producing visual explanations. Our approach, called Gradient-weighted Class Activation Mapping (Grad-CAM), uses class-specific gradient information to localize important regions. These localizations are combined with existing pixel-space visualizations to create a novel high-resolution and class-discriminative visualization called Guided Grad-CAM. These methods help better understand CNN-based models, including image captioning and visual question answering (VQA) models. We evaluate our visual explanations by measuring their ability to discriminate between classes, to inspire trust in humans, and their correlation with occlusion maps. Grad-CAM provides a new way to understand CNN-based models. We have released code, an online demo hosted on CloudCV [1], and the full paper [8].1", "title": "" } ]
[ { "docid": "51215220471f8f7f4afd68c1a27b5809", "text": "he unauthorized modification and subsequent misuse of software is often referred to as software cracking. Usually, cracking requires disabling one or more software features that enforce policies (of access, usage, dissemination, etc.) related to the software. Because there is value and/or notoriety to be gained by accessing valuable software capabilities, cracking continues to be common and is a growing problem. To combat cracking, anti-tamper (AT) technologies have been developed to protect valuable software. Both hardware and software AT technologies aim to make software more resistant against attack and protect critical program elements. However, before discussing the various AT technologies, we need to know the adversary's goals. What do software crackers hope to achieve? Their purposes vary, and typically include one or more of the following: • Gaining unauthorized access. The attacker's goal is to disable the software access control mechanisms built into the software. After doing so, the attacker can make and distribute illegal copies whose copy protection or usage control mechanisms have been disabled – this is the familiar software piracy problem. If the cracked software provides access to classified data, then the attacker's real goal is not the software itself, but the data that is accessible through the software. The attacker sometimes aims at modifying or unlocking specific functionality in the program, e.g., a demo or export version of software is often a deliberately degraded version of what is otherwise fully functional software. The attacker then seeks to make it fully functional by re-enabling the missing features. • Reverse engineering. The attacker aims to understand enough about the software to steal key routines, to gain access to proprietary intellectual property , or to carry out code-lifting, which consists of reusing a crucial part of the code (without necessarily understanding the internals of how it works) in some other software. Good programming practices, while they facilitate software engineering, also tend to simultaneously make it easier to carry out reverse engineering attacks. These attacks are potentially very costly to the original software developer as they allow a competitor (or an enemy) to nullify the develop-er's competitive advantage by rapidly closing a technology gap through insights gleaned from examining the software. • Violating code integrity. This familiar attack consists of either injecting malicious code (malware) into a program , injecting code that is not malevolent but illegally enhances a pro-gram's functionality, or otherwise sub-verting a program so it performs new and …", "title": "" }, { "docid": "9b2dd28151751477cc46f6c6d5ec475f", "text": "Clinical and experimental data indicate that most acupuncture clinical results are mediated by the central nervous system, but the specific effects of acupuncture on the human brain remain unclear. Even less is known about its effects on the cerebellum. This fMRI study demonstrated that manual acupuncture at ST 36 (Stomach 36, Zusanli), a main acupoint on the leg, modulated neural activity at multiple levels of the cerebro-cerebellar and limbic systems. The pattern of hemodynamic response depended on the psychophysical response to needle manipulation. Acupuncture stimulation typically elicited a composite of sensations termed deqi that is related to clinical efficacy according to traditional Chinese medicine. 
The limbic and paralimbic structures of cortical and subcortical regions in the telencephalon, diencephalon, brainstem and cerebellum demonstrated a concerted attenuation of signal intensity when the subjects experienced deqi. When deqi was mixed with sharp pain, the hemodynamic response was mixed, showing a predominance of signal increases instead. Tactile stimulation as control also elicited a predominance of signal increase in a subset of these regions. The study provides preliminary evidence for an integrated response of the human cerebro-cerebellar and limbic systems to acupuncture stimulation at ST 36 that correlates with the psychophysical response.", "title": "" }, { "docid": "8de530a30b8352e36b72f3436f47ffb2", "text": "This paper presents a Bayesian optimization method with exponential convergence without the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [1] requires access to the δ-cover sampling, which was considered to be impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.", "title": "" }, { "docid": "8a55bf5b614d750a7de6ac34dc321b10", "text": "Unsupervised image-to-image translation aims at learning the relationship between samples from two image domains without supervised pair information. The relationship between two domain images can be one-to-one, one-to-many or many-to-many. In this paper, we study the one-to-many unsupervised image translation problem in which an input sample from one domain can correspond to multiple samples in the other domain. To learn the complex relationship between the two domains, we introduce an additional variable to control the variations in our one-to-many mapping. A generative model with an XO-structure, called the XOGAN, is proposed to learn the cross domain relationship among the two domains and the additional variables. Not only can we learn to translate between the two image domains, we can also handle the translated images with additional variations. Experiments are performed on unpaired image generation tasks, including edges-to-objects translation and facial image translation. We show that the proposed XOGAN model can generate plausible images and control variations, such as color and texture, of the generated images. Moreover, while state-of-the-art unpaired image generation algorithms tend to generate images with monotonous colors, XOGAN can generate more diverse results.", "title": "" }, { "docid": "48b5e1959a4b77429c735038393f0315", "text": "MF01/PC02 Plus Postage. Cognitive Ability; Early Childhood Education; *Early Experience; Fine Arts; Humanities; *Motion; *Music; *Perceptual Development; Psychomotor Skills; *Spatial Ability; Young Children This research paper reports on testing the hypothesis that music and spatial task performance are causally related. Two complementary studies are presented that replicate and explore previous findings. One study of college students showed that listening to a Mozart sonata induces subsequent short-term spatial reasoning facilitation and tested the effects of highly repetitive music on spatial reasoning.
The second study extends the findings of a preliminary pilot study of 1993 which suggested that music training of three-year-olds provides long-term enhancements of nonverbal cognitive abilities already present at significant levels in infants. The paper concludes with a discussion of the scientific and educational implications, further controls, and future research objectives. Contains 10 references. (EH) Reproductions supplied by EDRS are the best that can be made from the original document. Date and Time of Presentation: Saturday, August 13, 1994, 11am Westin Bonaventure Los Angeles, Lobby Level, Santa Barbara Room B Music and Spatial Task Performance: A Causal Relationship Frances H. Rauscher, Gordon L. Shaw, Linda J. Levine, Katherine N. Ky University of California, Irvine Eric L Wright Irvine Conservatory of Music Presented at the American Psychological Association 102nd Annual Convention in Los Angeles, CA August 12-16, 1994", "title": "" }, { "docid": "d25f828d6f68066f14adf718b40ba7a5", "text": "Metformin is a widely prescribed medication that has been used to treat children with type 2 diabetes in the United States for the past 15 years. Metformin now has a variety of clinical applications in pediatrics, and its potential clinical uses continue to expand. In addition to reviewing the current understanding of its mechanisms of action including the newly discovered effects on the gastrointestinal tract, we will also discuss current clinical uses in pediatrics, including in type 1 diabetes. Finally, we examine the existing state of monitoring for metformin efficacy and side effects and discuss prospective future clinical uses.", "title": "" }, { "docid": "2052b47be2b5e4d0c54ab0be6ae1958b", "text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org.", "title": "" }, { "docid": "c12d595a944aa592fd3a1414fa873f93", "text": "Central nervous system cytotoxicity is linked to neurodegenerative disorders. The objective of the study was to investigate whether monosodium glutamate (MSG) neurotoxicity can be reversed by natural products, such as ginger or propolis, in male rats.
Four different groups of Wistar rats were utilized in the study. Group A served as a normal control, whereas group B was orally administered with MSG (100 mg/kg body weight, via oral gavage). Two additional groups, C and D, were given MSG as group B along with oral dose (500 mg/kg body weight) of either ginger or propolis (600 mg/kg body weight) once a day for two months. At the end, the rats were sacrificed, and the brain tissue was excised and levels of neurotransmitters, ß-amyloid, and DNA oxidative marker 8-OHdG were estimated in the brain homogenates. Further, formalin-fixed and paraffin-embedded brain sections were used for histopathological evaluation. The results showed that MSG increased lipid peroxidation, nitric oxide, neurotransmitters, and 8-OHdG as well as registered an accumulation of ß-amyloid peptides compared to normal control rats. Moreover, significant depletions of glutathione, superoxide dismutase, and catalase as well as histopathological alterations in the brain tissue of MSG-treated rats were noticed in comparison with the normal control. In contrast, treatment with ginger greatly attenuated the neurotoxic effects of MSG through suppression of 8-OHdG and β-amyloid accumulation as well as alteration of neurotransmitter levels. Further improvements were also noticed based on histological alterations and reduction of neurodegeneration in the brain tissue. A modest inhibition of the neurodegenerative markers was observed by propolis. The study clearly indicates a neuroprotective effect of ginger and propolis against MSG-induced neurodegenerative disorders and these beneficial effects could be attributed to the polyphenolic compounds present in these natural products.", "title": "" }, { "docid": "52223bb5a6d6958048209253db28d9ad", "text": "OBJECTIVE\nTo understand the meanings that male university students assign to the condition of users of alcohol and other drugs.\n\n\nMETHOD\nAn exploratory study using a qualitative approach, with inductive analysis of the content of semi-structured interviews applied to 20 male university students from a public university in the southeast region of Brazil, grounded on the theoretical-methodological referential of interpretive anthropology and ethnographic method.\n\n\nRESULTS\nData were construed using content inductive analysis for two topics: use of alcohol and/or drugs as an outlet; and use of alcohol and/or other drugs: an alternative for belonging and identity.\n\n\nCONCLUSION\nMale university students share the rules of their sociocultural environment that values the use of alcohol and/or other drugs as a way of dealing with the demands and stress ensuing from the everyday university life, and to build identity and belong to this social context, reinforcing the influence of culture.\n\n\nOBJETIVO\nCompreender os significados atribuídos pelos universitários do sexo masculino à condição de usuários de álcool e outras drogas.\n\n\nMÉTODO\nEstudo exploratório de abordagem qualitativa, com análise de conteúdo indutiva de entrevistas semiestruturadas de 20 universitários do sexo masculino, matriculados em uma universidade pública da região sudeste do Brasil, fundamentado no referencial teórico-metodológico da Antropologia Interpretativa e do método etnográfico.\n\n\nRESULTADOS\nOs dados foram interpretados com a análise de conteúdo indutiva em dois temas: O uso do álcool e/ou drogas como válvula de escape; O uso do álcool e/ou outras drogas: alternativa para o pertencimento e para a identidade.\n\n\nCONCLUSÃO\nOs universitários do 
sexo masculino compartilham normas de seu meio sociocultural, que valorizam o uso de álcool e/ou outras drogas, como uma forma de lidar com as exigências e o estresse da vida universitária, criar uma identidade e ter pertencimento neste contexto social, reforçando a influência da cultura.", "title": "" }, { "docid": "b789785d7e9cdde760af1d65faccfa60", "text": "The use of an expired product may cause harm to its designated target. If the product is for human consumption, e.g. medicine, the result can be fatal. While most people can check the expiration date easily before using the product, it is very difficult for a visually impaired or a totally blind person to do so independently. This paper therefore proposes a solution that helps the visually impaired to identify a product and subsequently 'read' the expiration date on a product using a handheld Smartphone. While there are a few commercial barcode decoder and text recognition applications for the mobile phone, they require the user to point the phone to the correct location - which is extremely hard for the visually impaired. We thus focus our research on helping the blind user to locate the barcode and the expiration date on a product package. After that, existing barcode decoding and OCR algorithms can be utilized to obtain the required information. A field trial with several blindfolded/totally-blind participants is conducted and shows that the proposed solution is effective in guiding a visually impaired user towards the barcode and expiry information, although some issues remain with the reliability of the off-the-shelf decoding algorithms on low-resolution videos.", "title": "" }, { "docid": "7c9b01d3abbefa325fe3cd21aa266969", "text": "The rapid growth of e-commerce has caused product overload where customers on the Web are no longer able to effectively choose the products they are exposed to. To overcome the product overload of online shoppers, a variety of recommendation methods have been developed. Collaborative filtering (CF) is the most successful recommendation method, but its widespread use has exposed some well-known limitations, such as sparsity and scalability, which can lead to poor recommendations. This paper proposes a recommendation methodology based on Web usage mining, and product taxonomy to enhance the recommendation quality and the system performance of current CF-based recommender systems. Web usage mining populates the rating database by tracking customers’ shopping behaviors on the Web, thereby leading to better quality recommendations. The product taxonomy is used to improve the performance of searching for nearest neighbors through dimensionality reduction of the rating database. Several experiments on real e-commerce data show that the proposed methodology provides higher quality recommendations and better performance than other CF methodologies. © 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b7729008700bd7623db8a967826d6e23", "text": "This paper describes the modeling of jitter in clock-and-data recovery (CDR) systems using an event-driven model that accurately includes the effects of power-supply noise, the finite bandwidth (aperture window) in the phase detector's front-end sampler, and intersymbol interference in the system's channel. These continuous-time jitter sources are captured in the model through their discrete-time influence on sample based phase detectors.
The event-driven model, implemented in Simulink, has a simulation accuracy within 12% of an Hspice simulation-but with a simulation speed that is 1800 times higher.", "title": "" }, { "docid": "b6c94af660b76a66154a973a4cfbe03f", "text": "Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.", "title": "" }, { "docid": "b27f43bf472e44cf393d21781c3341cd", "text": "A massive hybrid array consists of multiple analog subarrays, with each subarray having its digital processing chain. It offers the potential advantage of balancing cost and performance for massive arrays and therefore serves as an attractive solution for future millimeter-wave (mm- Wave) cellular communications. On one hand, using beamforming analog subarrays such as phased arrays, the hybrid configuration can effectively collect or distribute signal energy in sparse mm-Wave channels. On the other hand, multiple digital chains in the configuration provide multiplexing capability and more beamforming flexibility to the system. In this article, we discuss several important issues and the state-of-the-art development for mm-Wave hybrid arrays, such as channel modeling, capacity characterization, applications of various smart antenna techniques for single-user and multiuser communications, and practical hardware design. We investigate how the hybrid array architecture and special mm-Wave channel property can be exploited to design suboptimal but practical massive antenna array schemes. We also compare two main types of hybrid arrays, interleaved and localized arrays, and recommend that the localized array is a better option in terms of overall performance and hardware feasibility.", "title": "" }, { "docid": "cadafd50eba3e60d8133520ff15fcfb8", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. 
Security of Electronic Payment Systems: A Comprehensive Survey Siamak Solat", "title": "" }, { "docid": "dfea0aadb35d2984040938c7b9b1d633", "text": "While Agile methods were originally introduced for small, tightly coupled teams, leaner ways of working are becoming a practical method to run entire enterprises. As the emphasis of user experience work has inherently been on the early phases before starting the development, it also needs to be adapted to the Agile way of working. To improve the current practices in Agile user experience work, we determined the present state of a multi-continental software development organization that already had a functioning user experience team. In this paper, we describe the most prevalent issues regarding the interaction of user experience design and software development activities, and suggest improvements to fix those. Most of the observed problems were related to communication issues and to the service mode of the user experience team. The user experience team was operating between management and development organizations trying to adapt to the dissimilar practices of both the disciplines.", "title": "" }, { "docid": "e1adb8ebfd548c2aca5110e2a9e8d667", "text": "This paper introduces an active object detection and localization framework that combines a robust untextured object detection and 3D pose estimation algorithm with a novel next-best-view selection strategy. We address the detection and localization problems by proposing an edge-based registration algorithm that refines the object position by minimizing a cost directly extracted from a 3D image tensor that encodes the minimum distance to an edge point in a joint direction/location space. We face the next-best-view problem by exploiting a sequential decision process that, for each step, selects the next camera position which maximizes the mutual information between the state and the next observations. We solve the intrinsic intractability of this solution by generating observations that represent scene realizations, i.e. combination samples of object hypothesis provided by the object detector, while modeling the state by means of a set of constantly resampled particles. Experiments performed on different real world, challenging datasets confirm the effectiveness of the proposed methods.", "title": "" }, { "docid": "f4c3cd5706957ea3a27a6fd8285ae422", "text": "With the growth of mobile devices and applications, the number of malicious software, or malware, is rapidly increasing in recent years, which calls for the development of advanced and effective malware detection approaches. Traditional methods such as signature based ones cannot defend users from an increasing number of new types of malware or rapid malware behavior changes. In this paper, we propose a new Android malware detection approach based on deep learning and static analysis. Instead of using Application Programming Interfaces (APIs) only, we further analyze the source code of Android applications and create their higher-level graphical semantics, which makes it harder for attackers to evade detection. In particular, we use a call graph from method invocations in an Android application to represent the application, and further analyze method attributes to form a structured Program Representation Graph (PRG) with node attributes. Then, we use a graph convolutional network (GCN) to yield a graph representation of the application by embedding the entire graph into a dense vector, and classify whether it is a malware or not. 
To efficiently train such a graph convolutional network, we propose a batch training scheme that allows multiple heterogeneous graphs to be input as a batch. To the best of our knowledge, this is the first work to use graph representation learning for malware detection. We conduct extensive experiments from real-world sample collections and demonstrate that our developed system outperforms multiple other existing malware detection techniques.", "title": "" }, { "docid": "a928aa788221fc7f9a13d05a9d36badf", "text": "Segment routing is an emerging traffic engineering technique relying on Multi-protocol Label-Switched (MPLS) label stacking to steer traffic using the source-routing paradigm. Traffic flows are enforced through a given path by applying a specifically designed stack of labels (i.e., the segment list). Each packet is then forwarded along the shortest path toward the network element represented by the top label. Unlike traditional MPLS networks, segment routing maintains a per-flow state only at the ingress node; no signaling protocol is required to establish new flows or change the routing of active flows. Thus, control plane scalability is greatly improved. Several segment routing use cases have recently been proposed. As an example, it can be effectively used to dynamically steer traffic flows on paths characterized by low latency values. However, this may suffer from some potential issues. Indeed, deployed MPLS equipment typically supports a limited number of stacked labels. Therefore, it is important to define the proper procedures to minimize the required segment list depth. This work is focused on two relevant segment routing use cases: dynamic traffic recovery and traffic engineering in multi-domain networks. Indeed, in both use cases, the utilization of segment routing can significantly simplify the network operation with respect to traditional Internet Protocol (IP)/MPLS procedures. Thus, two original procedures based on segment routing are proposed for the aforementioned use cases. Both procedures are evaluated including a simulative analysis of the segment list depth. Moreover, an experimental demonstration is performed in a multi-layer test bed exploiting a software-defined-networking-based implementation of segment routing.", "title": "" }, { "docid": "c5b482ccc2fe10dca28644b6796b82a6", "text": "Advances in bioconjugation and native protein modification are appearing at a blistering pace, making it increasingly time consuming for practitioners to identify the best chemical method for modifying a specific amino acid residue in a complex setting. The purpose of this perspective is to provide an informative, graphically rich manual highlighting significant advances in the field over the past decade. This guide will help triage candidate methods for peptide alteration and will serve as a starting point for those seeking to solve long-standing challenges.", "title": "" } ]
scidocsrr
a506e9ef4e1a441c2f1501fdc7b89d14
A Trust-Based Intrusion Detection System for Mobile RPL Based Networks
[ { "docid": "a43646db20923d9058df5544a5753da0", "text": "Smart objects connected to the Internet, constituting the so called Internet of Things (IoT), are revolutionizing human beings' interaction with the world. As technology reaches everywhere, anyone can misuse it, and it is always essential to secure it. In this work we present a denial-of-service (DoS) detection architecture for 6LoWPAN, the standard protocol designed by IETF as an adaptation layer for low-power lossy networks enabling low-power devices to communicate with the Internet. The proposed architecture integrates an intrusion detection system (IDS) into the network framework developed within the EU FP7 project ebbits. The aim is to detect DoS attacks based on 6LoWPAN. In order to evaluate the performance of the proposed architecture, preliminary implementation was completed and tested against a real DoS attack using a penetration testing system. The paper concludes with the related results proving to be successful in detecting DoS attacks on 6LoWPAN. Further, extending the IDS could lead to detect more complex attacks on 6LoWPAN.", "title": "" }, { "docid": "9dbca4dbf411a6a4a06ae51d246734b1", "text": "We present overview of a distributed internal anomaly detection system for Internet-of-things. In the detection system, each node monitors its neighbors and if abnormal behavior is detected, the monitoring node will block the packets from the abnormally behaving node at the data link layer and reports to its parent node. The reporting propagates from child to parent nodes until it reaches the root. A novel control message, distress propagation object (DPO), is devised to report the anomaly to the subsequent parents and ultimately the edge-router. The DPO message is integrated to routing protocol for low-power and lossy networks (RPL). The system has configurable profile settings and it is able to learn and differentiate the nodes' normal and suspicious activities without a need for prior knowledge. It has different subsystems and operation phases at data link and network layers, which share a common repository in a node. The system uses network fingerprinting to be aware of changes in network topology and nodes' positions without any assistance from a positioning system.", "title": "" } ]
[ { "docid": "673fea40e5cb12b54cc296b1a2c98ddb", "text": "Matrix completion is a rank minimization problem to recover a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate the matrix rank as the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank than the nuclear norm, exploiting a priori target rank information about the problem in rank minimization. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and considering noise in the observation. The central contribution of this paper is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps from low dynamic range images. Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and superior robustness against noise than the conventional approaches, while providing substantial improvement in speed, thereby applicable to a wide range of imaging applications.", "title": "" }, { "docid": "9f3388eb88e230a9283feb83e4c623e1", "text": "Entity Linking (EL) is an essential task for semantic text understanding and information extraction. Popular methods separately address the Mention Detection (MD) and Entity Disambiguation (ED) stages of EL, without leveraging their mutual dependency. We here propose the first neural end-to-end EL system that jointly discovers and links entities in a text document. The main idea is to consider all possible spans as potential mentions and learn contextual similarity scores over their entity candidates that are useful for both MD and ED decisions. Key components are context-aware mention embeddings, entity embeddings and a probabilistic mention entity map, without demanding other engineered features. Empirically, we show that our end-to-end method significantly outperforms popular systems on the Gerbil platform when enough training data is available. Conversely, if testing datasets follow different annotation conventions compared to the training set (e.g. queries/ tweets vs news documents), our ED model coupled with a traditional NER system offers the best or second best EL accuracy.", "title": "" }, { "docid": "d395193924613f6818511650d24cf9ae", "text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1− 1/e. 
Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.", "title": "" }, { "docid": "ffca07962ddcdfa0d016df8020488b5d", "text": "Differential-drive mobile robots are usually equipped with video-cameras for navigation purposes. In order to ensure proper operational capabilities of such systems, several calibration steps are required to estimate the following quantities: the video-camera intrinsic and extrinsic parameters, the relative pose between the camera and the vehicle frame and, finally, the odometric parameters of the vehicle. In this paper the simultaneous estimation of the above mentioned quantities is achieved by a systematic and effective calibration procedure that does not require any iterative step. The calibration procedure needs only on-board measurements given by the wheels encoders, the camera and a number of properly taken camera snapshots of a set of known landmarks. Numerical simulations and experimental results with a mobile robot Khepera III equipped with a low-cost camera confirm the effectiveness of the proposed technique.", "title": "" }, { "docid": "1f7127a98db1521e185c866a81243283", "text": "We describe two transfer approaches for building sentiment analysis systems without having gold labeled data in the target language. Unlike previous work that is focused on using only English as the source language and a small number of target languages, we use multiple source languages to learn a more robust sentiment transfer model for 16 languages from different language families. Our approaches explore the potential of using an annotation projection approach and a direct transfer approach using cross-lingual word representations and neural networks. Whereas most previous work relies on machine translation, we show that we can build cross-lingual sentiment analysis systems without machine translation or even high quality parallel data. We have conducted experiments assessing the availability of different resources such as in-domain parallel data, out-of-domain parallel data, and in-domain comparable data. Our experiments show that we can build a robust transfer system whose performance can in some cases approach that of a supervised system.", "title": "" }, { "docid": "909829de03729dd70d231d20a9c92e81", "text": "Nonparametric two sample testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. 
We refer to the most common settings as mean difference alternatives (MDA), for testing differences only in first moments, and general difference alternatives (GDA), which is about testing for any difference in distributions. A large number of test statistics have been proposed for both these settings. This paper connects three classes of statistics: high dimensional variants of Hotelling’s t-test, statistics based on Reproducing Kernel Hilbert Spaces, and energy statistics based on pairwise distances. We ask the following question: how much statistical power do popular kernel and distance based tests for GDA have when the unknown distributions differ in their means, compared to specialized tests for MDA? To answer this, we formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime. We prove several interesting properties relating these classes of tests under MDA, which include (a) eED and gMMD have asymptotically equal power; furthermore they also enjoy a free lunch because, while they are additionally consistent for GDA, they have the same power as specialized high-dimensional t-tests for MDA. All these tests are asymptotically optimal (including matching constants) for MDA under spherical covariances, according to simple lower bounds. (b) The power of gMMD is independent of the kernel bandwidth, as long as it is larger than the choice made by the median heuristic. (c) There is a clear and smooth computation-statistics tradeoff for linear-time, subquadratic-time and quadratic-time versions of these tests, with more computation resulting in higher power.
Our work represents a first step towards more advanced, generative SRL labeling setups.", "title": "" }, { "docid": "b4ba8928fc9eb715c0a75f3af2c95661", "text": "Implementation of the Six Core Strategies to Reduce the Use of Seclusion and Restraint (Six Core Strategies) at a recovery-oriented, tertiary level mental health care facility and the resultant changes in mechanical restraint and seclusion incidents are described. Strategies included increased executive participation; enhanced staff knowledge, skills, and attitudes; development of restraint orders and decision support in the electronic medical record to enable informed debriefing and tracking of events; and implementation of initiatives to include service users and their families in the plan of care. Strategies were implemented in a staged manner across 3 years. The total number of mechanical restraint and seclusion incidents decreased by 19.7% from 2011/12 to 2013/14. Concurrently, the average length of a mechanical restraint or seclusion incident decreased 38.9% over the 36-month evaluation period. Implementation of the Six Core Strategies for restraint minimization effectively decreased the number and length of mechanical restraint and seclusion incidents in a specialized mental health care facility. [Journal of Psychosocial Nursing and Mental Health Services, 54(10), 32-39.].", "title": "" }, { "docid": "0a9a94bd83dfbbba2815f8575f1cb8a3", "text": "To create with an autonomous mobile robot a 3D volumetric map of a scene it is necessary to gage several 3D scans and to merge them into one consistent 3D model. This paper provides a new solution to the simultaneous localization and mapping (SLAM) problem with six degrees of freedom. Robot motion on natural surfaces has to cope with yaw, pitch and roll angles, turning pose estimation into a problem in six mathematical dimensions. A fast variant of the Iterative Closest Points algorithm registers the 3D scans in a common coordinate system and relocalizes the robot. Finally, consistent 3D maps are generated using a global relaxation. The algorithms have been tested with 3D scans taken in the Mathies mine, Pittsburgh, PA. Abandoned mines pose significant problems to society, yet a large fraction of them lack accurate 3D maps.", "title": "" }, { "docid": "a324180129b78d853c035c2477f54a30", "text": "A book aiming to build a bridge between two fields that share the subject of research but do not share the same views necessarily puts itself in a difficult position: The authors have either to strike a fair balance at peril of dissatisfying both sides or nail their colors to the mast and cater mainly to one of two communities. For semantic processing of natural language with either NLP methods or Semantic Web approaches, the authors clearly favor the latter and propose a strictly ontology-driven interpretation of natural language. The main contribution of the book, driving semantic processing from the ground up by a formal domain-specific ontology, is elaborated in ten well-structured chapters spanning 143 pages of content.", "title": "" }, { "docid": "13defab78fcb925165650b5f824f610a", "text": "Research in computer vision is advancing by the availability of good datasets that help to improve algorithms, validate results and obtain comparative analysis. The datasets can be real or synthetic. 
For some of the computer vision problems such as optical flow it is not possible to obtain ground-truth optical flow with high accuracy in natural outdoor real scenarios directly by any sensor, although it is possible to obtain ground-truth data of real scenarios in a laboratory setup with limited motion. In this difficult situation computer graphics offers a viable option for creating realistic virtual scenarios. In the current work we present a framework to design virtual scenes and generate sequences as well as ground-truth flow fields. Particularly, we generate a dataset containing sequences of driving scenarios. The sequences in the dataset vary in different speeds of the on-board vision system, different road textures, complex motion of vehicle and independent moving vehicles in the scene. This dataset enables analyzing and adaptation of existing optical flow methods, and leads to invention of new approaches particularly for driver assistance systems.", "title": "" }, { "docid": "249543df444c1a5e0d37de8c017e5167", "text": "This review provides an overview of the changing US epidemiology of cannabis use and associated problems. Adults and adolescents increasingly view cannabis as harmless, and some can use cannabis without harm. However, potential problems include harms from prenatal exposure and unintentional childhood exposure; decline in educational or occupational functioning after early adolescent use, and in adulthood, impaired driving and vehicle crashes; cannabis use disorders (CUD), cannabis withdrawal, and psychiatric comorbidity. Evidence suggests national increases in cannabis potency, prenatal and unintentional childhood exposure; and in adults, increased use, CUD, cannabis-related emergency room visits, and fatal vehicle crashes. Twenty-nine states have medical marijuana laws (MMLs) and of these, 8 have recreational marijuana laws (RMLs). Many studies indicate that MMLs or their specific provisions did not increase adolescent cannabis use. However, the more limited literature suggests that MMLs have led to increased cannabis potency, unintentional childhood exposures, adult cannabis use, and adult CUD. Ecological-level studies suggest that MMLs have led to substitution of cannabis for opioids, and also possibly for psychiatric medications. Much remains to be determined about cannabis trends and the role of MMLs and RMLs in these trends. The public, health professionals, and policy makers would benefit from education about the risks of cannabis use, the increases in such risks, and the role of marijuana laws in these increases.", "title": "" }, { "docid": "e7e9d6054a61a1f4a3ab7387be28538a", "text": "Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. 
These results are further used to make better initial guesses of weights for the learning algorithm.", "title": "" }, { "docid": "befbfb5b083cddb7fb43ebaa8df244c1", "text": "The aim of this study was to adapt and validate the Spanish version of the Sport Motivation Scale-II (S-SMS-II) in adolescent athletes. The sample included 766 Spanish adolescents (263 females and 503 males; average age = 13.71 ± 1.30 years old). The methodological steps established by the International Test Commission were followed. Four measurement models were compared employing the maximum likelihood estimation (with six, five, three, and two factors). Then, factorial invariance analyses were conducted and the effect sizes were calculated. Finally, the reliability was calculated using Cronbach's alpha, omega, and average variance extracted coefficients. The five-factor S-SMS-II showed the best indices of fit (Cronbach's alpha .64 to .74; goodness of fit index .971, root mean square error of approximation .044, comparative fit index .966). Factorial invariance was also verified across gender and between sport-federated athletes and non-federated athletes. The proposed S-SMS-II is discussed according to previous validated versions (English, Portuguese, and Chinese).", "title": "" }, { "docid": "9d0ea524b8f591d9ea337a8c789e51c1", "text": "Abstract—The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20% to 40% of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.", "title": "" }, { "docid": "d0a90e548919f440ee782cdf2537e62f", "text": "The OH ion is an important species in the Interstellar Medium. It has been used to infer the cosmic ray ionization rate and is an important intermediate for generation of more complex astrochemical species. OH observations are typically performed in the sub-millimeter and near-UV ranges, and rely on laboratory spectroscopy to provide transition frequencies. Observations of the A3Π−X3Σ− bands are used to both identify OH and determine the column densities along sight lines.a These A-X observations have relied on previous measurements with a grating spectrometer and photographic plates.b Here, we present data recorded at Kitt Peak using a Fourier transform spectrometer of the A-X band system. This data and other available data are combined to determine new molecular constants for the A and X electronic states. 
These new data are between one and two orders of magnitude more precise and should be used in support of observations in lieu of the older transition frequencies. We also intend to calculate improved line intensities in support of astronomical observations.", "title": "" }, { "docid": "8feb5dce809acf0efb63d322f0526fcf", "text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.", "title": "" }, { "docid": "986cb4e0129b50c13c46a57d04e22c0d", "text": "As online social networking sites become more and more popular, they have also attracted the attentions of the spammers. In this paper, Twitter, a popular micro-blogging service, is studied as an example of spam bots detection in online social networking sites. A machine learning approach is proposed to distinguish the spam bots from normal ones. To facilitate the spam bots detection, three graph-based features, such as the number of friends and the number of followers, are extracted to explore the unique follower and friend relationships among users on Twitter. Three content-based features are also extracted from user’s most recent 20 tweets. A real data set is collected from Twitter’s public available information using two different methods. Evaluation experiments show that the detection system is efficient and accurate to identify spam bots in Twitter.", "title": "" }, { "docid": "4c877ad8e2f8393526514b12ff992ca0", "text": "The squared-field-derivative method for calculating eddy-current (proximity-effect) losses in round-wire or litz-wire transformer and inductor windings is derived. The method is capable of analyzing losses due to two-dimensional and three-dimensional field effects in multiple windings with arbitrary waveforms in each winding. It uses a simple set of numerical magnetostatic field calculations, which require orders of magnitude less computation time than numerical eddy-current solutions, to derive a frequency-independent matrix describing the transformer or inductor. This is combined with a second, independently calculated matrix, based on derivatives of winding currents, to compute total ac loss. Experiments confirm the accuracy of the method.", "title": "" }, { "docid": "bd907783a4d35277dfb1c0d184965de1", "text": "Discrimination in decision making is prohibited on many attributes (religion, gender, etc…), but often present in historical decisions. Use of such discriminatory historical decision making as training data can perpetuate discrimination, even if the protected attributes are not directly present in the data. This work focuses on discovering discrimination in instances and preventing discrimination in classification. First, we propose a discrimination discovery method based on modeling the probability distribution of a class using Bayesian networks. 
This measures the effect of a protected attribute (e.g., gender) in a subset of the dataset using the estimated probability distribution (via a Bayesian network). Second, we propose a classification method that corrects for the discovered discrimination without using protected attributes in the decision process. We evaluate the discrimination discovery and discrimination prevention approaches on two different datasets. The empirical results show that a substantial amount of discrimination identified in instances is prevented in future decisions.", "title": "" } ]
scidocsrr
904ef19f551996b154bbaa29ef07a32d
A Practical and Highly Optimized Convolutional Neural Network for Classifying Traffic Signs in Real-Time
[ { "docid": "4902f8f8c03e5c0ed0d60d8be7c7060b", "text": "Traffic sign classification is an important function for driver assistance systems. In this paper, we propose a hierarchical method for traffic sign classification. There are two hierarchies in the method: the first one classifies traffic signs into several super classes, while the second one further classifies the signs within their super classes and provides the final results. Two perspective adjustment methods are proposed and performed before the second hierarchy, which significantly improves the classification accuracy. Experimental results show that the proposed method gets an accuracy of 99.52% on the German Traffic Sign Recognition Benchmark (GTSRB), which outperforms the state-of-the-art method. In addition, it takes about 40 ms to process one image, making it suitable for realtime applications.", "title": "" }, { "docid": "ab47d6b0ae971a5cf0a24f1934fbee63", "text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.", "title": "" } ]
[ { "docid": "80618ed72708bdda8d60a0876158db35", "text": "With the ever increasing application of Convolutional Neural Networks to customer products the need emerges for models to efficiently run on embedded, mobile hardware. Slimmer models have therefore become a hot research topic with various approaches which vary from binary networks to revised convolution layers. We offer our contribution to the latter and propose a novel convolution block which significantly reduces the computational burden while surpassing the current state-of-the-art. Our model, dubbed EffNet, is optimised for models which are slim to begin with and is created to tackle issues in existing models such as MobileNet and ShuffleNet.", "title": "" }, { "docid": "32417703b8291a5cdcc3c9eaabbdb99c", "text": "Purpose – The aim of this paper is to identify the quality determinants for education services provided by higher education institutions (HEIs) in Greece and to measure their relative importance from the students’ points of view. Design/mthodology/approach – A multi-criteria decision-making methodology was used for assessing the relative importance of quality determinants that affect student satisfaction. More specifically, the analytical hierarchical process (AHP) was used in order to measure the relative weight of each quality factor. Findings – The relative weights of the factors that contribute to the quality of educational services as it is perceived by students was measured. Research limitations/implications – The research is based on the questionnaire of the Hellenic Quality Assurance Agency for Higher Education. This implies that the measured weights are related mainly to questions posed in this questionnaire. However, the applied method (AHP) can be used to assess different quality determinants. Practical implications – The outcome of this study can be used in order to quantify internal quality assessment of HEIs. More specifically, the outcome can be directly used by HEIs for assessing quality as perceived by students. Originality/value – The paper attempts to develop insights into comparative evaluations of quality determinants as they are perceived by students.", "title": "" }, { "docid": "a72932cd98f425eafc19b9786da4319d", "text": "Recommender systems are changing from novelties used by a few E-commerce sites, to serious business tools that are re-shaping the world of E-commerce. Many of the largest commerce Web sites are already using recommender systems to help their customers find products to purchase. A recommender system learns from a customer and recommends products that she will find most valuable from among the available products. In this paper we present an explanation of how recommender systems help E-commerce sites increase sales, and analyze six sites that use recommender systems including several sites that use more than one recommender system. Based on the examples, we create a taxonomy of recommender systems, including the interfaces they present to customers, the technologies used to create the recommendations, and the inputs they need from customers. We conclude with ideas for new applications of recommender systems to E-commerce.", "title": "" }, { "docid": "1729a7840399b27cedd538a22621f5e0", "text": "BACKGROUND\nTenidap is a liposoluble non-steroidal anti-inflammatory drug that is easily distributed in the central nervous system and also inhibits the production and activity of cyclooxygenase-2 (COX-2) and cytokines in vitro. 
This study aimed to evaluate the neuroprotective effect of tenidap in a pilocarpine rat model of temporal lobe epilepsy (TLE).\n\n\nMETHODS\nTenidap was administered daily at 10 mg/kg for 10 days following pilocarpine-induced status epilepticus (SE) in male Wistar rats after which prolonged generalized seizures resulted in TLE. After tenidap treatment, spontaneous recurrent seizures (SRSs) were recorded by video monitoring (for 7 hours per day for 14 days). The frequency and severity of the SRSs were observed. Histological and immunocytochemical analyses were used to evaluate the neuroprotective effect of tenidap and detect COX-2 expression, which may be associated with neuronal death.\n\n\nRESULTS\nThere were 46.88 ± 10.70 survival neurons in tenidap-SE group, while there were 27.60 ± 5.18 survival neurons in saline-SE group at -2.4 mm field in the CA3 area. There were 37.75 ± 8.78 survival neurons in tenidap-SE group, while there were 33.40 ± 8.14 survival neurons in saline-SE group at -2.4 mm field in the CA1 area. Tenidap treatment significantly reduced neuronal damage in the CA3 area (P < 0.05) and slightly reduced damage in the CA1 area. Tenidap markedly inhibited COX-2 expression in the hippocampus, especially in the CA3 area.\n\n\nCONCLUSION\nTenidap conferred neuroprotection to the CA3 area in a pilocarpine-induced rat model of TLE by inhibiting COX-2 expression.", "title": "" }, { "docid": "7a612161017a69e49370a4eef3c54d38", "text": "We report that human walk patterns contain statistically similar features observed in Levy walks. These features include heavy-tail flight and pause-time distributions and the super-diffusive nature of mobility. Human walks are not random walks, but it is surprising that the patterns of human walks and Levy walks contain some statistical similarity. Our study is based on 226 daily GPS traces collected from 101 volunteers in five different outdoor sites. The heavy-tail flight distribution of human mobility induces the super-diffusivity of travel, but up to 30 min to 1 h due to the boundary effect of people's daily movement, which is caused by the tendency of people to move within a predefined (also confined) area of daily activities. These tendencies are not captured in common mobility models such as random way point (RWP). To evaluate the impact of these tendencies on the performance of mobile networks, we construct a simple truncated Levy walk mobility (TLW) model that emulates the statistical features observed in our analysis and under which we measure the performance of routing protocols in delay-tolerant networks (DTNs) and mobile ad hoc networks (MANETs). The results indicate the following. Higher diffusivity induces shorter intercontact times in DTN and shorter path durations with higher success probability in MANET. The diffusivity of TLW is in between those of RWP and Brownian motion (BM). Therefore, the routing performance under RWP as commonly used in mobile network studies and tends to be overestimated for DTNs and underestimated for MANETs compared to the performance under TLW.", "title": "" }, { "docid": "2fb4fbd96c4da572ae008419b57458dd", "text": "A main puzzle of deep networks revolves around the apparent absence of overfitting intended as robustness of the expected error against overparametrization, despite the large capacity demonstrated by zero training error on randomly labeled data. 
In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or crossentropy loss) Hessian. The proposition depends on the qualitative theory of dynamical systems and is supported by numerical results. The result extends to deep nonlinear networks two key properties of gradient descent for linear networks, that have been recently recognized (1) to provide a form of implicit regularization: 1. For classification, which is the main application of today’s deep networks, there is asymptotic convergence to the maximum margin solution by minimization of loss functions such as the logistic, the cross entropy and the exp-loss . The maximum margin solution guarantees good classification error for “low noise” datasets. Importantly, this property holds independently of the initial conditions. Because of this property, our proposition guarantees a maximum margin solution also for deep nonlinear networks. 2. Gradient descent enforces a form of implicit regularization controlled by the number of iterations, and asymptotically converges to the minimum norm solution for appropriate initial conditions of gradient descent. This implies that there is usually an optimum early stopping that avoids overfitting of the expected risk. This property, valid for the square loss and many other loss functions, is relevant especially for regression. In the case of deep nonlinear networks the solution however is not expected to be strictly minimum norm, unlike the linear case. The robustness to overparametrization has suggestive implications for the robustness of the architecture of deep convolutional networks with respect to the curse of dimensionality.", "title": "" }, { "docid": "35715b4646108f2286a56b559258912e", "text": "Many congenital and acquired defects occur in the maxillofacial area. The buccal fat pad flap (BFP) is a simple and reliable flap for the treatment of many of these defects because of its rich blood supply and location, which is close to the location of various intraoral defects. In this article, we have reviewed BFP and the associated anatomical background, surgical techniques, and clinical applications. The surgical procedure is simple and has shown a high success rate in various clinical applications (approximately 90%), including the closure of oroantral fistula, correction of congenital defect, treatment of jaw bone necrosis, and reconstruction of tumor defects. The control of etiologic factors, size of defect, anatomical location of defect, and general condition of patient could influence the prognosis after grafting. In conclusion, BFP is a reliable flap that can be applied to various clinical situations.", "title": "" }, { "docid": "2386d2665487761df56c4d4858ac0da8", "text": "We explore how crowdworkers can be trained to tackle complex crowdsourcing tasks. We are particularly interested in training novice workers to perform well on solving tasks in situations where the space of strategies is large and workers need to discover and try different strategies to be successful. In a first experiment, we perform a comparison of five different training strategies. For complex web search challenges, we show that providing expert examples is an effective form of training, surpassing other forms of training in nearly all measures of interest. 
However, such training relies on access to domain expertise, which may be expensive or lacking. Therefore, in a second experiment we study the feasibility of training workers in the absence of domain expertise. We show that having workers validate the work of their peer workers can be even more effective than having them review expert examples if we only present solutions filtered by a threshold length. The results suggest that crowdsourced solutions of peer workers may be harnessed in an automated training pipeline.", "title": "" }, { "docid": "1ef82e0ef6860f66aadce8073617eb99", "text": "The emergence and availability of remote storage providers prompted work in the security community that allows a client to verify integrity and availability of the data she outsourced to an untrusted remote storage server at a relatively low cost. Most recent solutions to this problem allow the client to read and update (insert, modify, or delete) stored data blocks while trying to lower the overhead associated with verifying data integrity. In this work we develop a novel and efficient scheme whose computation and communication overhead is orders of magnitude lower than those of other state-of-the-art schemes. Our solution has a number of new features, such as natural support for operations on ranges of blocks and revision control. The performance guarantees that we achieve stem from a novel data structure, termed the balanced update tree, and from removing the need to verify update operations.", "title": "" }, { "docid": "0687e28b42ca1acff99dc4917b920127", "text": "Advanced Synchronization Facility (ASF) is an AMD64 hardware extension for lock-free data structures and transactional memory. It provides a speculative region that atomically executes speculative accesses in the region. Five new instructions are added to demarcate the region, use speculative accesses selectively, and control the speculative hardware context. Programmers can use speculative regions to build flexible multi-word atomic primitives with no additional software support by relying on the minimum guarantee of available ASF hardware resources for lock-free programming. Transactional programs with high-level TM language constructs can either be compiled directly to the ASF code or be linked to software TM systems that use ASF to accelerate transactional execution. In this paper we develop an out-of-order hardware design to implement ASF on a future AMD processor and evaluate it with an in-house simulator. The experimental results show that the combined use of the L1 cache and the LS unit is very helpful for the performance robustness of ASF-based lock-free data structures, and that the selective use of speculative accesses enables transactional programs to scale with limited ASF hardware resources.", "title": "" }, { "docid": "f82f8232d8457927b476bfa83972c189", "text": "This introduction presents the principles and fundamentals of the AICOL scientific initiative and in particular the main contributions of the current volume, underlining the interdisciplinary approach and the variety of adopted methodologies.", "title": "" }, { "docid": "e143eb298fff97f8f58cc52caa945640", "text": "Supervised domain adaptation—where a large generic corpus and a smaller in-domain corpus are both available for training—is a challenge for neural machine translation (NMT). Standard practice is to train a generic model and use it to initialize a second model, then continue training the second model on in-domain data to produce an in-domain model. 
We add an auxiliary term to the training objective during continued training that minimizes the cross entropy between the in-domain model’s output word distribution and that of the out-of-domain model to prevent the model’s output from differing too much from the original out-of-domain model. We perform experiments on EMEA (descriptions of medicines) and TED (rehearsed presentations), initialized from a general-domain (WMT) model. Our method shows improvements over standard continued training by up to 1.5 BLEU.", "title": "" }, { "docid": "74e3247514f6f6e6772a4b02aa57a6c7", "text": "Data mining has been applied in various areas because of its ability to rapidly analyze vast amounts of data. This study aims to build the Graduates Employment Model using a classification task in data mining, and to compare several data-mining approaches, namely the Bayesian method and the Tree method. The Bayesian method includes 5 algorithms: AODE, BayesNet, HNB, NaiveBayes, and WAODE. The Tree method includes 5 algorithms: BFTree, NBTree, REPTree, ID3, and C4.5. The experiment uses a classification task in WEKA, and we compare the results of each algorithm, where several classification models were generated. To validate the generated model, the experiments were conducted using real data collected from graduate profiles at Maejo University in Thailand. The model is intended to be used for predicting whether a graduate was employed, unemployed, or in an undetermined situation.", "title": "" }, { "docid": "107b95c3bb00c918c73d82dd678e46c0", "text": "Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).", "title": "" }, { "docid": "866c1e87076da5a94b9adeacb9091ea3", "text": "Training a support vector machine (SVM) is usually done by mapping the underlying optimization problem into a quadratic programming (QP) problem. Unfortunately, high-quality QP solvers are not readily available, which makes research into the area of SVMs difficult for those without a QP solver. Recently, the Sequential Minimal Optimization algorithm (SMO) was introduced [1, 2]. SMO reduces SVM training down to a series of smaller QP subproblems that have an analytical solution and, therefore, does not require a general QP solver. SMO has been shown to be very efficient for classification problems using linear SVMs and/or sparse data sets. 
This work shows how SMO can be generalized to handle regression problems.", "title": "" }, { "docid": "f2eded52dbe84fba54d1796aa8ed63a5", "text": "Buying airline tickets is a ubiquitous task in which it is difficult for humans to minimize cost due to insufficient information. Even with historical data available for inspection (a recent addition to some travel reservation websites), it is difficult to assess how purchase timing translates into changes in expected cost. To address this problem, we introduce an agent which is able to optimize purchase timing on behalf of customers. We provide results that demonstrate the method can perform much closer to the optimal purchase policy than existing decision-theoretic approaches for this domain.", "title": "" }, { "docid": "957073d854607640cc3ca2255efe7315", "text": "The mixed methods approach has emerged as a “third paradigm” for social research. It has developed a platform of ideas and practices that are credible and distinctive and that mark the approach out as a viable alternative to quantitative and qualitative paradigms. However, there are also a number of variations and inconsistencies within the mixed methods approach that should not be ignored. This article argues the need for a vision of a research paradigm that accommodates such variations and inconsistencies. It is argued that the use of “communities of practice” as the basis for such a research paradigm is (a) consistent with the pragmatist underpinnings of the mixed methods approach, (b) accommodates a level of diversity, and (c) has good potential for understanding the methodological choices made by those conducting mixed methods research.", "title": "" }, { "docid": "552d034d8414412bb38d5bdd9d8519bc", "text": "The purpose of this case study was to describe compassion fatigue using one nurse's experience as an example and to present the process of Personal Reflective Debrief as an intervention to prevent compassion fatigue in emergency department (ED) nurses. Debriefing after adverse outcomes using a structured model has been used in health care as a nonthreatening and relatively low-cost way to discuss unanticipated outcomes, identify opportunities for improvement, and heal as a group. There are many methods of debrief tailored to specific timing around events, specific populations of health care workers, and amount of time for debriefing. Debrief with personal and group reflection will help develop insights that nurses may need to understand their own emotions and experiences, as well as to develop knowledge that can be used in subsequent situations. Regular engagement in a proactive scheduled Personal Reflective Debrief has been identified as a method of promoting resiliency in an environment where the realities of emergency nursing make compassion fatigue an imminent concern. Nurses working in the ED normally experience some level of stress because of high acuity patients and high patient volume; yet, repeated exposure puts them at risk for developing compassion fatigue. The Personal Reflective Debrief is one way emergency nurses can alleviate some of this caring-related stress and thereby become more resilient. Increasing nurses' resilience to workplace stress can counter compassion fatigue. The key is to provide planned, proactive resources to positively improve resiliency.", "title": "" }, { "docid": "6eff790c76e7eb7016eef6d306a0dde0", "text": "
Patients are central to healthcare delivery, yet all too often their perspectives and input have not been considered by providers. 2 This is beginning to change rapidly and is having a major impact across a range of dimensions. Patients are becoming more engaged in their care and patient-centred healthcare has emerged as a major domain of quality. At the same time, social media in particular and the internet more broadly are widely recognised as having produced huge effects across societies. For example, few would have predicted the Arab Spring, yet it was clearly enabled by media such as Facebook and Twitter. Now these technologies are beginning to pervade the healthcare space, just as they have so many others. But what will their effects be? These three domains—patient-centred healthcare, social media and the internet— are beginning to come together, with powerful and unpredictable consequences. We believe that they have the potential to create a major shift in how patients and healthcare organisations connect, in effect, the ‘perfect storm’, a phrase that has been used to describe a situation in which a rare combination of circumstances result in an event of unusual magnitude creating the potential for non-linear change. Historically, patients have paid relatively little attention to quality, safety and the experiences large groups of other patients have had, and have made choices about where to get healthcare based largely on factors like reputation, the recommendations of a friend or proximity. Part of the reason for this was that information about quality or the opinions of others about their care was hard to access before the internet. Today, patients appear to be becoming more engaged with their care in general, and one of the many results is that they are increasingly using the internet to share and rate their experiences of health care. They are also using the internet to connect with others having similar illnesses, to share experiences, and beginning to manage their illnesses by leveraging these technologies. While it is not yet clear what impact patients’ use of the internet and social media will have on healthcare, they will definitely have a major effect. Healthcare organisations have generally been laggards in this space—they need to start thinking about how they will use the internet in a variety of ways, with specific examples being leveraging the growing number of patients that are using the internet to describe their experiences of healthcare and how they can incorporate patient’s feedback via the internet into the organisational quality improvement process.", "title": "" }, { "docid": "5e590d59150a4dd26add66491cbd9409", "text": "Recent work has shown that temporally extended actions (options) can be learned fully end-to-end as opposed to being specified in advance. While the problem of how to learn options is increasingly well understood, the question of what good options should be has remained elusive. We formulate our answer to what good options should be in the bounded rationality framework (Simon, 1957) through the notion of deliberation cost. We then derive practical gradient-based learning algorithms to implement this objective. Our results in the Arcade Learning Environment (ALE) show increased performance and interpretability.", "title": "" } ]
scidocsrr
b8b5fc3ff34f5c245da9507dd850f46c
An end-to-end workflow for engineering of biological networks from high-level specifications.
[ { "docid": "9c1e518c80dfbf201291923c9c55f1fd", "text": "Computation underlies the organization of cells into higher-order structures, for example during development or the spatial association of bacteria in a biofilm. Each cell performs a simple computational operation, but when combined with cell–cell communication, intricate patterns emerge. Here we study this process by combining a simple genetic circuit with quorum sensing to produce more complex computations in space. We construct a simple NOR logic gate in Escherichia coli by arranging two tandem promoters that function as inputs to drive the transcription of a repressor. The repressor inactivates a promoter that serves as the output. Individual colonies of E. coli carry the same NOR gate, but the inputs and outputs are wired to different orthogonal quorum-sensing ‘sender’ and ‘receiver’ devices. The quorum molecules form the wires between gates. By arranging the colonies in different spatial configurations, all possible two-input gates are produced, including the difficult XOR and EQUALS functions. The response is strong and robust, with 5- to >300-fold changes between the ‘on’ and ‘off’ states. This work helps elucidate the design rules by which simple logic can be harnessed to produce diverse and complex calculations by rewiring communication between cells.", "title": "" }, { "docid": "cc17b3548d2224b15090ead8c398f808", "text": "Malaria is a global health problem that threatens 300–500 million people and kills more than one million people annually. Disease control is hampered by the occurrence of multi-drug-resistant strains of the malaria parasite Plasmodium falciparum. Synthetic antimalarial drugs and malarial vaccines are currently being developed, but their efficacy against malaria awaits rigorous clinical testing. Artemisinin, a sesquiterpene lactone endoperoxide extracted from Artemisia annua L (family Asteraceae; commonly known as sweet wormwood), is highly effective against multi-drug-resistant Plasmodium spp., but is in short supply and unaffordable to most malaria sufferers. Although total synthesis of artemisinin is difficult and costly, the semi-synthesis of artemisinin or any derivative from microbially sourced artemisinic acid, its immediate precursor, could be a cost-effective, environmentally friendly, high-quality and reliable source of artemisinin. Here we report the engineering of Saccharomyces cerevisiae to produce high titres (up to 100 mg l-1) of artemisinic acid using an engineered mevalonate pathway, amorphadiene synthase, and a novel cytochrome P450 monooxygenase (CYP71AV1) from A. annua that performs a three-step oxidation of amorpha-4,11-diene to artemisinic acid. The synthesized artemisinic acid is transported out and retained on the outside of the engineered yeast, meaning that a simple and inexpensive purification process can be used to obtain the desired product. Although the engineered yeast is already capable of producing artemisinic acid at a significantly higher specific productivity than A. annua, yield optimization and industrial scale-up will be required to raise artemisinic acid production to a level high enough to reduce artemisinin combination therapies to significantly below their current prices.", "title": "" } ]
[ { "docid": "1f1dec890f0bcb25d240b7b7f576593c", "text": "Existing keyphrase generation studies suffer from the problems of generating duplicate phrases and deficient evaluation based on a fixed number of predicted phrases. We propose a recurrent generative model that generates multiple keyphrases sequentially from a text, with specific modules that promote generation diversity. We further propose two new metrics that consider a variable number of phrases. With both existing and proposed evaluation setups, our model demonstrates superior performance to baselines on three types of keyphrase generation datasets, including two newly introduced in this work: STACKEXCHANGE and TEXTWORLD ACG. In contrast to previous keyphrase generation approaches, our model generates sets of diverse keyphrases of a variable number.", "title": "" }, { "docid": "e8ebec3b64e05cad3ab4c9b3d2bfa191", "text": "Multidimensional databases have recently gained widespread acceptance in the commercial world for supporting on-line analytical processing (OLAP) applications. We propose a hypercube-based data model and a few algebraic operations that provide semantic foundation to multidimensional databases and extend their current functionality. The distinguishing feature of the proposed model is the symmetric treatment not only of all dimensions but also measures. The model also is very exible in that it provides support for multiple hierarchies along each dimension and support for adhoc aggregates. The proposed operators are composable, reorderable, and closed in application. These operators are also minimal in the sense that none can be expressed in terms of others nor can any one be dropped without sacri cing functionality. They make possible the declarative speci cation and optimization of multidimensional database queries that are currently speci ed operationally. The operators have been designed to be translated to SQL and can be implemented either on top of a relational database system or within a special purpose multidimensional database engine. In e ect, they provide an algebraic application programming interface (API) that allows the separation of the frontend from the backend. Finally, the proposed model provides a framework in which to study multidimensional databases and opens several new research problems. Current Address: Oracle Corporation, Redwood City, California. Current Address: University of California, Berkeley, California.", "title": "" }, { "docid": "32f48e3e7997a912dbd3b33c283e596f", "text": "In the last couple of years, the interest of Mobile IT has arisen tremendously and future directions point towards an explosive expansive area. The objective with this paper is to explore how mobile eCommerce services map customers' requirements in geographical bound retailing. This is done through WineGuide, a geographically bound recommendation service for wine and food adapted to mobile phones. The service addresses well-known problems within the area of shopping, by: (1) offering expert recommendations; (2) notifying the user where products are available; (3) distributing information in appropriate situations; (4) letting the user search for products. The findings of the study indicate that the full potential of mobile eCommerce services can only be established through a complete eCommerce transaction implementation. 
The mobile phone gets the role of a remote controller, where products are ordered, paid for and home delivered through a few pressings on the", "title": "" }, { "docid": "8afd1ab45198e9960e6a047091a2def8", "text": "We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.", "title": "" }, { "docid": "df155f17d4d810779ee58bafcaab6f7b", "text": "OBJECTIVE\nTo explore the types, prevalence and associated variables of cyberbullying among students with intellectual and developmental disability attending special education settings.\n\n\nMETHODS\nStudents (n = 114) with intellectual and developmental disability who were between 12-19 years of age completed a questionnaire containing questions related to bullying and victimization via the internet and cellphones. Other questions concerned sociodemographic characteristics (IQ, age, gender, diagnosis), self-esteem and depressive feelings.\n\n\nRESULTS\nBetween 4-9% of students reported bullying or victimization of bullying at least once a week. Significant associations were found between cyberbullying and IQ, frequency of computer usage and self-esteem and depressive feelings. No associations were found between cyberbullying and age and gender.\n\n\nCONCLUSIONS\nCyberbullying is prevalent among students with intellectual and developmental disability in special education settings. Programmes should be developed to deal with this issue in which students, teachers and parents work together.", "title": "" }, { "docid": "cb6e2fd0082e16549e02db6e2d7fbef7", "text": "E-Health clouds are gaining increasing popularity by facilitating the storage and sharing of big data in healthcare. However, such an adoption also brings about a series of challenges, especially, how to ensure the security and privacy of highly sensitive health data. Among them, one of the major issues is authentication, which ensures that sensitive medical data in the cloud are not available to illegal users. Three-factor authentication combining password, smart card and biometrics perfectly matches this requirement by providing high security strength. Recently, Wu et al. proposed a three-factor authentication protocol based on elliptic curve cryptosystem which attempts to fulfill three-factor security and resist various existing attacks, providing many advantages over existing schemes. However, we first show that their scheme is susceptible to user impersonation attack in the registration phase. 
In addition, their scheme is also vulnerable to offline password guessing attack in the login and password change phase, under the condition that the mobile device is lost or stolen. Furthermore, it fails to provide user revocation when the mobile device is lost or stolen. To remedy these flaws, we put forward a robust three-factor authentication protocol, which not only guards various known attacks, but also provides more desired security properties. We demonstrate that our scheme provides mutual authentication using the Burrows–Abadi–Needham logic.", "title": "" }, { "docid": "7b5d610a7e7ff3f889b77a9a012d1bd2", "text": "Our paper deals with the Software Defined Networking which is in extensive use in present times due to its programmability that helps in initializing, controlling and managing the network dynamics. It allows the network administrators to work on centralized network configuration and improve data center network efficiency. SDN is basically becoming popular for replacing the static architecture of traditional networks and limited computing and storage of the modern computing environments like data centers. Operations are performed by the controllers with the static switches. Due to imbalance caused due to dynamic traffic controllers are underutilized. On the other hand controllers which are overloaded may cause switches to suffer time delays. Wireless networks involve no cabling, therefore it is cost-effective, efficient, easy-installable, manageable and adaptable. We present how SDN makes it easy to achieve end point security by checking the device's status. Local agents collect device information and send to cloud service to check for vulnerabilities. The results of those checks are sent to the SDN Controller through published Application Program Interfaces (APIs). The SDN Controller instructs Open Flow switches to direct vulnerable devices to a Quarantine Network, thus detecting suspicious traffic. The implementation is done using the data network mathematical model.", "title": "" }, { "docid": "044de981e34f0180accfb799063a7ec1", "text": "This paper proposes a novel hybrid full-bridge three-level LLC resonant converter. It integrates the advantages of the hybrid full-bridge three-level converter and the LLC resonant converter. It can operate not only under three-level mode but also under two-level mode, so it is very suitable for wide input voltage range application, such as fuel cell power system. The input current ripple and output filter can also be reduced. Three-level leg switches just sustain only half of the input voltage. ZCS is achieved for the rectifier diodes, and the voltage stress across the rectifier diodes can be minimized to the output voltage. The main switches can realize ZVS from zero to full load. A 200-400 V input, 360 V/4 A output prototype converter is built in our lab to verify the operation principle of the proposed converter", "title": "" }, { "docid": "60094e041c1be864ba8a636308b7ee12", "text": "This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Dialogue Diversity Corpus to retrain a chatbot system with human dialogue examples. A Java program to convert from dialog transcript to AIML format provides a basic implementation of corpusbased chatbot training.. 
We conclude that dialogue researchers should adopt clearer standards for transcription and markup format in dialogue corpora to be used in training a chatbot system more effectively.", "title": "" }, { "docid": "5680257be3ac330b19645017953f6fb4", "text": "Debugging consumes significant time and effort in any major software development project. Moreover, even after the root cause of a bug is identified, fixing the bug is non-trivial. Given this situation, automated program repair methods are of value. In this paper, we present an automated repair method based on symbolic execution, constraint solving and program synthesis. In our approach, the requirement on the repaired code to pass a given set of tests is formulated as a constraint. Such a constraint is then solved by iterating over a layered space of repair expressions, layered by the complexity of the repair code. We compare our method with recently proposed genetic programming based repair on SIR programs with seeded bugs, as well as fragments of GNU Coreutils with real bugs. On these subjects, our approach reports a higher success-rate than genetic programming based repair, and produces a repair faster.", "title": "" }, { "docid": "dcee61dad66f59b2450a3e154726d6b1", "text": "Mussels are marine organisms that have been mimicked due to their exceptional adhesive properties to all kind of surfaces, including rocks, under wet conditions. The proteins present on the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid from the catechol family that has been reported by their adhesive character. Therefore, we synthesized a mussel-inspired conjugated polymer, modifying the backbone of hyaluronic acid with dopamine by carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable to support cells and tissue regeneration; among others, multilayer systems allow the construction of hierarchical structures from nano- to macroscales. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made uniquely of chitosan and dopamine-modified hyaluronic acid (HA-DN). The electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems to be used for bone tissue engineering.", "title": "" }, { "docid": "13ecd4155910512bf6159710f572e0c1", "text": "Purpose – The purpose of this paper is to present the design and analysis of a robotic finger mechanism for robust industrial applications. Design/methodology/approach – The resultant design is a compact rigid link finger, which is adaptive to different shapes and sizes providing necessary grasping features. A number of such fingers can be assembled to function as a special purpose end effector. Findings – The mechanism removes a number of significant problems usually experienced with tendon-based designs. The finger actuation mechanism forms a compact and positive drive unit within the end effector’s body using solid mechanical linkages and integrated actuators. 
Practical implications – The paper discusses the design issues associated with a limited number of actuators to operate in a constrained environment and presents various considerations necessary to ensure safe and reliable operations. Originality/value – The design is original in existence and developed for special purpose handling applications that offers a strong and reliable system where space and safety is of prime concern.", "title": "" }, { "docid": "f2f43e7087d3506a848849b64b062954", "text": "We present an Adaptive User Interface (AUI) for online courses in higher education as a method for solving the challenges posed by the different knowledge levels in a heterogeneous group of students. The scenario described in this paper is an online beginners' course in Mathematics which is extended by an adaptive course layout to better fit the needs of every individual student. The course offers an entry-level test to check each student's prior knowledge and skills. The results are used to automatically determine which parts of the course are relevant for the student and which ones can be hidden, based on parameters set by the course teachers. Initial results are promising; the new adaptive learning platform in mathematics is leading to higher student satisfaction and better performance.", "title": "" }, { "docid": "c9135f79c4516c73e7ba924e00d51218", "text": "The experimental conditions by which electromagnetic signals (EMS) of low frequency can be emitted by diluted aqueous solutions of some bacterial and viral DNAs are described. That the recorded EMS and nanostructures induced in water carry the DNA information (sequence) is shown by retrieval of that same DNA by classical PCR amplification using the TAQ polymerase, including both primers and nucleotides. Moreover, such a transduction process has also been observed in living human cells exposed to EMS irradiation. These experiments suggest that coherent long-range molecular interaction must be present in water to observe the above-mentioned features. The quantum field theory analysis of the phenomenon is presented in this article.", "title": "" }, { "docid": "6a5f5275a8f262947a982bb3ace45cd6", "text": "A multibeam pillbox antenna system incorporating monopulse phase comparison technique is proposed in the 24-GHz band. This low-profile antenna architecture combines the scanning capabilities of pillbox configurations and enhanced resolution of two-quadrant monopulse technique; this approach avoids mechanical orientation of the antenna system for tracking applications. The radiating panel consists of two subarrays of 23 slotted waveguides, with 8 slots per waveguide. The beam is scanned in E-plane over a field of view of ±40°, and sum/difference patterns are generated in H-plane. Around the design frequency, f = 24.15 GHz, the antenna gain varies between 25 and 21.5 dBi for the central and extreme beams, respectively. The measured null depth is better than -16 dB for all difference beam patterns. The antenna bandwidth for VSWR <; 2 (Sii <; -10 dB) and isolation better than 10 dB (Sij <; -10 dB) is equal to 4.5%.", "title": "" }, { "docid": "8a9076c9212442e3f52b828ad96f7fe7", "text": "The building industry uses great quantities of raw materials that also involve high energy consumption. Choosing materials with high content in embodied energy entails an initial high level of energy consumption in the building production stage but also determines future energy consumption in order to fulfil heating, ventilation and air conditioning demands. 
This paper presents the results of an LCA study comparing the most commonly used building materials with some eco-materials using three different impact categories. The aim is to deepen the knowledge of energy and environmental specifications of building materials, analysing their possibilities for improvement and providing guidelines for materials selection in the eco-design of new buildings and rehabilitation of existing buildings. The study proves that the impact of construction products can be significantly reduced by promoting the use of the best techniques available and eco-innovation in production plants, substituting the use of finite natural resources for waste generated in other production processes, preferably available locally. This would stimulate competition between manufacturers to launch more eco-efficient products and encourage the use of the Environmental Product Declarations. This paper has been developed within the framework of the “LoRe-LCA Project” co-financed by the European Commission’s Intelligent Energy for Europe Program and the “PSE CICLOPE Project” co-financed by the Spanish Ministry of Science and Technology and the European Regional Development Fund. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "dfba2b7750fc705f6fb0f87e4ff3a51a", "text": "The Internet is a technological development that has the potential to change not only the way society retains and accesses knowledge but also to transform and restructure traditional models of higher education, particularly the delivery and interaction in and with course materials and associated resources. Utilising the Internet to deliver eLearning initiatives has created expectations both in the business market and in higher education institutions. Indeed, eLearning has enabled universities to expand on their current geographical reach, to capitalise on new prospective students and to establish themselves as global educational providers. This paper examines the issues surrounding the implementation of eLearning into higher education, including the structure and delivery of higher education, the implications to both students and lecturers and the global impact on society. This journal article is available in Journal of University Teaching & Learning Practice: http://ro.uow.edu.au/jutlp/vol2/iss1/3", "title": "" }, { "docid": "5f01cb5c34ac9182f6485f70d19101db", "text": "Gastroesophageal reflux is a condition in which the acidified liquid content of the stomach backs up into the esophagus. The antiacid magaldrate and prokinetic domperidone are two drugs clinically used for the treatment of gastroesophageal reflux symptoms. However, the evidence of a superior effectiveness of this combination in comparison with individual drugs is lacking. A double-blind, randomized and comparative clinical trial study was designed to characterize the efficacy and safety of a fixed dose combination of magaldrate (800 mg)/domperidone (10 mg) against domperidone alone (10 mg), in patients with gastroesophageal reflux symptoms. One hundred patients with gastroesophageal reflux diagnosed by Carlsson scale were randomized to receive a chewable tablet of a fixed dose of magaldrate/domperidone combination or domperidone alone four times each day during a month. 
Magaldrate/domperidone combination showed superior efficacy in decreasing global esophageal (pyrosis, regurgitation, dysphagia, hiccup, gastroparesis, sialorrhea, globus pharyngeus and nausea) and extraesophageal (chronic cough, hoarseness, asthmatiform syndrome, laryngitis, pharyngitis, halitosis and chest pain) reflux symptoms compared with domperidone alone. In addition, the magaldrate/domperidone combination improved the quality of life of patients with gastroesophageal reflux in a statistically significant manner with respect to monotherapy, and more patients perceived the combination as a better treatment. Both treatments were well tolerated. Data suggest that the oral magaldrate/domperidone mixture could be a better option for the treatment of gastroesophageal reflux symptoms than domperidone alone.", "title": "" } ]
scidocsrr
7ee214c674008688f8a5bacf44fc4c2e
The Stressometer: A Simple, Valid, and Responsive Measure of Psychological Stress in Inflammatory Bowel Disease Patients.
[ { "docid": "43b598c714eb462a4549eacaf59db60b", "text": "OBJECTIVES\nTo test the construct validity of the short-form version of the Depression anxiety and stress scale (DASS-21), and in particular, to assess whether stress as indexed by this measure is synonymous with negative affectivity (NA) or whether it represents a related, but distinct, construct. To provide normative data for the general adult population.\n\n\nDESIGN\nCross-sectional, correlational and confirmatory factor analysis (CFA).\n\n\nMETHODS\nThe DASS-21 was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,794). Competing models of the latent structure of the DASS-21 were evaluated using CFA.\n\n\nRESULTS\nThe model with optimal fit (RCFI = 0.94) had a quadripartite structure, and consisted of a general factor of psychological distress plus orthogonal specific factors of depression, anxiety, and stress. This model was a significantly better fit than a competing model that tested the possibility that the Stress scale simply measures NA.\n\n\nCONCLUSIONS\nThe DASS-21 subscales can validly be used to measure the dimensions of depression, anxiety, and stress. However, each of these subscales also taps a more general dimension of psychological distress or NA. The utility of the measure is enhanced by the provision of normative data based on a large sample.", "title": "" } ]
[ { "docid": "7954b2262edac9a45cf3e13fc50e9aa2", "text": "Recent research has focused heavily on the practicality and feasibility of alternative architectures for supporting continuous auditing. In this paper, we explore the alternative architectures for continuous auditing that have been proposed in both the research and practice environments. We blend a focus on the practical realities of the current technological options and ERP structures with the emerging theory and research on continuous assurance models. The focus is on identifying the strengths and weaknesses of each architectural form as a basis for forming a research agenda that could allow researchers to contribute to the future evolution of both ERP system designs and auditor implementation strategies. There are substantial implications and insights that should be of interest to both researchers and practitioners interested in exploring continuous audit feasibility, capability, and organizational impact.", "title": "" }, { "docid": "790d30535edadb8e6318b6907b8553f3", "text": "Learning to anticipate future events on the basis of past experience with the consequences of one's own behavior (operant conditioning) is a simple form of learning that humans share with most other animals, including invertebrates. Three model organisms have recently made significant contributions towards a mechanistic model of operant conditioning, because of their special technical advantages. Research using the fruit fly Drosophila melanogaster implicated the ignorant gene in operant conditioning in the heat-box, research on the sea slug Aplysia californica contributed a cellular mechanism of behavior selection at a convergence point of operant behavior and reward, and research on the pond snail Lymnaea stagnalis elucidated the role of a behavior-initiating neuron in operant conditioning. These insights demonstrate the usefulness of a variety of invertebrate model systems to complement and stimulate research in vertebrates.", "title": "" }, { "docid": "51f7c27a999e0cb761825cbb49e0b830", "text": "BACKGROUND\nCurrently, there is little information available on the treatment and outcome of intraoperative periprosthetic humeral fractures that occur during shoulder arthroplasty. The purpose of this study was to report on the incidence, treatment, and outcome of, as well as the risk factors for, intraoperative periprosthetic humeral fractures.\n\n\nMETHODS\nBetween 1980 and 2002, forty-five intraoperative periprosthetic humeral fractures occurred during shoulder arthroplasty at our institution. Twenty-eight fractures occurred during primary total shoulder arthroplasty, three occurred during primary hemiarthroplasty, and fourteen occurred during revision arthroplasty. Nineteen fractures involved the greater tuberosity, sixteen involved the humeral shaft, six involved the metaphysis, three involved the greater tuberosity and the humeral shaft, and one involved both the greater and lesser tuberosities. All patients were followed for a minimum of two years. At the time of the latest follow-up, outcomes were assessed, radiographs were examined, and relative risks were calculated.\n\n\nRESULTS\nOver the twenty-two-year study period, the rate of intraoperative humeral fractures at our institution was 1.5%. All fractures healed at a mean of seventeen weeks. In the primary arthroplasty group (thirty-one patients), range of motion and pain scores improved significantly (p < 0.05) at the time of follow-up. 
In the revision arthroplasty group (fourteen patients), range of motion remained unchanged whereas pain scores improved significantly (p < 0.005). Transient nerve injuries occurred in six patients. Four fractures displaced postoperatively and were then treated nonoperatively; all four healed. Significant relative risks for intraoperative fracture were female sex, revision surgery, and press-fit implants (p < 0.05).\n\n\nCONCLUSIONS\nThe data from the present study suggest that although intraoperative humeral fractures are associated with a high rate of healing, there was a substantial rate of associated complications, including transient nerve injuries and fracture displacement. Significant risk factors for intraoperative fractures include female sex, revision surgery, and press-fit humeral implants.", "title": "" }, { "docid": "7835bb8463eff6a7fbeec256068e1f09", "text": "Efforts to incorporate intelligence into the user interface have been underway for decades, but the commercial impact of this work has not lived up to early expectations, and is not immediately apparent. This situation appears to be changing. However, so far the most interesting intelligent user interfaces (IUIS) have tended to use minimal or simplistic AI. In this panel we consider whether more or less AI is the key to the development of compelling IUIS. The panelists will present examples of compelling IUIS that use a selection of AI techniques, mostly simple, but some complex. Each panelist will then comment on the merits of different kinds and quantities of AI in the development of pragmatic interface technology.", "title": "" }, { "docid": "421cb7fb80371c835a5d314455fb077c", "text": "This paper explains, in an introductory fashion, the method of specifying the correct behavior of a program by the use of input/output assertions and describes one method for showing that the program is correct with respect to those assertions. An initial assertion characterizes conditions expected to be true upon entry to the program and a final assertion characterizes conditions expected to be true upon exit from the program. When a program contains no branches, a technique known as symbolic execution can be used to show that the truth of the initial assertion upon entry guarantees the truth of the final assertion upon exit. More generally, for a program with branches one can define a symbolic execution tree. If there is an upper bound on the number of times each loop in such a program may be executed, a proof of correctness can be given by a simple traversal of the (finite) symbolic execution tree. However, for most programs, no fixed bound on the number of times each loop is executed exists and the corresponding symbolic execution trees are infinite. In order to prove the correctness of such programs, a more general assertion structure must be provided. The symbolic execution tree of such programs must be traversed inductively rather than explicitly. This leads naturally to the use of additional assertions which are called \"inductive assertions.\"", "title": "" }, { "docid": "1dbb04e806b1fd2a8be99633807d9f4d", "text": "Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. 
We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.", "title": "" }, { "docid": "22724325cdadd29a0d41498a44ab7aca", "text": "INTRODUCTION: Traumatic loss of teeth in the esthetic zone commonly results in significant loss of buccal bone. This leads to reduced esthetics, problems with phonetics and reduction in function. Single tooth replacement has become an indication for implant-based restoration. In case of lack of bone volume the need of surgical reconstruction of the alveolar ridge is warranted. Several bone grafting techniques have been described to ensure sufficient bone volume for implantation. OBJECTIVES: Evaluation of using the zygomatic buttress as an intraoral bone harvesting donor site for pre-implant grafting. MATERIALS AND METHODS: Twelve patients were selected with limited alveolar ridge defect in the esthetic zone that needs bone grafting procedure prior to dental implants. Patients were treated using a 2-stage technique where bone blocks harvested from the zygomatic buttress region were placed as onlay grafts and fixed with osteosynthesis micro screws. After 4 months of healing, screws were removed for implant placement RESULTS: Harvesting of 12 bone blocks were performed for all patients indicating a success rate of 100% for the zygomatic buttress area as a donor site. Final rehabilitation with dental implants was possible in 11 of 12 patients, yielding a success rate of 91.6%. Three patients (25%) had postoperative complications at the donor site and one patient (8.3%) at the recipient site. The mean value of bone width pre-operatively was 3.64 ± .48 mm which increased to 5.47 ± .57 mm post-operatively, the increase in mean value of bone width was statistically significant (p < 0.001). CONCLUSIONS: Harvesting of intraoral bone blocks from the zygomatic buttress region is an effective and safe method to treat localized alveolar ridge defect before implant placement.", "title": "" }, { "docid": "06d05d4cbfd443d45993d6cc98ab22cb", "text": "Genetic deficiency of ectodysplasin A (EDA) causes X-linked hypohidrotic ectodermal dysplasia (XLHED), in which the development of sweat glands is irreversibly impaired, an condition that can lead to life-threatening hyperthermia. We observed normal development of mouse fetuses with Eda mutations after they had been exposed in utero to a recombinant protein that includes the receptor-binding domain of EDA. We administered this protein intraamniotically to two affected human twins at gestational weeks 26 and 31 and to a single affected human fetus at gestational week 26; the infants, born in week 33 (twins) and week 39 (singleton), were able to sweat normally, and XLHED-related illness had not developed by 14 to 22 months of age. 
(Funded by Edimer Pharmaceuticals and others.).", "title": "" }, { "docid": "c943d44e452c5cd5e027df814f8aac32", "text": "Three experiments tested the hypothesis that the social roles implied by specific contexts can attenuate or reverse the typical pattern of racial bias obtained on both controlled and automatic evaluation measures. Study 1 assessed evaluations of Black and Asian faces in contexts related to athlete or student roles. Study 2 compared evaluations of Black and White faces in 3 role-related contexts (prisoner, churchgoer, and factory worker). Study 3 manipulated role cues (lawyer or prisoner) within the same prison context. All 3 studies produced significant reversals of racial bias as a function of implied role on measures of both controlled and automatic evaluation. These results support the interpretation that differential evaluations based on Race x Role interactions provide one way that context can moderate both controlled and automatic racial bias.", "title": "" }, { "docid": "422ac6f062dc30a58b9dba2e666d076a", "text": "Formal specifications can help with program testing, optimization, refactoring, documentation, and, most importantly, debugging and repair. However, they are difficult to write manually, and automatic mining techniques suffer from 90-99 percent false positive rates. To address this problem, we propose to augment a temporal-property miner by incorporating code quality metrics. We measure code quality by extracting additional information from the software engineering process and using information from code that is more likely to be correct, as well as code that is less likely to be correct. When used as a preprocessing step for an existing specification miner, our technique identifies which input is most indicative of correct program behavior, which allows off-the-shelf techniques to learn the same number of specifications using only 45 percent of their original input. As a novel inference technique, our approach has few false positives in practice (63 percent when balancing precision and recall, 3 percent when focused on precision), while still finding useful specifications (e.g., those that find many bugs) on over 1.5 million lines of code.", "title": "" }, { "docid": "4e23abcd1746d23c54e36c51e0a59208", "text": "Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude higher computational and storage resources. One way to alleviate this difficulty is to focus the computations to informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal selfsimilarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, HOF, etc.), dictionary learning helps consider the saliency in a global setting (on the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence which can be used in a classification setting. 
Experiments on several benchmark datasets in video based action classification demonstrate that our approach performs competitively to the state of the art.", "title": "" }, { "docid": "bd125ed6f7d0c8759533343acbeb0da6", "text": "A new compact omnidirectional circularly polarized (CP) cylindrical dielectric resonator antenna (DRA) with a top-loaded modified Alford loop is investigated. Fed by an axial probe, the DRA is excited in its TM01δ-mode, which radiates like a vertically polarized electric monopole. The modified Alford loop comprises a central circular patch and four curved branches. It is placed on the top of the DRA and provides an equivalent horizontally polarized magnetic dipole mode. Omnidirectional CP fields can be obtained when the two orthogonally polarized fields are equal in amplitude but different in phase by 90°. This CP DRA is applied to the design of a two-port CP diversity DRA which provides omnidirectional and broadside radiation patterns. The broadside radiation pattern is obtained by making use of the broadside HEM12δ+ 1-mode of the DRA, which is excited by a balanced slot serially fed by a microstrip line. For demonstration, both the omnidirectional CP DRA and the diversity CP DRA were designed at ~ 2.4 GHz for WLAN applications. Their S-parameters, axial ratios, radiation patterns, antenna gains, and antenna efficiencies are studied. The envelope correlation is also found for the diversity design. Reasonable agreement between the simulated and measured results is observed.", "title": "" }, { "docid": "75b4640071754d331783d26020f9ac7a", "text": "Traditionally, positive emotions and thoughts, strengths, and the satisfaction of basic psychological needs for belonging, competence, and autonomy have been seen as the cornerstones of psychological health. Without disputing their importance, these foci fail to capture many of the fluctuating, conflicting forces that are readily apparent when people navigate the environment and social world. In this paper, we review literature to offer evidence for the prominence of psychological flexibility in understanding psychological health. Thus far, the importance of psychological flexibility has been obscured by the isolation and disconnection of research conducted on this topic. Psychological flexibility spans a wide range of human abilities to: recognize and adapt to various situational demands; shift mindsets or behavioral repertoires when these strategies compromise personal or social functioning; maintain balance among important life domains; and be aware, open, and committed to behaviors that are congruent with deeply held values. In many forms of psychopathology, these flexibility processes are absent. In hopes of creating a more coherent understanding, we synthesize work in emotion regulation, mindfulness and acceptance, social and personality psychology, and neuropsychology. Basic research findings provide insight into the nature, correlates, and consequences of psychological flexibility and applied research provides details on promising interventions. Throughout, we emphasize dynamic approaches that might capture this fluid construct in the real-world.", "title": "" }, { "docid": "c1cdc9bb29660e910ccead445bcc896d", "text": "This paper describes an efficient technique for computing a hierarchical representation of the objects contained in a complex 3D scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. 
Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MST. Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3D scenes are presented.", "title": "" }, { "docid": "5b31efe9dc8e79d975a488c2b9084aea", "text": "Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale - manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision over 80% on three diverse TV series. We demonstrate that only using subtitles provides good results on identifying characters in TV series and wish to encourage the community towards this problem.", "title": "" }, { "docid": "e0583afbdc609792ad947223006c851f", "text": "Orthogonal frequency-division multiplexing (OFDM) signal coding and system architecture were implemented to achieve radar and data communication functionalities. The resultant system is a software-defined unit, which can be used for range measurements, radar imaging, and data communications. Range reconstructions were performed for ranges up to 4 m using trihedral corner reflectors with approximately 203 m of radar cross section at the carrier frequency; range resolution of approximately 0.3 m was demonstrated. Synthetic aperture radar (SAR) image of a single corner reflector was obtained; SAR signal processing specific to OFDM signals is presented. Data communication tests were performed in radar setup, where the signal was reflected by the same target and decoded as communication data; bit error rate of was achieved at 57 Mb/s. The system shows good promise as a multifunctional software-defined sensor which can be used in radar sensor networks.", "title": "" }, { "docid": "18775f382c9daa44a59875ec1257c439", "text": "Research on software testing produces many innovative automated techniques, but because software testing is by necessity incomplete and approximate, any new technique faces the challenge of an empirical assessment. In the past, we have demonstrated scientific advance in automated unit test generation with the EVOSUITE tool by evaluating it on manually selected open-source projects or examples that represent a particular problem addressed by the underlying technique. However, demonstrating scientific advance is not necessarily the same as demonstrating practical value; even if EVOSUITE worked well on the software projects we selected for evaluation, it might not scale up to the complexity of real systems. 
Ideally, one would use large “real-world” software systems to minimize the threats to external validity when evaluating research tools. However, neither choosing such software systems nor applying research prototypes to them are trivial tasks.\n In this article we present the results of a large experiment in unit test generation using the EVOSUITE tool on 100 randomly chosen open-source projects, the 10 most popular open-source projects according to the SourceForge Web site, seven industrial projects, and 11 automatically generated software projects. The study confirms that EVOSUITE can achieve good levels of branch coverage (on average, 71% per class) in practice. However, the study also exemplifies how the choice of software systems for an empirical study can influence the results of the experiments, which can serve to inform researchers to make more conscious choices in the selection of software system subjects. Furthermore, our experiments demonstrate how practical limitations interfere with scientific advances: branch coverage on an unbiased sample is affected by predominant environmental dependencies. The surprisingly large effect of such practical engineering problems in unit testing will hopefully lead to a larger appreciation of work in this area, thus supporting transfer of knowledge from software testing research to practice.", "title": "" }, { "docid": "2d3b452d7a8cf8f29ac1896f14c43faa", "text": "Since the amount of information on the internet is growing rapidly, it is not easy for a user to find relevant information for his/her query. To tackle this issue, much attention has been paid to Automatic Document Summarization. The key point in any successful document summarizer is a good document representation. The traditional approaches based on word overlapping mostly fail to produce that kind of representation. Word embedding, distributed representation of words, has shown an excellent performance that allows words to match on semantic level. Naively concatenating word embeddings makes the common word dominant which in turn diminish the representation quality. In this paper, we employ word embeddings to improve the weighting schemes for calculating the input matrix of Latent Semantic Analysis method. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. The new weighting schemes are modified versions of the augment weight and the entropy frequency. The new schemes combine the strength of the traditional weighting schemes and word embedding. The proposed approach is experimentally evaluated on three well-known English datasets, DUC 2002, DUC 2004 and Multilingual 2015 Single-document Summarization for English. The proposed model performs comprehensively better compared to the state-of-the-art methods, by at least 1% ROUGE points, leading to a conclusion that it provides a better document representation and a better document summary as a result.", "title": "" }, { "docid": "ecc105b449b0ec054cfb523704978980", "text": "Modern information seekers face dynamic streams of large-scale heterogeneous data that are both intimidating and overwhelming. They need a strategy to filter this barrage of massive data sets, and to find all of the information responding to their information needs, despite the pressures imposed by schedules and budgets. In this applied research, we present an exploratory search strategy that allows professional information seekers to efficiently and effectively triage all of the data. 
We demonstrate that exploratory search is particularly useful for information filtering and large-scale information triage, regardless of the language of the data, and regardless of the particular industry, whether finance, medical, business, government, information technology, news, or legal. Our strategy reduces a dauntingly large volume of information into a manageable, high-precision data set, suitable for focused reading. This strategy is interdisciplinary, integrating concepts from information filtering, information triage, and exploratory search. Key aspects include advanced search software, interdisciplinary paired search, asynchronous collaborative search, attention to linguistic phenomena, and aggregated search results in the form of a search matrix or search grid. We present the positive results of a task-oriented evaluation in a real-world setting, discuss these results from a qualitative perspective, and share future research areas.", "title": "" }, { "docid": "ac55cf0cf677dc0a1604558ac4c27109", "text": "Blockchain technology is being considered as one of the ultimate revolutions that will be able to disrupt several pillars of our society. It is a public and distributed ledger built for security and interoperability. Blockchain provides all parties a secure and synchronized record of immutable transactions assembled together and permanently stored with a fingerprint, creating therefore an irreversible chain. In order to operate, this technology does not rely on any central authority. All transactions are sent over the network and the consensus is achieved by the mutual calculation and agreement. In this paper, we evaluate the blockchain technology and its evolution. Then, we characterize some essential features of the distributed ledger technologies (DLT) focusing on the three main blockchains actors: Bitcoin, Ethereum and Hyperledger. Besides, we present their security challenges and explore their drawbacks that can lead to use the blockchain network in order to conduct several attack scenarios. Finally, we describe their relationship with the onion router network (Tor) that beyond the malicious uses of the blockchain via Tor, these two networks share many common points.", "title": "" } ]
scidocsrr
0294c39f833f4f3c0657a8e46ada6edf
Foreign Subtitles Help but Native-Language Subtitles Harm Foreign Speech Perception
[ { "docid": "204df6c32bde81851ebdb0a0b4d18b93", "text": "Language experience systematically constrains perception of speech contrasts that deviate phonologically and/or phonetically from those of the listener’s native language. These effects are most dramatic in adults, but begin to emerge in infancy and undergo further development through at least early childhood. The central question addressed here is: How do nonnative speech perception findings bear on phonological and phonetic aspects of second language (L2) perceptual learning? A frequent assumption has been that nonnative speech perception can also account for the relative difficulties that late learners have with specific L2 segments and contrasts. However, evaluation of this assumption must take into account the fact that models of nonnative speech perception such as the Perceptual Assimilation Model (PAM) have focused primarily on naïve listeners, whereas models of L2 speech acquisition such as the Speech Learning Model (SLM) have focused on experienced listeners. This chapter probes the assumption that L2 perceptual learning is determined by nonnative speech perception principles, by considering the commonalities and complementarities between inexperienced listeners and those learning an L2, as viewed from PAM and SLM. Among the issues examined are how language learning may affect perception of phonetic vs. phonological information, how monolingual vs. multiple language experience may impact perception, and what these may imply for attunement of speech perception to changes in the listener’s language environment. Commonalities and complementarities 3", "title": "" } ]
[ { "docid": "063295bfa624d5aa09420e17f5d21c4c", "text": "In this paper, we introduce new methods and discuss results of text-based LSTM (Long Short-Term Memory) networks for automatic music composition. The proposed network is designed to learn relationships within text documents that represent chord progressions and drum tracks in two case studies. In the experiments, word-RNNs (Recurrent Neural Networks) show good results for both cases, while character-based RNNs (char-RNNs) only succeed to learn chord progressions. The proposed system can be used for fully automatic composition or as semiautomatic systems that help humans to compose music by controlling a diversity parameter of the model.", "title": "" }, { "docid": "48e48f7ba29f7749a8bf18b37651d0ca", "text": "Distributed key generation (DKG) has been studied extensively in the cryptographic literature. However, it has never been examined outside of the synchronous setting, and the known DKG protocols cannot guarantee safety or liveness over the Internet. In this work, we present the first realistic DKG protocol for use over the Internet. We propose a practical system model for the Internet and define an efficient verifiable secret sharing (VSS) scheme in it. We observe the necessity of Byzantine agreement for asynchronous DKG and analyze the difficulty of using a randomized protocol for it. Using our VSS scheme and a leader-based agreement protocol, we then design a provably secure DKG protocol. We also consider and achieve cryptographic properties such as uniform randomness of the shared secret and compare static versus adaptive adversary models. Finally, we implement our DKG protocol, and establish its efficiency and reliability by extensively testing it on the PlanetLab platform. Counter to a general non-scalability perception about asynchronous systems, our experiments demonstrate that our asynchronous DKG protocol scales well with the system size and it is suitable for realizing multiparty computation and threshold cryptography over the Internet.", "title": "" }, { "docid": "f1ab2b5768da8f2f221b59a16c565f69", "text": "Non-functional requirements (NFRs) have been the focus of research in Requirements Engineering (RE) for more than 20 years. Despite this attention, their ontological nature is still an open question, thereby hampering efforts to develop concepts, tools and techniques for eliciting, modeling, and analyzing them, in order to produce a specification for a system-to-be. In this paper, we propose to treat NFRs as qualities, based on definitions of the UFO foundational ontology. Furthermore, based on these ontological definitions, we provide guidelines for distinguishing between non-functional and functional requirements, and sketch a syntax of a specification language that can be used for capturing NFRs.", "title": "" }, { "docid": "3a3e872846f997f6a8400ae6e7612a40", "text": "In this paper, we propose an approach to understand the driver behavior using smartphone sensors. The aim for analyzing the sensory data acquired using a smartphone is to design a car-independent system which does not need vehicle mounted sensors measuring turn rates, gas consumption or tire pressure. The sensory data utilized in this paper includes the accelerometer, gyroscope and the magnetometer. Using these sensors we obtain position, speed, acceleration, deceleration and deflection angle sensory information and estimate commuting safety by statistically analyzing driver behavior. 
In contrast to state of the art, this work uses no external sensors, resulting in a cost efficient, simplistic and user-friendly system.", "title": "" }, { "docid": "8d0066400985b2577f4fbe8013d5ba1d", "text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective counter-measures have drawn significant investment from governments, companies, and empirical research. Despite a large number of emerging scientific studies to address the problem, a major limitation of existing work is the lack of comparative evaluations, which makes it difficult to assess the contribution of individual works. This paper introduces a new method based on a deep neural network combining convolutional and gated recurrent networks. We conduct an extensive evaluation of the method against several baselines and state of the art on the largest collection of publicly available Twitter datasets to date, and show that compared to previously reported results on these datasets, our proposed method is able to capture both word sequence and order information in short texts, and it sets new benchmark by outperforming on 6 out of 7 datasets by between 1 and 13 percents in F1. We also extend the existing dataset collection on this task by creating a new dataset covering different topics.", "title": "" }, { "docid": "960b2fe4d1edd7b3ec05fbde5bd5c934", "text": "The Web is the most ubiquitous computing platform. There are already billions of devices connected to the web that have access to a plethora of visual information. Understanding images is a complex and demanding task which requires sophisticated algorithms and implementations. OpenCV is the defacto library for general computer vision application development, with hundreds of algorithms and efficient implementation in C++. However, there is no comparable computer vision library for the Web offering an equal level of functionality and performance. This is in large part due to the fact that most web applications used to adopt a clientserver approach in which the computational part is handled by the server. However, with HTML5 and new client-side technologies browsers are capable of handling more complex tasks. This work brings OpenCV to the Web by making it available natively in JavaScript, taking advantage of its efficiency, completeness, API maturity, and its community’s collective knowledge. We developed an automatic approach to compile OpenCV source code into JavaScript in a way that is easier for JavaScript engines to optimize significantly and provide an API that makes it easier for users to adopt the library and develop applications. We were able to translate more than 800 OpenCV functions from different vision categories while achieving near-native performance for most of them.", "title": "" }, { "docid": "02ffa1b39ac9e76239eff040121938a3", "text": "Machine learning can be utilized in many different ways in the field of automatic manufacturing and logistics. In this thesis supervised machine learning have been utilized to train a classifiers for detection and recognition of objects in images. The techniques AdaBoost and Random forest have been examined, both are based on decision trees. The thesis has considered two applications: barcode detection and optical character recognition (OCR). Supervised machine learning methods are highly appropriate in both applications since both barcodes and printed characters generally are rather distinguishable. 
The first part of this thesis examines the use of machine learning for barcode detection in images, both traditional 1D-barcodes and the more recent Maxi-codes, which is a type of two-dimensional barcode. In this part the focus has been to train classifiers with the technique AdaBoost. The Maxi-code detection is mainly done with Local binary pattern features. For detection of 1D-codes, features are calculated from the structure tensor. The classifiers have been evaluated with around 200 real test images, containing barcodes, and shows promising results. The second part of the thesis involves optical character recognition. The focus in this part has been to train a Random forest classifier by using the technique point pair features. The performance has also been compared with the more proven and widely used Haar-features. Although, the result shows that Haar-features are superior in terms of accuracy. Nevertheless the conclusion is that point pairs can be utilized as features for Random forest in OCR.", "title": "" }, { "docid": "d456cdecdb66e62d971a069f45d9594c", "text": "In this paper, a new rectangle detection approach is proposed. It is a bottom-up approach that contains four stages: line segment extraction, corner detection, corner-relation-graph generation and rectangle detection. Graph structure is used to construct the relations between corners and simplify the problem of rectangle detection. In addition, the approach can be extended to detect any polygons. Experiments on bin detection, traffic sign detection and license plate detection prove that the approach is robust.", "title": "" }, { "docid": "99fa507d3b36e1a42f0dbda5420e329a", "text": "Reference Points and Effort Provision A key open question for theories of reference-dependent preferences is what determines the reference point. One candidate is expectations: what people expect could affect how they feel about what actually occurs. In a real-effort experiment, we manipulate the rational expectations of subjects and check whether this manipulation influences their effort provision. We find that effort provision is significantly different between treatments in the way predicted by models of expectation-based reference-dependent preferences: if expectations are high, subjects work longer and earn more money than if expectations are low. JEL Classification: C91, D01, D84, J22", "title": "" }, { "docid": "82b065557addca3f3a188b68cf788cf9", "text": "Leaders should be a key source of ethical guidance for employees. Yet, little empirical research focuses on an ethical dimension of leadership. We propose social learning theory as a theoretical basis for understanding ethical leadership and offer a constitutive definition of the ethical leadership construct. In seven interlocking studies, we investigate the viability and importance of this construct. We develop and test a new instrument to measure ethical leadership, examine the proposed connections of ethical leadership with other constructs in a nomological network, and demonstrate its predictive validity for important employee outcomes. 
Specifically, ethical leadership is related to consideration behavior, honesty, trust in the leader, interactional fairness, socialized charismatic leadership (as measured by the idealized influence dimension of transformational leadership), and abusive supervision, but is not subsumed by any of these.", "title": "" }, { "docid": "d0dd13964de87acab0f7fe76585d0bbf", "text": "The continual growth of electronic medical record (EMR) databases has paved the way for many data mining applications, including the discovery of novel disease-drug associations and the prediction of patient survival rates. However, these tasks are hindered because EMRs are usually segmented or incomplete. EMR analysis is further limited by the overabundance of medical term synonyms and morphologies, which causes existing techniques to mismatch records containing semantically similar but lexically distinct terms. Current solutions fill in missing values with techniques that tend to introduce noise rather than reduce it. In this paper, we propose to simultaneously infer missing data and solve semantic mismatching in EMRs by first integrating EMR data with molecular interaction networks and domain knowledge to build the HEMnet, a heterogeneous medical information network. We then project this network onto a low-dimensional space, and group entities in the network according to their relative distances. Lastly, we use this entity distance information to enrich the original EMRs. We evaluate the effectiveness of this method according to its ability to separate patients with dissimilar survival functions. We show that our method can obtain significant (p-value < 0.01) results for each cancer subtype in a lung cancer dataset, while the baselines cannot.", "title": "" }, { "docid": "5f6d142860a4bd9ff1fa9c4be9f17890", "text": "Local conditioning (LC) is an exact algorithm for computing probability in Bayesian networks, developed as an extension of Kim and Pearl’s algorithm for singly-connected networks. A list of variables associated to each node guarantees that only the nodes inside a loop are conditioned on the variable which breaks it. The main advantage of this algorithm is that it computes the probability directly on the original network instead of building a cluster tree, and this can save time when debugging a model and when the sparsity of evidence allows a pruning of the network. The algorithm is also advantageous when some families in the network interact through AND/OR gates. A parallel implementation of the algorithm with a processor for each node is possible even in the case of multiply-connected networks.", "title": "" }, { "docid": "8ea44a793f57f036db0142cf51b12928", "text": "This paper presents a comparative study of various classification methods in the application of automatic brain tumor segmentation. The data used in the study are 3D MRI volumes from MICCAI2016 brain tumor segmentation (BRATS) benchmark. 30 volumes are chosen randomly as a training set and 57 volumes are randomly chosen as a test set. The volumes are preprocessed and a feature vector is retrieved from each volume's four modalities (T1, T1 contrast-enhanced, T2 and Fluid-attenuated inversion recovery). The popular Dice score is used as an accuracy measure to record each classifier recognition results. 
All classifiers are implemented in the popular machine learning suit of algorithms, WEKA.", "title": "" }, { "docid": "aba7cb0f5f50a062c42b6b51457eb363", "text": "Nowadays, there is increasing interest in the development of teamwork skills in the educational context. This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim it is to cover such a gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin’s role theory to facilitate the generation of working groups in an educational context. This tool improves current state of the art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student instead of self-perception questionnaires; ii) it handles uncertainty with regard to each student’s predominant team role; iii) it is iterative since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.", "title": "" }, { "docid": "7f16ed65f6fd2bcff084d22f76740ff3", "text": "The past few years have witnessed a growth in size and computational requirements for training and inference with neural networks. Currently, a common approach to address these requirements is to use a heterogeneous distributed environment with a mixture of hardware devices such as CPUs and GPUs. Importantly, the decision of placing parts of the neural models on devices is often made by human experts based on simple heuristics and intuitions. In this paper, we propose a method which learns to optimize device placement for TensorFlow computational graphs. Key to our method is the use of a sequence-tosequence model to predict which subsets of operations in a TensorFlow graph should run on which of the available devices. The execution time of the predicted placements is then used as the reward signal to optimize the parameters of the sequence-to-sequence model. Our main result is that on Inception-V3 for ImageNet classification, and on RNN LSTM, for language modeling and neural machine translation, our model finds non-trivial device placements that outperform hand-crafted heuristics and traditional algorithmic methods.", "title": "" }, { "docid": "4b717fc5c3ef0096a3f2829dd10b3bd6", "text": "The problem of learning to distinguish good inputs from malicious has come to be known as adversarial classification emphasizing the fact that, unlike traditional classification, the adversary can manipulate input instances to avoid being so classified. We offer the first general theoretical analysis of the problem of adversarial classification, resolving several important open questions in the process. 
First, we significantly generalize previous results on adversarial classifier reverse engineering (ACRE), showing that if a classifier can be efficiently learned, it can subsequently be efficiently reverse engineered with arbitrary precision. We extend this result to randomized classification schemes, but now observe that reverse engineering is imperfect, and its efficacy depends on the defender’s randomization scheme. Armed with this insight, we proceed to characterize optimal randomization schemes in the face of adversarial reverse engineering and classifier manipulation. What we find is quite surprising: in all the model variations we consider, the defender’s optimal policy tends to be either to randomize uniformly (ignoring baseline classification accuracy), which is the case for targeted attacks, or not to randomize at all, which is typically optimal when attacks are indiscriminate.", "title": "" }, { "docid": "83530198697ed04a3870a1e9d403728b", "text": "Conventional charge pump circuits use a fixed switching frequency that leads to power efficiency degradation for loading less than the rated loading. This paper proposes a level shifter design that also functions as a frequency converter to automatically vary the switching frequency of a dual charge pump circuit according to the loading. The switching frequency is designed to be 25 kHz with 12 mA loading on both inverting and noninverting outputs. The switching frequency is automatically reduced when loading is lighter to improve the power efficiency. The frequency tuning range of this circuit is designed to be from 100 Hz to 25 kHz. A start-up circuit is included to ensure proper pumping action and avoid latch-up during power-up. A slow turn-on, fast turn-off driving scheme is used in the clock buffer to reduce power dissipation. The new dual charge pump circuit was fabricated in a 3m p-well double-poly single-metal CMOS technology with breakdown voltage of 18 V, the die size is 4.7 4.5 mm2. For comparison, a charge pump circuit with conventional level shifter and clock buffer was also fabricated. The measured results show that the new charge pump has two advantages: 1) the power dissipation of the charge pump is improved by a factor of 32 at no load and by 2% at rated loading of 500 and 2) the breakdown voltage requirement is reduced from 19.2 to 17 V.", "title": "" }, { "docid": "fde9d6a4fc1594a1767e84c62c7d3b89", "text": "This paper explores the effects of emotions embedded in a seller review on its perceived helpfulness to readers. Drawing on frameworks in literature on emotion and cognitive processing, we propose that over and above a well-known negativity bias, the impact of discrete emotions in a review will vary, and that one source of this variance is reader perceptions of reviewers’ cognitive effort. We focus on the roles of two distinct, negative emotions common to seller reviews: anxiety and anger. In the first two studies, experimental methods were utilized to identify and explain the differential impact of anxiety and anger in terms of perceived reviewer effort. In the third study, seller reviews from Yahoo! Shopping web sites were collected to examine the relationship between emotional review content and helpfulness ratings. 
Our findings demonstrate the importance of examining discrete emotions in online word-of-mouth, and they carry important practical implications for consumers and online retailers.", "title": "" }, { "docid": "9b98e43825bd36736c7c87bb2cee5a8c", "text": "Corresponding Author: Daniel Strmečki Faculty of Organization and Informatics, Pavlinska 2, 42000 Varaždin, Croatia Email: danstrmecki@gmail.com Abstract: Gamification is the usage of game mechanics, dynamics, aesthetics and game thinking in non-game systems. Its main objective is to increase user’s motivation, experience and engagement. For the same reason, it has started to penetrate in e-learning systems. However, when using gamified design elements in e-learning, we must consider various types of learners. In the phases of analysis and design of such elements, the cooperation of education, technology, pedagogy, design and finance experts is required. This paper discusses the development phases of introducing gamification into e-learning systems, various gamification design elements and their suitability for usage in e-learning systems. Several gamified design elements are found suited for e-learning (including points, badges, trophies, customization, leader boards, levels, progress tracking, challenges, feedback, social engagement loops and the freedom to fail). Advices for the usage of each of those elements in e-learning systems are also provided in this study. Based on those advises and the identified phases of introducing gamification info e-learning systems, we conducted an experimental study to investigate the effectiveness of gamification of an informatics online course. Results showed that students enrolled in the gamified version of the online module achieved greater learning success. Positive results encourage us to investigate the gamification of online learning content for other topics and courses. We also encourage more research on the influence of specific gamified design elements on learner’s motivation and engagement.", "title": "" }, { "docid": "6cac6ab24b5e833e73c98db476e1437d", "text": "The observation that a particular drug state may acquire the properties of a discriminative stimulus is explicable on the basis of drug-induced interoceptive cues. The present investigation sought to determine (a) whether the hallucinogens mescaline and LSD could serve as discriminative stimuli when either drug is paired with saline and (b) whether discriminative responding would occur when the paired stimuli are produced by equivalent doses of LSD and mescaline. In a standard two-lever operant test chamber, rats received a reinforcer (sweetened milk) for correct responses according to a variable interval schedule. All sessions were preceded by one of two treatments; following treatment A, only responses on lever A were reinforced and, in a similar fashion, lever B was correct following treatment B. No responses were reinforced during the first five minutes of a daily thirty-minute session. It was found that mescaline and LSD can serve as discriminative stimuli when either drug is paired with saline and that the degree of discrimination varies with drug dose. When equivalent doses of the two drugs were given to the same animal, no discriminated responding was observed. The latter finding suggests that mescaline and LSD produce qualitatively similar interoceptive cues in the rat.", "title": "" } ]
scidocsrr
b8ce75fbec20d0fbbfc6790076760b24
Real-time System = Discrete System + Clock Variables
[ { "docid": "c09f3698f350ef749d3ef3e626c86788", "text": "The te rm \"reactive system\" was introduced by David Harel and Amir Pnueli [HP85], and is now commonly accepted to designate permanent ly operating systems, and to distinguish them from \"trans]ormational systems\" i.e, usual programs whose role is to terminate with a result, computed from an initial da ta (e.g., a compiler). In synchronous programming, we understand it in a more restrictive way, distinguishing between \"interactive\" and \"reactive\" systems: Interactive systems permanent ly communicate with their environment, but at their own speed. They are able to synchronize with their environment, i.e., making it wait. Concurrent processes considered in operat ing systems or in data-base management , are generally interactive. Reactive systems, in our meaning, have to react to an environment which cannot wait. Typical examples appear when the environment is a physical process. The specific features of reactive systems have been pointed out many times [Ha193,BCG88,Ber89]:", "title": "" } ]
[ { "docid": "b1394b4534d1a2d62767f885c180903b", "text": "OBJECTIVE\nTo determine the value of measuring fetal femur and humerus length at 11-14 weeks of gestation in screening for chromosomal defects.\n\n\nMETHODS\nFemur and humerus lengths were measured using transabdominal ultrasound in 1018 fetuses immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In the group of chromosomally normal fetuses, regression analysis was used to determine the association between long bone length and crown-rump length (CRL). Femur and humerus lengths in fetuses with trisomy 21 were compared with those of normal fetuses.\n\n\nRESULTS\nThe median gestation was 12 (range, 11-14) weeks. The karyotype was normal in 920 fetuses and abnormal in 98, including 65 cases of trisomy 21. In the chromosomally normal group the fetal femur and humerus lengths increased significantly with CRL (femur length = - 6.330 + 0.215 x CRL in mm, r = 0.874, P < 0.0001; humerus length = - 6.240 + 0.220 x CRL in mm, r = 0.871, P < 0.0001). In the Bland-Altman plot the mean difference between paired measurements of femur length was 0.21 mm (95% limits of agreement - 0.52 to 0.48 mm) and of humerus length was 0.23 mm (95% limits of agreement - 0.57 to 0.55 mm). In the trisomy 21 fetuses the median femur and humerus lengths were significantly below the appropriate normal mean for CRL by 0.4 and 0.3 mm, respectively (P = 0.002), but they were below the respective 5th centile of the normal range in only six (9.2%) and three (4.6%) of the cases, respectively.\n\n\nCONCLUSION\nAt 11-14 weeks of gestation the femur and humerus lengths in trisomy 21 fetuses are significantly reduced but the degree of deviation from normal is too small for these measurements to be useful in screening for trisomy 21.", "title": "" }, { "docid": "5c89616107c278013aeed114897c6477", "text": "—This paper presents a new method of detection and identification, called PYTHON programming environment, which can realize the gesture track recognition based on the depth image information get by the Kinect sensor. First, Kinect sensor is used to obtain depth image information. Then it extracts splith and with the official Microsoft SDK. Finally, this paper presents how to calculate the palm center's coordinates based on the moment of hand contour feature. Experiments show that the advantages of using the hand split and gesture recognition of the Kinect's depth image can be very effective to achieve interactive features.", "title": "" }, { "docid": "98cfa94144ddcc5caf2a06dab8872de4", "text": "Protocols this text provides a very helpful especially for students teachers. I was like new provides, academic researchers shows. All topics related to be comfortable with excellent comprehensive reference section is on communications issues. Provides academic researchers he has, published numerous papers and applications free. This book of wireless sensor networks there is on ad hoc networks. Shows which circumstances they generally walk through of references.", "title": "" }, { "docid": "c11f1b087955db1cac8c6350ad8a256e", "text": "Cloud computing enables users to consume various IT resources in an on-demand manner, and with low management overhead. However, customers can face new security risks when they use cloud computing platforms. In this paper, we focus on one such threat—the co-resident attack, where malicious users build side channels and extract private information from virtual machines co-located on the same server. 
Previous works mainly attempt to address the problem by eliminating side channels. However, most of these methods are not suitable for immediate deployment due to the required modifications to current cloud platforms. We choose to solve the problem from a different perspective, by studying how to improve the virtual machine allocation policy, so that it is difficult for attackers to co-locate with their targets. Specifically, we (1) define security metrics for assessing the attack; (2) model these metrics, and compare the difficulty of achieving co-residence under three commonly used policies; (3) design a new policy that not only mitigates the threat of attack, but also satisfies the requirements for workload balance and low power consumption; and (4) implement, test, and prove the effectiveness of the policy on the popular open-source platform OpenStack.", "title": "" }, { "docid": "061c8e8e9d6a360c36158193afee5276", "text": "Distribution transformers are one of the most important equipment in power network. Because of, the large number of transformers distributed over a wide area in power electric systems, the data acquisition and condition monitoring is a important issue. This paper presents design and implementation of a mobile embedded system and a novel software to monitor and diagnose condition of transformers, by record key operation indictors of a distribution transformer like load currents, transformer oil, ambient temperatures and voltage of three phases. The proposed on-line monitoring system integrates a Global Service Mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. Data of operation condition of transformer receives in form of SMS (Short Message Service) and will be save in computer server. Using the suggested online monitoring system will help utility operators to keep transformers in service for longer of time.", "title": "" }, { "docid": "ce43ddd46037dbc2caf935f8eeb7de13", "text": "As business conditions change rapidly, the need for integrating business and technical systems calls for novel ICT frameworks and solutions to remain concurrent in highly competitive markets. A number of problems and issues arise in this regard. In this paper, four big challenges of enterprise information systems (EIS) are defined and discussed: (1) data value chain management; (2) context awareness; (3) usability, interaction and visualization; and (4) human learning and continuous education. Major contributions and research orientations of ICT technologies are elaborated based on selected key issues and lessons learned. First, the semantic mediator is proposed as a key enabler for dealing with semantic interoperability. Second, the context-aware infrastructures are proposed as a main solution for making efficient use of EIS to offer a high level of customization of delivered services and data. Third, the product avatar is proposed as a contribution to an evolutionary social, collaborative and product-centric and interaction metaphor with EIS. Fourth, human learning solutions are considered to develop individual competences in order to cope with new technological advances. The paper ends with a discussion on the impact of the proposed solutions on the economic and social landscape and proposes a set of recommendations as a perspective towards next generation of information systems. ã 2015 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "d7ec8f90efe6e85dc05a6da2be732f9f", "text": "Oral hairy leukoplakia (OHL) is a lesion frequently, although not exclusively, observed in patients infected by human immunodeficiency viruses (HIV). OHL is clinically characterized by bilateral, often elevated, white patches of the lateral borders and dorsum of the tongue. Histologically, there is profound acanthosis, sometimes with koilocytic changes, and a lack of a notable inflammatory infiltrate. The koilocytic changes are due to intense replication of Epstein-Barr virus (EBV), while epithelial hyperplasia and acanthosis are likely to result from the combined action of the EBV-encoded proteins, latent membrane protein-1, and antiapoptotic BHRF1. How OHL is initiated and whether it develops after EBV reactivation from latency or superinfection remain unresolved; nevertheless, definitive diagnosis requires the demonstration of EBV replicating vegetatively in histological or cytological specimens. In patients with HIV infection, the development of OHL may herald severe HIV disease and the rapid onset of AIDS, but despite its title, OHL is not regarded as premalignant and is unlikely to give rise to oral squamous cell carcinoma.", "title": "" }, { "docid": "cf0a4f12c23b42c08b6404fe897ed646", "text": "By performing computation at the location of data, non-Von Neumann (VN) computing should provide power and speed benefits over conventional (e.g., VN-based) approaches to data-centric workloads such as deep learning. For the on-chip training of largescale deep neural networks using nonvolatile memory (NVM) based synapses, success will require performance levels (e.g., deep neural network classification accuracies) that are competitive with conventional approaches despite the inherent imperfections of such NVM devices, and will also require massively parallel yet low-power read and write access. In this paper, we focus on the latter requirement, and outline the engineering tradeoffs in performing parallel reads and writes to large arrays of NVM devices to implement this acceleration through what is, at least locally, analog computing. We address how the circuit requirements for this new neuromorphic computing approach are somewhat reminiscent of, yet significantly different from, the well-known requirements found in conventional memory applications. We discuss tradeoffs that can influence both the effective acceleration factor (“speed”) and power requirements of such on-chip learning accelerators. P. Narayanan A. Fumarola L. L. Sanches K. Hosokawa S. C. Lewis R. M. Shelby G. W. Burr", "title": "" }, { "docid": "94cf1976c10d632cfce12ce3f32be4cc", "text": "In today’s economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, investigative support, provider lock-in and disaster recovery. We focus on risk and control analysis in relation to a sample of Swiss companies with regard to their prospective adoption of public cloud services. We observe a sufficient degree of risk awareness with a focus on those risks that are relevant to the IT function to be migrated to the cloud. 
Moreover, the recommendations as to the adoption of cloud services depend on the company’s size with larger and more technologically advanced companies being better prepared for the cloud. As an exploratory first step, the results of this study would allow us to design and implement broader research into cloud computing risk management in Switzerland.", "title": "" }, { "docid": "180e407f4e658ef24c4d1f7fb1a28dcb", "text": "The intention of this project is to explore the correlation between dietary habits and reports of overall well-being. Specifically, this study will consider the impact of meat-eating versus non meat eating (vegetarian/vegan) diets. Dietary choices are also considered in comparison to general lifestyle choices. Questionnaires were distributed to students on the Boca Raton campus of Palm Beach State College. The results of this survey indicated that vegetarians believe that dietary choices have a greater impact on well-being than they actually do. In addition, the subjective well-being of vegetarians compared to that of meat eaters showed inconsistent results. This may be attributable to the fact that some vegetarians choose this lifestyle for ethical reasons such as guilt over the slaughter of animals, leading to an increased feeling of well-being. On the other hand, a higher percentage of vegetarians report regular marijuana use, which could lead to depression caused by a chemical imbalance in the brain. However, because most participants in the study were meat eaters, fewer vegetarians were included in the sample. Further exploration with a larger sample base is needed to explain the inconsistent results. Introduction “Food consumption is an everyday activity, one that is crucial for survival and sense of well-being. Many of our social engagements revolve around rituals associated with eating” (Marcus, 2008). What we consciously and unconsciously consume has a profound impact on our body chemistry and affects how we function in the world. The purpose of this project is to increase understanding about the impact of plant-based and meat-based diets on overall wellbeing. In addition, this report considers the role of secondary factors related to diet and their impact on overall well-being. The survey conducted as part of this project was designed to determine whether or not vegetarians have a greater perceived sense of well-being than people who regularly eat meat. Several types of vegetarian diets exist, including vegan (no red meat, fish, poultry, dairy, and eggs), octo-lovo (consume milk, eggs, or both but no red meat, fish, or poultry), pescatarian (consume fish, milk, and eggs but no red meat and poultry), semi-vegetarian (eat fish, poultry and other meats less than once a week) (Fraser, 2009), fruitarian (raw vegan diets based on fruits) and raw-foodist (plant-based diet characterized by a high consumption of uncooked and unprocessed foods, i.e. fruits, vegetables, nuts and seeds) (Craig & Mangels, 2009). Even within these dietary patterns, considerable variations may exist in the extent to which animal products are excluded. While some researchers suggest that a vegetarian diet can lower the risk for many diseases (Fraser, 2009), others warn of “nutrient deficiencies common amongst vegetarians and particularly vegans” (Sabaté, 2003). Vegetarian diets have been described as being deficient in several nutrients, including protein, iron, zinc, calcium, vitamin B12 and A, n-3 fatty acids, and iodine. 
Numerous studies have demonstrated that the observed deficiencies are usually due to poor meal planning (Leitzmann, 2005). However, according to the American Dietetic Association (2009), a well-balanced vegetarian diet is suitable for all stages of life, from childhood to the elderly, as well as pregnant women and athletes. A vegetarian diet that includes regular consumption of fruits and vegetables is associated with reducing the risk of many diseases, including cardiovascular disease, hypertension, type-2 diabetes, cancer, osteoporosis, renal disease, dementia, diverticular disease, gallstones, rheumatoid arthritis, stroke, cataracts, Alzheimer disease, as well as a general decline in functions associated with aging (Liu, 2003; Leitzmann, 2005). What this research demonstrates is that there are numerous factors to consider when examining the risk for disease or deficiencies amongst vegetarians, such as how meals are planned and whether there is an adequate intake of fruits and vegetables. At the same time, research on meat-based diets demonstrates that a meat-based diet can also be deficient in certain nutrients, but such diets are more commonly identified as a risk factor for disease, which can result in having a negative effect on one’s well-being (Cousens, 2010). A meat-based diet is one-dimensional, meaning it provides exclusively one type of protein. “As it is used in standard nutritional and agricultural writings, the term meat is actually a misnomer. Meat’s correct definition is muscles of animals, and is nothing but wet protein tissues” (Smil, 2002). Looking at meat in this manner, and excluding fish (also a source of protein but providing monounsaturated fatty acids which confer health benefits) from the definition of meat, leads to the conclusion that all meat protein is basically the same. This is an idea that some people debate. However, assuming that all meat proteins are the same, one can conclude that consuming a primarily meat-based diet, which is high in saturated fats, can lead to an array of health issues such as cardiovascular disease, diabetes mellitus, and some cancers (Walker, Rhubart, Pamela, Shawn, Kelling & Lawrence, 2005). These issues are particularly prevalent in the US, where people typically consume diets that are high in meat proteins and saturated fat yet low in fruits, vegetables and whole grains (Walker et al., 2005), a pattern of eating that increases the risk of the aforementioned diseases. However, the impact of meat proteins is different in impoverished countries. For example, in many African countries where nutrient deficiencies are common, an increase in meat and dairy is likely to improve people’s nutritional outcomes and overall health (Walker et al., 2005). Well-being does not rely exclusively on diet but ultimately “what is good for a person” (Crisp, 2008). In general, well-being incorporates a holistic approach, focusing on multiple dimensions that affect quality of life, subsequently leading to a more balanced, healthier, and happier person. Dimensions of well-being are often presented graphically in the form of “wellbeing wheels” which are used to demonstrate the relationships between each dimension, with the premise being that for an individual to be considered “well,” he or she must actively strive to improve in each dimension (Washington State University, 2011). 
These dimensions include emotional, environmental, financial, intellectual, occupational, physical, social and spiritual aspects, all combining to create general health and wellness (Washington State University, 2011). These dimensions also play a role in the etiology of positive and negative emotional states. There is mounting evidence that positive emotions co-occur with negative emotions, especially during intensely stressful periods of life (Sprangers et al., 2010). Creating a balance between these dimensions of well-being may direct a person to make choices that affect his or her diet either positively or negatively. A study done in Ireland indicates that there is broad-scale support for the impact of diet and lifestyle on mental health. At the same time, the researchers found that people had a poor understanding of food labeling and nutritional claims. The study showed that residents of Northern Ireland, where there is a high rate of reported vegetarians, are much more likely to report positive mental outlooks than those in the Republic of Ireland, where there appear to be fewer vegetarians (National Food Survey of Ireland, 2005). Examining the correlation between diet and well-being further, it has been shown that foods high in fat have the power to modify motivation and reward systems in the brain. It has been found that certain neuropeptides are activated during activities involving reward and pleasure. Similarly, use of cocaine and nicotine also activate these same reward centers, even with only the expectation of consumption of fatty foods (Choi, Davis, Fitzgerald, & Benoit, 2009). It has also been found that binge eating and overconsumption of fat and sugar lead to an increased number of opioid receptors in the part of the brain that modulates food intake. In other words, eating fatty and sugary foods trigger the same reward mechanisms in the brain as cocaine and nicotine (Bello et al., 2009). As a result, a person may tend to over-eat fatty and sugary foods, which could lead to a variety of health issues. What we consume can have a significant effect on our mood, which is another dimension of well-being. “Your brain is a biochemical thinking machine, and all of the biochemical building blocks of your brain eventually are affected by what you eat. Even the genes you inherited from your parents are influenced by what you put in your mouth” (Challam, 2007). It has been found that loneliness can have a powerful effect on mood, shyness, anxiety, and self-esteem. Moreover, popular concepts such as committing acts of kindness, expressing gratitude or forgiveness, and thoughtful self-reflection can produce an increase in levels of happiness (Sprangers et al., 2010). The food we choose to consume often paves the way for our mood and behavior (Challam, 2007). It has long been known that food alters our mood and that too much meat can lead to health problems. “It takes only 3 ounces of meat a day to maximize all of its nutritional benefits. Consumption of any more and the increased intake of saturated fat, protein, and cholesterol will compromise your health and increase your risk of developing degenerative diseases” (Somer, 1995). By comparison, a “vegetarian diet is not likely associated with poor mood states or depression” (Beezhold, Daigle, & Johnston, 2010). It has been shown that a vegetarian diet can prevent many health problems, which in turn can impact our mood. 
To illustrate, persons diagnosed with heart disease who implement a vegetarian diet into their lifestyle can reap the positive bene", "title": "" }, { "docid": "57d3505a655e9c0efdc32101fd09b192", "text": "POX is a Python based open source OpenFlow/Software Defined Networking (SDN) Controller. POX is used for faster development and prototyping of new network applications. POX controller comes pre installed with the mininet virtual machine. Using POX controller you can turn dumb openflow devices into hub, switch, load balancer, firewall devices. The POX controller allows easy way to run OpenFlow/SDN experiments. POX can be passed different parameters according to real or experimental topologies, thus allowing you to run experiments on real hardware, testbeds or in mininet emulator. In this paper, first section will contain introduction about POX, OpenFlow and SDN, then discussion about relationship between POX and Mininet. Final Sections will be regarding creating and verifying behavior of network applications in POX.", "title": "" }, { "docid": "e16b4b93913db0f37032224e07a0c057", "text": "Large number of weights in deep neural networks makes the models difficult to be deployed in low memory environments such as, mobile phones, IOT edge devices as well as “inferencing as a service” environments on cloud. Prior work has considered reduction in the size of the models, through compression techniques like pruning, quantization, Huffman encoding etc. However, efficient inferencing using the compressed models has received little attention, specially with the Huffman encoding in place. In this paper, we propose efficient parallel algorithms for inferencing of single image and batches, under various memory constraints. Our experimental results show that our approach of using variable batch size for inferencing achieves 15-25% performance improvement in the inference throughput for AlexNet, while maintaining memory and latency constraints.", "title": "" }, { "docid": "5ae07e0d3157b62f6d5e0e67d2b7f2ea", "text": "G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al. 's data can be explained by feed-forward models. The authors show that G. Francis and F. Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al. 's results.", "title": "" }, { "docid": "dd0d89e7f223023bd1624e6e46017cb1", "text": "We present an attention-based model that reasons on human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model towards viewpoint, appearance, and volumetric changes. 
Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention.", "title": "" }, { "docid": "6bca70ccf17fd4380502b7b4e2e7e550", "text": "A consistent UI leaves an overall impression on user’s psychology, aesthetics and taste. Human–computer interaction (HCI) is the study of how humans interact with computer systems. Many disciplines contribute to HCI, including computer science, psychology, ergonomics, engineering, and graphic design. HCI is a broad term that covers all aspects of the way in which people interact with computers. In their daily lives, people are coming into contact with an increasing number of computer-based technologies. Some of these computer systems, such as personal computers, we use directly. We come into contact with other systems less directly — for example, we have all seen cashiers use laser scanners and digital cash registers when we shop. We have taken the same but in extensible line and made more solid justified by linking with other scientific pillars and concluded some of the best holistic base work for future innovations. It is done by inspecting various theories of Colour, Shape, Wave, Fonts, Design language and other miscellaneous theories in detail. Keywords— Karamvir Singh Rajpal, Mandeep Singh Rajpal, User Interface, User Experience, Design, Frontend, Neonex Technology,", "title": "" }, { "docid": "a27a05cb00d350f9021b5c4f609d772c", "text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.", "title": "" }, { "docid": "e9fa76fba0256cb99abf7992323a674b", "text": "Identity formation in adolescence is closely linked to searching for and acquiring meaning in one's life. To date little is known about the manner in which these 2 constructs may be related in this developmental stage. In order to shed more light on their longitudinal links, we conducted a 3-wave longitudinal study, investigating how identity processes and meaning in life dimensions are interconnected across time, testing the moderating effects of gender and age. Participants were 1,062 adolescents (59.4% female), who filled in measures of identity and meaning in life at 3 measurement waves during 1 school year. Cross-lagged models highlighted positive reciprocal associations between (a) commitment processes and presence of meaning and (b) exploration processes and search for meaning. These results were not moderated by adolescents' gender or age. Strong identification with present commitments and reduced ruminative exploration helped adolescents in having a clear sense of meaning in their lives. 
We also highlighted the dual nature of search for meaning. This dimension was sustained by exploration in breadth and ruminative exploration, and it positively predicted all exploration processes. We clarified the potential for a strong sense of meaning to support identity commitments and that the process of seeking life meaning sustains identity exploration across time. (PsycINFO Database Record", "title": "" }, { "docid": "5b17c5637af104b1f20ff1ca9ce9c700", "text": "According to the traditional understanding of cerebrospinal fluid (CSF) physiology, the majority of CSF is produced by the choroid plexus, circulates through the ventricles, the cisterns, and the subarachnoid space to be absorbed into the blood by the arachnoid villi. This review surveys key developments leading to the traditional concept. Challenging this concept are novel insights utilizing molecular and cellular biology as well as neuroimaging, which indicate that CSF physiology may be much more complex than previously believed. The CSF circulation comprises not only a directed flow of CSF, but in addition a pulsatile to and fro movement throughout the entire brain with local fluid exchange between blood, interstitial fluid, and CSF. Astrocytes, aquaporins, and other membrane transporters are key elements in brain water and CSF homeostasis. A continuous bidirectional fluid exchange at the blood brain barrier produces flow rates, which exceed the choroidal CSF production rate by far. The CSF circulation around blood vessels penetrating from the subarachnoid space into the Virchow Robin spaces provides both a drainage pathway for the clearance of waste molecules from the brain and a site for the interaction of the systemic immune system with that of the brain. Important physiological functions, for example the regeneration of the brain during sleep, may depend on CSF circulation.", "title": "" }, { "docid": "456fd41267a82663fee901b111ff7d47", "text": "The tagging of Named Entities, the names of particular things or classes, is regarded as an important component technology for many NLP applications. The first Named Entity set had 7 types, organization, location, person, date, time, money and percent expressions. Later, in the IREX project artifact was added and ACE added two, GPE and facility, to pursue the generalization of the technology. However, 7 or 8 kinds of NE are not broad enough to cover general applications. We proposed about 150 categories of NE (Sekine et al. 2002) and now we have extended it again to 200 categories. Also we have developed dictionaries and an automatic tagger for NEs in Japanese.", "title": "" }, { "docid": "f6c8e3afce6f47dd80ed4fadc68dc1f0", "text": "PURPOSE\nThe CD20 B-lymphocyte surface antigen expressed by B-cell lymphomas is an attractive target for radioimmunotherapy, treatment using radiolabeled antibodies. We conducted a phase I dose-escalation trial to assess the toxicity, tumor targeting, and efficacy of nonmyeloablative doses of an anti-CD20 monoclonal antibody (anti-B1) labeled with iodine-131 (131I) in 34 patients with B-cell lymphoma who had failed chemotherapy.\n\n\nPATIENTS AND METHODS\nPatients were first given tracelabeled doses of 131I-labeled anti-B1 (15 to 20 mg, 5 mCi) to assess radiolabeled antibody biodistribution, and then a radioimmunotherapeutic dose (15 to 20 mg) labeled with a quantity of 131I that would deliver a specified centigray dose of whole-body radiation predicted by the tracer dose. 
Whole-body radiation doses were escalated from 25 to 85 cGy in sequential groups of patients in 10-cGy increments. To evaluate if radiolabeled antibody biodistribution could be optimized, initial patients were given one or two additional tracer doses on successive weeks, each dose preceded by an infusion of 135 mg of unlabeled anti-B1 one week and 685 mg the next. The unlabeled antibody dose resulting in the most optimal tracer biodistribution was also given before the radioimmunotherapeutic dose. Later patients were given a single tracer dose and radioimmunotherapeutic dose preceded by infusion of 685 mg of unlabeled anti-B1.\n\n\nRESULTS\nTreatment was well tolerated. Hematologic toxicity was dose-limiting, and 75 cGy was established as the maximally tolerated whole-body radiation dose. Twenty-eight patients received radioimmunotherapeutic doses of 34 to 161 mCi, resulting in complete remission in 14 patients and a partial response in eight. All 13 patients with low-grade lymphoma responded, and 10 achieved a complete remission. Six of eight patients with transformed lymphoma responded. Thirteen of 19 patients whose disease was resistant to their last course of chemotherapy and all patients with chemotherapy-sensitive disease responded. The median duration of complete remission exceeds 16.5 months. Six patients remain in complete remission 16 to 31 months after treatment.\n\n\nCONCLUSION\nNonmyeloablative radioimmunotherapy with 131I-anti-B1 is associated with a high rate of durable remissions in patients with B-cell lymphoma refractory to chemotherapy.", "title": "" } ]
scidocsrr
b7d7c3f6d34428eedd2e0f447d9527fe
Exploring the taxonomic and associative link between emotion and function for robot sound design
[ { "docid": "09ac80ede8822e3e71642b8bd57ff262", "text": "Auditory displays are described for several application domains: transportation, industrial processes, health care, operation theaters, and service sectors. Several types of auditory displays are compared, such as warning, state, and intent displays. Also, the importance for blind people in a visualized world is considered with suitable approaches. The service robot domain has been chosen as an example for the future use of auditory displays within multimedia process supervision and control applications in industrial, transportation, and medical systems. The design of directional sounds and of additional sounds for robot states, as well as the design of more complicated robot sound tracks, are explained. Basic musical elements and robot movement sounds have been combined. Two exploratory experimental studies, one on the understandability of the directional sounds and the robot state sounds as well as another on the auditory perception of intended robot trajectories in a simulated supermarket scenario, are described. Subjective evaluations of sound characteristics such as urgency, expressiveness, and annoyance have been carried out by nonmusicians and musicians. These experimental results are briefly compared with time-frequency analyses.", "title": "" }, { "docid": "ef4e7445ec9bbbfc8d25d92a16042f88", "text": "CONCRETE", "title": "" } ]
[ { "docid": "bcc16ced6e108660b76413bfbaca8c70", "text": "Emotion cause extraction is one of the promising research topics in sentiment analysis, but has not been well-investigated so far. This task enables us to obtain useful information for sentiment classification and possibly to gain further insights about human emotion as well. This paper proposes a bootstrapping technique to automatically acquire conjunctive phrases as textual cue patterns for emotion cause extraction. The proposed method first gathers emotion causes via manually given cue phrases. It then acquires new conjunctive phrases from emotion phrases that contain similar emotion causes to previously gathered ones. In existing studies, the cost for creating comprehensive cue phrase rules for building emotion cause corpora was high because of their dependencies both on languages and on textual natures. The contribution of our method is its ability to automatically create the corpora from just a few cue phrases as seeds. Our method can expand cue phrases at low cost and acquire a large number of emotion causes of the promising quality compared to human annotations.", "title": "" }, { "docid": "3925371ff139ca9cd23222db78f8694a", "text": "In this paper, we investigate how the Gauss–Newton Hessian matrix affects the basin of convergence in Newton-type methods. Although the Newton algorithm is theoretically superior to the Gauss–Newton algorithm and the Levenberg–Marquardt (LM) method as far as their asymptotic convergence rate is concerned, the LM method is often preferred in nonlinear least squares problems in practice. This paper presents a theoretical analysis of the advantage of the Gauss–Newton Hessian matrix. It is proved that the Gauss–Newton approximation function is the only nonnegative convex quadratic approximation that retains a critical property of the original objective function: taking the minimal value of zero on an (n − 1)-dimensional manifold (or affine subspace). Due to this property, the Gauss–Newton approximation does not change the zero-on-(n − 1)-D “structure” of the original problem, explaining the reason why the Gauss–Newton Hessian matrix is preferred for nonlinear least squares problems, especially when the initial point is far from the solution.", "title": "" }, { "docid": "5cd3809ab7ed083de14bb622f12373fe", "text": "The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. 
We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.", "title": "" }, { "docid": "ed75192dcb1356820fdb6411593dd233", "text": "We introduce QVEC-CCA—an intrinsic evaluation metric for word vector representations based on correlations of learned vectors with features extracted from linguistic resources. We show that QVECCCA scores are an effective proxy for a range of extrinsic semantic and syntactic tasks. We also show that the proposed evaluation obtains higher and more consistent correlations with downstream tasks, compared to existing approaches to intrinsic evaluation of word vectors that are based on word similarity.", "title": "" }, { "docid": "44a84af55421c88347034d6dc14e4e30", "text": "Anomaly detection plays an important role in protecting computer systems from unforeseen attack by automatically recognizing and filter atypical inputs. However, it can be difficult to balance the sensitivity of a detector – an aggressive system can filter too many benign inputs while a conservative system can fail to catch anomalies. Accordingly, it is important to rigorously test anomaly detectors to evaluate potential error rates before deployment. However, principled systems for doing so have not been studied – testing is typically ad hoc, making it difficult to reproduce results or formally compare detectors. To address this issue we present a technique and implemented system, Fortuna, for obtaining probabilistic bounds on false positive rates for anomaly detectors that process Internet data. Using a probability distribution based on PageRank and an efficient algorithm to draw samples from the distribution, Fortuna computes an estimated false positive rate and a probabilistic bound on the estimate’s accuracy. By drawing test samples from a well defined distribution that correlates well with data seen in practice, Fortuna improves on ad hoc methods for estimating false positive rate, giving bounds that are reproducible, comparable across different anomaly detectors, and theoretically sound. Experimental evaluations of three anomaly detectors (SIFT, SOAP, and JSAND) show that Fortuna is efficient enough to use in practice — it can sample enough inputs to obtain tight false positive rate bounds in less than 10 hours for all three detectors. These results indicate that Fortuna can, in practice, help place anomaly detection on a stronger theoretical foundation and help practitioners better understand the behavior and consequences of the anomaly detectors that they deploy. As part of our work, we obtain a theoretical result that may be of independent interest: We give a simple analysis of the convergence rate of the random surfer process defining PageRank that guarantees the same rate as the standard, second-eigenvalue analysis, but does not rely on any assumptions about the link structure of the web.", "title": "" }, { "docid": "3b2376110b0e6949379697b7ba6730b5", "text": "............................................................................................................................... i Acknowledgments............................................................................................................... ii Table of", "title": "" }, { "docid": "3d3c60b2491f9e720171f55e8ecb0a5c", "text": "There is an increasing need for fault tolerance capabilities in logic devices brought about by the scaling of transistors to ever smaller geometries. 
This paper presents a hypervisor-based replication approach that can be applied to commodity hardware to allow for virtually lockstepped execution. It offers many of the benefits of hardware-based lockstep while being cheaper and easier to implement and more flexible in the configurations supported. A novel form of processor state fingerprinting is also presented, which can significantly reduce the fault detection latency. This further improves reliability by triggering rollback recovery before errors are recorded to a checkpoint. The mechanisms are validated using a full prototype and the benchmarks considered indicate an average performance overhead of approximately 14 percent with the possibility for significant optimization. Finally, a unique method of using virtual lockstep for fault injection testing is presented and used to show that significant detection latency reduction is achievable by comparing only a small amount of data across replicas.", "title": "" }, { "docid": "96f9aa02b797faa479821db8eb4b2b4e", "text": "Building on work of Deutsch and Jozsa, we construct oracles relative to which (1) there is a decision problem that can be solved with certainty in worst-case polynomial time on the quantum computer, yet it cannot be solved classically in probabilis-tic expected polynomial time if errors are not tolerated nor even in nondeterministic polynomial time, and (2) there is a decision problem that can be solved in exponential time on the quantum computer, which requires double exponential time on all but nitely many instances on any classical deterministic computer.", "title": "" }, { "docid": "71b9722200c92901d8ec3c7e6195c931", "text": "Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background \"noise.\" Thus, enterprises are seeking solutions to \"connect the suspicious dots\" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn causes overwhelmingly large amount of system audit events. Given a limited system budget, how to efficiently handle ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. Then we propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces of more than one month. Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.", "title": "" }, { "docid": "3f53b5e2143364506c4f2de4c8d98979", "text": "In this paper, a different method for designing an ultra-wideband (UWB) microstrip monopole antenna with dual band-notched characteristic has been presented. The main novelty of the proposed structure is the using of protruded strips as resonators to design an UWB antenna with dual band-stop property. 
In the proposed design, by cutting the rectangular slot with a pair of protruded T-shaped strips in the ground plane, additional resonance is excited and much wider impedance bandwidth can be produced. To generate a single band-notched function, we convert the square radiating patch to the square-ring structure with a pair of protruded step-shaped strips. By cutting a rectangular slot with the protruded Γ-shaped strip at the feed line, a dual band-notched function is achieved. The measured results reveal that the presented dual band-notched antenna offers a very wide bandwidth from 2.8 to 11.6 GHz, with two notched bands, around of 3.3-3.7 GHz and 5-6 GHz covering all WiMAX and WLAN bands.", "title": "" }, { "docid": "fd63f9b9454358810a68fc003452509b", "text": "The years that students spend in college are perhaps the most influential years on the rest of their lives. College students face many different decisions day in and day out that may determine how successful they will be in the future. They will choose majors, whether or not to play a sport, which clubs to join, whether they should join a fraternity or sorority, which classes to take, and how much time to spend studying. It is unclear what aspects of college will benefit a person the most down the road. Are some majors better than others? Is earning a high GPA important? Or will simply getting a degree be enough to make a good living? These are a few of the many questions that college students have.", "title": "" }, { "docid": "4b4a3eb0e24f48bab61d348f61b31f32", "text": "In recent years, gesture recognition has received much attention from research communities. Computer vision-based gesture recognition has many potential applications in the area of human-computer interaction as well as sign language recognition. Sign languages use a combination of hand shapes, motion and locations as well as facial expressions. Finger-spelling is a manual representation of alphabet letters, which is often used where there is no sign word to correspond to a spoken word. In Australia, a sign language called Auslan is used by the deaf community and and the finger-spelling letters use two handed motion, unlike the well known finger-spelling of American Sign Language (ASL) that uses static shapes. This thesis presents the Auslan Finger-spelling Recognizer (AFR) that is a real-time system capable of recognizing signs that consists of Auslan manual alphabet letters from video sequences. The AFR system has two components: the first is the feature extraction process that extracts a combination of spatial and motion features from the images. Which classifies a sequence of features using Hidden Markov Models (HMMs). Tests using a vocabulary of twenty signed words showed the system could achieve 97% accuracy at the letter level and 88% at the word level using a finite state grammar network and embedded training.", "title": "" }, { "docid": "73efa57fe1d799a1c174d5ede1bcfe8a", "text": "A growing number of online services, such as Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups. Today, a customer must entirely trust such external services to maintain the integrity of hosted data and return it intact. Unfortunately, no service is infallible. 
To make storage services accountable for data loss, we present protocols that allow a thirdparty auditor to periodically verify the data stored by a service and assist in returning the data intact to the customer. Most importantly, our protocols are privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the burden of verification from the customer, alleviates both the customer’s and storage service’s fear of data leakage, and provides a method for independent arbitration of data retention contracts.", "title": "" }, { "docid": "984dba43888e7a3572d16760eba6e9a5", "text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.", "title": "" }, { "docid": "30bfae4f531ff6875674bf960218b187", "text": "Over the past few years, Convolutional Neural Networks (CNNs) have shown promise on facial expression recognition. However, the performance degrades dramatically under real-world settings due to variations introduced by subtle facial appearance changes, head pose variations, illumination changes, and occlusions. In this paper, a novel island loss is proposed to enhance the discriminative power of deeply learned features. Specifically, the island loss is designed to reduce the intra-class variations while enlarging the inter-class differences simultaneously. Experimental results on four benchmark expression databases have demonstrated that the CNN with the proposed island loss (IL-CNN) outperforms the baseline CNN models with either traditional softmax loss or center loss and achieves comparable or better performance compared with the state-of-the-art methods for facial expression recognition.", "title": "" }, { "docid": "a6fd8b8506a933a7cc0530c6ccda03a8", "text": "Native ecosystems are continuously being transformed mostly into agricultural lands. Simultaneously, a large proportion of fields are abandoned after some years of use. Without any intervention, altered landscapes usually show a slow reversion to native ecosystems, or to novel ecosystems. One of the main barriers to vegetation regeneration is poor propagule supply. Many restoration programs have already implemented the use of artificial perches in order to increase seed availability in open areas where bird dispersal is limited by the lack of trees. To evaluate the effectiveness of this practice, we performed a series of meta-analyses comparing the use of artificial perches versus control sites without perches. We found that setting-up artificial perches increases the abundance and richness of seeds that arrive in altered areas surrounding native ecosystems. Moreover, density of seedlings is also higher in open areas with artificial perches than in control sites without perches. 
Taken together, our results support the use of artificial perches to overcome the problem of poor seed availability in degraded fields, promoting and/or accelerating the restoration of vegetation in concordance with the surrounding landscape.", "title": "" }, { "docid": "0d8075b26c8e8554ec8eec5f41a73c23", "text": "As robots are going to spread in human society, the study of their appearance becomes a critical matter when assessing robots performance and appropriateness for an application and for the employment in different countries, with different background cultures and religions. Robot appearance categories are commonly divided in anthropomorphic, zoomorphic and functional. In this paper, we offer a theoretical contribution by introducing a new category, called `theomorphic robots', in which robots carry the shape and the identity of a supernatural creature or object within a religion. Discussing the theory of dehumanisation and the different categories of supernatural among different religions, we hypothesise the possible advantages of the theomorphic design for different applications.", "title": "" }, { "docid": "7db7d64ce262c5e4681d91c6faf29f67", "text": "Conceptual natural language processing systems usually rely on case frame instantiation to recognize events and role objects in text. But generating a good set of case frames for a domain is timeconsuming, tedious, and prone to errors of omission. We have developed a corpus-based algorithm for acquiring conceptual case frames empirically from unannotated text. Our algorithm builds on previous research on corpus-based methods for acquiring extraction patterns and semantic lexicons. Given extraction patterns and a semantic lexicon for a domain, our algorithm learns semantic preferences for each extraction pattern and merges the syntactically compatible patterns to produce multi-slot case frames with selectional restrictions. The case frames generate more cohesive output and produce fewer false hits than the original extraction patterns. Our system requires only preclassified training texts and a few hours of manual review to filter the dictionaries, demonstrating that conceptual case frames can be acquired from unannotated text without special training resources.", "title": "" }, { "docid": "2164fbc381033f7be87d075440053c0e", "text": "Recently there has been a surge of interest in neural architectures for complex structured learning tasks. Along this track, we are addressing the supervised task of relation extraction and named-entity recognition via recursive neural structures and deep unsupervised feature learning. Our models are inspired by several recent works in deep learning for natural language. We have extended the previous models, and evaluated them in various scenarios, for relation extraction and namedentity recognition. In the models, we avoid using any external features, so as to investigate the power of representation instead of feature engineering. We implement the models and proposed some more general models for future work. We will briefly review previous works on deep learning and give a brief overview of recent progresses relation extraction and named-entity recognition.", "title": "" }, { "docid": "f1e9c9106dd3cdd7b568d5513b39ac7a", "text": "This paper presents a novel zero-voltage switching (ZVS) approach to a grid-connected single-stage flyback inverter. 
The soft-switching of the primary switch is achieved by allowing negative current from the grid side through bidirectional switches placed on the secondary side of the transformer. Basically, the negative current discharges the metal-oxide-semiconductor field-effect transistor's output capacitor, thereby allowing turn on of the primary switch under zero voltage. To optimize the amount of reactive current required to achieve ZVS, a variable-frequency control scheme is implemented over the line cycle. In addition, the bidirectional switches on the secondary side of the transformer have ZVS during the turn- on times. Therefore, the switching losses of the bidirectional switches are negligible. A 250-W prototype has been implemented to validate the proposed scheme. Experimental results confirm the feasibility and superior performance of the converter compared with the conventional flyback inverter.", "title": "" } ]
scidocsrr
848c5795eb511129daf830035882f41e
CAM: a topology aware minimum cost flow based resource manager for MapReduce applications in the cloud
[ { "docid": "70a07b1aedcb26f7f03ffc636b1d84a8", "text": "This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model.\n We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph datastructure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data-and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy gets better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.", "title": "" }, { "docid": "25adc988a57d82ae6de7307d1de5bf71", "text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [1] is a popular open-source map-reduce implementation which is being used in companies like Yahoo, Facebook etc. to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language - HiveQL, which are compiled into map-reduce jobs that are executed using Hadoop. In addition, HiveQL enables users to plug in custom map-reduce scripts into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog - Metastore - that contains schemas and statistics, which are useful in data exploration, query optimization and query compilation. In Facebook, the Hive warehouse contains tens of thousands of tables and stores over 700TB of data and is being used extensively for both reporting and ad-hoc analyses by more than 200 users per month.", "title": "" } ]
[ { "docid": "105be9ee3e824f3ef5f79e6b00ab2607", "text": "Living in a digital age, where all kinds of information are accessible electronically at all times, organizations worldwide struggle to keep their information assets secure. Interestingly, the majority of organizational information systems security (ISS) incidents are the direct or indirect result of human errors. To explore how organizations can defend themselves against harmful ISS behaviour, employees’ information security awareness (ISA) has become a top-priority in research and practice. ISA is referred to as a state of consciousness and knowledge about security issues and is a strong predictor of security compliant behaviour. However, to date knowledge about the factors that are responsible for some employees having a higher level of ISA than others is limited and widely dispersed among multidisciplinary outlets. Therefore, our study provides an extensive review of the literature on ISA’s antecedents with the aim to synthesize the literature and to reveal areas for further research. We analysed 44 publications to discern various institutional, individual, and socio-environmental ISA antecedents. Identifying and understanding these factors will be useful for stakeholders interested in improving the effectiveness of awareness strategies, in increasing employees’ ISA and in ultimately lowering the substantial ISS threats for organizations and society.", "title": "" }, { "docid": "79de6591c4d7bc26d2f2eea2f2b19756", "text": "This paper presents a MOOC-ready online FPGA laboratory platform which targets computer system experiments. Goal of design is to provide user with highly approximate experience and results as offline experiments. Rich functions are implemented by utilizing SoC FPGA as the controller of lab board. The design details and effects are discussed in this paper.", "title": "" }, { "docid": "a14af931467e6f19443ff574f4c8b543", "text": "EDURange, a cloud-based platform, uses cybersecurity exercises to help undergraduates develop analytical abilities and a security mindset.", "title": "" }, { "docid": "fe16f2d946b3ea7bc1169d5667365dbe", "text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.", "title": "" }, { "docid": "5950c26d7a823192dc25b1637203ac43", "text": "The nature of pain has been the subject of bitter controversy since the turn of the century (1). 
There are currently two opposing theories of pain: (i) specificity theory, which holds that pain is a specific modality like vision or hearing, \"with its own central and peripheral apparatus\" (2), and (ii) pattern theory, which maintains that the nerve impulse pattern for pain is produced by intense stimulation of nonspecific receptors since \"there are no specific fibers and no specific endings\" (3). Both theories derive from earlier concepts proposed by von Frey (4) and Goldscheider (5) in 1894, and historically they are held to be mutually exclusive. Since it is our purpose here to propose a new theory of pain mechanisms, we shall state explicitly at the outset where we agree and disagree with specificity and pattern theories.", "title": "" }, { "docid": "ab13e9d7c06549e4a74625982e5e38e3", "text": "Spiking neural networks (SNNs) attempt to emulate information processing in the mammalian brain based on massively parallel arrays of neurons that communicate via spike events. SNNs offer the possibility to implement embedded neuromorphic circuits, with high parallelism and low power consumption compared to the traditional von Neumann computer paradigms. Nevertheless, the lack of modularity and poor connectivity shown by traditional neuron interconnect implementations based on shared bus topologies is prohibiting scalable hardware implementations of SNNs. This paper presents a novel hierarchical network-on-chip (H-NoC) architecture for SNN hardware, which aims to address the scalability issue by creating a modular array of clusters of neurons using a hierarchical structure of low and high-level routers. The proposed H-NoC architecture incorporates a spike traffic compression technique to exploit SNN traffic patterns and locality between neurons, thus reducing traffic overhead and improving throughput on the network. In addition, adaptive routing capabilities between clusters balance local and global traffic loads to sustain throughput under bursting activity. Analytical results show the scalability of the proposed H-NoC approach under different scenarios, while simulation and synthesis analysis using 65-nm CMOS technology demonstrate high-throughput, low-cost area, and power consumption per cluster, respectively.", "title": "" }, { "docid": "18d7fbf79f58f01e7c01881c1e697c50", "text": "This paper presents a simulation-based method for evaluating the static offset in discrete-time comparators. The proposed procedure is based on a closed-loop algorithm which forces the input signal of the comparator to quickly converge to its effective threshold. From this value, the final offset is computed by subtracting the ideal reference. The proposal was validated using realistic behavioral models and transistor-level simulations in a 0.18μm CMOS technology. The application of the method reduces by several orders of magnitude the number of cycles needed to characterize the offset during design, drastically improving productivity.", "title": "" }, { "docid": "2d356c3d189bbd3bf9ba9db9b5878780", "text": "Training deep networks for semantic segmentation requires annotation of large amounts of data, which can be time-consuming and expensive. Unfortunately, these trained networks still generalize poorly when tested in domains not consistent with the training data. In this paper, we show that by carefully presenting a mixture of labeled source domain and proxy-labeled target domain data to a network, we can achieve state-of-the-art unsupervised domain adaptation results. 
With our design, the network progressively learns features specific to the target domain using annotation from only the source domain. We generate proxy labels for the target domain using the network’s own predictions. Our architecture then allows selective mining of easy samples from this set of proxy labels, and hard samples from the annotated source domain. We conduct a series of experiments with the GTA5, Cityscapes and BDD100k datasets on synthetic-to-real domain adaptation and geographic domain adaptation, showing the advantages of our method over baselines and existing approaches.", "title": "" }, { "docid": "ad6d10ad2165bbfd664e366d47c3ab89", "text": "This paper presents a novel boundary based semiautomatic tool, ByLabel, for accurate image annotation. Given an image, ByLabel first detects its edge features and computes high quality boundary fragments. Current labeling tools require the human to accurately click on numerous boundary points. ByLabel simplifies this to just selecting among the boundary fragment proposals that ByLabel automatically generates. To evaluate the performance of ByLabel, 10 volunteers, with no experiences of annotation, labeled both synthetic and real images. Compared to the commonly used tool LabelMe, ByLabel reduces image-clicks and time by 73% and 56% respectively, while improving the accuracy by 73% (from 1.1 pixel average boundary error to 0.3 pixel). The results show that our ByLabel outperforms the state-of-the-art annotation tool in terms of efficiency, accuracy and user experience. The tool is publicly available: http://webdocs.cs.ualberta.ca/~vis/bylabel/.", "title": "" }, { "docid": "dcf24411ffed0d5bf2709e005f6db753", "text": "Dynamic Causal Modelling (DCM) is an approach first introduced for the analysis of functional magnetic resonance imaging (fMRI) to quantify effective connectivity between brain areas. Recently, this framework has been extended and established in the magneto/encephalography (M/EEG) domain. DCM for M/EEG entails the inversion of a full spatiotemporal model of evoked responses, over multiple conditions. This model rests on a biophysical and neurobiological generative model for electrophysiological data. A generative model is a prescription of how data are generated. The inversion of a DCM provides conditional densities on the model parameters and, indeed on the model itself. These densities enable one to answer key questions about the underlying system. A DCM comprises two parts; one part describes the dynamics within and among neuronal sources, and the second describes how source dynamics generate data in the sensors, using the lead-field. The parameters of this spatiotemporal model are estimated using a single (iterative) Bayesian procedure. In this paper, we will motivate and describe the current DCM framework. Two examples show how the approach can be applied to M/EEG experiments.", "title": "" }, { "docid": "b9c54211575909291cbd4428781a3b05", "text": "The purpose is to arrive at recognition of multicolored objects invariant to a substantial change in viewpoint, object geometry and illumination. Assuming dichromatic reflectance and white illumination, it is shown that normalized color rgb, saturation S and hue H, and the newly proposed color models c1c2c3 and l1l2l3 are all invariant to a change in viewing direction, object geometry and illumination. Further, it is shown that hue H and l1l2l3 are also invariant to highlights. Finally, a change in spectral power distribution of the illumination is considered to propose a new color constant color model m1m2m3. To evaluate the recognition accuracy differentiated for the various color models, experiments have been carried out on a database consisting of 500 images taken from 3-D multicolored man-made objects. The experimental results show that highest object recognition accuracy is achieved by l1l2l3 and hue H followed by c1c2c3, normalized color rgb and m1m2m3 under the constraint of white illumination. Also, it is demonstrated that recognition accuracy degrades substantially for all color features other than m1m2m3 with a change in illumination color. The recognition scheme and images are available within the PicToSeek and Pic2Seek systems on-line at: http://www.wins.uva.nl/research/isis/zomax/. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "3141caaf2d19070e46e66b7a219c131e", "text": "The sudden inability to walk is one of the most glaring impairments following spinal cord injury (SCI). Regardless of time since injury, recovery of walking has been found to be one of the top priorities for those with SCI as well as their rehabilitation professionals [1]. Despite clinical management and promising basic science research advances, a recent multicenter prospective study revealed that 59% of those with SCI are unable to ambulate without assistance from others at one year following injury [2]. The worldwide incidence of SCI is between 10.4-83 per million [3], and there are approximately 265,000 persons with SCI living in the United States [4]. Thus, there is a tremendous consumer demand to improve ambulation outcomes following SCI.", "title": "" }, { "docid": "ac1018fb262f38faf50071603292c3c0", "text": "This paper provides an overview and an evaluation of the Cetus source-to-source compiler infrastructure. The original goal of the Cetus project was to create an easy-to-use compiler for research in automatic parallelization of C programs. In the meantime, Cetus has been used for many additional program transformation tasks. It serves as a compiler infrastructure for many projects in the US and internationally. Recently, Cetus has been supported by the National Science Foundation to build a community resource. The compiler has gone through several iterations of benchmark studies and implementations of those techniques that could improve the parallel performance of these programs. These efforts have resulted in a system that favorably compares with state-of-the-art parallelizers, such as Intel’s ICC. A key limitation of advanced optimizing compilers is their lack of runtime information, such as the program input data. We will discuss and evaluate several techniques that support dynamic optimization decisions. Finally, as there is an extensive body of proposed compiler analyses and transformations for parallelization, the question of the importance of the techniques arises. This paper evaluates the impact of the individual Cetus techniques on overall program performance.", "title": "" }, { "docid": "a9e3a274d732f57efc0aa093e24653f8", "text": "This work presents our recent progress in the development of an Si wire waveguiding system for microphotonics devices. The Si wire waveguide promises size reduction and high-density integration of optical circuits due to its strong light confinement. However, large connection and propagation losses had been serious problems.
We solved these problems by using a spot-size converter and improving the microfabrication technology. As a result, propagation losses as low as 2.8 dB/cm for a 400 × 200 nm waveguide and a coupling loss of 0.5 dB per connection were obtained. As we have the technologies for the fabrication of complex, practical optical devices using Si wire waveguides, we used them to make microphotonics devices, such as a ring resonator and lattice filter. The devices we made exhibit excellent characteristics because of the microfabrication with the precision of a few nanometers. We have also demonstrated that Si wire waveguides have great potential for use in nonlinear optical devices.", "title": "" }, { "docid": "b7524787cce58c3bf34a9d7fd3c8af90", "text": "Convolutional Neural Networks and Graphics Processing Units have been at the core of a paradigm shift in computer vision research that some researchers have called “the algorithmic perception revolution.” This thesis presents the implementation and analysis of several techniques for performing artistic style transfer using a Convolutional Neural Network architecture trained for large-scale image recognition tasks. We present an implementation of an existing algorithm for artistic style transfer in images and video. The neural algorithm separates and recombines the style and content of arbitrary images. Additionally, we present an extension of the algorithm to perform weighted artistic style transfer.", "title": "" }, { "docid": "423f246065662358b1590e8f59a2cc55", "text": "Caused by the rising interest in traffic surveillance for simulations and decision management many publications concentrate on automatic vehicle detection or tracking. Quantities and velocities of different car classes form the data basis for almost every traffic model. Especially during mass events or disasters a wide-area traffic monitoring on demand is needed which can only be provided by airborne systems. This means a massive amount of image information to be handled. In this paper we present a combination of vehicle detection and tracking which is adapted to the special restrictions given on image size and flow but nevertheless yields reliable information about the traffic situation. Combining a set of modified edge filters it is possible to detect cars of different sizes and orientations with minimum computing effort, if some a priori information about the street network is used. The found vehicles are tracked between two consecutive images by an algorithm using Singular Value Decomposition. Concerning their distance and correlation the features are assigned pairwise with respect to their global positioning among each other. Choosing only the best correlating assignments it is possible to compute reliable values for the average velocities.", "title": "" }, { "docid": "1bdd7392e4fc5d78c7976bd3497cce64", "text": "PURPOSE\nInterests have been rapidly growing in the field of radiotherapy to replace CT with magnetic resonance imaging (MRI), due to superior soft tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR-only radiotherapy also simplifies clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT-equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and DRR-based patient positioning. Synthetic CT estimation is also important for PET attenuation correction in hybrid PET-MR systems.
We propose in this work a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images.\n\n\nMETHODS\nThe proposed method builds upon recent developments of deep learning and convolutional neural networks in the computer vision literature. The proposed DCNN model has 27 convolutional layers interleaved with pooling and unpooling layers and 35 million free parameters, which can be trained to learn a direct end-to-end mapping from MR images to their corresponding CTs. Training such a large model on our limited data is made possible through the principle of transfer learning and by initializing model weights from a pretrained model. Eighteen brain tumor patients with both CT and T1-weighted MR images are used as experimental data and a sixfold cross-validation study is performed. Each sCT generated is compared against the real CT image of the same patient on a voxel-by-voxel basis. Comparison is also made with respect to an atlas-based approach that involves deformable atlas registration and patch-based atlas fusion.\n\n\nRESULTS\nThe proposed DCNN method produced a mean absolute error (MAE) below 85 HU for 13 of the 18 test subjects. The overall average MAE was 84.8 ± 17.3 HU for all subjects, which was found to be significantly better than the average MAE of 94.5 ± 17.8 HU for the atlas-based method. The DCNN method also provided significantly better accuracy when being evaluated using two other metrics: the mean squared error (188.6 ± 33.7 versus 198.3 ± 33.0) and the Pearson correlation coefficient(0.906 ± 0.03 versus 0.896 ± 0.03). Although training a DCNN model can be slow, training only need be done once. Applying a trained model to generate a complete sCT volume for each new patient MR image only took 9 s, which was much faster than the atlas-based approach.\n\n\nCONCLUSIONS\nA DCNN model method was developed, and shown to be able to produce highly accurate sCT estimations from conventional, single-sequence MR images in near real time. Quantitative results also showed that the proposed method competed favorably with an atlas-based method, in terms of both accuracy and computation speed at test time. Further validation on dose computation accuracy and on a larger patient cohort is warranted. Extensions of the method are also possible to further improve accuracy or to handle multi-sequence MR images.", "title": "" }, { "docid": "db54bd5886f1a181b65fe593e753891e", "text": "In recent years, more efficient and positive use of current water resources together with global warming becomes important. New technologies and ideas have been developed for many years to optimal use of water resources especially in agricultural field. Growers irrigate their own areas uniformly. However demand of water, fertilizer and agricultural chemicals are different for each trees or crops depending on plant ages and chemical content of soil. Determination of water demand for crops and trees is important to protect fresh water resources. In this study, a prototype of solar powered, low cost, remote controlled real time monitoring irrigation system was designed to control drip irrigation. Software (ValCon, developed by authors with C# language in Visual Studio.Net 2008 editor) was developed to control irrigation valve and monitor water content of soil. Control method of irrigation (automatic or manual) can be selected by users. Only water content of soil was monitored. 
Nevertheless by using sensors which measure other features of water or air, it is also possible to extend the designed system. Remote controlled site-specific irrigation scheme prevents moisture stress of trees and salification besides providing the efficient use of fresh water resources. Also, this irrigation method removes labour that is needed for flooding irrigation", "title": "" }, { "docid": "d1d862185a20e1f1efc7d3dc7ca8524b", "text": "In what ways do the online behaviors of wizards and ogres map to players’ actual leadership status in the offline world? What can we learn from players’ experience in Massively Multiplayer Online games (MMOGs) to advance our understanding of leadership, especially leadership in online settings (E-leadership)? As part of a larger agenda in the emerging field of empirically testing the ‘‘mapping’’ between the online and offline worlds, this study aims to tackle a central issue in the E-leadership literature: how have technology and technology mediated communications transformed leadership-diagnostic traits and behaviors? To answer this question, we surveyed over 18,000 players of a popular MMOG and also collected behavioral data of a subset of survey respondents over a four-month period. Motivated by leadership theories, we examined the connection between respondents’ offline leadership status and their in-game relationship-oriented and task-related-behaviors. Our results indicate that individuals’ relationship-oriented behaviors in the virtual world are particularly relevant to players’ leadership status in voluntary organizations, while their task-oriented behaviors are marginally linked to offline leadership status in voluntary organizations, but not in companies. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "71c34b48cd22a0a8bc9b507e05919301", "text": "Under the action of wind, tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. While the alongwind loads have been successfully treated using quasi-steady and strip theories in terms of gust loading factors, the acrosswind and torsional loads cannot be treated in this manner, since these loads cannot be related in a straightforward manner to the fluctuations in the approach flow. Accordingly, most current codes and standards provide little guidance for the acrosswind and torsional response. To fill this gap, a preliminary, interactive database of aerodynamic loads is presented, which can be accessed by any user with Microsoft Explorer at the URL address http://www.nd.edu/~nathaz/. The database is comprised of high-frequency base balance measurements on a host of isolated tall building models. Combined with the analysis procedure provided, the nondimensional aerodynamic loads can be used to compute the wind-induced response of tall buildings. The influence of key parameters, such as the side ratio, aspect ratio, and turbulence characteristics for rectangular sections, is also discussed. The database and analysis procedure are viable candidates for possible inclusion as a design guide in the next generation of codes and standards. DOI: 10.1061/(ASCE)0733-9445(2003)129:3(394). CE Database keywords: Aerodynamics; Wind loads; Wind tunnels; Databases; Random vibration; Buildings, high-rise; Turbulence. Introduction: Under the action of wind, typical tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. It has been recognized that for many high-rise buildings the acrosswind and torsional response may exceed the alongwind response in terms of both serviceability and survivability designs (e.g., Kareem 1985). Nevertheless, most existing codes and standards provide only procedures for the alongwind response and provide little guidance for the critical acrosswind and torsional responses. This is partially attributed to the fact that the acrosswind and torsional responses, unlike the alongwind, result mainly from the aerodynamic pressure fluctuations in the separated shear layers and wake flow fields, which have prevented, to date, any acceptable direct analytical relation to the oncoming wind velocity fluctuations. Further, higher-order relationships may exist that are beyond the scope of the current discussion (Gurley et al. 2001). Wind tunnel measurements have thus served as an effective alternative for determining acrosswind and torsional loads. For example, the high-frequency base balance (HFBB) and aeroelastic model tests are presently used as routine tools in commercial design practice. However, considering the cost and lead time needed for wind tunnel testing, a simplified procedure would be desirable in the preliminary design stages, allowing early assessment of the structural resistance, evaluation of architectural or structural changes, or assessment of the need for detailed wind tunnel tests. Two kinds of wind tunnel-based procedures have been introduced in some of the existing codes and standards to treat the acrosswind and torsional response. The first is an empirical expression for the wind-induced acceleration, such as that found in the National Building Code of Canada (NBCC) (NRCC 1996), while the second is an aerodynamic-load-based procedure such as those in Australian Standard (AS 1989) and the Architectural Institute of Japan (AIJ) Recommendations (AIJ 1996). The latter approach offers more flexibility as the aerodynamic load provided can be used to determine the response of any structure having generally the same architectural features and turbulence environment of the tested model, regardless of its structural characteristics. Such flexibility is made possible through the use of well-established wind-induced response analysis procedures. Meanwhile, there are some databases involving isolated, generic building shapes available in the literature (e.g., Kareem 1988; Choi and Kanda 1993; Marukawa et al. 1992), which can be expanded using HFBB tests. For example, a number of commercial wind tunnel facilities have accumulated data of actual buildings in their natural surroundings, which may be used to supplement the overall loading database. Though such HFBB data has been collected, it has not been assimilated and made accessible to the worldwide community, to fully realize its potential. Fortunately, the Internet now provides the opportunity to pool and archive the international stores of wind tunnel data. This paper takes the first step in that direction by introducing an interactive database of aerodynamic loads obtained from HFBB measurements on a host of isolated tall building models, accessible to the worldwide Internet community via Microsoft Explorer at the URL address http://www.nd.edu/~nathaz. Through the use of this interactive portal, users can select the geometry and dimensions of a model building, from the available choices, and specify an urban or suburban condition. Upon doing so, the aerodynamic load spectra for the alongwind, acrosswind, or torsional response is displayed along with a Java interface that permits users to specify a reduced frequency of interest and automatically obtain the corresponding spectral value. When coupled with the concise analysis procedure, discussion, and example provided, the database provides a comprehensive tool for computation of the wind-induced response of tall buildings. Wind-Induced Response Analysis Procedure: Using the aerodynamic base bending moment or base torque as the input, the wind-induced response of a building can be computed using random vibration analysis by assuming idealized structural mode shapes, e.g., linear, and considering the special relationship between the aerodynamic moments and the generalized wind loads (e.g., Tschanz and Davenport 1983; Zhou et al. 2002). This conventional approach yields only approximate estimates of the mode-generalized torsional moments and potential inaccuracies in the lateral loads if the sway mode shapes of the structure deviate significantly from the linear assumption. As a result, this procedure often requires the additional step of mode shape corrections to adjust the measured spectrum weighted by a linear mode shape to the true mode shape (Vickery et al. 1985; Boggs and Peterka 1989; Zhou et al. 2002). However, instead of utilizing conventional generalized wind loads, a base-bending-moment-based procedure is suggested here for evaluating equivalent static wind loads and response. As discussed in Zhou et al. (2002), the influence of nonideal mode shapes is rather negligible for base bending moments, as opposed to other quantities like base shear or generalized wind loads. As a result, base bending moments can be used directly, presenting a computationally efficient scheme, averting the need for mode shape correction and directly accommodating nonideal mode shapes. Application of this procedure for the alongwind response has proven effective in recasting the traditional gust loading factor approach in a new format (Zhou et al. 1999; Zhou and Kareem 2001). The procedure can be conveniently adapted to the acrosswind and torsional response (Boggs and Peterka 1989; Kareem and Zhou 2003). It should be noted that the response estimation based on the aerodynamic database is not advocated for acrosswind response calculations in situations where the reduced frequency is equal to or slightly less than the Strouhal number (Simiu and Scanlan 1996; Kijewski et al. 2001). In such cases, the possibility of negative aerodynamic damping, a manifestation of motion-induced effects, may cause the computed results to be inaccurate (Kareem 1982). Assuming a stationary Gaussian process, the expected maximum base bending moment response in the alongwind or acrosswind directions or the base torque response can be expressed in the following form:", "title": "" } ]
scidocsrr
9f97096f251f4f6900bda5a2f7e61948
Automatic Question Generation from Sentences
[ { "docid": "1e464db177e96b6746f8f827c582cc31", "text": "In order to respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This work presents the first work on a machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. We show accurate results on a large collection of free-form questions used in TREC 10 and 11.", "title": "" } ]
[ { "docid": "ba118d5a155e1c74d748ae6db557838d", "text": "Born 1963; diploma in architecture and in civil engineering; Ph.D. in structural engineering RWTH Aachen; founder of Bureau d’études Weinand, Liège; professor at EPFL and director of the IBOIS/EPFL Lausanne; co-founder of SHEL Architecture Engineering and Production Design, Geneva. Olivier BAVEREL Associate Prof. Dr. Navier Research center, ENPC, Champs-sur-Marne ENSAG, France baverel@lami.enpc.fr", "title": "" }, { "docid": "287873a6428cfbf8fc9066c24d977d50", "text": "Deployment of embedded technologies is increasingly being examined in industrial supply chains as a means for improving efficiency through greater control over purchase orders, inventory and product related information. Central to this development has been the advent of technologies such as bar codes, Radio Frequency Identification (RFID) systems, and wireless sensors which when attached to a product, form part of the product’s embedded systems infrastructure. The increasing integration of these technologies dramatically contributes to the evolving notion of a “smart product”, a product which is capable of incorporating itself into both physical and information environments. The future of this revolution in objects equipped with smart embedded technologies is one in which objects can not only identify themselves, but can also sense and store their condition, communicate This work was partly funded as part of the BRIDGE project by the European Commission within the Sixth Framework Programme (2002-2006) IP Nr. IST-FP6-033546. T. Sánchez López (B) · B. Patkai · D. McFarlane Engineering Department, Institute for Manufacturing, University of Cambridge, 16 Mill Lane, Cambridge CB2 1RX, UK e-mail: tsl26@cam.ac.uk B. Patkai e-mail: bp282@cam.ac.uk D. McFarlane e-mail: dcm@cam.ac.uk D. C. Ranasinghe The School of Computer Science, The University of Adelaide, Adelaide, South Australia, 5005, Australia e-mail: damith@cs.adelaide.edu.au with other objects and distributed infrastructures, and take decisions related to managing their life cycle. The object can essentially “plug” itself into a compatible systems infrastructure owned by different partners in a supply chain. However, as in any development process that will involve more than one end user, the establishment of a common foundation and understanding is essential for interoperability, efficient communication among involved parties and for developing novel applications. In this paper, we contribute to creating that common ground by providing a characterization to aid the specification and construction of “smart objects” and their underlying technologies. Furthermore, our work provides an extensive set of examples and potential applications of different categories of smart objects.", "title": "" }, { "docid": "025827e421d6430c4039de4fe35f6dba", "text": "We present FlowComposer, a web application that helps users compose musical lead sheets, i.e. melodies with chord labels. FlowComposer integrates a constrained-based lead sheet generation tool in which the user retains full control over the generation process. Users specify the style of the lead sheet by selecting a corpus of existing lead sheets. The system then produces a complete lead sheet in that style, either from scratch, or from a partial lead sheet entered by the user. The generation algorithm is based on a graphical model that combines two Markov chains enriched by Regular constraints, representing the melody and its related chord sequence. 
The model is sampled using our recent result in efficient sampling of the Regular constraint. The paper reports on the design and deployment of FlowComposer as a web-service, part of an ecosystem of online tools for the creation of lead sheets. FlowComposer is currently used in professional musical productions, from which we collect and show a number of representative examples.", "title": "" }, { "docid": "5c3137529a63c0c1ba45c22b292f3008", "text": "Information extraction by text segmentation (IETS) applies to cases in which data values of interest are organized in implicit semi-structured records available in textual sources (e.g. postal addresses, bibliographic information, ads). It is an important practical problem that has been frequently addressed in the recent literature. In this paper we introduce ONDUX (On Demand Unsupervised Information Extraction), a new unsupervised probabilistic approach for IETS. As other unsupervised IETS approaches, ONDUX relies on information available on pre-existing data to associate segments in the input string with attributes of a given domain. Unlike other approaches, we rely on very effective matching strategies instead of explicit learning strategies. The effectiveness of this matching strategy is also exploited to disambiguate the extraction of certain attributes through a reinforcement step that explores sequencing and positioning of attribute values directly learned on-demand from test data, with no previous human-driven training, a feature unique to ONDUX. This assigns to ONDUX a high degree of flexibility and results in superior effectiveness, as demonstrated by the experimental evaluation we report with textual sources from different domains, in which ONDUX is compared with a state-of-art IETS approach.", "title": "" }, { "docid": "d4e5a5aa65017360db9a87590a728892", "text": "This work presents a chaotic path planning generator which is used in autonomous mobile robots, in order to cover a terrain. The proposed generator is based on a nonlinear circuit, which shows chaotic behavior. The bit sequence, produced by the chaotic generator, is converted to a sequence of planned positions, which satisfies the requirements for unpredictability and fast scanning of the entire terrain. The nonlinear circuit and the trajectory-planner are described thoroughly. Simulation tests confirm that with the proposed path planning generator better results can be obtained with regard to previous works. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ac5fa7720ad4bf726b1f9f12ee0ac7e6", "text": "Sherif F. Nagueh, Chair, MD, FASEa1, Otto A. Smiseth, Co-Chair, MD, PhDb2, Christopher P. Appleton, MDc1, Benjamin F. Byrd III, MD, FASEd1, Hisham Dokainish, MD, FASEe1, Thor Edvardsen, MD, PhDb2, Frank A. Flachskampf, MD, PhD, FESCf2, Thierry C. Gillebert, MD, PhD, FESCg2, Allan L. Klein, MD, FASEh1, Patrizio Lancellotti, MD, PhD, FESCi2, Paolo Marino, MD, FESCj2, Jae K. Oh, MDk1, Bogdan Alexandru Popescu, MD, PhD, FESC, FASEl2, and Alan D. Waggoner, MHS, RDCSm1, Houston, Texas; Oslo, Norway; Phoenix, Arizona; Nashville, Tennessee; Hamilton, Ontario, Canada; Uppsala, Sweden; Ghent and Liège, Belgium; Cleveland, Ohio; Novara, Italy; Rochester, Minnesota; Bucharest, Romania; and St. Louis, Missouri", "title": "" }, { "docid": "25deed9855199ef583524a2eef0456f0", "text": "We introduce a method for creating very dense reconstructions of datasets, particularly turn-table varieties. 
The method takes in initial reconstructions (of any origin) and makes them denser by interpolating depth values in two-dimensional image space within a superpixel region and then optimizing the interpolated value via image consistency analysis across neighboring images in the dataset. One of the core assumptions in this method is that depth values per pixel will vary gradually along a gradient for a given object. As such, turntable datasets, such as the dinosaur dataset, are particularly easy for our method. Our method modernizes some existing techniques and parallelizes them on a GPU, which produces results faster than other densification methods.", "title": "" }, { "docid": "e733b08455a5ca2a5afa596268789993", "text": "In this paper a new PWM inverter topology suitable for medium voltage (2300/4160 V) adjustable speed drive (ASD) systems is proposed. The modular inverter topology is derived by combining three standard 3-phase inverter modules and a 0.33 pu output transformer. The output voltage is high quality, multistep PWM with low dv/dt. Further, the approach also guarantees balanced operation and 100% utilization of each 3-phase inverter module over the entire speed range. These features enable the proposed topology to be suitable for powering constant torque as well as variable torque type loads. Clean power utility interface of the proposed inverter system can be achieved via an 18-pulse input transformer. Analysis, simulation, and experimental results are shown to validate the concepts.", "title": "" }, { "docid": "3a1705ac3a95ec08280995d15ce8d705", "text": "Although hybrid-electric vehicles have been studied mainly with the aim of increasing fuel economy, little has been done in order to improve both fuel economy and performance. However, vehicular-dynamic-performance characteristics such as acceleration and climbing ability are of prime importance in military vehicles such as the high-mobility multipurpose wheeled vehicle (HMMWV). This paper concentrates on the models that describe hybridized HMMWV vehicles and the simulation results of those models. Parallel and series configurations have been modeled using the advanced-vehicle-simulator software developed by the National Renewable Energy Laboratory. Both a retrofit approach and a constant-power approach have been tested, and the results are compared to the conventional model results. In addition, the effects of using smaller engines than the existing ones in hybrid HMMWV drive trains have been studied, and the results are compared to the data collected from an actual implementation of such a vehicle. Moreover, the integrated-starter/alternator (ISA) configuration has been considered, and the results were encouraging", "title": "" }, { "docid": "6b04721c0fc7135ddd0fdf76a9cfdd79", "text": "Functional magnetic resonance imaging (fMRI) was used to compare brain activity during the retrieval of coarse- and fine-grained spatial details and episodic details associated with a familiar environment. Long-time Toronto residents compared pairs of landmarks based on their absolute geographic locations (requiring either coarse or fine discriminations) or based on previous visits to those landmarks (requiring episodic details). An ROI analysis of the hippocampus showed that all three conditions activated the hippocampus bilaterally. 
Fine-grained spatial judgments recruited an additional region of the right posterior hippocampus, while episodic judgments recruited an additional region of the right anterior hippocampus, and a more extensive region along the length of the left hippocampus. To examine whole-brain patterns of activity, Partial Least Squares (PLS) analysis was used to identify sets of brain regions whose activity covaried with the three conditions. All three comparison judgments recruited the default mode network including the posterior cingulate/retrosplenial cortex, middle frontal gyrus, hippocampus, and precuneus. Fine-grained spatial judgments also recruited additional regions of the precuneus, parahippocampal cortex and the supramarginal gyrus. Episodic judgments recruited the posterior cingulate and medial frontal lobes as well as the angular gyrus. These results are discussed in terms of their implications for theories of hippocampal function and spatial and episodic memory.", "title": "" }, { "docid": "60a6c8588c46fa2aa63a3348723f2bb1", "text": "An early warning system can help to identify at-risk students, or predict student learning performance by analyzing learning portfolios recorded in a learning management system (LMS). Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students’ academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an ‘‘early warning’’ system to predict at-risk students while a course is in progress. We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. Our results showed that, timedependent variables extracted from LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART), supplemented by AdaBoost is the best classifier for the evaluation of learning performance investigated by this study. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "01e35e372cde2ce0df50d1ff85e59df6", "text": "In this paper, we present an automatic method for hair segmentation. Our algorithm is divided into two steps. Firstly, we take information from frequential and color analysis in order to create binary masks as descriptor of the hair location. Secondly, we perform a 'matting treatment' which is a process to extract foreground object from an image. This approach is based on markers which positions are initialized from the fusion of frequential and color masks. At the end the matting treatment result is use to segment the hair. Results are evaluated using semi- manual segmentation references.", "title": "" }, { "docid": "b99944ad31c5ad81d0e235c200a332b4", "text": "This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. 
Two methods are studied: an end-to-end, deep neural network that directly uses audio waveforms as input versus a pipelined approach that performs ASR (Automatic Speech Recognition) on the question, followed by text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find both methods to tolerate noise at similar levels.", "title": "" }, { "docid": "d7e4cc890523bb7670f0f323bba8bf0f", "text": "Most popular evolutionary algorithms for multiobjective optimisation maintain a population of solutions from which individuals are selected for reproduction. In this paper, we introduce a simpler evolution scheme for multiobjective problems, called the Pareto Archived Evolution Strategy (PAES). We argue that PAES may represent the simplest possible non-trivial algorithm capable of generating diverse solutions in the Pareto optimal set. The algorithm is identified as being a (1 + 1) evolution strategy, using local search from a population of one but using a reference archive of previously found solutions in order to identify the approximate dominance ranking of the current and candidate solution vectors. PAES is intended as a good baseline approach, against which more involved methods may be compared, and may also serve well in some real-world applications when local search seems superior to or competitive with population-based methods. The performance of the new algorithm is compared with that of a MOEA based on the Niched Pareto GA on a real world application from the telecommunications field. In addition, we include results from experiments carried out on a suite of four test functions, to demonstrate the algorithm’s general capability.", "title": "" }, { "docid": "4e8dbd3470028541cb53f70cefd54abd", "text": "Design strategy and efficiency optimization of ultrahigh-frequency (UHF) micro-power rectifiers using diode-connected MOS transistors with very low threshold voltage is presented. The analysis takes into account the conduction angle, leakage current, and body effect in deriving the output voltage. Appropriate approximations allow analytical expressions for the output voltage, power consumption, and efficiency to be derived. A design procedure to maximize efficiency is presented. A superposition method is proposed to optimize the performance of multiple-output rectifiers. Constant-power scaling and area-efficient design are discussed. Using a 0.18-μm CMOS process with zero-threshold transistors, 900-MHz rectifiers with different conversion ratios were designed, and extensive HSPICE simulations show good agreement with the analysis. A 24-stage triple-output rectifier was designed and fabricated, and measurement results verified the validity of the analysis", "title": "" }, { "docid": "f2d8a2b77fd3bc9625ae4f2881bf2729", "text": "Urothelial carcinoma (UC) is characterized by expression of a plethora of cell surface antigens, thus offering opportunities for specific therapeutic targeting with use of antibody-drug conjugates (ADCs). ADCs are structured from two major constituents, a monoclonal antibody (mAb) against a specific target and a cytotoxic drug connected via a linker molecule. Several ADCs are developed against different UC surface markers, but the ones at most advanced stages of development include sacituzumab govitecan (IMMU-132), enfortumab vedotin (ASG-22CE/ASG-22ME), ASG-15ME for advanced UC, and oportuzumab monatox (VB4-845) for early UC.
Several new targets are identified and utilized for novel or existing ADC testing. The most promising ones include human epidermal growth factor receptor 2 (HER2) and members of the fibroblast growth factor receptor axis (FGF/FGFR). Positive preclinical and early clinical results are reported in many cases, thus the next step involves further improving efficacy and reducing toxicity as well as testing combination strategies with approved agents.", "title": "" }, { "docid": "826e01210bb9ce8171ed72043b4a304d", "text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.", "title": "" }, { "docid": "b45d1003afac487dd3d5477621a85f74", "text": "Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success. There are also interesting interactions between these factors. For example, the language of the title should be targeted to the community where the content is submitted, yet it should also highlight the distinctive nature of the content. In this paper, we examine how these factors interact to determine the popularity of social media content. We do so by studying resubmissions, i.e., content that has been submitted multiple times, with multiple titles, to multiple different communities. Such data allows us to ‘tease apart’ the extent to which each factor influences the success of that content. The models we develop help us understand how to better target social media content: by using the right title, for the right community, at the right time.", "title": "" }, { "docid": "423c37020f097cf42635b0936709c7fe", "text": "Two major goals in machine learning are the discovery of complex multidimensional solutions and continual improvement of existing solutions. In this paper, we argue that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals. We demonstrate the power of complexification through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures. NEAT is applied to an open-ended coevolutionary robot duel domain where robot controllers compete head to head. Because the robot duel domain supports a wide range of sophisticated strategies, and because coevolution benefits from an escalating arms race, it serves as a suitable testbed for observing the effect of evolving increasingly complex controllers. The result is an arms race of increasingly sophisticated strategies. When compared to the evolution of networks with fixed structure, complexifying networks discover significantly more sophisticated strategies.
The results suggest that in order to realize the full potential of evolution, and search in general, solutions must be allowed to complexify as well as optimize.", "title": "" }, { "docid": "701cad5b373f3dbc0497c23057c55c8f", "text": "In this paper, we focus on the problem of answer triggering addressed by Yang et al. (2015), which is a critical component for a real-world question answering system. We employ a hierarchical gated recurrent neural tensor (HGRNT) model to capture both the context information and the deep interactions between the candidate answers and the question. Our result on F value achieves 42.6%, which surpasses the baseline by over 10 %.", "title": "" } ]
scidocsrr
c83097b29942dc4e878c66424f47a918
An intelligent home networking system
[ { "docid": "be99f6ba66d573547a09d3429536049e", "text": "With the development of sensor, wireless mobile communication, embedded system and cloud computing, the technologies of Internet of Things have been widely used in logistics, Smart Meter, public security, intelligent building and so on. Because of its huge market prospects, Internet of Things has been paid close attention by several governments all over the world, which is regarded as the third wave of information technology after Internet and mobile communication network. Bridging between wireless sensor networks with traditional communication networks or Internet, IOT Gateway plays an important role in IOT applications, which facilitates the seamless integration of wireless sensor networks and mobile communication networks or Internet, and the management and control with wireless sensor networks. In this paper, we proposed an IOT Gateway system based on Zigbee and GPRS protocols according to the typical IOT application scenarios and requirements from telecom operators, presented the data transmission between wireless sensor networks and mobile communication networks, protocol conversion of different sensor network protocols, and control functionalities for sensor networks, and finally gave an implementation of prototyping system and system validation.", "title": "" } ]
[ { "docid": "2323e926fb6aab6984be3e8537e17eef", "text": "In this paper, a novel method is proposed for Facial Expression Recognition (FER) using dictionary learning to learn both identity and expression dictionaries simultaneously. Accordingly, an automatic and comprehensive feature extraction method is proposed. The proposed method accommodates real-valued scores to a probability of what percent of the given Facial Expression (FE) is present in the input image. To this end, a dual dictionary learning method is proposed to learn both regression and feature dictionaries for FER. Then, two regression classification methods are proposed using a regression model formulated based on dictionary learning and two known classification methods including Sparse Representation Classification (SRC) and Collaborative Representation Classification (CRC). Convincing results are acquired for FER on the CK+, CK, MMI and JAFFE image databases compared to several state-of-the-arts. Also, promising results are obtained from evaluating the proposed method for generalization on other databases. The proposed method not only demonstrates excellent performance by obtaining high accuracy on all four databases but also outperforms other state-of-the-art approaches.", "title": "" }, { "docid": "83d0dc6c2ad117cabbd7cd80463dbe43", "text": "Internet addiction is a new and often unrecognized clinical disorder that can cause relational, occupational, and social problems. Pathological gambling is compared to problematic internet use because of overlapping diagnostic criteria. As computers are used with great frequency, detection and diagnosis of internet addiction is often difficult. Symptoms of a possible problem may be masked by legitimate use of the internet. Clinicians may overlook asking questions about computer use. To help clinicians identify internet addiction in practice, this paper provides an overview of the problem and the various subtypes that have been identified. The paper reviews conceptualizations of internet addiction, various forms that the disorder takes, and treatment considerations for working with this emergent client population.", "title": "" }, { "docid": "bf23a6fcf1a015d379dee393a294761c", "text": "This study addresses the inconsistency of contemporary literature on defining the link between leadership styles and personality traits. The plethora of literature on personality traits has culminated into symbolic big five personality dimensions but there is still a dearth of research on developing representative leadership styles despite the perennial fascination with the subject. Absence of an unequivocal model for developing representative styles in conjunction with the use of several non-mutually exclusive existing leadership styles has created a discrepancy in developing a coherent link between leadership and personality. This study sums up 39 different styles of leadership into five distinct representative styles on the basis of similar theoretical underpinnings and common characteristics to explore how each of these five representative leadership style relates to personality dimensions proposed by big five model.", "title": "" }, { "docid": "ea3fd6ece19949b09fd2f5f2de57e519", "text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. 
Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.", "title": "" }, { "docid": "be89ea7764b6a22ce518bac03a8c7540", "text": "In remote, rugged or sensitive environments ground based mapping for condition assessment of species is both time consuming and potentially destructive. The application of photogrammetric methods to generate multispectral imagery and surface models based on UAV imagery at appropriate temporal and spatial resolutions is described. This paper describes a novel method to combine processing of NIR and visible image sets to produce multiband orthoimages and DEM models from UAV imagery with traditional image location and orientation uncertainties. This work extends the capabilities of recently developed commercial software (Pix4UAV from Pix4D) to show that image sets of different modalities (visible and NIR) can be automatically combined to generate a 4 band orthoimage. Reconstruction initially uses all imagery sets (NIR and visible) to ensure all images are in the same reference frame such that a 4-band orthoimage can be created. We analyse the accuracy of this automatic process by using ground control points and an evaluation on the matching performance between images of different modalities is shown. By combining sub-decimetre multispectral imagery with high spatial resolution surface models and ground based observation it is possible to generate detailed maps of vegetation assemblages at the species level. Potential uses with other conservation monitoring are discussed.", "title": "" }, { "docid": "2fbcd34468edf53ee08e0a76a048c275", "text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. 
We validate our method by demonstrating consistent improvements across several real-world datasets.", "title": "" }, { "docid": "914ffd6fd4ef493c9b2c67d89b8e2d18", "text": "PET/CT medical image fusion has important clinical significance. As the multiwavelet transform has several particular advantages over scalar wavelets in image processing, this paper proposes a medical image fusion algorithm based on the multiwavelet transform after an in-depth study of wavelet theory. The algorithm achieves PET/CT fusion with a wavelet-coefficient fusion method. Experimental results show that the fused image combines information from the source images, adds more detail and texture information, and achieves a good fusion result. Based on the proposed algorithm, we can obtain the best result when using gradient fusion in the low-frequency part and classification fusion in the high-frequency part.", "title": "" }, { "docid": "f8d554c215cc40ddc71171b3f266c43a", "text": "Nowadays, Edge computing allows application intelligence to be pushed to the boundaries of a network in order to get high-performance processing closer to both data sources and end-users. In this scenario, the Horizon 2020 BEACON project - enabling federated Cloud-networking - can be used to set up Fog computing environments where applications can be deployed in order to instantiate Edge computing applications. In this paper, we focus on the deployment orchestration of Edge computing distributed services on such fog computing environments. We assume that a distributed service is composed of many microservices. Users, by means of geolocation deployment constraints, can select regions in which microservices will be deployed. Specifically, we present an Orchestration Broker that, starting from an ad-hoc OpenStack-based Heat Orchestration Template (HOT) service manifest of an Edge computing distributed service, produces several HOT microservice manifests including the deployment instructions for each involved Fog computing node. Experiments demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "4bf7ad74cb51475e7e20f32aa4b767d9", "text": "function parameters = main(); % File main.m, created by Eugene Izhikevich. August 28, 2001 % Uses voltage-clamp data from N voltage-step experiments to % determine (in)activation parameters of a transient current. % Data provided by user: global v times current E p q load v.dat % N by 2 matrix of voltage steps % [from, to; from, to;...] load times.dat % Time mesh of the voltage-clamped data load current.dat % Matrix of the current values. E = 50; % Reverse potential p = 3; % The number of activation gates q = 1; % The number of inactivation gates % Guess of initial values of parameters % activation V_1/2 k V_max sigma C_amp C_base par(1:6) = [ -50 20 -40 30 0.5 0.1]; % inactivation V_1/2 k V_max sigma C_amp C_base par(7:12) =[ -60 -5 -70 20 5 1]; par(13) = 1; % Maximal conductance g_max % If E, p, or q are not known, add par(14)=60, etc. % and modify test.m parameters = fmins(’test’,par);", "title": "" }, { "docid": "9c61e4971829a799b6e979f1b6d69387", "text": "This work examines humanoid social robots in Japan and North America with a view to comparing and contrasting the projects cross-culturally. In North America, I look at the work of Cynthia Breazeal at the Massachusetts Institute of Technology and her sociable robot project: Kismet. In Japan, at Osaka University, I consider the project of Hiroshi Ishiguro: Repliée-Q2. I first distinguish between utilitarian and affective social robots.
Then, drawing on the published works of Breazeal and Ishiguro, I examine the proposed vision of each project. Next, I examine specific characteristics (embodied and social intelligence, morphology and aesthetics, and moral equivalence) of Kismet and Repliée with a view to comparing the underlying concepts associated with each. These features are in turn connected to the societal preconditions of robots generally. Specifically, the role that the history of robots, theology/spirituality, and popular culture play in the reception of and attitude toward robots is considered.", "title": "" }, { "docid": "21c3f6d61eeeb4df1bdb500f388f71f3", "text": "Status of This Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract The Extensible Authentication Protocol (EAP), defined in RFC 3748, enables extensible network access authentication. This document specifies the EAP key hierarchy and provides a framework for the transport and usage of keying material and parameters generated by EAP authentication algorithms, known as \"methods\". It also provides a detailed system-level security analysis, describing the conditions under which the key management guidelines described in RFC 4962 can be satisfied.", "title": "" }, { "docid": "00b98536f0ecd554442a67fb31f77f4c", "text": "We use a large, nationally-representative sample of working-age adults to demonstrate that personality (as measured by the Big Five) is stable over a four-year period. Average personality changes are small and do not vary substantially across age groups. Intra-individual personality change is generally unrelated to experiencing adverse life events and is unlikely to be economically meaningful. Like other non-cognitive traits, personality can be modeled as a stable input into many economic decisions. JEL classification: J3, C18.", "title": "" }, { "docid": "85c4c0ffb224606af6bc3af5411d31ca", "text": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-to-fine attention models lag behind state-of-the-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.", "title": "" }, { "docid": "eda6795cb79e912a7818d9970e8ca165", "text": "This study aimed to examine the relationship between maximum leg extension strength and sprinting performance in youth elite male soccer players. Sixty-three youth players (12.5 ± 1.3 years) performed 5 m, flying 15 m and 20 m sprint tests and a zigzag agility test on a grass field using timing gates. Two days later, subjects performed a one-repetition maximum leg extension test (79.3 ± 26.9 kg).
Weak to strong correlations were found between leg extension strength and the time to perform 5 m (r = -0.39, p = 0.001), flying 15 m (r = -0.72, p < 0.001) and 20 m (r = -0.67, p < 0.001) sprints; between body mass and 5 m (r = -0.43, p < 0.001), flying 15 m (r = -0.75, p < 0.001), 20 m (r = -0.65, p < 0.001) sprints and agility (r =-0.29, p < 0.001); and between height and 5 m (r = -0.33, p < 0.01) and flying 15 m (r = -0.74, p < 0.001) sprints. Our results show that leg muscle strength and anthropometric variables strongly correlate with sprinting ability. This suggests that anthropometric characteristics should be considered to compare among youth players, and that youth players should undergo strength training to improve running speed.", "title": "" }, { "docid": "e50842fc8438af7fe6ce4b6d9a5439a7", "text": "OBJECTIVE\nTimely recognition and optimal management of atherogenic dyslipidemia (AD) and residual vascular risk (RVR) in family medicine.\n\n\nBACKGROUND\nThe global increase of the incidence of obesity is accompanied by an increase in the incidence of many metabolic and lipoprotein disorders, in particular AD, as an typical feature of obesity, metabolic syndrome, insulin resistance and diabetes type 2. AD is an important factor in cardio metabolic risk, and is characterized by a lipoprotein profile with low levels of high-density lipoprotein (HDL), high levels of triglycerides (TG) and high levels of low-density lipoprotein (LDL) cholesterol. Standard cardiometabolic risk assessment using the Framingham risk score and standard treatment with statins is usually sufficient, but not always that effective, because it does not reduce RVR that is attributed to elevated TG and reduced HDL cholesterol. RVR is subject to reduction through lifestyle changes or by pharmacological interventions. In some studies it was concluded that dietary interventions should aim to reduce the intake of calories, simple carbohydrates and saturated fats, with the goal of reaching cardiometabolic suitability, rather than weight reduction. Other studies have found that the reduction of carbohydrates in the diet or weight loss can alleviate AD changes, while changes in intake of total or saturated fat had no significant influence. In our presented case, a lifestyle change was advised as a suitable diet with reduced intake of carbohydrates and a moderate physical activity of walking for at least 180 minutes per week, with an recommendation for daily intake of calories alignment with the total daily (24-hour) energy expenditure (24-EE), depending on the degree of physical activity, type of food and the current health condition. Such lifestyle changes together with combined medical therapy with Statins, Fibrates and Omega-3 fatty acids, resulted in significant improvement in atherogenic lipid parameters.\n\n\nCONCLUSION\nUnsuitable atherogenic nutrition and insufficient physical activity are the new risk factors characteristic for AD. Nutritional interventions such as diet with reduced intake of carbohydrates and calories, moderate physical activity, combined with pharmacotherapy can improve atherogenic dyslipidemic profile and lead to loss of weight. 
Although one gram of fat releases twice as many kilocalories as carbohydrates, carbohydrates seem to have a greater atherogenic potential, which should be explored in the future.", "title": "" }, { "docid": "480c8d16f3e58742f0164f8c10a206dd", "text": "Dyna is an architecture for reinforcement learning agents that interleaves planning, acting, and learning in an online setting. This architecture aims to make fuller use of limited experience to achieve better performance with fewer environmental interactions. Dyna has been well studied in problems with a tabular representation of states, and has also been extended to some settings with larger state spaces that require function approximation. However, little work has studied Dyna in environments with high-dimensional state spaces like images. In Dyna, the environment model is typically used to generate one-step transitions from selected start states. We applied one-step Dyna to several games from the Arcade Learning Environment and found that the model-based updates offered surprisingly little benefit, even with a perfect model. However, when the model was used to generate longer trajectories of simulated experience, performance improved dramatically. This observation also holds when using a model that is learned from experience; even though the learned model is flawed, it can still be used to accelerate learning.", "title": "" }, { "docid": "f5b9cde4b7848f803b3e742298c92824", "text": "For many years, analysis of short chain fatty acids (volatile fatty acids, VFAs) has been routinely used in identification of anaerobic bacteria. In numerous scientific papers, the fatty acids between 9 and 20 carbons in length have also been used to characterize genera and species of bacteria, especially nonfermentative Gram negative organisms. With the advent of fused silica capillary columns (which allows recovery of hydroxy acids and resolution of many isomers), it has become practical to use gas chromatography of whole cell fatty acid methyl esters to identify a wide range of organisms.", "title": "" }, { "docid": "4b951d88ad9c3ca0b14b88cce1a34b14", "text": "Burrows’s Delta is the most established measure for stylometric difference in literary authorship attribution. Several improvements on the original Delta have been proposed. However, a recent empirical study showed that none of the proposed variants constitute a major improvement in terms of authorship attribution performance. With this paper, we try to improve our understanding of how and why these text distance measures work for authorship attribution. We evaluate the effects of standardization and vector normalization on the statistical distributions of features and the resulting text clustering quality. Furthermore, we explore supervised selection of discriminant words as a procedure for further improving authorship attribution.", "title": "" }, { "docid": "a6e71e4be58c51b580fcf08e9d1a100a", "text": "This dissertation is concerned with the processing of high velocity text using event processing means. It comprises a scientific approach for combining the area of information filtering and event processing, in order to analyse fast and voluminous streams of text. In order to be able to process text streams within event-driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate.
Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis was described using event-driven concepts. In a second step, a reference architecture was designed that described essential architectural components required for the design of information filtering and text stream analysis systems in an event-driven manner. Further to this, a set of architectural patterns for building event-driven text analysis systems was derived that supports the design and implementation of such systems. Subsequently, a prototype was built using the theoretical foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that small sliding-window-based corpora are similar to larger sliding windows and thus can be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams was undertaken that showed that event stream summary statistics can provide interesting insights into the characteristics of high velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and were evaluated using standard performance measures. The goal was to study the efficiency of traditional as well as new algorithms within the given context of high velocity text stream data, in order to provide advice on which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.", "title": "" } ]
scidocsrr
7bd878e9630d7ff947aa57cd3b1fb147
KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings
[ { "docid": "36fef38de53386e071ee2a1996aa733f", "text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.", "title": "" }, { "docid": "b4ab51818d868b2f9796540c71a7bd17", "text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.", "title": "" }, { "docid": "ca7e7fa988bf2ed1635e957ea6cd810d", "text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.", "title": "" } ]
[ { "docid": "eea9332a263b7e703a60c781766620e5", "text": "The use of topic models to analyze domainspecific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expertprovided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.", "title": "" }, { "docid": "3a2456fce98db50aee2d342ef838b349", "text": "There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers.", "title": "" }, { "docid": "8a6da37bae9c4ed6a771905a98b4cafc", "text": "Compressing convolutional neural networks (CNNs) has received ever-increasing research focus. However, most existing CNN compression methods do not interpret their inherent structures to distinguish the implicit redundancy. In this paper, we investigate the problem of CNN compression from a novel interpretable perspective. The relationship between the input feature maps and 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantitate the feature map importance in a feature-agnostic manner to guide model compression. Kernel clustering is further conducted based on the KSE indicator to accomplish highprecision CNN compression. KSE is capable of simultaneously compressing each layer in an efficient way, which is significantly faster compared to previous data-driven feature map pruning methods. We comprehensively evaluate the compression and speedup of the proposed method on CIFAR-10, SVHN and ImageNet 2012. Our method demonstrates superior performance gains over previous ones. 
In particular, it achieves 4.7× FLOPs reduction and 2.9× compression on ResNet-50 with only a Top-5 accuracy drop of 0.35% on ImageNet 2012, which significantly outperforms state-of-the-art methods.", "title": "" }, { "docid": "08af1b80f0e58fbaa75a5a61b9a716e3", "text": "Case Based Reasoning (CBR) is an important technique in artificial intelligence, which has been applied to various kinds of problems in a wide range of domains. Selecting a case representation formalism is critical for the proper operation of the overall CBR system. In this paper, we survey and evaluate all of the existing case representation methodologies. Moreover, the case retrieval and future challenges for effective CBR are explained. Case representation methods are grouped into knowledge-intensive approaches and traditional approaches. The first group outweighs the second one. The first methods depend on ontology and enhance all CBR processes including case representation, retrieval, storage, and adaptation. By using a proposed set of qualitative metrics, the existing methods based on ontology for case representation are studied and evaluated in detail. All these systems have limitations. No approach exceeds 53% of the specified metrics. The results of the survey explain the current limitations of CBR systems. They show that ontology usage in case representation needs improvement to achieve semantic representation and semantic retrieval in CBR systems. Keywords—Case based reasoning; Ontological case representation; Case retrieval; Clinical decision support system; Knowledge management", "title": "" }, { "docid": "e5b1ddccf7807925c01af3134d77ceb1", "text": "Large high-resolution displays are becoming increasingly common in research settings, providing data scientists with visual interfaces for the analysis of large datasets. Numerous studies have demonstrated unique perceptual and cognitive benefits afforded by these displays in visual analytics and information visualization tasks. However, the effects of these displays on knowledge discovery in exploratory visual analysis are still poorly understood. We present the results of a small-scale study to better understand how display size and resolution affect insight. Analyzing participants' verbal statements, we find preliminary evidence that larger displays with more pixels can significantly increase the number of discoveries reported during visual exploration, while yielding broader, more integrative insights. Furthermore, we find important differences in how participants performed the same visual exploration task using displays of varying sizes. We tie these results to extant work and propose explanations by considering the cognitive and interaction costs associated with visual exploration.", "title": "" }, { "docid": "6c7211f6b98618fe8d003c546598ae1b", "text": "Relation classification is one of the important research issues in the field of Natural Language Processing (NLP). It is a crucial intermediate step in complex knowledge-intensive applications like automatic knowledgebase construction, question answering, textual entailment, search engines, etc. Recently, neural networks have given state-of-the-art results in various relation extraction tasks without depending much on manually engineered features.
In this paper, we present a brief review of the different models that have been proposed for relation classification and compare their results.", "title": "" }, { "docid": "cd5289448b62ede9b30df1872cf4f505", "text": "In recent years, the computer graphics and computer vision communities have devoted significant attention to research based on Internet visual media resources. The huge number of images and videos continually being uploaded by millions of people has stimulated a variety of visual media creation and editing applications, while also posing serious challenges of retrieval, organization, and utilization. This article surveys recent research as regards processing of large collections of images and video, including work on analysis, manipulation, and synthesis. It discusses the problems involved, and suggests possible future directions in this emerging research area.", "title": "" }, { "docid": "467d48d121ee8b9f792dbfbc7e281cc1", "text": "This paper focuses on improving face recognition performance with a new signature combining implicit facial features with explicit soft facial attributes. This signature has two components: the existing patch-based features and the soft facial attributes. A deep convolutional neural network adapted from state-of-the-art networks is used to learn the soft facial attributes. Then, a signature matcher is introduced that merges the contributions of both patch-based features and the facial attributes. In this matcher, the matching scores computed from patch-based features and the facial attributes are combined to obtain a final matching score. The matcher is also extended so that different weights are assigned to different facial attributes. The proposed signature and matcher have been evaluated with the UR2D system on the UHDB31 and IJB-A datasets. The experimental results indicate that the proposed signature achieves better performance than using only patch-based features. The Rank-1 accuracy is improved significantly by 4% and 0.37% on the two datasets when compared with the UR2D system.", "title": "" }, { "docid": "a94bc60cd8b33646bd24b12ecb5f3202", "text": "We study a sequential model of Bayesian social learning in networks in which agents have heterogeneous preferences, and neighbors tend to have similar preferences—a phenomenon known as homophily. We find that the density of network connections determines the impact of preference diversity and homophily on learning. When connections are sparse, diverse preferences are harmful to learning, and homophily may lead to substantial improvements. In contrast, in a dense network, preference diversity is beneficial. Intuitively, diverse ties introduce more independence between observations while providing less information individually. Homophilous connections individually carry more useful information, but multiple observations become redundant.", "title": "" }, { "docid": "539a70c18a22d303c6cdd91d07b7cd00", "text": "The group mutual exclusion problem extends the traditional mutual exclusion problem by associating a type (or a group) with each critical section. In this problem, processes requesting critical sections of the same type can execute their critical sections concurrently. However, processes requesting critical sections of different types must execute their critical sections in a mutually exclusive manner. We present a distributed algorithm for solving the group mutual exclusion problem based on the notion of surrogate-quorum.
Intuitively, our algorithm uses the quorum that has been successfully locked by a request as a surrogate to service other compatible requests for the same type of critical section. Unlike the existing quorum-based algorithms for group mutual exclusion, our algorithm achieves a low message complexity of O(q) and a low (amortized) bit-message complexity of O(bqr), where q is the maximum size of a quorum, b is the maximum number of processes from which a node can receive critical section requests, and r is the maximum size of a request while maintaining both synchronization delay and waiting time at two message hops. As opposed to some existing quorum-based algorithms, our algorithm can adapt without performance penalties to dynamic changes in the set of groups. Our simulation results indicate that our algorithm outperforms the existing quorum-based algorithms for group mutual exclusion by as much as 45 percent in some cases. We also discuss how our algorithm can be extended to satisfy certain desirable properties such as concurrent entry and unnecessary blocking freedom.", "title": "" }, { "docid": "91ada9daf86fd400852394c974d42e2b", "text": "Enterprise Resource Planning: A Trio of Resources Cindy P. Stevens a a Currently an assistant professor at Wentworth Institute of Technology, Boston, Massachusetts, for the Management of Technology (BMT) program. She holds an M.A. in Technical and Professional Communication from East Carolina University in Greenville, North Carolina, and a Ph.D. in Technology Management specializing in Digital Communications from Indiana State University. She can be reached at stevensc@wit.edu. Published online: 21 Dec 2006.", "title": "" }, { "docid": "2d12d91005d1de356a61186cbde8b444", "text": "Research into the perceptual and cognitive effects of playing video games is an area of increasing interest for many investigators. Over the past decade, expert video game players (VGPs) have been shown to display superior performance compared to non-video game players (nVGPs) on a range of visuospatial and attentional tasks. A benefit of video game expertise has recently been shown for task switching, suggesting that VGPs also have superior cognitive control abilities compared to nVGPs. In two experiments, we examined which aspects of task switching performance this VGP benefit may be localized to. With minimal trial-to-trial interference from minimally overlapping task set rules, VGPs demonstrated a task switching benefit compared to nVGPs. However, this benefit disappeared when proactive interference between tasks was increased, with substantial stimulus and response overlap in task set rules. We suggest that VGPs have no generalized benefit in task switching-related cognitive control processes compared to nVGPs, with switch cost reductions due instead to a specific benefit in controlling selective attention.", "title": "" }, { "docid": "a836b7771937a15bc90d27de9fb8f9e4", "text": "Principal component analysis (PCA) is a mainstay of modern data analysis a black box that is widely used but poorly understood. The goal of this paper is to dispel the magic behind this black box. This tutorial focuses on building a solid intuition for how and why principal component analysis works; furthermore, it crystallizes this knowledge by deriving from first principals, the mathematics behind PCA . This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. 
The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of the power of PCA as well as the when, the how and the why of applying this technique.", "title": "" }, { "docid": "73edaa7319dcf225c081f29146bbb385", "text": "Sign language is a specific area of human gesture communication and a full-edged complex language that is used by various deaf communities. In Bangladesh, there are many deaf and dumb people. It becomes very difficult to communicate with them for the people who are unable to understand the Sign Language. In this case, an interpreter can help a lot. So it is desirable to make computer to understand the Bangladeshi sign language that can serve as an interpreter. In this paper, a Computer Vision-based Bangladeshi Sign Language Recognition System (BdSL) has been proposed. In this system, separate PCA (Principal Component Analysis) is used for Bengali Vowels and Bengali Numbers recognition. The system is tested for 6 Bengali Vowels and 10 Bengali Numbers.", "title": "" }, { "docid": "f48e2c6509147c4ac1cfb25e47fa0dcf", "text": "The initial impetus for the current popularity of statistical methods in computational linguistics was provided in large part by the papers on part-of-speech tagging by Church [20], DeRose [25], and Garside [34]. In contradiction to common wisdom, these taggers showed that it was indeed possible to carve partof-speech disambiguation out of the apparently monolithic problem of natural language understanding, and solve it with impressive accuracy. The concensus at the time was that part-of-speech disambiguation could only be done as part of a global analysis, including syntactic analysis, discourse analysis, and even world knowledge. For instance, to correctly disambiguate help in give John helpN versus let John helpV, one apparently needs to parse the sentences, making reference to the differing subcategorization frames of give and let. Similar examples show that even world knowledge must be taken into account. For instance, off is a preposition in I turned off highway I-90, but a particle in I turned off my radio, so assigning the correct part of speech in I turned off the spectroroute depends on knowing whether spectroroute is the name of a road or the name of a device. Such examples do demonstrate that the problem of part-of-speech disambiguation cannot be solved without solving all the rest of the natural-language understanding problem. But Church, DeRose and Garside showed that, even if an exact solution is far beyond reach, a reasonable approximate solution is quite feasible. In this chapter, I would like to survey further developments in part-of-speech disambiguation (‘tagging’). I would also like to consider a question raised by the success of tagging, namely, what piece of the NL-understanding problem we can carve off next. ‘Partial parsing’ is a cover term for a range of different techniques for recovering some but not all of the information contained in a traditional syntactic analysis. Partial parsing techniques, like tagging techniques, aim for reliability and robustness in the face of the vagaries of natural text, by sacrificing completeness of analysis and accepting a low but non-zero error rate.", "title": "" }, { "docid": "50f5bb2f0c71bf0d529a0e65cd6066b3", "text": "It would be a significant understatement to say that sales promotion is enjoying a dominant role in the promotional mixes of most consumer goods companies. 
The 1998 Cox Direct 20th Annual Survey of Promotional Practices suggests that many companies spend as much as 75% of their total promotional budgets on sales promotion and only 25% on advertising. This is up from 57% spent on sales promotions in 1981 (Landler and DeGeorge). The reasons for this unprecedented growth have been welldocumented. Paramount among these is the desire on the part of many organizations for a quick bolstering of sales. The obvious corollary to this is the desire among consumer groups for increased value in the products they buy. Value can be defined as the ratio of perceived benefits to price, and is linked to performance and meeting consumers' expectations (Zeithaml 1988). In today's value-conscious environment, marketers must stress the overall value of their products (Blackwell, Miniard and Engel 2001). Consumers have reported that coupons, price promotions and good value influence 75 80% of their brand choice decisions (Cox 1998). Today, \"many Americans, brought up on a steady diet of commercials, view advertising with cynicism or indifference. With less money to shop, they're far more apt to buy on price\" (Landler and DeGeorge 1991, 68).", "title": "" }, { "docid": "3c25366758f0e102a1008605eedf8f4d", "text": "Taobao is a network retailer which founded in May 2003 and now is the most popular online retail platform in China with nearly 500 million registered users. More than 60 million people visit Taobao everyday and over 48000 items are sold every minute on this platform. During the expansion progress, Taobao has transformed from a C2C network market into a worldwide E-commerce trading platform including C2C, group purchase, distribution and other electronic commerce modes. And its future strategy is focusing on community, content and local. This article studies service and business model of Taobao from five aspects: service description and market context, service supply chain, quality of service, service management system and risk management. An analysis of the present situation of Taobao reveals that it has formed its unique business pattern and raises problems and suggestions. For Taobao stepping into cross-border E-commerce, the article analyses its strength, weakness and points out the direction of its future.", "title": "" }, { "docid": "7d11d25dc6cd2822d7f914b11b7fe640", "text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. They also provide several simple guidelines for training good word embeddings.", "title": "" }, { "docid": "7ee1c7c8c54a61bf257b78ec8ff11ead", "text": "This paper presents a novel design of six-degree-of-freedom (6-DOF) magnetically levitated (maglev) positioner, where its translator and stator are implemented by four groups of 1-D Halbach permanent-magnet (PM) arrays and a set of square coils, respectively. By controlling the eight-phase square coil array underneath the Halbach PM arrays, the translator can achieve 6-DOF motion. The merits of the proposed design are mainly threefold. First, this design is potential to deliver unlimited-stroke planar motion with high power efficiency if additional coil switching system is equipped. 
Second, multiple translators are allowed to operate simultaneously above the same square coil stator. Third, the proposed maglev system is less complex in regard to the commutation law and the phase number of coils. Furthermore, in this paper, an analytical modeling approach is established to accurately predict the Lorentz force generated by the square coil with the 1-D Halbach PM array by considering the corner region, and the proposed modeling approach can be extended easily to apply on other coil designs such as the circular coil, etc. The proposed force model is evaluated experimentally, and the results show that the approach is accurate in both single- and multiple-coil cases. Finally, a prototype of the proposed maglev positioner is fabricated to demonstrate its 6-DOF motion ability. Experimental results show that the root-mean-square error of the implemented maglev prototype is around 50 nm in planar motion, and its velocity can achieve up to 100 mm/s.", "title": "" }, { "docid": "aa2af8bd2ef74a0b5fa463a373a4c049", "text": "What modern game theorists describe as “fictitious play” is not the learning process George W. Brown defined in his 1951 paper. Brown’s original version differs in a subtle detail, namely the order of belief updating. In this note we revive Brown’s original fictitious play process and demonstrate that this seemingly innocent detail allows for an extremely simple and intuitive proof of convergence in an interesting and large class of games: nondegenerate ordinal potential games. © 2006 Elsevier Inc. All rights reserved. JEL classification: C72", "title": "" } ]
scidocsrr
e1bd7f184fd81c8d7ef9f58e5bb7a8c1
Central Pattern Generators augmented with virtual model control for quadruped rough terrain locomotion
[ { "docid": "9b49a4673456ab8e9f14a0fe5fb8bcc7", "text": "Legged robots offer the potential to navigate a wide variety of terrains that are inaccessible to wheeled vehicles. In this paper we consider the planning and control tasks of navigating a quadruped robot over a wide variety of challenging terrain, including terrain which it has not seen until run-time. We present a software architecture that makes use of both static and dynamic gaits, as well as specialized dynamic maneuvers, to accomplish this task. Throughout the paper we highlight two themes that have been central to our approach: 1) the prevalent use of learning algorithms, and 2) a focus on rapid recovery and replanning techniques; we present several novel methods and algorithms that we developed for the quadruped and that illustrate these two themes. We evaluate the performance of these different methods, and also present and discuss the performance of our system on the official Learning Locomotion tests.", "title": "" }, { "docid": "efec2ff9384e17a698c88e742e41bcc9", "text": "— A new versatile Hydraulically-powered Quadruped robot (HyQ) has been developed to serve as a platform to study not only highly dynamic motions such as running and jumping, but also careful navigation over very rough terrain. HyQ stands 1 meter tall, weighs roughly 90kg and features 12 torque-controlled joints powered by a combination of hydraulic and electric actuators. The hydraulic actuation permits the robot to perform powerful and dynamic motions that are hard to achieve with more traditional electrically actuated robots. This paper describes design and specifications of the robot and presents details on the hardware of the quadruped platform, such as the mechanical design of the four articulated legs and of the torso frame, and the configuration of the hydraulic power system. Results from the first walking experiments are presented along with test studies using a previously built prototype leg. 1 INTRODUCTION The development of mobile robotic platforms is an important and active area of research. Within this domain, the major focus has been to develop wheeled or tracked systems that cope very effectively with flat and well-structured solid surfaces (e.g. laboratories and roads). In recent years, there has been considerable success with robotic vehicles even for off-road conditions [1]. However, wheeled robots still have major limitations and difficulties in navigating uneven and rough terrain. These limitations and the capabilities of legged animals encouraged researchers for the past decades to focus on the construction of biologically inspired legged machines. These robots have the potential to outperform the more traditional designs with wheels and tracks in terms of mobility and versatility. The vast majority of the existing legged robots have been, and continue to be, actuated by electric motors with high gear-ratio reduction drives, which are popular because of their size, price, ease of use and accuracy of control. However, electric motors produce small torques relative to their size and weight, thereby making reduction drives with high ratios essential to convert velocity into torque. Unfortunately, this approach results in systems with reduced speed capability and limited passive back-driveability and therefore not very suitable for highly dynamic motions and interactions with unforeseen terrain variance. 
Significant examples of such legged robots are: the biped series of HRP robots [2], Toyota humanoid robot [3], and Honda's Asimo [4]; and the quadruped robot series of Hirose et al. [5], Sony's AIBO [6] and Little Dog [7]. In combination with high position gain control and …", "title": "" } ]
[ { "docid": "543348825e8157926761b2f6a7981de2", "text": "With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.", "title": "" }, { "docid": "37ead2d23df0af074800e7d2220ef950", "text": "This study aimed to better understand the psychological mechanisms, referred to in the job demands–resources model as the energetic and motivational processes, that can explain relationships between job demands (role overload and ambiguity), job resources (job control and social support), and burnout (emotional exhaustion, depersonalization, and personal accomplishment). Drawing on self-determination theory, we examined whether psychological resources (perceived autonomy, competence, and relatedness) act as specific mediators between particular job demands and burnout as well as between job resources and burnout. Participants were 356 school board employees. Results of the structural equation analyses provide support for our hypothesized model, which proposes that certain job demands and resources are involved in both the energetic and motivational processes—given their relationships with psychological resources—and that they distinctively predict burnout components. Implications for burnout research and management practices are discussed.", "title": "" }, { "docid": "a0a9fc47ba3694864e64e4f29c3c5735", "text": "Severe cases of traumatic brain injury (TBI) require neurocritical care, the goal being to stabilize hemodynamics and systemic oxygenation to prevent secondary brain injury. It is reported that approximately 45 % of dysoxygenation episodes during critical care have both extracranial and intracranial causes, such as intracranial hypertension and brain edema. For this reason, neurocritical care is incomplete if it only focuses on prevention of increased intracranial pressure (ICP) or decreased cerebral perfusion pressure (CPP). Arterial hypotension is a major risk factor for secondary brain injury, but hypertension with a loss of autoregulation response or excess hyperventilation to reduce ICP can also result in a critical condition in the brain and is associated with a poor outcome after TBI. 
Moreover, brain injury itself stimulates systemic inflammation, leading to increased permeability of the blood-brain barrier, exacerbated by secondary brain injury and resulting in increased ICP. Indeed, systemic inflammatory response syndrome after TBI reflects the extent of tissue damage at onset and predicts further tissue disruption, producing a worsening clinical condition and ultimately a poor outcome. Elevation of blood catecholamine levels after severe brain damage has been reported to contribute to the regulation of the cytokine network, but this phenomenon is a systemic protective response against systemic insults. Catecholamines are directly involved in the regulation of cytokines, and elevated levels appear to influence the immune system during stress. Medical complications are the leading cause of late morbidity and mortality in many types of brain damage. Neurocritical care after severe TBI has therefore been refined to focus not only on secondary brain injury but also on systemic organ damage after excitation of sympathetic nerves following a stress reaction.", "title": "" }, { "docid": "d7e8a55c9d1ad24a82ea25a27ac08076", "text": "We present online learning techniques for statistical machine translation (SMT). The availability of large training data sets that grow constantly over time is becoming more and more frequent in the field of SMT—for example, in the context of translation agencies or the daily translation of government proceedings. When new knowledge is to be incorporated in the SMT models, the use of batch learning techniques requires very time-consuming estimation processes over the whole training set that may take days or weeks to be executed. By means of the application of online learning, new training samples can be processed individually in real time. For this purpose, we define a state-of-the-art SMT model composed of a set of submodels, as well as a set of incremental update rules for each of these submodels. To test our techniques, we have studied two well-known SMT applications that can be used in translation agencies: post-editing and interactive machine translation. In both scenarios, the SMT system collaborates with the user to generate high-quality translations. These user-validated translations can be used to extend the SMT models by means of online learning. Empirical results in the two scenarios under consideration show the great impact of frequent updates on the system performance. The time cost of such updates was also measured, comparing the efficiency of a batch learning SMT system with that of an online learning system, showing that online learning is able to work in real time whereas the time cost of batch retraining soon becomes infeasible. Empirical results also showed that the performance of online learning is comparable to that of batch learning. Moreover, the proposed techniques were able to learn from previously estimated models or from scratch. We also propose two new measures to predict the effectiveness of online learning in SMT tasks. The translation system with online learning capabilities presented here is implemented in the open-source Thot toolkit for SMT.", "title": "" }, { "docid": "f31f45176e89163d27b065a52b429973", "text": "Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining.
However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state-of-the-art neural networks never encounter any significant obstacles.", "title": "" }, { "docid": "32fd7a91091f74a5ea55226aa44403d3", "text": "Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia.", "title": "" }, { "docid": "c5ca0bce645a6d460ca3e01e4150cce5", "text": "The technological advancement and sophistication of cameras and gadgets prompt researchers to focus on image analysis and text understanding. Deep learning techniques have demonstrated strong potential for classifying text from natural scene images, as reported in recent years. There are a variety of deep learning approaches for effectively detecting and recognizing text in images. In this work, we present Arabic scene text recognition using Convolutional Neural Networks (ConvNets) as a deep learning classifier. Because scene text data is slanted and skewed, we employ five orientations with respect to a single occurrence of a character to deal with maximum variation. Training is formulated with filter sizes of 3 × 3 and 5 × 5 and stride values of 1 and 2. During the text classification phase, we trained the network with distinct learning rates. Our approach reported encouraging results on the recognition of Arabic characters from segmented Arabic scene images.", "title": "" }, { "docid": "1836f3cf9c6243b57fd23b8d84b859d1", "text": "While most Reinforcement Learning work utilizes temporal discounting to evaluate performance, the reasons for this are unclear. Is it out of desire or necessity?
We argue that it is not out of desire, and seek to dispel the notion that temporal discounting is necessary by proposing a framework for undiscounted optimization. We present a metric of undiscounted performance and an algorithm for finding action policies that maximize that measure. The technique, which we call R-learning, is modelled after the popular Q-learning algorithm [17]. Initial experimental results are presented which attest to a great improvement over Q-learning in some simple cases.", "title": "" }, { "docid": "479f00e59bdc5744c818e29cdf446df3", "text": "A new algorithm for Support Vector regression is described. For a priori chosen ν, it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction ν of the data points lie outside. Moreover, it is shown how to use parametric tube shapes with non-constant radius. The algorithm is analysed theoretically and experimentally.", "title": "" }, { "docid": "20d00f63848b70f3a5688b68181088f2", "text": "This paper presents a method for modeling player decision making through the use of agents as AI-driven personas. The paper argues that artificial agents, as generative player models, have properties that allow them to be used as psychometrically valid, abstract simulations of a human player's internal decision making processes. Such agents can then be used to interpret human decision making, as personas and playtesting tools in the game design process, as baselines for adapting agents to mimic classes of human players, or as believable, human-like opponents. This argument is explored in a crowdsourced decision making experiment, in which the decisions of human players are recorded in a small-scale dungeon-themed puzzle game. Human decisions are compared to the decisions of a number of a priori defined “archetypical” agent-personas, and the humans are characterized by their likeness to or divergence from these. Essentially, at each step the action of the human is compared to what actions a number of reinforcement-learned agents would have taken in the same situation, where each agent is trained using a different reward scheme. Finally, extensions are outlined for adapting the agents to represent sub-classes found in the human decision making traces.", "title": "" }, { "docid": "1447a32a7274ac972d79bbd02c25ecb2", "text": "Refactoring is a software engineering technique that, by applying a series of small behavior-preserving transformations, can improve a software system's design, readability and extensibility. Code smells are signs that indicate that source code might need refactoring. The goal of this thesis project was to develop a prototype of a code smell detection plug-in for the Eclipse IDE framework. In earlier research by Van Emden and Moonen, a tool was developed to detect code smells in Java source code and visualize them in graph views. CodeNose, the plug-in prototype created in this thesis project, presents code smells in the Tasks View in Eclipse, similar to the way compiler errors and warnings are presented. These code smell reports provide feedback about the quality of a software system. CodeNose uses the Eclipse JDT parser to build abstract syntax trees that represent the source code. A tree visitor detects primitive code smells and collects derived smell aspects, which are written to a fact database and passed to a relational algebra calculator, the Grok tool. The results of the calculations on these facts can be used to infer more complex code smells.
In a case study, the plug-in was tested by performing the code smell detection process on an existing software system. We present the case study results, focusing on the performance of the plug-in and the usefulness of the code smells that were detected.", "title": "" }, { "docid": "45cea05e301d47ade7eae2f442529435", "text": "As consumer depth sensors become widely available, estimating scene flow from RGBD sequences has received increasing attention. Although the depth information allows the recovery of 3D motion from a single view, it poses new challenges. In particular, depth boundaries are not well-aligned with RGB image edges and are therefore not reliable cues to localize 2D motion boundaries. In addition, methods that extend the 2D optical flow formulation to 3D still produce large errors in occlusion regions. To better use depth for occlusion reasoning, we propose a layered RGBD scene flow method that jointly solves for the scene segmentation and the motion. Our key observation is that the noisy depth is sufficient to decide the depth ordering of layers, thereby avoiding a computational bottleneck for RGB layered methods. Furthermore, the depth enables us to estimate a per-layer 3D rigid motion to constrain the motion of each layer. Experimental results on both the Middlebury and real-world sequences demonstrate the effectiveness of the layered approach for RGBD scene flow estimation.", "title": "" }, { "docid": "a9f11d3439f7e3f2d739ea16d3327d1e", "text": "Objective: Diabetes is a common, debilitating chronic illness with multiple impacts. The impact on treatment satisfaction, productivity impairment and the symptom experience may be among the most important for patient-reported outcomes. This study developed and validated disease-specific, patient-reported measures for these outcomes that address limitations in currently available measures. Methods: Data was collected from the literature, experts and patients and a conceptual model of the patient-reported impact of diabetes was created. Item pools, based on the conceptual model, were then generated. The items were administered to 991 diabetes patients via a web-based survey to perform item reduction, identify relevant factor structures and assess reliability and validity following an a priori analysis plan. Results: All validation criteria and hypotheses were met, resulting in three new, valid measures: a 21-item Satisfaction Measure (three sub-scales: burden, efficacy and symptoms), a 30-item Symptom Measure and a 14-item Productivity Measure assessing both life and work productivity impairments. Conclusion: This triad of measures captures important components of the multifaceted diabetes patient experience and can be considered as valid, viable options when choosing measures to assess patient-reported outcomes. Addressing these outcomes may assist researchers and clinicians in developing more patient-centered diabetes interventions and care.", "title": "" }, { "docid": "1b7a8725023d20e36ef929b427db51e5", "text": "Electronic Customer Relationship Management (eCRM) has become the latest paradigm in the world of Customer Relationship Management. Recent business surveys suggest that up to 50% of such implementations do not yield measurable returns on investment. A secondary analysis of 13 case studies suggests that many of these limited-success implementations can be attributed to usability and resistance factors.
The objective of this paper is to review the general usability and resistance principles in order to build an integrative framework for analyzing eCRM case studies. The conclusions suggest that if organizations want to get the most from their eCRM implementations, they need to revisit the general principles of usability and resistance and apply them.", "title": "" }, { "docid": "879282128be8b423114401f6ec8baf8a", "text": "Yelp is one of the largest online searching and reviewing systems for all kinds of businesses, including restaurants, shopping, and home services. Analyzing the real-world data from Yelp is valuable in acquiring the interests of users, which helps to improve the design of the next-generation system. This paper targets the evaluation of the Yelp dataset, which is provided in the Yelp data challenge. A number of interesting results are found. For instance, to reach anyone in the Yelp social network, one only needs 4.5 hops on average, which verifies the classical six degrees of separation theory; the elite user mechanism is especially effective in maintaining the health of the whole network; users who write fewer than 100 business reviews dominate. Those insights are expected to be considered by Yelp to make intelligent business decisions in the future.", "title": "" }, { "docid": "ed0444685c9a629c7d1fda7c4912fd55", "text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to the biological effects described for some citrus species, our objectives in this study were to analyze and compare the phytochemical composition and to evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemically induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. The anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.", "title": "" }, { "docid": "eb2459cbb99879b79b94653c3b9ea8ef", "text": "Extending the success of deep neural networks to natural language understanding and symbolic reasoning requires complex operations and external memory. Recent neural program induction approaches have attempted to address this problem, but are typically limited to differentiable memory, and consequently cannot scale beyond small synthetic tasks.
In this work, we propose the Manager-Programmer-Computer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface. Specifically, we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence neural \"programmer\", and a non-differentiable \"computer\" that is a Lisp interpreter with code assist. To successfully apply REINFORCE for training, we augment it with approximate gold programs found by an iterative maximum likelihood training process. NSM is able to learn a semantic parser from weak supervision over a large knowledge base. It achieves new state-of-the-art performance on WEBQUESTIONSSP, a challenging semantic parsing dataset, with weak supervision. Compared to previous approaches, NSM is end-to-end and therefore does not rely on feature engineering or domain-specific knowledge.", "title": "" }, { "docid": "ca1ebdf96eeeb6c55116a70ed6db5ea5", "text": "Acknowledgements: We would like to recognize our expert contributors who participated in the first eWorkshop on Agile Methods and thereby contributed to the section on State-of-the-Practice: We also would like to thank our colleagues who helped arrange the eWorkshop and co-authored that same section:", "title": "" }, { "docid": "370ec5c556b70ead92bc45d1f419acaf", "text": "Despite the identification of circulating tumor cells (CTCs) and cell-free DNA (cfDNA) as potential blood-based biomarkers capable of providing prognostic and predictive information in cancer, they have not been incorporated into routine clinical practice. This resistance is due in part to technological limitations hampering CTC and cfDNA analysis, as well as a limited understanding of precisely how to interpret emergent biomarkers across various disease stages and tumor types. In recognition of these challenges, a group of researchers and clinicians focused on blood-based biomarker development met at the Canadian Cancer Trials Group (CCTG) Spring Meeting in Toronto, Canada on 29 April 2016 for a workshop discussing novel CTC/cfDNA technologies, interpretation of data obtained from CTCs versus cfDNA, challenges regarding disease evolution and heterogeneity, and logistical considerations for incorporation of CTCs/cfDNA into clinical trials, and ultimately into routine clinical use. The objectives of this workshop included discussion of the current barriers to clinical implementation and recent progress made in the field, as well as fueling meaningful collaborations and partnerships between researchers and clinicians. We anticipate that the considerations highlighted at this workshop will lead to advances in both basic and translational research and will ultimately impact patient management strategies and patient outcomes.", "title": "" }, { "docid": "1938d1b72bbeec9cb9c2eed3f2c0a19a", "text": "Domain Name System (DNS) traffic has become a rich source of information from a security perspective. However, the volume of DNS traffic has been skyrocketing, such that security analyzers experience difficulties in collecting, retrieving, and analyzing the DNS traffic in response to modern Internet threats. More precisely, much of the research relating to DNS has been negatively affected by the dramatic increase in the number of queries and domains. This phenomenon has necessitated a scalable approach, which is not dependent on the volume of DNS traffic.
In this paper, we introduce a fast and scalable approach, called PsyBoG, for detecting malicious behavior within large volumes of DNS traffic. PsyBoG leverages a signal processing technique, power spectral density (PSD) analysis, to discover the major frequencies resulting from the periodic DNS queries of botnets. The PSD analysis allows us to detect sophisticated botnets regardless of their evasive techniques, sporadic behavior, and even normal users’ traffic. Furthermore, our method allows us to deal with large-scale DNS data by only utilizing the timing information of query generation regardless of the number of queries and domains. Finally, PsyBoG discovers groups of hosts which show similar patterns of malicious behavior. PsyBoG was evaluated by conducting experiments with two different data sets, namely DNS traces generated by real malware in controlled environments and a large number of real-world DNS traces collected from a recursive DNS server, an authoritative DNS server, and Top-Level Domain (TLD) servers. We utilized the malware traces as the ground truth, and, as a result, PsyBoG performed with a detection accuracy of 95%. By using a large number of DNS traces, we were able to demonstrate the scalability and effectiveness of PsyBoG in terms of practical usage. Finally, PsyBoG detected 23 unknown and 26 known botnet groups with 0.1% false positives.", "title": "" }
scidocsrr