query_id: stringlengths 32-32
query: stringlengths 5-5.38k
positive_passages: listlengths 1-23
negative_passages: listlengths 4-100
subset: stringclasses (7 values)
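Each record below follows this schema: a query paired with one or more positive passages, a larger pool of negative passages (each a {"docid", "text", "title"} object), and a subset tag. A minimal sketch of how such records could be iterated is shown here, assuming the rows have been exported to a JSON Lines file; the `scidocs_rr.jsonl` filename is an assumption for illustration, not part of the original dump.

```python
import json

# Minimal sketch: iterate exported rows matching the schema above.
# Assumes one JSON object per line with the fields query_id, query,
# positive_passages, negative_passages, and subset.
def iter_rows(path: str):
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    for row in iter_rows("scidocs_rr.jsonl"):  # hypothetical export filename
        positives = row["positive_passages"]   # list of {"docid", "text", "title"}
        negatives = row["negative_passages"]
        print(f'{row["query_id"]} [{row["subset"]}] '
              f'"{row["query"][:60]}" -> {len(positives)} pos / {len(negatives)} neg')
```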
71c703c22b7c3cf3afc108d5c9738b39
Achieving Cooperation Through Deep Multiagent Reinforcement Learning in Sequential Prisoner's Dilemmas
[ { "docid": "5177d973389af9b0a7b3fb2108abd01a", "text": "Opponent modeling is necessary in multi-agent settings where secondary agents with competing goals also adapt their strategies, yet it remains challenging because strategies interact with each other and change. Most previous work focuses on developing probabilistic models or parameterized strategies for specific applications. Inspired by the recent success of deep reinforcement learning, we present neural-based models that jointly learn a policy and the behavior of opponents. Instead of explicitly predicting the opponent’s action, we encode observation of the opponents into a deep Q-Network (DQN); however, we retain explicit modeling (if desired) using multitasking. By using a Mixture-of-Experts architecture, our model automatically discovers different strategy patterns of opponents without extra supervision. We evaluate our models on a simulated soccer game and a popular trivia game, showing superior performance over DQN and its variants.", "title": "" } ]
[ { "docid": "ab9bc035c6a1c852b96467a2069dcdb1", "text": "In this paper the advantages, constraints and drawbacks of the aircraft More Electric Engine (MEE) are presented through a comparison between the radial flux and the axial flux Permanent Magnet (PM) machines for four different positions inside the main gas turbine engine. The electromagnetic preliminary designs are performed on the basis of self-consisting dimensioning equations, starting from the constraints of the available volumes. The actual baseline output generated power is assumed to be 300kVA. Taking into account the technical and reliabilities goals imposed by the specific application, in the paper some technological choices characterizing the projects are discussed too. In order to help the designers in the evaluation of the most promising solutions for this application, the comparison between different proposed design solutions is performed on the basis of several specific design indexes.", "title": "" }, { "docid": "9fcdce293fec576f8d287b5692c6f45b", "text": "Enabling search directly over encrypted data is a desirable technique to allow users to effectively utilize encrypted data outsourced to a remote server like cloud service provider. So far, most existing solutions focus on an honest-but-curious server, while security designs against a malicious server have not drawn enough attention. It is not until recently that a few works address the issue of verifiable designs that enable the data owner to verify the integrity of search results. Unfortunately, these verification mechanisms are highly dependent on the specific encrypted search index structures, and fail to support complex queries. There is a lack of a general verification mechanism that can be applied to all search schemes. Moreover, no effective countermeasures (e.g., punishing the cheater) are available when an unfaithful server is detected. In this work, we explore the potential of smart contract in Ethereum, an emerging blockchain-based decentralized technology that provides a new paradigm for trusted and transparent computing. By replacing the central server with a carefully-designed smart contract, we construct a decentralized privacy-preserving search scheme where the data owner can receive correct search results with assurance and without worrying about potential wrongdoings of a malicious server. To better support practical applications, we introduce fairness to our scheme by designing a new smart contract for a financially-fair search construction, in which every participant (especially in the multiuser setting) is treated equally and incentivized to conform to correct computations. In this way, an honest party can always gain what he deserves while a malicious one gets nothing. Finally, we implement a prototype of our construction and deploy it to a locally simulated network and an official Ethereum test network, respectively. The extensive experiments and evaluations demonstrate the practicability of our decentralized search scheme over encrypted data.", "title": "" }, { "docid": "fecf95aa956e9dde6e7a0743d58673b9", "text": "Use of transactional multicore main-memory databases is growing due to dramatic increases in memory size and CPU cores available for a single machine. To leverage these resources, recent concurrency control protocols have been proposed for main-memory databases, but are largely optimized for specific workloads. 
Due to shifting and unknown access patterns, workloads may change and one specific algorithm cannot dynamically fit all varied workloads. Thus, it is desirable to choose the right concurrency control protocol for a given workload. To address this issue we present adaptive concurrency control (ACC), that dynamically clusters data and chooses the optimal concurrency control protocol for each cluster. ACC addresses three key challenges: i) how to cluster data to minimize cross-cluster access and maintain load-balancing, ii) how to model workloads and perform protocol selection accordingly, and iii) how to support mixed concurrency control protocols running simultaneously. In this paper, we outline these challenges and present preliminary results.", "title": "" }, { "docid": "56bcf5d20cc8d0f93d019f51b31ed354", "text": "OBJECTIVE\nThe goal of this study was to evaluate the efficacy and safety of gastroretentive gabapentin (G-GR) for the treatment of moderate-to-severe menopausal hot flashes.\n\n\nMETHODS\nThe primary endpoints of this randomized, placebo-controlled study of G-GR (600 mg am/1,200 mg pm) were the mean daily frequency and severity of hot flashes at weeks 4 and 12. Secondary endpoints included Patients' Global Impression of Change, Clinicians' Global Impression of Change, and daily sleep interference at week 24.\n\n\nRESULTS\nSix hundred women with 7 or more moderate-to-severe hot flashes/day enrolled; 66.2% completed 24 weeks of treatment. At weeks 4 and 12, G-GR-treated women experienced significantly greater reductions in mean hot flash frequency and severity than placebo-treated women (frequency: week 4, -1.7, P < 0.0001; week 12, -1.14, P = 0.0007; severity: week 4, -0.21, P < 0.0001; week 12, -0.19, P = 0.012). Similar reductions were maintained up to week 24. On the Patient Global Impression of Change, more women receiving G-GR than placebo were \"much\" or \"very much\" improved (week 12: 58% vs 44%, P = 0.0008; week 24: 76% vs 55%, P < 0.0001). G-GR significantly reduced sleep interference compared with placebo at week 12 (P = 0.0056) and week 24 (P = 0.0084). Approximately 5% more women taking G-GR withdrew because of adverse events (G-GR/placebo, 16.7%/11.5%). The most common adverse events were dizziness (12.7%/3.4%), headache (9.3%/8.1%), and somnolence (6.0%/2.7%); incidences dropped to sustained low levels after a few weeks.\n\n\nCONCLUSIONS\nG-GR is a modestly effective nonhormone therapy option for the treatment of moderate-to-severe hot flashes due to menopause and is well tolerated with titration.", "title": "" }, { "docid": "408e6637ed99299bb0067eae216a64fc", "text": "The aim of this article was to describe and analyze the doctor-patient relationship between fibromyalgia patients and rheumatologists in public and private health care contexts within the Mexican health care system. This medical anthropological study drew on hospital ethnography and patients' illness narratives, as well as the experiences of rheumatologists from both types of health care services. The findings show how each type of medical care subsystem shape different relationships between patients and doctors. Patient stigmatization, overt rejection, and denial of the disease's existence were identified. In this doctor-patient-with-fibromyalgia relationship, there are difficult encounters, rather than difficult patients. These encounters are more fluid in private consultations compared with public hospitals. The doctor-centered health care model is prevalent in public institutions. 
In the private sector, we find the characteristics of the patient-centered model coexisting with the traditional physician-centered approach.", "title": "" }, { "docid": "e49e65b40bf1cccdcbf223a109bec267", "text": "Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model’s prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on blackbox models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches.", "title": "" }, { "docid": "25d14017403c96eceeafcbda1cbdfd2c", "text": "We introduce a neural network model that marries together ideas from two prominent strands of research on domain adaptation through representation learning: structural correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural networks (NNs). Our model is a three-layer NN that learns to encode the non-pivot features of an input example into a lowdimensional representation, so that the existence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation. The low-dimensional representation is then employed in a learning algorithm for the task. Moreover, we show how to inject pre-trained word embeddings into our model in order to improve generalization across examples with similar pivot features. We experiment with the task of cross-domain sentiment classification on 16 domain pairs and show substantial improvements over strong baselines.1", "title": "" }, { "docid": "c784f538447637ad8c4d4e86f3f8643e", "text": "In this paper, we study the compressed sensing (CS) image recovery problem. The traditional method divides the image into blocks and treats each block as an independent sub-CS recovery task. This often results in losing global structure of an image. In order to improve the CS recovery result, we propose a nonlocal (NL) estimation step after the initial CS recovery for denoising purpose. The NL estimation is based on the well-known NL means filtering that takes an advantage of self-similarity in images. We formulate the NL estimation as the low-rank matrix approximation problem, where the low-rank matrix is formed by the NL similarity patches. An efficient algorithm, nonlocal Douglas-Rachford (NLDR), based on Douglas-Rachford splitting is developed to solve this low-rank optimization problem constrained by the CS measurements. 
Experimental results demonstrate that the proposed NLDR algorithm achieves significant performance improvements over the state-of-the-art in CS image recovery.", "title": "" }, { "docid": "ea4da468a0e7f84266340ba5566f4bdb", "text": "We present a novel realtime algorithm to compute the trajectory of each pedestrian in a crowded scene. Our formulation is based on an adaptive scheme that uses a combination of deterministic and probabilistic trackers to achieve high accuracy and efficiency simultaneously. Furthermore, we integrate it with a multi-agent motion model and local interaction scheme to accurately compute the trajectory of each pedestrian. We highlight the performance and benefits of our algorithm on well-known datasets with tens of pedestrians.", "title": "" }, { "docid": "41977707078c1ad63a1ca0aa59ed759d", "text": "An ultrawideband conformal capsule slot antenna, which has a simple configuration and a stable impedance matching characteristic, is described in this paper. In the past, wideband outer-wall antennas have been proposed for capsule-type applications. However, this paper shows that the outer wall is not a good choice for placing capsule antennas, since such a choice exhibits a high specific absorption ratio, low gain, and low efficiency. Instead, the antenna proposed in this paper is conformal to the inner wall of a capsule shell, and it provides a wide impedance bandwidth ranging from 1.64 to 5.95 GHz (113.6%). Furthermore, the impedance matching remains stable even with a change in the operating environment. Since in a typical application scenario a capsule will move through the digestive system, and thus experience varying environments, a wide bandwidth and stable performance are both very desirable attributes of the antenna, whose design is discussed in this paper. Moreover, the proposed antenna continues to maintain a stable impedance match even when a battery is added inside the capsule, or when there is a change in the battery size and/or its position. Given these advantages, we argue that slot antennas are well suited for incorporation into capsule antennas for the present application.", "title": "" }, { "docid": "5d721d52aa72607b2638c01381369a8d", "text": "In this work, we present, LieNet, a novel deep learning framework that simultaneously detects, segments multiple object instances, and estimates their 6D poses from a single RGB image without requiring additional post-processing. Our system is accurate and fast (∼10 fps), which is well suited for real-time applications. In particular, LieNet detects and segments object instances in the image analogous to modern instance segmentation networks such as Mask R-CNN, but contains a novel additional sub-network for 6D pose estimation. LieNet estimates the rotation matrix of an object by regressing a Lie algebra based rotation representation, and estimates the translation vector by predicting the distance of the object to the camera center. The experiments on two standard pose benchmarking datasets show that LieNet greatly outperforms other recent CNN based pose prediction methods when they are used with monocular images and without post-refinements.", "title": "" }, { "docid": "48e925aba276c0f32aca04bcd21123c1", "text": "The introduction of the processor instructions AES-NI and VPCLMULQDQ, that are designed for speeding up encryption, and their continual performance improvements through processor generations, has significantly reduced the costs of encryption overheads. 
More and more applications and platforms encrypt all of their data and traffic. As an example, we note the world wide proliferation of the use of AES-GCM, with performance dropping down to 0.64 cycles per byte (from ∼ 23 before the instructions), on the latest Intel processors. This is close to the theoretically achievable performance with the existing hardware support. Anticipating future applications and increasing demand for high performance encryption, Intel has recently announced [1] that its future architecture (codename ”Ice Lake”) will introduce new encryption instructions. These will be able to vectorize the AES-NI and VPCLMULQDQ instructions, on wide registers that are available on the AVX512 architectures. In this paper, we explain how these new instructions can be used effectively, and how properly using them can lead to the anticipated theoretical encryption throughput of around 0.16 cycles per byte. The included examples demonstrate AES encryption in various modes of operation, AEAD such as AES-GCM, and the emerging nonce misuse resistant variant AES-GCM-SIV.", "title": "" }, { "docid": "5cdc962d9ce66938ad15829f8d0331ed", "text": "This study aims to provide a picture of how relationship quality can influence customer loyalty or loyalty in the business-to-business context. Building on prior research, we propose relationship quality as a higher construct comprising trust, commitment, satisfaction and service quality. These dimensions of relationship quality can reasonably explain the influence of relationship quality on customer loyalty. This study follows the composite loyalty approach providing both behavioural aspects (purchase intentions) and attitudinal loyalty in order to fully explain the concept of customer loyalty. A literature search is undertaken in the areas of customer loyalty, relationship quality, perceived service quality, trust, commitment and satisfaction. This study then seeks to address the following research issues: Does relationship quality influence both aspects of customer loyalty? Which relationship quality dimensions influence each of the components of customer loyalty? This study was conducted in a business-to-business setting of the courier and freight delivery service industry in Australia. The survey was targeted to Australian Small to Medium Enterprises (SMEs). Two methods were chosen for data collection: mail survey and online survey. The total number of usable respondents who completed both survey was 306. In this study, a two step approach (Anderson and Gerbing 1988) was selected for measurement model and structural model. The results also show that all measurement models of relationship dimensions achieved a satisfactory level of fit to the data. The hypothesized relationships were estimated using structural equation modeling. The overall goodness of fit statistics shows that the structural model fits the data well. As the results show, to maintain customer loyalty to the supplier, a supplier may enhance all four aspects of relationship quality which are trust, commitment, satisfaction and service quality. Specifically, in order to enhance customer’s trust, a supplier should promote the customer’s trust in the supplier. In efforts to emphasize commitment, a supplier should focus on building affective aspects of commitment rather than calculative aspects. 
Satisfaction appears to be a crucial factor in maintaining purchase intentions whereas service quality will strongly enhance both purchase intentions and attitudinal loyalty.", "title": "" }, { "docid": "2ed36e909f52e139b5fd907436e80443", "text": "It is difficult to draw sweeping general conclusions about the blastogenesis of CT, principally because so few thoroughly studied cases are reported. It is to be hoped that methods such as painstaking gross or electronic dissection will increase the number of well-documented cases. Nevertheless, the following conclusions can be proposed: 1. Most CT can be classified into a few main anatomic types (or paradigms), and there are also rare transitional types that show gradation between the main types. 2. Most CT have two full notochordal axes (Fig. 5); the ventral organs induced along these axes may be severely disorientated, malformed, or aplastic in the process of being arranged within one body. Reported anatomic types of CT represent those notochordal arrangements that are compatible with reasonably complete embryogenesis. New ventro-lateral axes are formed in many types of CT because of space constriction in the ventral zones. The new structures represent areas of \"mutual recognition and organization\" rather than \"fusion\" (Fig. 17). 3. Orientations of the pairs of axes in the embryonic disc can be deduced from the resulting anatomy. Except for dicephalus, the axes are not side by side. Notochords are usually \"end-on\" or ventro-ventral in orientation (Fig. 5). 4. A single gastrulation event or only partial duplicated gastrulation event seems to occur in dicephalics, despite a full double notochord. 5. The anatomy of diprosopus requires further clarification, particularly in cases with complete crania rather than anencephaly-equivalent. Diprosopus CT offer the best opportunity to study the effects of true forking of the notochord, if this actually occurs. 6. In cephalothoracopagus, thoracopagus, and ischiopagus, remarkably complete new body forms are constructed at right angles to the notochordal axes. The extent of expression of viscera in these types depends on the degree of noncongruity of their ventro-ventral axes (Figs. 4, 11, 15b). 7. Some organs and tissues fail to develop (interaction aplasia) because of conflicting migrational pathways or abnormal concentrations of morphogens in and around the neoaxes. 8. Where the cardiovascular system is discordantly expressed in dicephalus and thoracopagus twins, the right heart is more severely malformed, depending on the degree of interaction of the two embryonic septa transversa. 9. The septum transversum provides mesenchymal components to the heawrt and liver; the epithelial components (derived fro the foregut[s]) may vary in number from the number of mesenchymal septa transversa contributing to the liver of the CT embryo.(ABSTRACT TRUNCATED AT 400 WORDS)", "title": "" }, { "docid": "083f03665d2b802737a54f2cd811e27c", "text": "This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. This method provides estimates of future demands by calculating probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. 
These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, which is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.", "title": "" }, { "docid": "caa10e745374970796bdd0039416a29d", "text": "s: Feature selection methods try to find a subset of the available features to improve the application of a learning algorithm. Many methods are based on searching a feature set that optimizes some evaluation function. On the other side, feature set estimators evaluate features individually. Relief is a well known and good feature set estimator. While being usually faster feature estimators have some disadvantages. Based on Relief ideas, we propose a feature set measure that can be used to evaluate the feature sets in a search process. We show how the proposed measure can help guiding the search process, as well as selecting the most appropriate feature set. The new measure is compared with a consistency measure, and the highly reputed wrapper approach.", "title": "" }, { "docid": "d06c91afbfd79e40d0d6fe326e3be957", "text": "This meta-analysis included 66 studies (N = 4,176) on parental antecedents of attachment security. The question addressed was whether maternal sensitivity is associated with infant attachment security, and what the strength of this relation is. It was hypothesized that studies more similar to Ainsworth's Baltimore study (Ainsworth, Blehar, Waters, & Wall, 1978) would show stronger associations than studies diverging from this pioneering study. To create conceptually homogeneous sets of studies, experts divided the studies into 9 groups with similar constructs and measures of parenting. For each domain, a meta-analysis was performed to describe the central tendency, variability, and relevant moderators. After correction for attenuation, the 21 studies (N = 1,099) in which the Strange Situation procedure in nonclinical samples was used, as well as preceding or concurrent observational sensitivity measures, showed a combined effect size of r(1,097) = .24. According to Cohen's (1988) conventional criteria, the association is moderately strong. It is concluded that in normal settings sensitivity is an important but not exclusive condition of attachment security. Several other dimensions of parenting are identified as playing an equally important role. In attachment theory, a move to the contextual level is required to interpret the complex transactions between context and sensitivity in less stable and more stressful settings, and to pay more attention to nonshared environmental influences.", "title": "" }, { "docid": "7b104b14b4219ecc2d1d141fbf0e707b", "text": "As hospitals throughout Europe are striving exploit advantages of IT and network technologies, electronic medical records systems are starting to replace paper based archives. This paper suggests and describes an add-on service to electronic medical record systems that will help regular patients in getting insight to their diagnoses and medical record. The add-on service is based annotating polysemous and foreign terms with WordNet synsets. 
By exploiting the way that relationships between synsets are structured and described in WordNet, it is shown how patients can get interactive opportunities to generalize and understand their personal records.", "title": "" }, { "docid": "eeb1fb4b6fe17f3021afd92be86a48f2", "text": "Despite immense technological advances, learners still prefer studying text from printed hardcopy rather than from computer screens. Subjective and objective differences between on-screen and on-paper learning were examined in terms of a set of cognitive and metacognitive components, comprising a Metacognitive Learning Regulation Profile (MLRP) for each study media. Participants studied expository texts of 1000-1200 words in one of the two media and for each text they provided metacognitive prediction-of-performance judgments with respect to a subsequent multiple-choice test. Under fixed study time (Experiment 1), test performance did not differ between the two media, but when study time was self-regulated (Experiment 2) worse performance was observed on screen than on paper. The results suggest that the primary differences between the two study media are not cognitive but rather metacognitive--less accurate prediction of performance and more erratic study-time regulation on screen than on paper. More generally, this study highlights the contribution of metacognitive regulatory processes to learning and demonstrates the potential of the MLRP methodology for revealing the source of subjective and objective differences in study performance among study conditions.", "title": "" }, { "docid": "611b755f959d542603057683706a1cd2", "text": "The Net Promoter Score (NPS) is still a popular customer loyalty measurement despite recent studies arguing that customer loyalty is multidimensional. Therefore, firms require new data-driven methods that combine behavioral and attitudinal data sources. This paper provides a framework that holistically assesses and predicts customer loyalty using attitudinal and behavioral data sources. We built a novel customer loyalty predictive model that employs a big data approach to assessing and predicting customer loyalty in a B2B context. We demonstrate the use of varying big data sources, confirming that NPS measurement does not necessarily correspond to actual behavior. Our model utilises customers’ verbatim comments to understand why customers are churning.", "title": "" } ]
scidocsrr
2242785dd499b9e4c09433dd81679d64
Evaluation of background subtraction techniques for video surveillance
[ { "docid": "89d91df8511c0b0f424dd5fa20fcd212", "text": "We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.", "title": "" }, { "docid": "9ffaf53e8745d1f7f5b7ff58c77602c6", "text": "Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches.", "title": "" } ]
[ { "docid": "4ab90fc06ac28bbe24a930ac986be828", "text": "BACKGROUND\nPrevious studies have revealed a variation in the origin and distribution patterns of the facial artery. However, the relationship between the facial artery and the facial muscles has not been well described. The purpose of this study was to determine the facial artery depth and relationship with the facial musculature layer, which represents critical information for dermal filler injection and oral and maxillofacial surgery.\n\n\nMETHODS\nFifty-four embalmed adult faces from Korean cadavers (36 male and 18 female cadavers; mean age, 73.3 years) were used in this study. A detailed dissection was performed, with great care being taken to avoid damaging the facial artery underlying the facial skin and muscle.\n\n\nRESULTS\nThe facial artery was first categorized according to the patterns of its final arterial branches. The branching pattern was classified simply into three types: type I, nasolabial pattern (51.8 percent); type II, nasolabial pattern with an infraorbital trunk (29.6 percent); and type III, forehead pattern (18.6 percent). Each type was further subdivided according to the facial artery depth and relationship with the facial musculature layer as types Ia (37.0 percent), Ib (14.8 percent), IIa (16.7 percent), IIb (12.9 percent), IIIa (16.7 percent), and IIIb (1.9 percent).\n\n\nCONCLUSION\nThis study provides new anatomical insight into the relationships between the facial artery branches and the facial muscles, including providing useful information for clinical applications in the fields of oral and maxillofacial surgery.", "title": "" }, { "docid": "fab439f694dad00c66cab42526fcaa70", "text": "The nature of consciousness, the mechanism by which it occurs in the brain, and its ultimate place in the universe are unknown. We proposed in the mid 1990's that consciousness depends on biologically 'orchestrated' coherent quantum processes in collections of microtubules within brain neurons, that these quantum processes correlate with, and regulate, neuronal synaptic and membrane activity, and that the continuous Schrödinger evolution of each such process terminates in accordance with the specific Diósi-Penrose (DP) scheme of 'objective reduction' ('OR') of the quantum state. This orchestrated OR activity ('Orch OR') is taken to result in moments of conscious awareness and/or choice. The DP form of OR is related to the fundamentals of quantum mechanics and space-time geometry, so Orch OR suggests that there is a connection between the brain's biomolecular processes and the basic structure of the universe. Here we review Orch OR in light of criticisms and developments in quantum biology, neuroscience, physics and cosmology. We also introduce a novel suggestion of 'beat frequencies' of faster microtubule vibrations as a possible source of the observed electro-encephalographic ('EEG') correlates of consciousness. We conclude that consciousness plays an intrinsic role in the universe.", "title": "" }, { "docid": "1b33db1ede4652a80f469e7ac4720233", "text": "The use of renewable energy sources is becoming increasingly necessary, if we are to achieve the changes required to address the impacts of global warming. Biomass is the most common form of renewable energy, widely used in the third world but until recently, less so in the Western world. Latterly much attention has been focused on identifying suitable biomass species, which can provide high-energy outputs, to replace conventional fossil fuel energy sources. 
The type of biomass required is largely determined by the energy conversion process and the form in which the energy is required. In the first of three papers, the background to biomass production (in a European climate) and plant properties is examined. In the second paper, energy conversion technologies are reviewed, with emphasis on the production of a gaseous fuel to supplement the gas derived from the landfilling of organic wastes (landfill gas) and used in gas engines to generate electricity. The potential of a restored landfill site to act as a biomass source, providing fuel to supplement landfill gas-fuelled power stations, is examined, together with a comparison of the economics of power production from purpose-grown biomass versus waste-biomass. The third paper considers particular gasification technologies and their potential for biomass gasification.", "title": "" }, { "docid": "dedc509f31c9b7e6c4409d655a158721", "text": "Envelope tracking (ET) is by now a well-established technique that improves the efficiency of microwave power amplifiers (PAs) compared to what can be obtained with conventional class-AB or class-B operation for amplifying signals with a time-varying envelope, such as most of those used in present wireless communication systems. ET is poised to be deployed extensively in coming generations of amplifiers for cellular handsets because it can reduce power dissipation for signals using the long-term evolution (LTE) standard required for fourthgeneration (4G) wireless systems, which feature high peak-to-average power ratios (PAPRs). The ET technique continues to be actively developed for higher carrier frequencies and broader bandwidths. This article reviews the concepts and history of ET, discusses several applications currently on the drawing board, presents challenges for future development, and highlights some directions for improving the technique.", "title": "" }, { "docid": "523983cad60a81e0e6694c8d90ab9c3d", "text": "Cognition and comportment are subserved by interconnected neural networks that allow high-level computational architectures including parallel distributed processing. Cognitive problems are not resolved by a sequential and hierarchical progression toward predetermined goals but instead by a simultaneous and interactive consideration of multiple possibilities and constraints until a satisfactory fit is achieved. The resultant texture of mental activity is characterized by almost infinite richness and flexibility. According to this model, complex behavior is mapped at the level of multifocal neural systems rather than specific anatomical sites, giving rise to brain-behavior relationships that are both localized and distributed. Each network contains anatomically addressed channels for transferring information content and chemically addressed pathways for modulating behavioral tone. This approach provides a blueprint for reexploring the neurological foundations of attention, language, memory, and frontal lobe function.", "title": "" }, { "docid": "d725c63647485fd77412f16e1f6485f2", "text": "The ongoing discussions about a „digital revolution― and ―disruptive competitive advantages‖ have led to the creation of such a business vision as ―Industry 4.0‖. Yet, the term and even more its actual impact on businesses is still unclear.This paper addresses this gap and explores more specifically, the consequences and potentials of Industry 4.0 for the procurement, supply and distribution management functions. 
A blend of literature-based deductions and results from a qualitative study are used to explore the phenomenon.The findings indicate that technologies of Industry 4.0 legitimate the next level of maturity in procurement (Procurement &Supply Management 4.0). Empirical findings support these conceptual considerations, revealing the ambitious expectations.The sample comprises seven industries and the employed method is qualitative (telephone and face-to-face interviews). The empirical findings are only a basis for further quantitative investigation , however, they support the necessity and existence of the maturity level. The findings also reveal skepticism due to high investment costs but also very high expectations. As recent studies about digitalization are rather rare in the context of single company functions, this research work contributes to the understanding of digitalization and supply management.", "title": "" }, { "docid": "1b2515c8d20593d7b4446d695e28389f", "text": "Based on microwave C-sections, rat-race coupler is designed to have a dual-band characteristic and a miniaturized area. The C-section together with two transmission line sections attached to both of its ends is synthesized to realize a phase change of 90° at the first frequency, and 270° at the second passband. The equivalence is established by the transmission line theory, and transcendental equations are derived to determine its structure parameters. Two circuits are realized in this presentation; one is designed at 2.45/5.2 GHz and the other at 2.45/5.8 GHz. The latter circuit occupies only 31% of the area of a conventional hybrid ring at the first band. It is believed that this circuit has the best size reduction for microstrip dual-band rat-race couplers in open literature. The measured results show good agreement with simulation responses.", "title": "" }, { "docid": "683edd67fe4b1919228253fe5dd461cb", "text": "In oncology, the term 'hyperthermia' refers to the treatment of malignant diseases by administering heat in various ways. Hyperthermia is usually applied as an adjunct to an already established treatment modality (especially radiotherapy and chemotherapy), where tumor temperatures in the range of 40-43 degrees C are aspired. In several clinical phase-III trials, an improvement of both local control and survival rates have been demonstrated by adding local/regional hyperthermia to radiotherapy in patients with locally advanced or recurrent superficial and pelvic tumors. In addition, interstitial hyperthermia, hyperthermic chemoperfusion, and whole-body hyperthermia (WBH) are under clinical investigation, and some positive comparative trials have already been completed. In parallel to clinical research, several aspects of heat action have been examined in numerous pre-clinical studies since the 1970s. However, an unequivocal identification of the mechanisms leading to favorable clinical results of hyperthermia have not yet been identified for various reasons. 
This manuscript deals with discussions concerning the direct cytotoxic effect of heat, heat-induced alterations of the tumor microenvironment, synergism of heat in conjunction with radiation and drugs, as well as, the presumed cellular effects of hyperthermia including the expression of heat-shock proteins (HSP), induction and regulation of apoptosis, signal transduction, and modulation of drug resistance by hyperthermia.", "title": "" }, { "docid": "b6f4a2122f8fe1bc7cb4e59ad7cf8017", "text": "The use of biomass to provide energy has been fundamental to the development of civilisation. In recent times pressures on the global environment have led to calls for an increased use of renewable energy sources, in lieu of fossil fuels. Biomass is one potential source of renewable energy and the conversion of plant material into a suitable form of energy, usually electricity or as a fuel for an internal combustion engine, can be achieved using a number of different routes, each with specific pros and cons. A brief review of the main conversion processes is presented, with specific regard to the production of a fuel suitable for spark ignition gas engines.", "title": "" }, { "docid": "68b1e52ae7298648563941bf64c683e3", "text": "The recent concept of ‘‘Health Insurance Marketplace’’ introduced to facilitate the purchase of health insurance by comparing different insurance plans in terms of price, coverage benefits, and quality designates a key role to the health insurance providers. Currently, the web based tools available to search for health insurance plans are deficient in offering personalized recommendations based on the coverage benefits and cost. Therefore, anticipating the users’ needs we propose a cloud based framework that offers personalized recommendations about the health insurance plans.We use theMulti-attribute Utility Theory (MAUT) to help users compare different health insurance plans based on coverage and cost criteria, such as: (a) premium, (b) co-pay, (c) deductibles, (d) co-insurance, and (e) maximum benefit offered by a plan. To overcome the issues arising possibly due to the heterogeneous data formats and different plan representations across the providers, we present a standardized representation for the health insurance plans. The plan information of each of the providers is retrieved using the Data as a Service (DaaS). The framework is implemented as Software as a Service (SaaS) to offer customized recommendations by applying a ranking technique for the identified plans according to the user specified criteria. © 2014 Published by Elsevier B.V.", "title": "" }, { "docid": "4ca7e1893c0ab71d46af4954f7daf58e", "text": "Identifying coordinate transformations that make strongly nonlinear dynamics approximately linear has the potential to enable nonlinear prediction, estimation, and control using linear theory. The Koopman operator is a leading data-driven embedding, and its eigenfunctions provide intrinsic coordinates that globally linearize the dynamics. However, identifying and representing these eigenfunctions has proven challenging. This work leverages deep learning to discover representations of Koopman eigenfunctions from data. Our network is parsimonious and interpretable by construction, embedding the dynamics on a low-dimensional manifold. We identify nonlinear coordinates on which the dynamics are globally linear using a modified auto-encoder. We also generalize Koopman representations to include a ubiquitous class of systems with continuous spectra. 
Our framework parametrizes the continuous frequency using an auxiliary network, enabling a compact and efficient embedding, while connecting our models to decades of asymptotics. Thus, we benefit from the power of deep learning, while retaining the physical interpretability of Koopman embeddings. It is often advantageous to transform a strongly nonlinear system into a linear one in order to simplify its analysis for prediction and control. Here the authors combine dynamical systems with deep learning to identify these hard-to-find transformations.", "title": "" }, { "docid": "a98c32ca34b5096a38d29a54ece2ba0b", "text": "Those who feel better able to express their “true selves” in Internet rather than face-to-face interaction settings are more likely to form close relationships with people met on the Internet (McKenna, Green, & Gleason, this issue). Building on these correlational findings from survey data, we conducted three laboratory experiments to directly test the hypothesized causal role of differential self-expression in Internet relationship formation. Experiments 1 and 2, using a reaction time task, found that for university undergraduates, the true-self concept is more accessible in memory during Internet interactions, and the actual self more accessible during face-to-face interactions. Experiment 3 confirmed that people randomly assigned to interact over the Internet (vs. face to face) were better able to express their true-self qualities to their partners.", "title": "" }, { "docid": "d6c54837dbb1c07a0b9e2ed7b2945021", "text": "Chatbots are software used in entertainment industry, businesses and user support. Chatbots are modeled on various techniques such as knowledge base, machine learning based. Machine learning based chatbots yields more practical results. Chatbot which gives responses based on the context of conversation tends to be more user friendly. The chatbot we are proposing demonstrates a method of developing chatbot which can follow the context of the conversation. This method uses TensorFlow for developing the neural network model of the chatbot and uses the nlp techniques to maintain the context of the conversation. This chatbots can be used in small industries or business for automating customer care as user queries will be handled by chatbots thus reducing need of human labour and expenditure.", "title": "" }, { "docid": "9beeee852ce0d077720c212cf17be036", "text": "Spoofing speech detection aims to differentiate spoofing speech from natural speech. Frame-based features are usually used in most of previous works. Although multiple frames or dynamic features are used to form a super-vector to represent the temporal information, the time span covered by these features are not sufficient. Most of the systems failed to detect the non-vocoder or unit selection based spoofing attacks. In this work, we propose to use a temporal convolutional neural network (CNN) based classifier for spoofing speech detection. The temporal CNN first convolves the feature trajectories with a set of filters, then extract the maximum responses of these filters within a time window using a max-pooling layer. Due to the use of max-pooling, we can extract useful information from a long temporal span without concatenating a large number of neighbouring frames, as in feedforward deep neural network (DNN). Five types of feature are employed to access the performance of proposed classifier. 
Experimental results on ASVspoof 2015 corpus show that the temporal CNN based classifier is effective for synthetic speech detection. Specifically, the proposed method brings a significant performance boost for the unit selection based spoofing speech detection.", "title": "" }, { "docid": "7c0ef25b2a4d777456facdfc526cf206", "text": "The paper presents a novel approach to unsupervised text summarization. The novelty lies in exploiting the diversity of concepts in text for summarization, which has not received much attention in the summarization literature. A diversity-based approach here is a principled generalization of Maximal Marginal Relevance criterion by Carbonell and Goldstein \\cite{carbonell-goldstein98}.\nWe propose, in addition, aninformation-centricapproach to evaluation, where the quality of summaries is judged not in terms of how well they match human-created summaries but in terms of how well they represent their source documents in IR tasks such document retrieval and text categorization.\nTo find the effectiveness of our approach under the proposed evaluation scheme, we set out to examine how a system with the diversity functionality performs against one without, using the BMIR-J2 corpus, a test data developed by a Japanese research consortium. The results demonstrate a clear superiority of a diversity based approach to a non-diversity based approach.", "title": "" }, { "docid": "c2fc709aeb4c48a3bd2071b4693d4296", "text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.", "title": "" }, { "docid": "037042318b99bf9c32831a6b25dcd50e", "text": "Autoencoders are popular among neural-network-based matrix completion models due to their ability to retrieve potential latent factors from the partially observed matrices. Nevertheless, when training data is scarce their performance is significantly degraded due to overfitting. In this paper, we mitigate overfitting with a data-dependent regularization technique that relies on the principles of multi-task learning. Specifically, we propose an autoencoder-based matrix completion model that performs prediction of the unknown matrix values as a main task, and manifold learning as an auxiliary task. 
The latter acts as an inductive bias, leading to solutions that generalize better. The proposed model outperforms the existing autoencoder-based models designed for matrix completion, achieving high reconstruction accuracy in well-known datasets.", "title": "" }, { "docid": "5b9ca6d2cec03c771e89fe8e5dd23012", "text": "Posttraumatic agitation is a challenging problem for acute and rehabilitation staff, persons with traumatic brain injury, and their families. Specific variables for evaluation and care remain elusive. Clinical trials have not yielded a strong foundation for evidence-based practice in this arena. This review seeks to evaluate the present literature (with a focus on the decade 1995-2005) and employ previous clinical experience to deliver a review of the topic. We will discuss definitions, pathophysiology, evaluation techniques, and treatment regimens. A recommended approach to the evaluation and treatment of the person with posttraumatic agitation will be presented. The authors hope that this review will spur discussion and assist in facilitating clinical care paradigms and research programs.", "title": "" }, { "docid": "c46dd659aa1dfeac9c58197ff8575278", "text": "Previous studies indicate that childhood sexual abuse can have extensive and serious consequences. The aim of this research was to do a qualitative study of the consequences of childhood sexual abuse for Icelandic men's health and well-being. Phenomenology was the methodological approach of the study. Totally 14 interviews were conducted, two per individual, and analysed based on the Vancouver School of Phenomenology. The main results of the study showed that the men describe deep and almost unbearable suffering, affecting their entire life, of which there is no alleviation in sight. The men have lived in repressed silence most of their lives and have come close to taking their own lives. What stopped them from committing suicide was revealing to others what happened to them which set them free in a way. The men experienced fear- or rage-based shock at the time of the trauma and most of them endured the attack by dissociation, disconnecting psyche and body and have difficulties reconnecting. They had extremely difficult childhoods, living with indisposition, bullying, learning difficulties and behavioural problems. Some have, from a young age, numbed themselves with alcohol and elicit drugs. They have suffered psychologically and physically and have had relational and sexual intimacy problems. The consequences of the abuse surfaced either immediately after the shock or many years later and developed into complex post-traumatic stress disorder. Because of perceived societal prejudice, it was hard for the men to seek help. This shows the great need for professionals to be alert to the possible consequences of childhood sexual abuse in their practice to reverse the damaging consequences on their health and well-being. We conclude that living in repressed silence after a trauma, like childhood sexual abuse, can be dangerous for the health, well-being and indeed the very life of the survivor.", "title": "" }, { "docid": "1272d55054d50934ff633950ddc24420", "text": "A central push in operations models over the last decade has been the incorporation of models of customer choice. Real world implementations of many of these models face the formidable stumbling block of simply identifying the ‘right’ model of choice to use. 
Thus motivated, we visit the following problem: For a ‘generic’ model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal information about these distributions), how may one predict revenues from offering a particular assortment of choices? We present a framework to answer such questions and design a number of tractable algorithms from a data and computational standpoint for the same. This paper thus takes a significant step towards ‘automating’ the crucial task of choice model selection in the context of operational decision problems.", "title": "" } ]
scidocsrr
7d172a005281fe1d5d5a4bb0a78a5d72
SoftLearn: A Process Mining Platform for the Discovery of Learning Paths
[ { "docid": "86b12f890edf6c6561536a947f338feb", "text": "Looking for qualified reading resources? We have process mining discovery conformance and enhancement of business processes to check out, not only review, yet also download them or even read online. Discover this great publication writtern by now, simply right here, yeah just right here. Obtain the data in the sorts of txt, zip, kindle, word, ppt, pdf, as well as rar. Once again, never ever miss out on to read online as well as download this publication in our site here. Click the link. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files.", "title": "" } ]
[ { "docid": "58e6579f09cf92366bd7a34e83d1331a", "text": "Finger vein recognition is a newly developed and promising biometrics technology. To facilitate evaluation in this area and study state-of-the-art performance of the finger vein recognition algorithms, we organized The ICB-2015 Competition on Finger Vein Recognition (ICFVR2015). This competition is held on a general recognition algorithm evaluation platform called RATE, with 3 data sets collected from volunteers and actual usage. 7 algorithms were finally submitted, with the best EER achieving 0.375%. This paper will first introduce the organization of the competition and RATE, then describe data sets and test protocols, and finally present results of the competition.", "title": "" }, { "docid": "f2ba6cfcee7b192ce14ea4cfb268bac9", "text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org. The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. Copyright © 2017, INFORMS", "title": "" }, { "docid": "4f8fea97733000d58f2ff229c85aeaa0", "text": "Online dating sites have become popular platforms for people to look for potential romantic partners. Many online dating sites provide recommendations on compatible partners based on their proprietary matching algorithms. It is important that not only the recommended dates match the user’s preference or criteria, but also the recommended users are interested in the user and likely to reciprocate when contacted. The goal of this paper is to predict whether an initial contact message from a user will be replied to by the receiver. The study is based on a large scale real-world dataset obtained from a major dating site in China with more than sixty million registered users. We formulate our reply prediction as a link prediction problem of social networks and approach it using a machine learning framework. The availability of a large amount of user profile information and the bipartite nature of the dating network present unique opportunities and challenges to the reply prediction problem. We extract user-based features from user profiles and graph-based features from the bipartite dating network, apply them in a variety of classification algorithms, and compare the utility of the features and performance of the classifiers. Our results show that the user-based and graph-based features result in similar performance, and can be used to effectively predict the reciprocal links. Only a small performance gain is achieved when both feature sets are used. Among the five classifiers we considered, random forests method outperforms the other four algorithms (naive Bayes, logistic regression, KNN, and SVM). Our methods and results can provide valuable guidelines to the design and performance of recommendation engine for online dating sites.", "title": "" }, { "docid": "fae925bdd47b835035d4f8f0b5b3139d", "text": "By Ravindra K. Ahuja, Thomas L. 
Magnanti, and James B. Orlin: Network Flows: Theory, Algorithms, and Applications. Bringing together the classic and the contemporary aspects of the field, this comprehensive introduction to network flows provides an integrative view of theory, algorithms, and applications.", "title": "" }, { "docid": "0d23946f8a94db5943deee81deb3f322", "text": "The Spatial Semantic Hierarchy is a model of knowledge of large-scale space consisting of multiple interacting representations, both qualitative and quantitative. The SSH is inspired by the properties of the human cognitive map, and is intended to serve both as a model of the human cognitive map and as a method for robot exploration and map-building. The multiple levels of the SSH express states of partial knowledge, and thus enable the human or robotic agent to deal robustly with uncertainty during both learning and problem-solving. The control level represents useful patterns of sensorimotor interaction with the world in the form of trajectory-following and hill-climbing control laws leading to locally distinctive states. Local geometric maps in local frames of reference can be constructed at the control level to serve as observers for control laws in particular neighborhoods. The causal level abstracts continuous behavior among distinctive states into a discrete model consisting of states linked by actions. The topological level introduces the external ontology of places, paths and regions by abduction to explain the observed pattern of states and actions at the causal level. Quantitative knowledge at the control, causal and topological levels supports a “patchwork map” of local geometric frames of reference linked by causal and topological connections. The patchwork map can be merged into a single global frame of reference at the metrical level when sufficient information and computational resources are available. We describe the assumptions and guarantees behind the generality of the SSH across environments and sensorimotor systems. Evidence is presented from several partial implementations of the SSH on simulated and physical robots. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "10a33d5a75419519ce1177f6711b749c", "text": "Perianal fistulizing Crohn's disease has a major negative effect on patient quality of life and is a predictor of poor long-term outcomes. Factors involved in the pathogenesis of perianal fistulizing Crohn's disease include an increased production of transforming growth factor β, TNF and IL-13 in the inflammatory infiltrate that induce epithelial-to-mesenchymal transition and upregulation of matrix metalloproteinases, leading to tissue remodelling and fistula formation. Care of patients with perianal Crohn's disease requires a multidisciplinary approach. A complete assessment of fistula characteristics is the basis for optimal management and must include the clinical evaluation of fistula openings, endoscopic assessment of the presence of proctitis, and MRI to determine the anatomy of fistula tracts and presence of abscesses. Local injection of mesenchymal stem cells can induce remission in patients not responding to medical therapies, or to avoid the exposure to systemic immunosuppression in patients naive to biologics in the absence of active luminal disease.
Surgery is still required in a high proportion of patients and should not be delayed when criteria for drug failure is met. In this Review, we provide an up-to-date overview on the pathogenesis and diagnosis of fistulizing Crohn's disease, as well as therapeutic strategies.", "title": "" }, { "docid": "5785108e48e62ce2758a7b18559a697e", "text": "The objective of this article is to create a better understanding of the intersection of the academic fields of entrepreneurship and strategic management, based on an aggregation of the extant literature in these two fields. The article structures and synthesizes the existing scholarly works in the two fields, thereby generating new knowledge. The results can be used to further enhance fruitful integration of these two overlapping but separate academic fields. The article attempts to integrate the two fields by first identifying apparent interrelations, and then by concentrating in more detail on some important intersections, including strategic management in small and medium-sized enterprises and start-ups, acknowledging the central role of the entrepreneur. The content and process sides of strategic management are discussed as well as their important connecting link, the business plan. To conclude, implications and future research directions for the two fields are proposed.", "title": "" }, { "docid": "ef773a23445ba125559d1c03e9267ef8", "text": "Understanding the complex dynamic and uncertain characteristics of organisational employees who perform authorised or unauthorised information security activities is deemed to be a very important and challenging task. This paper presents a conceptual framework for classifying and organising the characteristics of organisational subjects involved in these information security practices. Our framework expands the traditional Human Behaviour and the Social Environment perspectives used in social work by identifying how knowledge, skills and individual preferences work to influence individual and group practices with respect to information security management. The classification of concepts and characteristics in the framework arises from a review of recent literature and is underpinned by theoretical models that explain these concepts and characteristics. Further, based upon an exploratory study of three case organisations in Saudi Arabia involving extensive interviews with senior managers, department managers, IT managers, information security officers, and IT staff; this article describes observed information security practices and identifies several factors which appear to be particularly important in influencing information security behaviour. These factors include values associated with national and organisational culture and how they manifest in practice, and activities related to information security management.", "title": "" }, { "docid": "57987aa428d56bd210a2040c4441e31e", "text": "BACKGROUND\nThe study of adverse drug events (ADEs) is a tenured topic in medical literature. 
In recent years, increasing numbers of scientific articles and health-related social media posts have been generated and shared daily, albeit with very limited use for ADE study and with little known about the content with respect to ADEs.\n\n\nOBJECTIVE\nThe aim of this study was to develop a big data analytics strategy that mines the content of scientific articles and health-related Web-based social media to detect and identify ADEs.\n\n\nMETHODS\nWe analyzed the following two data sources: (1) biomedical articles and (2) health-related social media blog posts. We developed an intelligent and scalable text mining solution on big data infrastructures composed of Apache Spark, natural language processing, and machine learning. This was combined with an Elasticsearch No-SQL distributed database to explore and visualize ADEs.\n\n\nRESULTS\nThe accuracy, precision, recall, and area under receiver operating characteristic of the system were 92.7%, 93.6%, 93.0%, and 0.905, respectively, and showed better results in comparison with traditional approaches in the literature. This work not only detected and classified ADE sentences from big data biomedical literature but also scientifically visualized ADE interactions.\n\n\nCONCLUSIONS\nTo the best of our knowledge, this work is the first to investigate a big data machine learning strategy for ADE discovery on massive datasets downloaded from PubMed Central and social media. This contribution illustrates possible capacities in big data biomedical text analysis using advanced computational methods with real-time update from new data published on a daily basis.", "title": "" }, { "docid": "9180404278d95c4826203c55be2507d0", "text": "In this paper, motivated by network inference and tomography applications, we study the problem of compressive sensing for sparse signal vectors over graphs. In particular, we are interested in recovering sparse vectors representing the properties of the edges from a graph. Unlike existing compressive sensing results, the collective additive measurements we are allowed to take must follow connected paths over the underlying graph. For a sufficiently connected graph with n nodes, it is shown that, using O(k log(n)) path measurements, we are able to recover any k-sparse link vector (with no more than k nonzero elements), even though the measurements have to follow the graph path constraints. We mainly show that the computationally efficient ℓ1 minimization can provide theoretical guarantees for inferring such k-sparse vectors with O(k log(n)) path measurements from the graph.", "title": "" }, { "docid": "462248d6ebad4ed197b0322a5ab09406", "text": "The purpose of this study was to quantify the response of the forearm musculature to combinations of wrist and forearm posture and grip force. Ten healthy individuals performed five relative handgrip efforts (5%, 50%, 70% and 100% of maximum, and 50 N) for combinations of three wrist postures (flexed, neutral and extended) and three forearm postures (pronated, neutral and supinated). 'Baseline' extensor muscle activity (associated with holding the dynamometer without exerting grip force) was greatest with the forearm pronated and the wrist extended, while flexor activity was largest in supination when the wrist was flexed. Extensor activity was generally larger than that of flexors during low to mid-range target force levels, and was always greater when the forearm was pronated. 
Flexor activation only exceeded the extensor activation at the 70% and 100% target force levels in some postures. A flexed wrist reduced maximum grip force by 40-50%, but EMG amplitude remained elevated. Women produced 60-65% of the grip strength of men, and required 5-10% more of both relative force and extensor activation to produce a 50 N grip. However, this appeared to be due to strength rather than gender. Forearm rotation affected grip force generation only when the wrist was flexed, with force decreasing from supination to pronation (p < 0.005). The levels of extensor activation observed, especially during baseline and low level grip exertions, suggest a possible contributing mechanism to the development of lateral forearm muscle pain in the workplace.", "title": "" }, { "docid": "3d59f488d91af8b9d204032a8d4f65c8", "text": "Robotic grasp detection for novel objects is a challenging task, but for the last few years, deep learning based approaches have achieved remarkable performance improvements, up to 96.1% accuracy, with RGB-D data. In this paper, we propose fully convolutional neural network (FCNN) based methods for robotic grasp detection. Our methods also achieved state-of-the-art detection accuracy (up to 96.6%) with state-ofthe-art real-time computation time for high-resolution images (6-20ms per 360×360 image) on Cornell dataset. Due to FCNN, our proposed method can be applied to images with any size for detecting multigrasps on multiobjects. Proposed methods were evaluated using 4-axis robot arm with small parallel gripper and RGB-D camera for grasping challenging small, novel objects. With accurate vision-robot coordinate calibration through our proposed learning-based, fully automatic approach, our proposed method yielded 90% success rate.", "title": "" }, { "docid": "85ab2edb48dd57f259385399437ea8e9", "text": "Training robust deep video representations has proven to be much more challenging than learning deep image representations. This is in part due to the enormous size of raw video streams and the high temporal redundancy; the true and interesting signal is often drowned in too much irrelevant data. Motivated by that the superfluous information can be reduced by up to two orders of magnitude by video compression (using H.264, HEVC, etc.), we propose to train a deep network directly on the compressed video. This representation has a higher information density, and we found the training to be easier. In addition, the signals in a compressed video provide free, albeit noisy, motion information. We propose novel techniques to use them effectively. Our approach is about 4.6 times faster than Res3D and 2.7 times faster than ResNet-152. On the task of action recognition, our approach outperforms all the other methods on the UCF-101, HMDB-51, and Charades dataset.", "title": "" }, { "docid": "6b1fbc91a501ea25c7d3d20780a2be74", "text": "STUDY DESIGN\nA systematic quantitative review of the literature.\n\n\nOBJECTIVE\nTo compare combined anterior-posterior surgery versus posterior surgery for thoracolumbar fractures in order to identify better treatments.\n\n\nSUMMARY OF BACKGROUND DATA\nAxial load of the anterior and middle column of the spine can lead to a burst fracture in the vertebral body. The management of thoracolumbar burst fractures remains controversial. The goals of operative treatment are fracture reduction, fixation and decompressing the neural canal. 
For this, different operative methods are developed, for instance, the posterior and the combined anterior-posterior approach. Recent systematic qualitative reviews comparing these methods are lacking.\n\n\nMETHODS\nWe conducted an electronic search of MEDLINE, EMBASE, LILACS and the Cochrane Central Register for Controlled Trials.\n\n\nRESULTS\nFive observational comparative studies and no randomized clinical trials comparing the combined anteriorposterior approach with the posterior approach were retrieved. The total enrollment of patients in these studies was 755 patients. The results were expressed as relative risk (RR) for dichotomous outcomes and weighted mean difference (WMD) for continuous outcomes with 95% confidence intervals (CI).\n\n\nCONCLUSIONS\nA small significantly higher kyphotic correction and improvement of vertebral height (sagittal index) observed for the combined anterior-posterior group is cancelled out by more blood loss, longer operation time, longer hospital stay, higher costs and a possible higher intra- and postoperative complication rate requiring re-operation and the possibility of a worsened Hannover spine score. The surgeons' choices regarding the operative approach are biased: worse cases tended to undergo the combined anterior-posterior approach.", "title": "" }, { "docid": "87c33e325d074d8baefd56f6396f1c7a", "text": "We present a recurrent model for semantic instance segmentation that sequentially generates binary masks and their associated class probabilities for every object in an image. Our proposed system is trainable end-to-end from an input image to a sequence of labeled masks and, compared to methods relying on object proposals, does not require postprocessing steps on its output. We study the suitability of our recurrent model on three different instance segmentation benchmarks, namely Pascal VOC 2012, CVPPP Plant Leaf Segmentation and Cityscapes. Further, we analyze the object sorting patterns generated by our model and observe that it learns to follow a consistent pattern, which correlates with the activations learned in the encoder part of our network.", "title": "" }, { "docid": "0a63bb79988efa4cc26dcb66647617a0", "text": "Physical activity is one of the most promising nonpharmacological, noninvasive, and cost-effective methods of health-promotion, yet statistics show that only a small percentage of middle-aged and older adults engage in the recommended amount of regular exercise. This state of affairs is less likely due to a lack of knowledge about the benefits of exercise than to failures of motivation and self-regulatory mechanisms. Many types of intervention programs target exercise in later life, but they typically do not achieve sustained behavior change, and there has been very little increase in the exercise rate in the population over the last decade. The goal of this paper is to consider the use of effective low-cost motivational and behavioral strategies for increasing physical activity, which could have far-reaching benefits at the individual and population levels. We present a multicomponent framework to guide development of behavior change interventions to increase and maintain physical activity among sedentary adults and others at risk for health problems. This involves a personalized approach to motivation and behavior change, which includes social support, goal setting, and positive affect coupled with cognitive restructuring of negative and self-defeating attitudes and misconceptions. 
These strategies can lead to increases in exercise self-efficacy and control beliefs as well as self-management skills such as self-regulation and action planning, which in turn are expected to lead to long-term increases in activity. These changes in activity frequency and intensity can ultimately lead to improvements in physical and psychological well-being among middle-aged and older adults, including those from underserved, vulnerable populations. Even a modest increase in physical activity can have a significant impact on health and quality of life. Recommendations for future interventions include a focus on ways to achieve personalized approaches, broad outreach, and maintenance of behavior changes.", "title": "" }, { "docid": "0f5511aaed3d6627671a5e9f68df422a", "text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement. We also report benefits of very long-term TMR.", "title": "" }, { "docid": "b0bf389688f9a11125c6bbd7202b6e2c", "text": "Ascariasis, a worldwide parasitic disease, is regarded by some authorities as the most common parasitic infection in humans. The causative organism is Ascaris lumbricoides, which normally lives in the lumen of the small intestine. From the intestine, the worm can invade the bile duct or pancreatic duct, but invasion into the gallbladder is quite rare because of the anatomical features of the cystic duct, which is narrow and tortuous. Once it enters the gallbladder, it is exceedingly rare for the worm to migrate back to the intestine. We report a case of gallbladder ascariasis with worm migration back into the intestine, in view of its rare presentation.", "title": "" }, { "docid": "36ebd6dd8a4fa1d69138696d21e19342", "text": "Very high dimensional learning systems become theoretically possible when training examples are abundant. The computing cost then becomes the limiting factor. Any efficient learning algorithm should at least take a brief look at each example. But should all examples be given equal attention? This contribution proposes an empirical answer. We first present an online SVM algorithm based on this premise. LASVM yields competitive misclassification rates after a single pass over the training examples, outspeeding state-of-the-art SVM solvers. Then we show how active example selection can yield faster training, higher accuracies, and simpler models, using only a fraction of the training example labels.", "title": "" }, { "docid": "4239773a9ef4636f4dd8e084b658a6bc", "text": "Alternative splicing and alternative polyadenylation (APA) of pre-mRNAs greatly contribute to transcriptome diversity, coding capacity of a genome and gene regulatory mechanisms in eukaryotes. Second-generation sequencing technologies have been extensively used to analyse transcriptomes. However, a major limitation of short-read data is that it is difficult to accurately predict full-length splice isoforms.
Here we sequenced the sorghum transcriptome using Pacific Biosciences single-molecule real-time long-read isoform sequencing and developed a pipeline called TAPIS (Transcriptome Analysis Pipeline for Isoform Sequencing) to identify full-length splice isoforms and APA sites. Our analysis reveals transcriptome-wide full-length isoforms at an unprecedented scale with over 11,000 novel splice isoforms. Additionally, we uncover APA of ∼11,000 expressed genes and more than 2,100 novel genes. These results greatly enhance sorghum gene annotations and aid in studying gene regulation in this important bioenergy crop. The TAPIS pipeline will serve as a useful tool to analyse Iso-Seq data from any organism.", "title": "" } ]
scidocsrr
34b72b753611e8e511ed4a35083a5447
Matching NIR Face to VIS Face Using Transduction
[ { "docid": "07ccd1768e98bb806bf4e6bac665292e", "text": "In many applications, such as E-Passport and driver’s license, the enrollment of face templates is done using visible light (VIS) face images. Such images are normally acquired in controlled environment where the lighting is approximately frontal. However, Authentication is done in variable lighting conditions. Matching of faces in VIS images taken in different lighting conditions is still a big challenge. A recent development in near infrared (NIR) image based face recognition [1] has well overcome the difficulty arising from lighting changes. However, it requires that enrollment face images be acquired using NIR as well. In this paper, we present a new problem, that of matching a face in an NIR image against one in a VIS images, and propose a solution to it. The work is aimed to develop a new solution for meeting the accuracy requirement of face-based biometric recognition, by taking advantages of the recent NIR face technology while allowing the use of existing VIS face photos as gallery templates. Face recognition is done by matching an NIR probe face against a VIS gallery face. Based on an analysis of properties of NIR and VIS face images, we propose a learningbased approach for the different modality matching. A mechanism of correlation between NIR and VIS faces is learned from NIR→VIS face pairs, and the learned correlation is used to evaluate similarity between an NIR face and a VIS face. We provide preliminary results of NIR→VIS face matching for recognition under different illumination conditions. The results demonstrate advantages of NIR→VIS matching over VIS→VIS matching.", "title": "" } ]
[ { "docid": "cdda406e5d20a2d2f43c4171a4214a6a", "text": "Although the international standard CityGML has five levels of detail (LODs), the vast majority of available models are the coarse ones (up to LOD2, ie block-shaped buildings with roofs). LOD3 and LOD4 models, which contain architectural details such as balconies, windows and rooms, nearly exist because, unlike coarser LODs, their construction requires several datasets that must be acquired with different technologies, and often extensive manual work is needed. We investigate in this paper an alternative to obtaining CityGML LOD3 models: the automatic conversion from already existing architectural models (stored in the IFC format). Existing conversion algorithms mostly focus on the semantic mappings and convert all the geometries, which yields CityGMLmodels having poor usability in practice (spatial analysis is for instance not possible). We present a conversion algorithm that accurately applies the correct semantics from IFC models and that constructs valid CityGML LOD3 buildings by performing a series of geometric operations in 3D.We have implemented our algorithm and we demonstrate its effectiveness with several real-world datasets. We also propose specific improvements to both standards to foster their integration in the future.", "title": "" }, { "docid": "9395db18cc1b09e6907232fbdcc0d19b", "text": "With the inclusion of an effective methodology, this article answers in detail a question that, for a quarter of a century, remained open despite intense study by various researchers. Is the formula XCB=e(x,e(e(e(x,y),e(z,y)),z)) a single axiom for the classical equivalential calculus when the rules of inference consist of detachment (modus ponens) and substitution Where the function e represents equivalence, this calculus can be axiomatized quite naturally with the formulas e(x,x), e(e(x,y),e(y,x)), and e(e(x,y),e(e(y,z),e(x,z))), which correspond to reflexivity, symmetry, and transitivity, respectively. (We note that e(x,x) is dependent on the other two axioms.) Heretofore, thirteen shortest single axioms for classical equivalence of length eleven had been discovered, and XCB was the only remaining formula of that length whose status was undetermined. To show that XCB is indeed such a single axiom, we focus on the rule of condensed detachment, a rule that captures detachment together with an appropriately general, but restricted, form of substitution. The proof we present in this paper consists of twenty-five applications of condensed detachment, completing with the deduction of transitivity followed by a deduction of symmetry. We also discuss some factors that may explain in part why XCB resisted relinquishing its treasure for so long. Our approach relied on diverse strategies applied by the automated reasoning program OTTER. Thus ends the search for shortest single axioms for the equivalential calculus.", "title": "" }, { "docid": "c65f050e911abb4b58b4e4f9b9aec63b", "text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. 
Then, a novel WSL approach is presented for object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.", "title": "" }, { "docid": "5cea156f62a70b01dfa97d8e606acec6", "text": "Curcumin is the principal curcuminoid of the popular Indian spice Curcuma longa (Turmeric), which is a member of the ginger family (Zingiberaceae). In spite of showing extraordinary medicinal properties, its commercialized formulation is still a challenge because of its poor solubility, bioavailability and rapid plasma clearance. In our work, different shapes of nanocurcumin are prepared in alcohol-water solutions using a sonication-assisted solvent-worsening method. The nanocurcumin in the suspension has been characterized using Dynamic and Static Light Scattering (DLS and SLS) and Transmission Electron Microscopy methods. The DLS data revealed that the curcumin nanoparticles in the suspension had an asymmetric shape with effective hydrodynamic radius in the range of 120-160 nm. With increase in water concentration in the solvent medium, the shape of the nanocurcumin was seen to undergo a gradual transition from an isotropic to an anisotropic state, which was supported by the TEM study. The in vitro antimicrobial activity (Inhibition) of the nanocurcumin has been compared with that of the normal curcumin using broth dilution and Kirby-Bauer methods against both Gram-positive and Gram-negative bacteria. The efficacy of nanocurcumin is marginally better than curcumin per se. We observed that nanocurcumin had better dispersibility and enhanced bioavailability in a hydrophilic environment as compared to normal curcumin.", "title": "" }, { "docid": "a1bff389a9a95926a052ded84c625a9e", "text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.", "title": "" }, { "docid": "c495fadfd4c3e17948e71591e84c3398", "text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper.
The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.", "title": "" }, { "docid": "ab132902ce21c35d4b5befb8ff2898b5", "text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in “word2vec” software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.", "title": "" }, { "docid": "06b57ca0fbe0aa7688b69dc1bd3d1cf8", "text": "This research examines how sociotechnical affordances shape interpretation of disclosure and social judgments on social networking sites. Drawing on the disclosure personalism framework, Study 1 revealed that information unavailability and relational basis underlay personalistic judgments about Facebook disclosures: Perceivers inferred greater message and relational intimacy from disclosures made privately than from those made publicly. Study 2 revealed that perceivers judged intimate disclosures shared publicly as less appropriate than intimate disclosures shared privately, and that perceived disclosure appropriateness accounted for the effects of public versus private contexts on reduced liking for a discloser. Taken together, the results show how sociotechnical affordances shape perceptions of disclosure and relationships, which has implications for understanding relational development and maintenance on SNS.", "title": "" }, { "docid": "057df3356022c31db27b1f165c827524", "text": "Eating disorders in dancers are thought to be common, but the exact rates remain to be clarified. The aim of this study is to systematically compile and analyse the rates of eating disorders in dancers. A literature search, appraisal and meta-analysis were conducted. Thirty-three relevant studies were published between 1966 and 2013 with sufficient data for extraction. Primary data were extracted as raw numbers or confidence intervals. Risk ratios and 95% confidence intervals were calculated for controlled studies. The overall prevalence of eating disorders was 12.0% (16.4% for ballet dancers), 2.0% (4% for ballet dancers) for anorexia, 4.4% (2% for ballet dancers) for bulimia and 9.5% (14.9% for ballet dancers) for eating disorders not otherwise specified (EDNOS). The dancer group had higher mean scores on the EAT-26 and the Eating Disorder Inventory subscales. 
Dancers, in general, had a higher risk of suffering from eating disorders in general, anorexia nervosa and EDNOS, but no higher risk of suffering from bulimia nervosa. The study concluded that as dancers had a three times higher risk of suffering from eating disorders, particularly anorexia nervosa and EDNOS, specifically designed services for this population should be considered.", "title": "" }, { "docid": "eab7af3a42400c1012e1c3764c63c2c2", "text": "We report on the first large-scale evaluation of author obfuscation approaches built to attack authorship verification approaches: the impact of 3 obfuscators on the performance of a total of 44 authorship verification approaches has been measured and analyzed. The best-performing obfuscator successfully impacts the decision-making process of the authorship verifiers on average in about 47% of the cases, causing them to misjudge a given pair of documents as having been written by “different authors” when in fact they would have decided otherwise if one of them had not been automatically obfuscated. The evaluated obfuscators have been submitted to a shared task on author obfuscation that we organized at the PAN 2016 lab on digital text forensics. We contribute further by surveying the literature on author obfuscation, by collecting and organizing evaluation methodology for this domain, and by introducing performance measures tailored to measuring the impact of author obfuscation on authorship verification.", "title": "" }, { "docid": "dcf1c3d5acb7212413a072eaa4244fd1", "text": "Using simple and standardized semiology, the lung appears accessible to ultrasound, despite previous opinions otherwise. Lung ultrasound allows the intensivist to quickly answer to a majority of critical situations. Not only pleural effusion but also pneumothorax, alveolar consolidation, and interstitial syndrome will have accurate ultrasound equivalents, the recognition of which practically guides management. Combined with venous, cardiac, and abdominal examination, ultrasound investigation of this vital organ provides a transparent overview of the critically ill, a kind of stethoscope for a visual medicine. It is believed that by using this tool, the intensivist may more confidently manage acute dyspnea and make emergency therapeutic decisions based on reproducible data. Further benefits include reduced requirements for computed tomographic scans, therefore decreasing delay, irradiation, cost, and above all, discomfort to the patient. Thus, ultrasound of the lung can also be added to the classic armamentarium as a clinical tool for emergency use.", "title": "" }, { "docid": "ce1ff59a3b327af3708440414b5eb964", "text": "In recent years, all major automotive companies have launched initiatives towards cars that assist people in making driving decisions. The ultimate goal of all these efforts are cars that can drive themselves. The benefit of such a technology could be enormous. At present, some 42,000 people die every year in traffic accidents in the U.S., mostly because of human error. Self-driving cars could make people safer and more productive. Self-driving cars is a true AI challenge. To endow cars with the ability to make decisions on behalf of their drivers, they have to sense, perceive, and act. Recent work in this field has extensively built on probabilistic representations and machine learning methods. 
The speaker will report on past work on the DARPA Grand Challenge, and discuss ongoing work on the Urban Challenge, DARPA’s follow-up program on self-driving cars.", "title": "" }, { "docid": "bde4e8743d2146d3ee9af39f27d14b5a", "text": "For several decades now, there has been sporadic interest in automatically characterizing the speech impairment due to Parkinson's disease (PD). Most early studies were confined to quantifying a few speech features that were easy to compute. More recent studies have adopted a machine learning approach where a large number of potential features are extracted and the models are learned automatically from the data. In the same vein, here we characterize the disease using a relatively large cohort of 168 subjects, collected from multiple (three) clinics. We elicited speech using three tasks - the sustained phonation task, the diadochokinetic task and a reading task, all within a time budget of 4 minutes, prompted by a portable device. From these recordings, we extracted 1582 features for each subject using openSMILE, a standard feature extraction tool. We compared the effectiveness of three strategies for learning a regularized regression and find that ridge regression performs better than lasso and support vector regression for our task. We refine the feature extraction to capture pitch-related cues, including jitter and shimmer, more accurately using a time-varying harmonic model of speech. Our results show that the severity of the disease can be inferred from speech with a mean absolute error of about 5.5, explaining 61% of the variance and consistently well-above chance across all clinics. Of the three speech elicitation tasks, we find that the reading task is significantly better at capturing cues than diadochokinetic or sustained phonation task. In all, we have demonstrated that the data collection and inference can be fully automated, and the results show that speech-based assessment has promising practical application in PD. The techniques reported here are more widely applicable to other paralinguistic tasks in clinical domain.", "title": "" }, { "docid": "c57d9c4f62606e8fccef34ddd22edaec", "text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. 
Jype is built for Python, when most research in programming education support tools revolves around Java.", "title": "" }, { "docid": "de721f4b839b0816f551fa8f8ee2065e", "text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.", "title": "" }, { "docid": "247eebd69a651f6f116f41fdf885ae39", "text": "RFID identification is a new technology that will become ubiquitous as RFID tags will be applied to every-day items in order to yield great productivity gains or “smart” applications for users. However, this pervasive use of RFID tags opens up the possibility for various attacks violating user privacy. In this work we present an RFID authentication protocol that enforces user privacy and protects against tag cloning. We designed our protocol with both tag-to-reader and reader-to-tag authentication in mind; unless both types of authentication are applied, any protocol can be shown to be prone to either cloning or privacy attacks. Our scheme is based on the use of a secret shared between tag and database that is refreshed to avoid tag tracing. However, this is done in such a way so that efficiency of identification is not sacrificed. Additionally, our protocol is very simple and it can be implemented easily with the use of standard cryptographic hash functions. In analyzing our protocol, we identify several attacks that can be applied to RFID protocols and we demonstrate the security of our scheme. Furthermore, we show how forward privacy is guaranteed; messages seen today will still be valid in the future, even after the tag has been compromised.", "title": "" }, { "docid": "dd38d76f208d26e681c00f63b50492e5", "text": "An anti-louse shampoo (Licener®) based on a neem seed extract was tested in vivo and in vitro on its efficacy to eliminate head louse infestation by a single treatment. The hair of 12 children being selected from a larger group due to their intense infestation with head lice were incubated for 10 min with the neem seed extract-containing shampoo. It was found that after this short exposition period, none of the lice had survived, when being observed for 22 h. In all cases, more than 50–70 dead lice had been combed down from each head after the shampoo had been washed out with normal tap water. A second group of eight children had been treated for 20 min with identical results. Intense combing of the volunteers 7 days after the treatment did not result in the finding of any motile louse neither in the 10-min treated group nor in the group the hair of which had been treated for 20 min. Other living head lice were in vitro incubated within the undiluted product (being placed inside little baskets the floor of which consisted of a fine net of gauze). 
It was seen that a total submersion for only 3 min prior to washing 3× for 2 min with tap water was sufficient to kill all motile stages (larvae and adults). The incubation of nits at 30°C into the undiluted product for 3, 10, and 20 min did not show differences. In all cases, there was no eyespot development or hatching larvae within 7–10 days of observation. This and the fact that the hair of treated children (even in the short-time treated group of only 10 min) did not reveal freshly hatched larval stages of lice indicate that there is an ovicidal activity of the product, too.", "title": "" }, { "docid": "be1b9731df45408571e75d1add5dfe9c", "text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "title": "" }, { "docid": "febed6b06359fe35437e7fa16ed0cbfa", "text": "Videos recorded on moving cameras are often known to be shaky due to unstable carrier motion and the video stabilization problem involves inferring the intended smooth motion to keep and the unintended shaky motion to remove. However, conventional methods typically require proper, scenario-specific parameter setting, which does not generalize well across different scenarios. Moreover, we observe that a stable video should satisfy two conditions: a smooth trajectory and consistent inter-frame transition. While conventional methods only target at the former condition, we address these two issues at the same time. In this paper, we propose a homography consistency based algorithm to directly extract the optimal smooth trajectory and evenly distribute the inter-frame transition. By optimizing in the homography domain, our method does not need further matrix decomposition and parameter adjustment, automatically adapting to all possible types of motion (eg. translational or rotational) and video properties (eg. frame rates). We test our algorithm on translational videos recorded from a car and rotational videos from a hovering aerial vehicle, both of high and low frame rates. Results show our method widely applicable to different scenarios without any need of additional parameter adjustment.", "title": "" }, { "docid": "9f16cb2dd8c4a95d5faed112779ee041", "text": "This paper deals with the problem of measuring the wheel-rail interaction quality in real time using a suitably designed and realized railway measurement system. More specifically, the measured parameter is the equivalent conicity, as defined in the international union of railways UIC 518 Standard, and the measurement system is based on suitable processing of geometric data that is acquired by a contactless optical unit. The measurement system has been verified according to the test procedures described in the UIC 519 Standard. 
This paper shows how it is possible to obtain, in real time and with comparatively simple algorithms, measurements that are perfectly compliant with the UIC 519 Standard, with regard to the required measurement uncertainty as well.", "title": "" } ]
scidocsrr
438e0ed9329ec9104a8b0bcd47297f02
Creation of a Digital Business Model Builder
[ { "docid": "43bc62e674ae5c8785d00406b307b478", "text": "We explore the theoretical foundations of value creation in e-business by examining how 59 American and European e-businesses that have recently become publicly traded corporations create value. We observe that in e-business new value can be created by the ways in which transactions are enabled. Grounded in the rich data obtained from case study analyses and in the received theory in entrepreneurship and strategic management, we develop a model of the sources of value creation. The model suggests that the value creation potential of e-businesses hinges on four interdependent dimensions, namely: efficiency, complementarities, lock-in, and novelty. Our findings suggest that no single entrepreneurship or strategic management theory can fully explain the value creation potential of e-business. Rather, an integration of the received theoretical perspectives on value creation is needed. To enable such an integration, we offer the business model construct as a unit of analysis for future research on value creation in e-business. A business model depicts the design of transaction content, structure, and governance so as to create value through the exploitation of business opportunities. We propose that a firm’s business model is an important locus of innovation and a crucial source of value creation for the firm and its suppliers, partners, and customers. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "264aa89aa10fe05cff2f0e1a239e79ff", "text": "While the terminology has changed over time, the basic concept of the Digital Twin model has remained fairly stable from its inception in 2001. It is based on the idea that a digital informational construct about a physical system could be created as an entity on its own. This digital information would be a “twin” of the information that was embedded within the physical system itself and be linked with that physical system through the entire lifecycle of the system.", "title": "" } ]
[ { "docid": "9323c74e39a677c28d1c082b12e1f587", "text": "Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Restoring the true scene colors (clear day image) from a single image of a weather-degraded scene remains a challenging task due to the inherent ambiguity between scene albedo and depth. In this paper, we introduce a novel probabilistic method that fully leverages natural statistics of both the albedo and depth of the scene to resolve this ambiguity. Our key idea is to model the image with a factorial Markov random field in which the. scene albedo and depth are. two statistically independent latent layers. We. show that we may exploit natural image and depth statistics as priors on these hidden layers and factorize a single foggy image via a canonical Expectation Maximization algorithm with alternating minimization. Experimental results show that the proposed method achieves more accurate restoration compared to state-of-the-art methods that focus on only recovering scene albedo or depth individually.", "title": "" }, { "docid": "76d2ba510927bd7f56155e1cf1cbbc52", "text": "As the first part of a study that aims to propose tools to take into account some electromagnetic compatibility aspects, we have developed a model to predict the electric and magnetic fields emitted by a device. This model is based on a set of equivalent sources (electric and magnetic dipoles) obtained from the cartographies of the tangential components of electric and magnetic near fields. One of its features is to be suitable for a commercial electromagnetic simulation tool based on a finite element method. This paper presents the process of modeling and the measurement and calibration procedure to obtain electromagnetic fields necessary for the model; the validation and the integration of the model into a commercial electromagnetic simulator are then performed on a Wilkinson power divider.", "title": "" }, { "docid": "afa8dcb9dfbd99781c4b03d80f9ad85c", "text": "Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment. A significant issue with transferring this success to the robotics domain is that interaction with the real world is costly, but training on limited experience is prone to overfitting. We present a method for learning to navigate, to a fixed goal and in a known environment, on a mobile robot. The robot leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation, to demonstrate successful zero-shot transfer under real-world environmental variations without fine-tuning.", "title": "" }, { "docid": "73862c0aa60c03d5a96f755cdc3bf07b", "text": "Adaptive and innovative application of classical data mining principles and techniques in time series analysis has resulted in development of a concept known as time series data mining. Since the time series are present in all areas of business and scientific research, attractiveness of mining of time series datasets should not be seen only in the context of the research challenges in the scientific community, but also in terms of usefulness of the research results, as a support to the process of business decision-making. A fundamental component in the mining process of time series data is time series segmentation. 
As a data mining research problem, segmentation is focused on the discovery of rules in movements of observed phenomena in a form of interpretable, novel, and useful temporal patterns. In this Paper, a comprehensive review of the conceptual determinations, including the elements of comparative analysis, of the most commonly used algorithms for segmentation of time series, is being considered.", "title": "" }, { "docid": "012bcbc6b5e7b8aaafd03f100489961c", "text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.", "title": "" }, { "docid": "50a89110795314b5610fabeaf41f0e40", "text": "People are capable of robust evaluations of their decisions: they are often aware of their mistakes even without explicit feedback, and report levels of confidence in their decisions that correlate with objective performance. These metacognitive abilities help people to avoid making the same mistakes twice, and to avoid overcommitting time or resources to decisions that are based on unreliable evidence. In this review, we consider progress in characterizing the neural and mechanistic basis of these related aspects of metacognition-confidence judgements and error monitoring-and identify crucial points of convergence between methods and theories in the two fields. This convergence suggests that common principles govern metacognitive judgements of confidence and accuracy; in particular, a shared reliance on post-decisional processing within the systems responsible for the initial decision. However, research in both fields has focused rather narrowly on simple, discrete decisions-reflecting the correspondingly restricted focus of current models of the decision process itself-raising doubts about the degree to which discovered principles will scale up to explain metacognitive evaluation of real-world decisions and actions that are fluid, temporally extended, and embedded in the broader context of evolving behavioural goals.", "title": "" }, { "docid": "7645c6a0089ab537cb3f0f82743ce452", "text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. 
Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.", "title": "" }, { "docid": "d3431bc21cde7bd96fe4c70d6ea6657a", "text": "Chip-multiprocessors are quickly gaining momentum in all segments of computing. However, the practical success of CMPs strongly depends on addressing the difficulty of multithreaded application development. To address this challenge, it is necessary to co-develop new CMP architecture with novel programming models. Currently, architecture research relies on software simulators which are too slow to facilitate interesting experiments with CMP software without using small datasets or significantly reducing the level of detail in the simulated models. An alternative to simulation is to exploit the rich capabilities of modern FPGAs to create FPGA-based platforms for novel CMP research. This paper presents ATLAS, the first prototype for CMPs with hardware support for Transactional Memory (TM), a technology aiming to simplify parallel programming. ATLAS uses the BEE2 multi-FPGA board to provide a system with 8 PowerPC cores that run at 100MHz and runs Linux. ATLAS provides significant benefits for CMP research such as 100x performance improvement over a software simulator and good visibility that helps with software tuning and architectural improvements. In addition to presenting and evaluating ATLAS, we share our observations about building a FPGA-based framework for CMP research. Specifically, we address issues such as overall performance, challenges of mapping ASIC-style CMP RTL on to FPGAs, software support, the selection criteria for the base processor, and the challenges of using pre-designed IP libraries.", "title": "" }, { "docid": "d11d6df22b5c6212b27dad4e3ed96826", "text": "We propose learning sentiment-specific word embeddings dubbed sentiment embeddings in this paper. Existing word embedding learning algorithms typically only use the contexts of words but ignore the sentiment of texts. It is problematic for sentiment analysis because the words with similar contexts but opposite sentiment polarity, such as good and bad, are mapped to neighboring word vectors. We address this issue by encoding sentiment information of texts (e.g., sentences and words) together with contexts of words in sentiment embeddings. By combining context and sentiment level evidences, the nearest neighbors in sentiment embedding space are semantically similar and it favors words with the same sentiment polarity. In order to learn sentiment embeddings effectively, we develop a number of neural networks with tailoring loss functions, and collect massive texts automatically with sentiment signals like emoticons as the training data. Sentiment embeddings can be naturally used as word features for a variety of sentiment analysis tasks without feature engineering. 
We apply sentiment embeddings to word-level sentiment analysis, sentence level sentiment classification, and building sentiment lexicons. Experimental results show that sentiment embeddings consistently outperform context-based embeddings on several benchmark datasets of these tasks. This work provides insights on the design of neural networks for learning task-specific word embeddings in other natural language processing tasks.", "title": "" }, { "docid": "c613138270b05f909904519d195fcecf", "text": "This study deals with artificial neural network (ANN) modeling a diesel engine using waste cooking biodiesel fuel to predict the brake power, torque, specific fuel consumption and exhaust emissions of engine. To acquire data for training and testing the proposed ANN, two cylinders, four-stroke diesel engine was fuelled with waste vegetable cooking biodiesel and diesel fuel blends and operated at different engine speeds. The properties of biodiesel produced from waste vegetable oil was measured based on ASTM standards. The experimental results reveal that blends of waste vegetable oil methyl ester with diesel fuel provide better engine performance and improved emission characteristics. Using some of the experimental data for training, an ANN model based on standard Back-Propagation algorithm for the engine was developed. Multi layer perception network (MLP) was used for nonlinear mapping between the input and the output parameters. Different activation functions and several rules were used to assess the percentage error between the desired and the predicted values. It was observed that the ANN model can predict the engine performance and exhaust emissions quite well with correlation coefficient (R) were 0.9487, 0.999, 0.929 and 0.999 for the engine torque, SFC, CO and HC emissions, respectively. The prediction MSE (Mean Square Error) error was between the desired outputs as measured values and the simulated values by the model was obtained as 0.0004.", "title": "" }, { "docid": "69986adaf1759ce9111f3f582ef35b65", "text": "Bazila Akbar Kahn, 2013. Interaction of Physical Activity, Mental Health, Health Locus of Control and Quality of Life: A Study on University Students in Pakistan. Department of Sport Sciences. University of Jyväskylä. Master’s Thesis of Sport and Exercise Psychology. 66 pages Physical activity involvement is considered as beneficial both for physiological and psychological health. In Pakistani society an elevated level of physical inactivity has been identified lately. Nevertheless, studies examining the association between physical activity and psychological health are limited to the young population of university students in Pakistan. University students are considered to be at a risk stage due to academic stress and physiological changes Therefore, the purpose of this study was to explore the associations between physical activity, quality of life and psychological health related variables to university students in Pakistan. Participants (N=378) of the current study were from seven universities in Pakistan (265 female, 112 males). General Health Questionnaire, SF-36 quality of life matrix, multidimensional health locus of control and international physical activity questionnaire were administered. Results reveal a large number of students as physically inactive (37.6%). t-Test revealed male students were more active and having a better quality of life in comparison to the female. 
The high prevalence of psychological distress (25%) was also identified using correlation analysis. Results indicated a linear positive relationship of physical activity with the mental component summary and a negative association with psychological distress. Conversely, psychological distress was negatively related to overall health-related quality of life and PA. Results also demonstrated that students with a better internal locus of control were found to be more physically active. Findings were discussed in comparison with studies from other countries, e.g. the US, UK, Norway, Poland, Turkey and Australia. However, the results suggest replicating the study with a larger sample size. Additionally, it is also imperative to explore the barriers to PA among the student population in Pakistan. Key words: Physical activity, mental health, health locus of control, psychological distress, university students in Pakistan.", "title": "" }, { "docid": "ac11d61454afa129f29e1b3a5e20ec9e", "text": "Most of the existing work on automatic facial expression analysis focuses on discrete emotion recognition, or facial action unit detection. However, facial expressions do not always fall neatly into pre-defined semantic categories. Also, the similarity between expressions measured in the action unit space need not correspond to how humans perceive expression similarity. Different from previous work, our goal is to describe facial expressions in a continuous fashion using a compact embedding space that mimics human visual preferences. To achieve this goal, we collect a large-scale faces-in-the-wild dataset with human annotations in the form: Expressions A and B are visually more similar when compared to expression C, and use this dataset to train a neural network that produces a compact (16-dimensional) expression embedding. We experimentally demonstrate that the learned embedding can be successfully used for various applications such as expression retrieval, photo album summarization, and emotion recognition. We also show that the embedding learned using the proposed dataset performs better than several other embeddings learned using existing emotion or action unit datasets.", "title": "" }, { "docid": "37825cd0f6ae399204a392e3b32a667b", "text": "Abduction is inference to the best explanation. Abduction has long been studied intensively in a wide range of contexts, from artificial intelligence research to cognitive science. While recent advances in large-scale knowledge acquisition warrant applying abduction with large knowledge bases to real-life problems, as of yet no existing approach to abduction has achieved both the efficiency and formal expressiveness necessary to be a practical solution for large-scale reasoning on real-life problems. The contributions of our work are the following: (i) we reformulate abduction as an Integer Linear Programming (ILP) optimization problem, providing full support for first-order predicate logic (FOPL); (ii) we employ Cutting Plane Inference, which is an iterative optimization strategy developed in Operations Research for making abductive reasoning in full-fledged FOPL tractable, showing its efficiency on a real-life dataset; (iii) the abductive inference engine presented in this paper is made publicly available.", "title": "" }, { "docid": "021a64219ef739cc77ce5f51107d20b3", "text": "JavaScript malware-based attacks account for a large fraction of successful mass-scale exploitation happening today.
From the standpoint of the attacker, the attraction is that these drive-by attacks can be mounted against an unsuspecting user visiting a seemingly innocent web page. While several techniques for addressing these types of exploits have been proposed, in-browser adoption has been slow, in part because of the performance overhead these methods tend to incur. In this paper, we propose ZOZZLE, a low-overhead solution for detecting and preventing JavaScript malware that can be deployed in the browser. Our experience also suggests that ZOZZLE may be used as a lightweight filter for a more costly detection technique or for standalone offline", "title": "" }, { "docid": "fc4ea7391c1500851ec0d37beed4cd90", "text": "As a crucial operation, routing plays an important role in various communication networks. In the context of data and sensor networks, routing strategies such as shortest-path, multi-path and potential-based (“all-path”) routing have been developed. Existing results in the literature show that the shortest path and all-path routing can be obtained from L1 and L2 flow optimization, respectively. Based on this connection between routing and flow optimization in a network, in this paper we develop a unifying theoretical framework by considering flow optimization with mixed (weighted) L1/L2-norms. We obtain a surprising result: as we vary the trade-off parameter θ, the routing graphs induced by the optimal flow solutions span from shortest-path to multi-path to all-path routing-this entire sequence of routing graphs is referred to as the routing continuum. We also develop an efficient iterative algorithm for computing the entire routing continuum. Several generalizations are also considered, with applications to traffic engineering, wireless sensor networks, and network robustness analysis.", "title": "" }, { "docid": "048c67f19bdb634e39e98296fd1107cb", "text": "It has been suggested that music and speech maintain entirely dissociable mental processing systems. The current study, however, provides evidence that there is an overlap in the processing of certain shared aspects of the two. This study focuses on fundamental frequency (pitch), which is an essential component of melodic units in music and lexical and/or intonational units in speech. We hypothesize that extensive experience with the processing of musical pitch can transfer to a lexical pitch-processing domain. To that end, we asked nine English-speaking musicians and nine Englishspeaking non-musicians to identify and discriminate the four lexical tones of Mandarin Chinese. The subjects performed significantly differently on both tasks; the musicians identified the tones with 89% accuracy and discriminated them with 87% accuracy, while the non-musicians identified them with only 69% accuracy and discriminated them with 71% accuracy. These results provide counter-evidence to the theory of dissociation between music and speech processing.", "title": "" }, { "docid": "87dd4ba33b9f4ae20d60097960047551", "text": "Lacking the presence of human and social elements is claimed one major weakness that is hindering the growth of e-commerce. The emergence of social commerce (SC) might help ameliorate this situation. Social commerce is a new evolution of e-commerce that combines the commercial and social activities by deploying social technologies into e-commerce sites. Social commerce reintroduces the social aspect of shopping to e-commerce, increasing the degree of social presences in online environment. 
Drawing upon the social presence theory, this study theorizes the nature of social aspect in online SC marketplace by proposing a set of three social presence variables. These variables are then hypothesized to have positive impacts on trusting beliefs which in turn result in online purchase behaviors. The research model is examined via data collected from a typical ecommerce site in China. Our findings suggest that social presence factors grounded in social technologies contribute significantly to the building of the trustworthy online exchanging relationships. In doing so, this paper confirms the positive role of social aspect in shaping online purchase behaviors, providing a theoretical evidence for the fusion of social and commercial activities. Finally, this paper introduces a new perspective of e-commerce and calls more attention to this new phenomenon.", "title": "" }, { "docid": "51b7cf820e3a46b5daeee6eb83058077", "text": "Previous taxonomies of software change have focused on the purpose of the change (i.e., the why) rather than the underlying mechanisms. This paper proposes a taxonomy of software change based on characterizing the mechanisms of change and the factors that influence these mechanisms. The ultimate goal of this taxonomy is to provide a framework that positions concrete tools, formalisms and methods within the domain of software evolution. Such a framework would considerably ease comparison between the various mechanisms of change. It would also allow practitioners to identify and evaluate the relevant tools, methods and formalisms for a particular change scenario. As an initial step towards this taxonomy, the paper presents a framework that can be used to characterize software change support tools and to identify the factors that impact on the use of these tools. The framework is evaluated by applying it to three different change support tools and by comparing these tools based on this analysis. Copyright c © 2005 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "a2217cd5f5e6b54ad0329a8703204ccb", "text": "Knowledge bases are useful resources for many natural language processing tasks, however, they are far from complete. In this paper, we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique on TransE—a well-known embedding model for knowledge base completion. Experimental results show that the neighborhood information significantly helps to improve the results of the TransE, leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification, entity prediction and relation prediction tasks.", "title": "" }, { "docid": "51620ef906b7fc5774e051fb3261d611", "text": "Named Entity Recognition (NER) plays an important role in a variety of online information management tasks including text categorization, document clustering, and faceted search. While recent NER systems can achieve near-human performance on certain documents like news articles, they still remain highly domain-specific and thus cannot effectively identify entities such as original technical concepts in scientific documents. In this work, we propose novel approaches for NER on distinctive document collections (such as scientific articles) based on n-grams inspection and classification. We design and evaluate several entity recognition features---ranging from well-known part-of-speech tags to n-gram co-location statistics and decision trees---to classify candidates. 
In addition, we show how the use of external knowledge bases (either specific like DBLP or generic like DBPedia) can be leveraged to improve the effectiveness of NER for idiosyncratic collections. We evaluate our system on two test collections created from a set of Computer Science and Physics papers and compare it against state-of-the-art supervised methods. Experimental results show that a careful combination of the features we propose yield up to 85% NER accuracy over scientific collections and substantially outperforms state-of-the-art approaches such as those based on maximum entropy.", "title": "" } ]
scidocsrr
56beee87369e0bc0ec90e0ea9be29c56
Pulse: Mining Customer Opinions from Free Text
[ { "docid": "3ac2f2916614a4e8f6afa1c31d9f704d", "text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.", "title": "" }, { "docid": "dd4cfd8973d837b3182deeeb5801d2c0", "text": "We examine methods for clustering in high dimensions. In the first part of the paper, we perform an experimental comparison between three batch clustering algorithms: the Expectation–Maximization (EM) algorithm, a “winner take all” version of the EM algorithm reminiscent of the K-means algorithm, and model-based hierarchical agglomerative clustering. We learn naive-Bayes models with a hidden root node, using high-dimensional discrete-variable data sets (both real and synthetic). We find that the EM algorithm significantly outperforms the other methods, and proceed to investigate the effect of various initialization schemes on the final solution produced by the EM algorithm. The initializations that we consider are (1) parameters sampled from an uninformative prior, (2) random perturbations of the marginal distribution of the data, and (3) the output of hierarchical agglomerative clustering. Although the methods are substantially different, they lead to learned models that are strikingly similar in quality.", "title": "" } ]
[ { "docid": "e602ab2a2d93a8912869ae8af0925299", "text": "Software-based MMU emulation lies at the heart of outof-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for realtime monitoring, incident response and high-intensity introspection.", "title": "" }, { "docid": "da86c72fff98d51d4d78ece7516664fe", "text": "OBJECTIVE\nThe purpose of this study was to establish an Indian reference for normal fetal nasal bone length at 16-26 weeks of gestation.\n\n\nMETHODS\nThe fetal nasal bone was measured by ultrasound in 2,962 pregnant women at 16-26 weeks of gestation from 2004 to 2009 by a single operator, who performed three measurements for each woman when the fetus was in the midsagittal plane and the nasal bone was between a 45 and 135° angle to the ultrasound beam. All neonates were examined after delivery to confirm the absence of congenital abnormalities.\n\n\nRESULTS\nThe median nasal bone length increased with gestational age from 3.3 mm at 16 weeks to 6.65 mm at 26 weeks in a linear relationship. The fifth percentile nasal bone lengths were 2.37, 2.4, 2.8, 3.5, 3.6, 3.9, 4.3, 4.6, 4.68, 4.54, and 4.91 mm at 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, and 26 weeks, respectively.\n\n\nCONCLUSIONS\nWe have established the nasal bone length in South Indian fetuses at 16-26 weeks of gestation and there is progressive increase in the fifth percentile of nasal bone length with advancing gestational age. Hence, gestational age should be considered while defining hypoplasia of the nasal bone.", "title": "" }, { "docid": "1b54171e55063045779bc1be111c37ae", "text": "The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little interuser interference. Since these results rely on asymptotics, it is important to investigate whether the conventional system models are reasonable in this asymptotic regime. This paper considers a new system model that incorporates general transceiver hardware impairments at both the BSs (equipped with large antenna arrays) and the single-antenna user equipments (UEs). As opposed to the conventional case of ideal hardware, we show that hardware impairments create finite ceilings on the channel estimation accuracy and on the downlink/uplink capacity of each UE. 
Surprisingly, the capacity is mainly limited by the hardware at the UE, while the impact of impairments in the large-scale arrays vanishes asymptotically and interuser interference (in particular, pilot contamination) becomes negligible. Furthermore, we prove that the huge degrees of freedom offered by massive MIMO can be used to reduce the transmit power and/or to tolerate larger hardware impairments, which allows for the use of inexpensive and energy-efficient antenna elements.", "title": "" }, { "docid": "4818794eddc8af63fd99b000bd00736a", "text": "Dysproteinemia is characterized by the overproduction of an Ig by clonal expansion of cells from the B cell lineage. The resultant monoclonal protein can be composed of the entire Ig or its components. Monoclonal proteins are increasingly recognized as a contributor to kidney disease. They can cause injury in all areas of the kidney, including the glomerular, tubular, and vascular compartments. In the glomerulus, the major mechanism of injury is deposition. Examples of this include Ig amyloidosis, monoclonal Ig deposition disease, immunotactoid glomerulopathy, and cryoglobulinemic GN specifically from types 1 and 2 cryoglobulins. Mechanisms that do not involve Ig deposition include the activation of the complement system, which causes complement deposition in C3 glomerulopathy, and cytokines/growth factors as seen in thrombotic microangiopathy and precipitation, which is involved with cryoglobulinemia. It is important to recognize that nephrotoxic monoclonal proteins can be produced by clones from any of the B cell lineages and that a malignant state is not required for the development of kidney disease. The nephrotoxic clones that do not meet requirement for a malignant condition are now called monoclonal gammopathy of renal significance. Whether it is a malignancy or monoclonal gammopathy of renal significance, preservation of renal function requires substantial reduction of the monoclonal protein. With better understanding of the pathogenesis, clone-directed strategies, such as rituximab against CD20 expressing B cell and bortezomib against plasma cell clones, have been used in the treatment of these diseases. These clone-directed therapies been found to be more effective than immunosuppressive regimens used in nonmonoclonal protein-related kidney diseases.", "title": "" }, { "docid": "b1b842bed367be06c67952c34921f6f6", "text": "Definitions and uses of the concept of empowerment are wide-ranging: the term has been used to describe the essence of human existence and development, but also aspects of organizational effectiveness and quality. The empowerment ideology is rooted in social action where empowerment was associated with community interests and with attempts to increase the power and influence of oppressed groups (such as workers, women and ethnic minorities). Later, there was also growing recognition of the importance of the individual's characteristics and actions. Based on a review of the literature, this paper explores the uses of the empowerment concept as a framework for nurses' professional growth and development. Given the complexity of the concept, it is vital to understand the underlying philosophy before moving on to define its substance. The articles reviewed were classified into three groups on the basis of their theoretical orientation: critical social theory, organization theory and social psychological theory. 
Empowerment seems likely to provide for an umbrella concept of professional development in nursing.", "title": "" }, { "docid": "d97af6f656cba4018a5d367861a07f01", "text": "Traditional Cloud model is not designed to handle latency-sensitive Internet of Things applications. The new trend consists on moving data to be processed close to where it was generated. To this end, Fog Computing paradigm suggests using the compute and storage power of network elements. In such environments, intelligent and scalable orchestration of thousands of heterogeneous devices in complex environments is critical for IoT Service providers. In this vision paper, we present a framework, called Foggy, that facilitates dynamic resource provisioning and automated application deployment in Fog Computing architectures. We analyze several applications and identify their requirements that need to be taken intoconsideration in our design of the Foggy framework. We implemented a proof of concept of a simple IoT application continuous deployment using Raspberry Pi boards.", "title": "" }, { "docid": "a0b07468cb8904654a80d0dcab6dced5", "text": "Malicious JavaScript code in webpages on the Internet is an emergent security issue because of its universality and potentially severe impact. Because of its obfuscation and complexities, detecting it has a considerable cost. Over the last few years, several machine learning-based detection approaches have been proposed; most of them use shallow discriminating models with features that are constructed with artificial rules. However, with the advent of the big data era for information transmission, these existing methods already cannot satisfy actual needs. In this paper, we present a new deep learning framework for detection of malicious JavaScript code, from which we obtained the highest detection accuracy compared with the control group. The architecture is composed of a sparse random projection, deep learning model, and logistic regression. Stacked denoising auto-encoders were used to extract high-level features from JavaScript code; logistic regression as a classifier was used to distinguish between malicious and benign JavaScript code. Experimental results indicated that our architecture, with over 27 000 labeled samples, can achieve an accuracy of up to 95%, with a false positive rate less than 4.2% in the best case. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "c99fd51e8577a5300389c565aebebdb3", "text": "Face Detection and Recognition is an important area in the field of substantiation. Maintenance of records of students along with monitoring of class attendance is an area of administration that requires significant amount of time and efforts for management. Automated Attendance Management System performs the daily activities of attendance analysis, for which face recognition is an important aspect. The prevalent techniques and methodologies for detecting and recognizing faces by using feature extraction tools like mean, standard deviation etc fail to overcome issues such as scaling, pose, illumination, variations. The proposed system provides features such as detection of faces, extraction of the features, detection of extracted features, and analysis of student’s attendance. The proposed system integrates techniques such as Principal Component Analysis (PCA) for feature extraction and voila-jones for face detection &Euclidian distance classifier. 
Faces are recognized using PCA, using the database that contains images of students and is used to recognize student using the captured image. Better accuracy is attained in results and the system takes into account the changes that occurs in the face over the period of time.", "title": "" }, { "docid": "091eedcd69373f99419a745f2215e345", "text": "Society is increasingly reliant upon complex and interconnected cyber systems to conduct daily life activities. From personal finance to managing defense capabilities to controlling a vast web of aircraft traffic, digitized information systems and software packages have become integrated at virtually all levels of individual and collective activity. While such integration has been met with immense increases in efficiency of service delivery, it has also been subject to a diverse body of threats from nefarious hackers, groups, and even state government bodies. Such cyber threats have shifted over time to affect various cyber functionalities, such as with Direct Denial of Service (DDoS), data theft, changes to data code, infection via computer virus, and many others.", "title": "" }, { "docid": "fa32b22333e3a4c66867969a76f95917", "text": "Recognition of texts in scenes is one of the most important tasks in many computer vision applications. Though different scene text recognition techniques have been developed, scene text recognition under a generic condition is still a very open and challenging research problem. One major factor that defers the advance in this research area is character touching, where many characters in scene images are heavily touched with each other and cannot be segmented for recognition. In this paper, we proposed a novel scene text recognition technique that performs word level recognition without character segmentation. Our proposed technique has three advantages. First it converts each word image into a sequential signal for the scene text recognition. Second, it adapts the recurrent neural network (RNN) with Long Short TermMemory (LSTM), the technique that has been widely used for handwriting recognition in recent years. Third, by integrating multiple RNNs, an accurate recognition system is developed which is capable of recognizing scene texts including those heavily touched ones without character segmentation. Extensive experiments have been conducted over a number of datasets including several ICDAR Robust Reading datasets and Google Street View dataset. Experiments show that the proposed technique is capable of recognizing texts in scenes accurately.", "title": "" }, { "docid": "cb88333d7c90df778361318dd362e9cb", "text": "1. All other texts on the mathematics of language are now obsolete. Therefore, instead of going on about what a wonderful job Partee, ter Meulen, and Wall (henceforth, PMW) have done in some ways (breadth of coverage, much better presentation of formal semantics than is usual in books on mathematics of language, etc.), I will leave the lily ungilded, and focus on some points where the book under review could be made far better than it actually is. 2. Perhaps my main complaint concerns the treatment of the connections between the mathematical methods and the linguistics. This whole question is dealt with rather unevenly, and this is reflected in the very structure of the book. 
The major topics covered, corresponding to the book's division into parts (which are then subdivided into chapters) are set theory, logic and formal systems, algebra, \"English as a formal language\" (this is the heading under which compositionality, lambda-abstraction, generalized quantifiers, and intensionality are discussed), and finally formal language and automata theory. Now, the \"English as a formal language\" part deals with a Montague-style treatment of this language, but it does not go into contemporary syntactic analyses of English, not even ones that are mathematically precise and firmly grounded in formal language theory. Having praised the book for its detailed discussion of the uses of formal semantics in linguistics, I must damn its cavalier treatment of the uses of formal syntax. Thus, there is no mention anywhere in it of generalized phrase structure grammar or X-bar syntax or almost anything else of relevance to modern syntactic theory. Likewise, although the section on set theory deals at some length with nondenumerable sets, there is no mention of the argument of Langendoen and Postal (1984) that NLs are not denumerable. Since this is perhaps the one place in the literature where set theory and linguistics meet, one does not have to be a fan of Langendoen and Postal to see that this topic should be broached. 3. Certain important theoretical topics, usually ones at the interface of mathematics and linguistics, are presented sketchily and even misleadingly; for example, the compositionality of formal semantics, the generative power of transformational grammar, the nonregularity and noncontext freeness of NLs, and (more generally) the question of what kinds of objects one can prove things about. Let us begin with the principle of compositionality (i.e., that \"the meaning of a complex expression is a function of the meanings of its parts and of the syntactic rules by which they are combined\"). PMW claim that \"construed broadly and vaguely", "title": "" }, { "docid": "dd9f6ef9eafdef8b29c566bcea8ded57", "text": "A recent trend in saliency algorithm development is large-scale benchmarking and algorithm ranking with ground truth provided by datasets of human fixations. In order to accommodate the strong bias humans have toward central fixations, it is common to replace traditional ROC metrics with a shuffled ROC metric which uses randomly sampled fixations from other images in the database as the negative set. However, the shuffled ROC introduces a number of problematic elements, including a fundamental assumption that it is possible to separate visual salience and image spatial arrangement. We argue that it is more informative to directly measure the effect of spatial bias on algorithm performance rather than try to correct for it. To capture and quantify these known sources of bias, we propose a novel metric for measuring saliency algorithm performance: the spatially binned ROC (spROC). This metric provides direct in-sight into the spatial biases of a saliency algorithm without sacrificing the intuitive raw performance evaluation of traditional ROC measurements. By quantitatively measuring the bias in saliency algorithms, researchers will be better equipped to select and optimize the most appropriate algorithm for a given task. 
We use a baseline measure of inherent algorithm bias to show that Adaptive Whitening Saliency (AWS) [14], Attention by Information Maximization (AIM) [8], and Dynamic Visual Attention (DVA) [20] provide the least spatially biased results, suiting them for tasks in which there is no information about the underlying spatial bias of the stimuli, whereas algorithms such as Graph Based Visual Saliency (GBVS) [18] and Context-Aware Saliency (CAS) [15] have a significant inherent central bias.", "title": "" }, { "docid": "7820bdca9f8623b6f983ee1e89b5a7fa", "text": "Blockchain is receiving increasing attention from academy and industry, since it is considered a breakthrough technology that could bring huge benefits to many different sectors. In 2017, Gartner positioned blockchain close to the peak of inflated expectations, acknowledging the enthusiasm for this technology that is now largely discussed by media. In this scenario, the risk to adopt it in the wake of enthusiasm, without objectively judging its actual added value is rather high. Insurance is one the sectors that, among others, started to carefully investigate the possibilities of blockchain. For this specific sector, however, the hype cycle shows that the technology is still in the innovation trigger phase, meaning that the spectrum of possible applications has not been fully explored yet. Insurers, as with many other companies not necessarily active only in the financial sector, are currently requested to make a hard decision, that is, whether to adopt blockchain or not, and they will only know if they were right in 3–5 years. The objective of this paper is to support actors involved in this decision process by illustrating what a blockchain is, analyzing its advantages and disadvantages, as well as discussing several use cases taken from the insurance sector, which could easily be extended to other domains.", "title": "" }, { "docid": "1dfd962aab338894bbd1af8c7dd8fd7e", "text": "A variety of congenital syndromes affecting the face occur due to defects involving the first and second BAs. Radiographic evaluation of craniofacial deformities is necessary to define aberrant anatomy, plan surgical procedures, and evaluate the effects of craniofacial growth and surgical reconstructions. High-resolution CT has proved vital in determining the nature and extent of these syndromes. The radiologic evaluation of syndromes of the first and second BA should begin first by studying a series of isolated defects (cleft lip with or without CP, micrognathia, and EAC atresia) that compose the major features of these syndromes and allow a more specific diagnosis. After discussion of these defects and the associated embryology, we discuss PRS, HFM, ACS, TCS, Stickler syndrome, and VCFS.", "title": "" }, { "docid": "7e1df3fd563009c356c8a1620b96a232", "text": "This research investigates the large hype surrounding big data (BD) and Analytics (BDA) in both academia and the business world. Initial insights pointed to large and complex amalgamations of different fields, techniques and tools. Above all, BD as a research field and as a business tool found to be under developing and is fraught with many challenges. The intention here in this research is to develop an adoption model of BD that could detect key success predictors. The research finds a great interest and optimism about BD value that fueled this current buzz behind this novel phenomenon. 
Like any disruptive innovation, its assimilation in organizations oppressed with many challenges at various contextual levels. BD would provide different advantages to organizations that would seriously consider all its perspectives alongside its lifecycle in the pre-adoption or adoption or implementation phases. The research attempts to delineate the different facets of BD as a technology and as a management tool highlighting different contributions, implications and recommendations. This is of great interest to researchers, professional and policy makers.", "title": "" }, { "docid": "6d44c4244064634deda30a5059acd87e", "text": "Currently, gene sequence genealogies of the Oligotrichea Bütschli, 1889 comprise only few species. Therefore, a cladistic approach, especially to the Oligotrichida, was made, applying Hennig's method and computer programs. Twenty-three characters were selected and discussed, i.e., the morphology of the oral apparatus (five characters), the somatic ciliature (eight characters), special organelles (four characters), and ontogenetic particulars (six characters). Nine of these characters developed convergently twice. Although several new features were included into the analyses, the cladograms match other morphological trees in the monophyly of the Oligotrichea, Halteriia, Oligotrichia, Oligotrichida, and Choreotrichida. The main synapomorphies of the Oligotrichea are the enantiotropic division mode and the de novo-origin of the undulating membranes. Although the sister group relationship of the Halteriia and the Oligotrichia contradicts results obtained by gene sequence analyses, no morphologic, ontogenetic or ultrastructural features were found, which support a branching of Halteria grandinella within the Stichotrichida. The cladistic approaches suggest paraphyly of the family Strombidiidae probably due to the scarce knowledge. A revised classification of the Oligotrichea is suggested, including all sufficiently known families and genera.", "title": "" }, { "docid": "d0af022e7d013b887b4d18df9d5f08f8", "text": "UNLABELLED\nThe epidermal nevus syndromes represent a group of distinct disorders that can be distinguished by the type of associated epidermal nevus and by the criterion of presence or absence of heritability. Well defined syndromes characterized by organoid epidermal nevi include Schimmelpenning syndrome, phacomatosis pigmentokeratotica, nevus comedonicus syndrome, angora hair nevus syndrome, and Becker nevus syndrome. The molecular basis of these disorders has so far not been identified. By contrast, the group of syndromes characterized by keratinocytic nevi comprises three phenotypes with a known molecular etiology in the form of CHILD (congenital hemidysplasia with ichthyosiform nevus and limb defects) syndrome, type 2 segmental Cowden disease, and fibroblast growth factor receptor 3 epidermal nevus syndrome (García-Hafner-Happle syndrome), whereas Proteus syndrome is still of unknown origin. 
From this overview, it is clear that a specific type of these disorders cannot be classified by the name \"epidermal nevus syndrome\" nor by the terms \"organoid nevus syndrome\" or \"keratinocytic nevus syndrome.\"\n\n\nLEARNING OBJECTIVES\nAfter completing this learning activity, participants should be able to distinguish nine different epidermal nevus syndromes by their characteristic features, understand the practical significance of avoiding terms like \"epidermal nevus syndrome\" or \"keratinocytic nevus syndrome\" to define any specific entity within this group of disorders, and differentiate between nonhereditary traits and those bearing a genetic risk because of either Mendelian or non-Mendelian inheritance.", "title": "" }, { "docid": "a10b0a69ba7d3f902590b35cf0d5ea32", "text": "This article distills insights from historical, sociological, and psychological perspectives on marriage to develop the suffocation model of marriage in America. According to this model, contemporary Americans are asking their marriage to help them fulfill different sets of goals than in the past. Whereas they ask their marriage to help them fulfill their physiological and safety needs much less than in the past, they ask it to help them fulfill their esteem and self-actualization needs much more than in the past. Asking the marriage to help them fulfill the latter, higher level needs typically requires sufficient investment of time and psychological resources to ensure that the two spouses develop a deep bond and profound insight into each other’s essential qualities. Although some spouses are investing sufficient resources—and reaping the marital and psychological benefits of doing so—most are not. Indeed, they are, on average, investing less than in the past. As a result, mean levels of marital quality and personal well-being are declining over time. According to the suffocation model, spouses who are struggling with an imbalance between what they are asking from their marriage and what they are investing in it have several promising options for corrective action: intervening to optimize their available resources, increasing their investment of resources in the marriage, and asking less of the marriage in terms of facilitating the fulfillment of spouses’ higher needs. Discussion explores the implications of the suffocation model for understanding dating and courtship, sociodemographic variation, and marriage beyond American’s borders.", "title": "" }, { "docid": "9dea3143e6ceac6acbb909e302744ba3", "text": "Biometric identification has numerous advantages over conventional ID and password systems; however, the lack of anonymity and revocability of biometric templates is of concern. Several methods have been proposed to address these problems. Many of the approaches require a precise registration before matching in the anonymous domain. We introduce binary string representations of fingerprints that obviates the need for registration and can be directly matched. We describe several techniques for creating anonymous and revocable representations using these binary string representations. The match performance of these representations is evaluated using a large database of fingerprint images. We prove that given an anonymous representation, it is computationally infeasible to invert it to the original fingerprint, thereby preserving privacy. 
To the best of our knowledge, this is the first linear, anonymous and revocable fingerprint representation that is implicitly registered.", "title": "" }, { "docid": "e195823e6be49f6c2f3b3f3a2a7977c9", "text": "Bitcoin has not only attracted many users but also been considered as a technical breakthrough by academia. However, the expanding potential of Bitcoin is largely untapped due to its limited throughput. The Bitcoin community is now facing its biggest crisis in history as the community splits on how to increase the throughput. Among various proposals, Bitcoin Unlimited recently became the most popular candidate, as it allows miners to collectively decide the block size limit according to the real network capacity. However, the security of BU is heatedly debated and no consensus has been reached as the issue is discussed in different miner incentive models. In this paper, we systematically evaluate BU's security with three incentive models via testing the two major arguments of BU supporters: the block validity consensus is not necessary for BU's security; such consensus would emerge in BU out of economic incentives. Our results invalidate both arguments and therefore disprove BU's security claims. Our paper further contributes to the field by addressing the necessity of a prescribed block validity consensus for cryptocurrencies.", "title": "" } ]
scidocsrr
e8ba6b9593a3b1e5ae6d6e10a2c5116a
Fast Exact Search in Hamming Space With Multi-Index Hashing
[ { "docid": "98cc792a4fdc23819c877634489d7298", "text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.", "title": "" }, { "docid": "51da24a6bdd2b42c68c4465624d2c344", "text": "Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most of the hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially. The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over the state-ofthe-art methods on two large datasets containing up to 1 million points.", "title": "" } ]
[ { "docid": "218bb1cf213a84f758f222a96ee19fd1", "text": "The cytokinesis-block micronucleus cytome (CBMN Cyt) assay is one of the best-validated methods for measuring chromosome damage in human lymphocytes. This paper describes the methodology, biology, and mechanisms underlying the application of this technique for biodosimetry following exposure to ionizing radiation. Apart from the measurement of micronuclei, it is also possible to measure other important biomarkers within the CBMN Cyt assay that are relevant to radiation biodosimetry. These include nucleoplasmic bridges, which are an important additional measure of radiation-induced damage that originate from dicentric chromosomes as well as the proportion of dividing cells and cells undergoing cell death. A brief account is also given of current developments in the automation of this technique and important knowledge gaps that need attention to further enhance the applicability of this important method for radiation biodosimetry.", "title": "" }, { "docid": "64acb2d16c23f2f26140c0bce1785c9b", "text": "Physical forces of gravity, hemodynamic stresses, and movement play a critical role in tissue development. Yet, little is known about how cells convert these mechanical signals into a chemical response. This review attempts to place the potential molecular mediators of mechanotransduction (e.g. stretch-sensitive ion channels, signaling molecules, cytoskeleton, integrins) within the context of the structural complexity of living cells. The model presented relies on recent experimental findings, which suggests that cells use tensegrity architecture for their organization. Tensegrity predicts that cells are hard-wired to respond immediately to mechanical stresses transmitted over cell surface receptors that physically couple the cytoskeleton to extracellular matrix (e.g. integrins) or to other cells (cadherins, selectins, CAMs). Many signal transducing molecules that are activated by cell binding to growth factors and extracellular matrix associate with cytoskeletal scaffolds within focal adhesion complexes. Mechanical signals, therefore, may be integrated with other environmental signals and transduced into a biochemical response through force-dependent changes in scaffold geometry or molecular mechanics. Tensegrity also provides a mechanism to focus mechanical energy on molecular transducers and to orchestrate and tune the cellular response.", "title": "" }, { "docid": "3cf289ec7d0740dbf59a5c738c68d4a9", "text": "Feminism is a natural ally to interaction design, due to its central commitments to issues such as agency, fulfillment, identity, equity, empowerment, and social justice. In this paper, I summarize the state of the art of feminism in HCI and propose ways to build on existing successes to more robustly integrate feminism into interaction design research and practice. I explore the productive role of feminism in analogous fields, such as industrial design, architecture, and game design. I introduce examples of feminist interaction design already in the field. Finally, I propose a set of femi-nist interaction design qualities intended to support design and evaluation processes directly as they unfold.", "title": "" }, { "docid": "dde4e45fd477808d40b3b06599d361ff", "text": "In this paper, we present the basic features of the flight control of the SkySails towing kite system. 
After introducing the coordinate definitions and the basic system dynamics, we introduce a novel model used for controller design and justify its main dynamics with results from system identification based on numerous sea trials. We then present the controller design, which we successfully use for operational flights for several years. Finally, we explain the generation of dynamical flight patterns.", "title": "" }, { "docid": "7ff79a0701051f653257aefa2c3ba154", "text": "As antivirus and network intrusion detection systems have increasingly proven insufficient to detect advanced threats, large security operations centers have moved to deploy endpoint-based sensors that provide deeper visibility into low-level events across their enterprises. Unfortunately, for many organizations in government and industry, the installation, maintenance, and resource requirements of these newer solutions pose barriers to adoption and are perceived as risks to organizations' missions. To mitigate this problem we investigated the utility of agentless detection of malicious endpoint behavior, using only the standard built-in Windows audit logging facility as our signal. We found that Windows audit logs, while emitting manageable sized data streams on the endpoints, provide enough information to allow robust detection of malicious behavior. Audit logs provide an effective, low-cost alternative to deploying additional expensive agent-based breach detection systems in many government and industrial settings, and can be used to detect, in our tests, 83% percent of malware samples with a 0.1% false positive rate. They can also supplement already existing host signature-based antivirus solutions, like Kaspersky, Symantec, and McAfee, detecting, in our testing environment, 78% of malware missed by those antivirus systems.", "title": "" }, { "docid": "813a45c7cae19fcd548a8b95a670d65a", "text": "In this paper, conical monopole type UWB antenna which suppress dual bands is proposed. The SSRs were arranged in such a way that the interaction of the magnetic field with them enables the UWB antenna to reject the dual bands using the resonance of SRRs. The proposed conical monopole antenna has a return loss less than -10dB and antenna gain greater than 5dB at 2GHz~11GHz frequency band, except the suppressed bands. The return loss and gain at WiMAX and WLAN bands is greater than -3dB and less than 0dB respectively.", "title": "" }, { "docid": "57201ebbcb18929e9b223144b4cd04d0", "text": "Most recent ad hoc network research has focused on providing routing services without considering security. In this paper, we detail security threats against ad hoc routing protocols, specifically examining AODV and DSR. In light of these threats, we identify three different environments with distinct security requirements. We propose a solution to one, the managed-open scenario where no network infrastructure is pre-deployed, but a small amount of prior security coordination is expected. Our protocol, ARAN, is based on certificates and successfully defeats all identified attacks.", "title": "" }, { "docid": "a917a0ed4f9082766aeef29cb82eeb27", "text": "Roles represent node-level connectivity patterns such as star-center, star-edge nodes, near-cliques or nodes that act as bridges to different regions of the graph. Intuitively, two nodes belong to the same role if they are structurally similar. Roles have been mainly of interest to sociologists, but more recently, roles have become increasingly useful in other domains. 
Traditionally, the notion of roles were defined based on graph equivalences such as structural, regular, and stochastic equivalences. We briefly revisit these early notions and instead propose a more general formulation of roles based on the similarity of a feature representation (in contrast to the graph representation). This leads us to propose a taxonomy of three general classes of techniques for discovering roles that includes (i) graph-based roles, (ii) feature-based roles, and (iii) hybrid roles. We also propose a flexible framework for discovering roles using the notion of similarity on a feature-based representation. The framework consists of two fundamental components: (a) role feature construction and (b) role assignment using the learned feature representation. We discuss the different possibilities for discovering feature-based roles and the tradeoffs of the many techniques for computing them. Finally, we discuss potential applications and future directions and challenges.", "title": "" }, { "docid": "d8dd68593fd7bd4bdc868634deb9661a", "text": "We present a low-cost IoT based system able to monitor acoustic, olfactory, visual and thermal comfort levels. The system is provided with different ambient sensors, computing, control and connectivity features. The integration of the device with a smartwatch makes it possible the analysis of the personal comfort parameters.", "title": "" }, { "docid": "7c82a4aa866d57dd6f592d848f727cff", "text": "A novel printed diversity monopole antenna is presented for WiFi/WiMAX applications. The antenna comprises two crescent shaped radiators placed symmetrically with respect to a defected ground plane and a neutralization lines is connected between them to achieve good impedance matching and low mutual coupling. Theoretical and experimental characteristics are illustrated for this antenna, which achieves an impedance bandwidth of 54.5% (over 2.4-4.2 GHz), with a reflection coefficient <;-10 dB and mutual coupling <;-17 dB. An acceptable agreement is obtained for the computed and measured gain, radiation patterns, envelope correlation coefficient, and channel capacity loss. These characteristics demonstrate that the proposed antenna is an attractive candidate for multiple-input multiple-output portable or mobile devices.", "title": "" }, { "docid": "1f30c936ce02f020c814177dbf4df2ec", "text": "Ever since the establishment of cell theory in the early 19th century, which recognized the cell as the fundamental building unit of life, biologists have sought to explain the underlying principles. Momentous discoveries were made over the course of many decades of research [1], but the quest to attain full understanding of cellular mechanisms and how to manipulate them to improve health continues to the present day, with bigger budgets, more minds, and more sophisticated tools than ever before. One of the tools to which a great deal of the progress in cell biology can be attributed is light microscopy [2]. The field has come a long way since Antoni van Leeuwenhoeks first steps in the 1670s toward improving and exploiting microscopic imaging for studying life at the cellular level. Not only do biologists today have a plethora of different, complementary microscopic imaging techniques at their disposal that enable them to visualize phenomena even way below the classical diffraction limit of light, advanced microscope systems also allow them to easily acquire very large numbers of images within just a matter of hours. 
The abundance, heterogeneity, dimensionality, and complexity of the data generated in modern imaging experiments rule out manual image management, processing, and analysis. Consequently, computerized techniques for performing these tasks have become of key importance for further progress in cell biology [3][6]. A central problem in many studies, and often regarded as the cornerstone of image analysis, is image segmentation. Specifically, since cellular morphology is an important phenotypic feature that is indicative of the physiological state of a cell, and since the cell contour is often required for subsequent analysis of intracellular processes (zooming in to nanoscale), or of cell sociology (zooming out to millimeter scale), the problem of cell segmentation has received increasing attention in past years [7]. Here we reflect on how the field has evolved over the years and how past developments can be expected to extrapolate into the future.", "title": "" }, { "docid": "0e3eaf955aa6d0199b4cee08198b6ae0", "text": "Relevance Feedback has proven very effective for improving retrieval accuracy. A difficult yet important problem in all relevance feedback methods is how to optimally balance the original query and feedback information. In the current feedback methods, the balance parameter is usually set to a fixed value across all the queries and collections. However, due to the difference in queries and feedback documents, this balance parameter should be optimized for each query and each set of feedback documents.\n In this paper, we present a learning approach to adaptively predict the optimal balance coefficient for each query and each collection. We propose three heuristics to characterize the balance between query and feedback information. Taking these three heuristics as a road map, we explore a number of features and combine them using a regression approach to predict the balance coefficient. Our experiments show that the proposed adaptive relevance feedback is more robust and effective than the regular fixed-coefficient feedback.", "title": "" }, { "docid": "5d6bd34fb5fdb44950ec5d98e77219c3", "text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. 
Longer durations were perceived as being irritating.", "title": "" }, { "docid": "dd8c61b00519117ec153b3938f4c6e69", "text": "The characteristics of athletic shoes have been described with terms like cushioning, stability, and guidance.1,2 Despite many years of effort to optimize athletic shoe construction, the prevalence of running-related lower extremity injuries has not significantly declined; however, athletic performance has reached new heights.3-5 Criteria for optimal athletic shoe construction have been proposed, but no clear consensus has emerged.6-8 Given the unique demands of various sports, sportspecific shoe designs may simultaneously increase performance and decrease injury incidence.9-11 The purpose of this report is to provide an overview of current concepts in athletic shoe design, with emphasis on running shoes, so that athletic trainers and therapists (ATs) can assist their patients in selection of an appropriate shoe design.", "title": "" }, { "docid": "6e22b591075d1344ae34716854d96272", "text": "This paper demonstrates a new structure of dual band microstrip bandpass filter (BPF) by cascading an interdigital structure (IDS) and a hairpin line structure. The use of IDS improves the quality factor of the proposed filter. The size of the filter is very small and it is very compact and simple to design. To reduce size of the proposed filter there is no use of via or defected ground structure which makes its fabrication easier and cost effective. The first band of filter covers 2.4GHz, 2.5GHz and 3.5GHz and second band covers 5.8GHz of WLAN/WiMAX standards with good insertion loss. The proposed filter is designed on FR4 with dielectric constant of 4.4 and of thickness 1.6mm. Performance of proposed filter is compared with previously reported filters and found better with reduced size.", "title": "" }, { "docid": "6ebd75996b8a652720b23254c9d77be4", "text": "This paper focuses on a biometric cryptosystem implementation and evaluation based on a number of fingerprint texture descriptors. The texture descriptors, namely, the Gabor filter-based FingerCode, a local binary pattern (LBP), and a local direction pattern (LDP), and their various combinations are considered. These fingerprint texture descriptors are binarized using a biometric discretization method and used in a fuzzy commitment scheme (FCS). We constructed the biometric cryptosystems, which achieve a good performance, by fusing discretized fingerprint texture descriptors and using effective error-correcting codes. We tested the proposed system on a FVC2000 DB2a fingerprint database, and the results demonstrate that the new system significantly improves the performance of the FCS for texture-based", "title": "" }, { "docid": "a5e03e76925c838cfdfc328552c9e901", "text": "OBJECTIVE\nIn this article, we describe some of the cognitive and system-based sources of detection and interpretation errors in diagnostic radiology and discuss potential approaches to help reduce misdiagnoses.\n\n\nCONCLUSION\nEvery radiologist worries about missing a diagnosis or giving a false-positive reading. The retrospective error rate among radiologic examinations is approximately 30%, with real-time errors in daily radiology practice averaging 3-5%. Nearly 75% of all medical malpractice claims against radiologists are related to diagnostic errors. As medical reimbursement trends downward, radiologists attempt to compensate by undertaking additional responsibilities to increase productivity. 
The increased workload, rising quality expectations, cognitive biases, and poor system factors all contribute to diagnostic errors in radiology. Diagnostic errors are underrecognized and underappreciated in radiology practice. This is due to the inability to obtain reliable national estimates of the impact, the difficulty in evaluating effectiveness of potential interventions, and the poor response to systemwide solutions. Most of our clinical work is executed through type 1 processes to minimize cost, anxiety, and delay; however, type 1 processes are also vulnerable to errors. Instead of trying to completely eliminate cognitive shortcuts that serve us well most of the time, becoming aware of common biases and using metacognitive strategies to mitigate the effects have the potential to create sustainable improvement in diagnostic errors.", "title": "" }, { "docid": "452156877885aa1883cb55cb3faefb5f", "text": "The smart grid changes the way how energy and information are exchanged and offers opportunities for incentive-based load balancing. For instance, customers may shift the time of energy consumption of household appliances in exchange for a cheaper energy tariff. This paves the path towards a full range of modular tariffs and dynamic pricing that incorporate the overall grid capacity as well as individual customer demands. This also allows customers to frequently switch within a variety of tariffs from different utility providers based on individual energy consumption and provision forecasts. For automated tariff decisions it is desirable to have a tool that assists in choosing the optimum tariff based on a prediction of individual energy need and production. However, the revelation of individual load patterns for smart grid applications poses severe privacy threats for customers as analyzed in depth in literature. Similarly, accurate and fine-grained regional load forecasts are sensitive business information of utility providers that are not supposed to be released publicly. This paper extends previous work in the domain of privacy-preserving load profile matching where load profiles from utility providers and load profile forecasts from customers are transformed in a distance-preserving embedding in order to find a matching tariff. The embeddings neither reveal individual contributions of customers, nor those of utility providers. Prior work requires a dedicated entity that needs to be trustworthy at least to some extent for determining the matches. In this paper we propose an adaption of this protocol, where we use blockchains and smart contracts for this matching process, instead. Blockchains are gaining widespread adaption in the smart grid domain as a powerful tool for public commitments and accountable calculations. While the use of a decentralized and trust-free blockchain for this protocol comes at the price of some privacy degradation (for which a mitigation is outlined), this drawback is outweighed for it enables verifiability, reliability and transparency. Fabian Knirsch, Andreas Unterweger, Günther Eibl and Dominik Engel Salzburg University of Applied Sciences, Josef Ressel Center for User-Centric Smart Grid Privacy, Security and Control, Urstein Süd 1, 5412 Puch bei Hallein, Austria. e-mail: fabian.knirsch@", "title": "" }, { "docid": "13a06fb1a1bdf0df0043fe10f74443e1", "text": "Coping with the extreme growth of the number of users is one of the main challenges for the future IEEE 802.11 networks. 
The high interference level, along with the conventional standardized carrier sensing approaches, will degrade the network performance. To tackle these challenges, the Dynamic Sensitivity Control (DSC) and the BSS Color scheme are considered in IEEE 802.11ax and IEEE 802.11ah, respectively. The main purpose of these schemes is to enhance the network throughput and improve the spectrum efficiency in dense networks. In this paper, we evaluate the DSC and the BSS Color scheme along with the PARTIAL-AID (PAID) feature introduced in IEEE 802.11ac, in terms of throughput and fairness. We also exploit the performance when the aforementioned techniques are combined. The simulations show a significant gain in total throughput when these techniques are applied.", "title": "" },
{ "docid": "89372ca7a873fdfd5a91386822f64acb", "text": "Multi-class supervised learning systems require the knowledge of the entire range of labels they predict. Often when learnt incrementally, they suffer from catastrophic forgetting. To avoid this, generous leeways have to be made to the philosophy of incremental learning that either forces a part of the machine to not learn, or to retrain the machine again with a selection of the historic data. While these hacks work to various degrees, they do not adhere to the spirit of incremental learning. In this article, we redefine incremental learning with stringent conditions that do not allow for any undesirable relaxations and assumptions. We design a strategy involving generative models and the distillation of dark knowledge as a means of hallucinating data along with appropriate targets from past distributions. We call this technique phantom sampling. We show that phantom sampling helps avoid catastrophic forgetting during incremental learning. Using an implementation based on deep neural networks, we demonstrate that phantom sampling dramatically avoids catastrophic forgetting. We apply these strategies to competitive multi-class incremental learning of deep neural networks. Using various benchmark datasets and through our strategy, we demonstrate that strict incremental learning could be achieved. We further put our strategy to test on challenging cases, including cross-domain increments and incrementing on a novel label space. We also propose a trivial extension to unbounded-continual learning and identify potential for future development.", "title": "" } ]
scidocsrr
676214c29da5bdd2357a5e4907d97623
Learned D-AMP: Principled Neural Network based Compressive Image Recovery
[ { "docid": "b426696d7c1764502706696b0d462a34", "text": "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.", "title": "" } ]
[ { "docid": "b3c203dabe2c19764634fbc3a6717381", "text": "This work complements existing research regarding the forgiveness process by highlighting the role of commitment in motivating forgiveness. On the basis of an interdependence-theoretic analysis, the authors suggest that (a) victims' self-oriented reactions to betrayal are antithetical to forgiveness, favoring impulses such as grudge and vengeance, and (b) forgiveness rests on prorelationship motivation, one cause of which is strong commitment. A priming experiment, a cross-sectional survey study, and an interaction record study revealed evidence of associations (or causal effects) of commitment with forgiveness. The commitment-forgiveness association appeared to rest on intent to persist rather than long-term orientation or psychological attachment. In addition, the commitment-forgiveness association was mediated by cognitive interpretations of betrayal incidents; evidence for mediation by emotional reactions was inconsistent.", "title": "" }, { "docid": "18dfd9865271ae6994d4c9f84ffa49c3", "text": "Clustering is a division of data into groups of similar objects. Each group, called a cluster, consists of objects that are similar between themselves and dissimilar compared to objects of other groups. This paper is intended to study and compare different data clustering algorithms. The algorithms under investigation are: k-means algorithm, hierarchical clustering algorithm, self-organizing maps algorithm, and expectation maximization clustering algorithm. All these algorithms are compared according to the following factors: size of dataset, number of clusters, type of dataset and type of software used. Some conclusions that are extracted belong to the performance, quality, and accuracy of the clustering algorithms.", "title": "" }, { "docid": "bb447bbd4df92339bace55dc5610fbcc", "text": "Fuzz testing has helped security researchers and organizations discover a large number of vulnerabilities. Although it is efficient and widely used in industry, hardly any empirical studies and experience exist on the customization of fuzzers to real industrial projects. In this paper, collaborating with the engineers from Huawei, we present the practice of adapting fuzz testing to a proprietary message middleware named libmsg, which is responsible for the message transfer of the entire distributed system department. We present the main obstacles coming across in applying an efficient fuzzer to libmsg, including system configuration inconsistency, system build complexity, fuzzing driver absence. The solutions for those typical obstacles are also provided. For example, for the most difficult and expensive obstacle of writing fuzzing drivers, we present a low-cost approach by converting existing sample code snippets into fuzzing drivers. After overcoming those obstacles, we can effectively identify software bugs, and report 9 previously unknown vulnerabilities, including flaws that lead to denial of service or system crash.", "title": "" }, { "docid": "840d7c9e0507bac0103f526a4c5d74d7", "text": "http://dx.doi.org/10.1016/j.paid.2014.08.026 0191-8869/ 2014 Elsevier Ltd. All rights reserved. q This study was funded by a seed grant to the first author from the University of Western Sydney. ⇑ Corresponding author. Address: School of Social Sciences and Psychology, University of Western Sydney, Milperra, NSW 2214, Australia. Tel.: +61 (02) 9772 6447; fax: +61 (02) 9772 6757. E-mail address: p.jonason@uws.edu.au (P.K. Jonason). Peter K. 
Jonason a,⇑, Serena Wee , Norman P. Li b", "title": "" }, { "docid": "e6d8e8d04585c60a55ebb8229f06e996", "text": "Cellphones provide a unique opportunity to examine how new media both reflect and affect the social world. This study suggests that people map their understanding of common social rules and dilemmas onto new technologies. Over time, these interactions create and reflect a new social landscape. Based upon a year-long observational field study and in-depth interviews, this article examines cellphone usage from two main perspectives: how social norms of interaction in public spaces change and remain the same; and how cellphones become markers for social relations and reflect tacit pre-existing power relations. Informed by Goffman’s concept of cross talk and Hopper’s caller hegemony, the article analyzes the modifications, innovations and violations of cellphone usage on tacit codes of social interactions.", "title": "" }, { "docid": "4b0cf6392d84a0cc8ab80c6ed4796853", "text": "This paper introduces the Finite-State TurnTaking Machine (FSTTM), a new model to control the turn-taking behavior of conversational agents. Based on a non-deterministic finite-state machine, the FSTTM uses a cost matrix and decision theoretic principles to select a turn-taking action at any time. We show how the model can be applied to the problem of end-of-turn detection. Evaluation results on a deployed spoken dialog system show that the FSTTM provides significantly higher responsiveness than previous approaches.", "title": "" }, { "docid": "b02e71f5a98d17959a564a39aba70c93", "text": "Power dissipation has become a significant concern for integrated circuit design in nanometric CMOS technology. To reduce power consumption, approximate implementations of a circuit have been considered as a potential solution for applications in which strict exactness is not required. In approximate computing, power reduction is achieved through the relaxation of the often demanding requirement of accuracy. In this paper, new approximate adders are proposed for low-power imprecise applications by using logic reduction at the gate level as an approach to relaxing numerical accuracy. Transmission gates are utilized in the designs of two approximate full adders with reduced complexity. A further positive feature of the proposed designs is the reduction of the critical path delay. The approximate adders show advantages in terms of power dissipation over accurate and recently proposed approximate adders. An image processing application is presented using the proposed approximate adders to evaluate the efficiency in power and delay at application level.", "title": "" }, { "docid": "67826169bd43d22679f93108aab267a2", "text": "Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging –this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise –this is the how. 
Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.", "title": "" }, { "docid": "a2d699f3c600743c732b26071639038a", "text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.", "title": "" }, { "docid": "06f9780257311891f54c5d0c03e73c1a", "text": "This essay extends Simon's arguments in the Sciences of the Artificial to a critical examination of how theorizing in Information Technology disciplines should occur. The essay is framed around a number of fundamental questions that relate theorizing in the artificial sciences to the traditions of the philosophy of science. Theorizing in the artificial sciences is contrasted with theorizing in other branches of science and the applicability of the scientific method is questioned. The paper argues that theorizing should be considered in a holistic manner that links two modes of theorizing: an interior mode with the how of artifact construction studied and an exterior mode with the what of existing artifacts studied. Unlike some representations in the design science movement the paper argues that the study of artifacts once constructed can not be passed back uncritically to the methods of traditional science. Seven principles for creating knowledge in IT disciplines are derived: (i) artifact system centrality; (ii) artifact purposefulness; (iii) need for design theory; (iv) induction and abduction in theory building; (v) artifact construction as theory building; (vi) interior and exterior modes for theorizing; and (viii) issues with generality. The implicit claim is that consideration of these principles will improve knowledge creation and theorizing in design disciplines, for both design science researchers and also for researchers using more traditional methods. Further, attention to these principles should lead to the creation of more useful and relevant knowledge.", "title": "" }, { "docid": "6bd9c1eeea3b1ace83840964b8502d71", "text": "Trusted hardware systems, such as Intel's new SGX instruction set architecture extension, aim to provide strong confidentiality and integrity assurances for applications. Recent work, however, raises serious concerns about the vulnerability of such systems to side-channel attacks. 
We propose, formalize, and explore a cryptographic primitive called a Sealed-Glass Proof (SGP) that models computation possible in an isolated execution environment with unbounded leakage, and thus in the face of arbitrary side-channels. A SGP specifically models the capabilities of trusted hardware that can attest to correct execution of a piece of code, but whose execution is transparent, meaning that an application's secrets and state are visible to other processes on the same host. Despite this strong threat model, we show that SGPs enable a range of practical applications. Our key observation is that SGPs permit safe verifiable computing in zero-knowledge, as data leakage results only in the prover learning her own secrets. Among other applications, we describe the implementation of an end-to-end bug bounty (or zero-day solicitation) platform that couples a SGX-based SGP with a smart contract. Our platform enables a marketplace that achieves fair exchange, protects against unfair bounty withdrawals, and resists denial-of-service attacks by dishonest sellers. We also consider a slight relaxation of the SGP model that permits black-box modules instantiating minimal, side-channel resistant primitives, yielding a still broader range of applications. Our work shows how trusted hardware systems such as SGX can support trustworthy applications even in the presence of side channels.", "title": "" }, { "docid": "8b39da92bf7a65b33ccbe1e9d7e7c031", "text": "This paper proposes a sentiment analysis approach for the Arabic language that combines lexicon based and corpus based techniques. The main idea of this approach is to represent the review for the corpus-based approach in the same way it is seen in lexicon-based approach, through replacing the polarity words with their corresponding label Positive 'POS' or Negative 'NEG' in the lexicon, this way the terms that are important but rare can be taken into consideration by the classifier. A comprehensive comparison is conducted using different classifiers, and experimental results showed that the proposed hybrid approach outperforms the corpus-based approach and the highest accuracy reached 96.34% using random forest classifier with 6-fold cross validation.", "title": "" }, { "docid": "aa65dc18169238ef973ef24efb03f918", "text": "A number of national studies point to a trend in which highly selective and elite private and public universities are becoming less accessible to lower-income students. At the same time there have been surprisingly few studies of the actual characteristics and academic experiences of low-income students or comparisons of their undergraduate experience with those of more wealthy students. This paper explores the divide between poor and rich students, first comparing a group of selective US institutions and their number and percentage of Pell Grant recipients and then, using institutional data and results from the University of California Undergraduate Experience Survey (UCUES), presenting an analysis of the high percentage of low-income undergraduate students within the University of California system — who they are, their academic performance and quality of their undergraduate experience. 
Among our conclusions: The University of California has a strikingly higher number of lowincome students when compared to a sample group of twenty-four other selective public and private universities and colleges, including the Ivy Leagues and a sub-group of other California institutions such as Stanford and the University of Southern California. Indeed, the UC campuses of Berkeley, Davis, and UCLA each have more Pell Grant students than all of the eight Ivy League institutions combined. However, one out of three Pell Grant recipients at UC have at least one parent with a four-year college degree, calling into question the assumption that “low-income” and “first-generation” are interchangeable groups of students. Low-income students, and in particular Pell Grant recipients, at UC have only slightly lower GPAs than their more wealthy counterparts in both math, science and engineering, and in humanities and social science fields. Contrary to some previous research, we find that low-income students have generally the same academic and social satisfaction levels; and are similar in their sense of belonging within their campus communities. However, there are some intriguing results across UC campuses, with low-income students somewhat less satisfied at those campuses where there are more affluent student bodies and where lower-income students have a smaller presence. An imbalance between rich and poor is the oldest and most fatal ailment of all republics — Plutarch There has been a growing and renewed concern among scholars of higher education and policymakers about increasing socioeconomic disparities in American society. Not surprisingly, these disparities are increasingly reflected * The SERU Project is a collaborative study based at the Center for Studies in Higher Education at UC Berkeley and focused on developing new types of data and innovative policy relevant scholarly analyses on the academic and civic experience of students at major research universities. For further information on the project, see http://cshe.berkeley.edu/research/seru/ ** John Aubrey Douglass is Senior Research Fellow – Public Policy and Higher Education at the Center for Studies in Higher Education at UC Berkeley and coPI of the SERU Project; Gregg Thomson is Director of the Office of Student Research at UC Berkeley and a co-PI of the SERU Project. We would like to thank David Radwin at OSR and a SERU Project Research Associate for his collaboration with data analysis. Douglass and Thomson: Poor and Rich 2 CSHE Research & Occasional Paper Series in the enrollment of students in the nation’s cadre of highly selective, elite private universities, and increasingly among public universities. Particularly over the past three decades, “brand name” prestige private universities and colleges have moved to a high tuition fee and high financial aid model, with the concept that a significant portion of generated tuition revenue can be redirected toward financial aid for either low-income or merit-based scholarships. With rising costs, declining subsidization by state governments, and the shift of federal financial aid toward loans versus grants in aid, public universities are moving a low fee model toward what is best called a moderate fee and high financial aid model – a model that is essentially evolving. There is increasing evidence, however, that neither the private nor the evolving public tuition and financial aid model is working. 
Students from wealthy families congregate at the most prestigious private and public institutions, with significant variance depending on the state and region of the nation, reflecting the quality and composition of state systems of higher education. A 2004 study by Sandy Astin and Leticia Oseguera looked at a number of selective private and public universities and concluded that the number and percentage of low-income and middle-income families had declined while the number from wealthy families increased. “American higher education, in other words, is more socioeconomically stratified today than at any other time during the past three decades,” they note. One reason, they speculated, may be “the increasing competitiveness among prospective college students for admission to the country’s most selective colleges and universities” (Astin and Oseguera 2004). A more recent study by Danette Gerald and Kati Haycock (2006) looked at the socioeconomic status (SES) of undergraduate students at a selective group of fifty “premier” public universities and had a similar conclusion – but one more alarming because of the important historical mission of public universities to provide broad access, a formal mandate or social contract. Though more open to students from low-income families than their private counterparts, the premier publics had declined in the percentage of students with federally funded Pell Grants (federal grants to students generally with family incomes below $40,000 annually) when compared to other four-year public institutions in the nation. Ranging from $431 to a maximum of $4,731, Pell Grants, and the criteria for selection of recipients, has long served as a benchmark on SES access. Pell Grant students have, on average, a family income of only $19,300. On average, note Gerald and Haycock, the selected premier publics have some 22% of their enrolled undergraduates with Pell Grants; all public four-year institutions have some 31% with Pell Grants; private institutions have an average of around 14% (Gerald and Haycock 2006). But it is important to note that there are a great many dimensions in understanding equity and access among private and public higher education institutions (HEIs). For one, there is a need to disaggregate types of institutions, for example, private versus public, university versus community college. Public and private institutions, and particularly highly selective universities and colleges, tend to draw from different demographic pools, with public universities largely linked to the socioeconomic stratification of their home state. Second, there are the factors related to rising tuition and increasingly complicated and, one might argue, inadequate approaches to financial aid in the U.S. With the slow down in the US economy, the US Department of Education recently estimated that demand for Pell Grants was exceeded projected demand by some 800,000 students; total applications for the grant program are up 16 percent over the previous year. This will require an additional $6 billion to the Pell Grant’s current budget of $14 billion next year.1 Economic downturns tend to push demand up for access to higher education among the middle and lower class, although most profoundly at the community college level. 
This phenomenon plus continued growth in the nation’s population, and in particularly in states such as California, Texas and Florida, means an inadequate financial aid system, where the maximum Pell Grant award has remained largely the same for the last decade when adjusted for inflation, will be further eroded. But in light of the uncertainty in the economy and the lack of resolve at the federal level to support higher education, it is not clear the US government will fund the increased demand – it may cut the maximum award. And third, there are larger social trends, such as increased disparities in income and the erosion of public services, declines in the quality of many public schools, the stagnation and real declines for some socioeconomic groups in high school graduation rates; and the large increase in the number of part-time students, most of whom must work to stay financially solvent. Douglass and Thomson: Poor and Rich 3 CSHE Research & Occasional Paper Series This paper examines low-income, and upper income, student access to the University of California and how lowincome access compares with a group of elite privates (specifically Ivy League institutions) and selective publics. Using data from the University of California’s Undergraduate Experience Survey (UCUES) and institutional data, we discuss what makes UC similar and different in the SES and demographic mix of students. Because the maximum Pell Grant is under $5,000, the cost of tuition alone is higher in the publics, and much higher in our group of selective privates, the percentage and number of Pell Grant students at an institution provides evidence of its resolve, creativity, and financial commitment to admit and enroll working and middle-class students. We then analyze the undergraduate experience of our designation of poor students (defined for this analysis as Pell Grant recipients) and rich students (from high-income families, defined as those with household incomes above $125,000 and no need-based aid).2 While including other income groups, we use these contrasting categories of wealth to observe differences in the background of students, their choice of major, general levels of satisfaction, academic performance, and sense of belonging at the university. There is very little analytical work on the characteristics and percepti", "title": "" }, { "docid": "7e64a043b66594b0881fad9b99864ec5", "text": "In this paper, we present a system that enables humanoid robots to imitate complex whole-body motions of humans in real time. In our approach, we use a compact human model and consider the positions of the end-effectors as well as the center of mass as the most important aspects to imitate. Our system actively balances the center of mass over the support polygon to avoid falls of the robot, which would occur when using direct imitation. For every point in time, our approach generates a statically stable pose. Hereby, we do not constrain the configurations to be in double support. Instead, we allow for changes of the support mode according to the motions to imitate. To achieve safe imitation, we use retargeting of the robot's feet if necessary and find statically stable configurations by inverse kinematics. We present experiments using human data captured with an Xsens MVN motion capture system. 
The results show that a Nao humanoid is able to reliably imitate complex whole-body motions in real time, which also include extended periods of time in single support mode, in which the robot has to balance on one foot.", "title": "" }, { "docid": "514bf9c9105dd3de95c3965bb86ebe36", "text": "Origami is the centuries-old art of folding paper, and recently, it is investigated as computer science: Given an origami with creases, the problem to determine if it can be flat after folding all creases is NP-hard. Another hundreds-old art of folding paper is a pop-up book. A model for the pop-up book design problem is given, and its computational complexity is investigated. We show that both of the opening book problem and the closing book problem are NP-hard.", "title": "" }, { "docid": "486d31b962600141ba75dfde718f5b3d", "text": "The design, fabrication, and measurement of a coax to double-ridged waveguide launcher and horn antenna is presented. The novel launcher design employs two symmetric field probes across the ridge gap to minimize spreading inductance in the transition, and achieves better than 15 dB return loss over a 10:1 bandwidth. The aperture-matched horn uses a half-cosine transition into a linear taper for the outer waveguide dimensions and ridge width, and a power-law scaled gap to realize monotonically varying cutoff frequencies, thus avoiding the appearance of trapped mode resonances. It achieves a nearly constant beamwidth in both E- and H-planes for an overall directivity of about 16.5 dB from 10-100 GHz.", "title": "" }, { "docid": "26b0038c375eaa619ff584360f401674", "text": "We examine the code base of the OpenBSD operating system to determine whether its security is increasing over time. We measure the rate at which new code has been introduced and the rate at which vulnerabilities have been reported over the last 7.5 years and fifteen versions. We learn that 61% of the lines of code in today’s OpenBSD are foundational: they were introduced prior to the release of the initial version we studied and have not been altered since. We also learn that 62% of reported vulnerabilities were present when the study began and can also be considered to be foundational. We find strong statistical evidence of a decrease in the rate at which foundational vulnerabilities are being reported. However, this decrease is anything but brisk: foundational vulnerabilities have a median lifetime of at least 2.6 years. Finally, we examined the density of vulnerabilities in the code that was altered/introduced in each version. The densities ranged from 0 to 0.033 vulnerabilities reported per thousand lines of code. These densities will increase as more vulnerabilities are reported. ∗This work is sponsored by the I3P under Air Force Contract FA8721-05-0002. Opinions, interpretations, conclusions and recommendations are those of the author(s) and are not necessarily endorsed by the United States Government. †This work was produced under the auspices of the Institute for Information Infrastructure Protection (I3P) research program. The I3P is managed by Dartmouth College, and supported under Award number 2003-TK-TX-0003 from the U.S. Department of Homeland Security, Science and Technology Directorate. Points of view in this document are those of the authors and do not necessarily represent the official position of the U.S. Department of Homeland Security, the Science and Technology Directorate, the I3P, or Dartmouth College. 
‡Currently at the University of Cambridge", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "d12286b9697cf59b7b929b25d57e4ed7", "text": "Speech Processing is emerged as one of the important application area of digital signal processing. Various fields for research in speech processing are speech recognition, speaker recognition, speech synthesis, speech coding etc. Speech recognition is the process of automatically recognizing the spoken words of person based on information content in speech signal. This paper introduces a brief survey on Automatic Speech Recognition and discusses the various classification techniques that have been accomplished in this wide area of speech processing. The objective of this review paper is to summarize some of the well-known methods that are widely used in several stages of speech recognition system. Keywords—Feature Extraction, Acoustic phonetic Approach, Pattern Recognition,Artificial intelligence Approach, Speech Recognition.", "title": "" }, { "docid": "319ba1d449d2b65c5c58b5cc0fdbed67", "text": "This paper introduces a new technology and tools from the field of text-based information retrieval. The authors have developed – a fingerprint-based method for a highly efficient near similarity search, and – an application of this method to identify plagiarized passages in large document collections. The contribution of our work is twofold. Firstly, it is a search technology that enables a new quality for the comparative analysis of complex and large scientific texts. Secondly, this technology gives rise to a new class of tools for plagiarism analysis, since the comparison of entire books becomes computationally feasible. The paper is organized as follows. Section 1 gives an introduction to plagiarism delicts and related detection methods, Section 2 outlines the method of fuzzy-fingerprints as a means for near similarity search, and Section 3 shows our methods in action: It gives examples for near similarity search as well as plagiarism detection and discusses results from a comprehensive performance analyses. 1 Plagiarism Analysis Plagiarism is the act of claiming to be the author of material that someone else actually wrote (Encyclopædia Britannica 2005), and, with the ubiquitousness", "title": "" } ]
scidocsrr
7041192b6c68a7eab2a435d3bc1821c0
Syndromic surveillance systems
[ { "docid": "ed7c60db1ecdab2820fa9701940ada54", "text": "Spatial scan statistics are widely used for count data to detect geographical disease clusters of high or low incidence, mortality or prevalence and to evaluate their statistical significance. Some data are ordinal or continuous in nature, however, so that it is necessary to dichotomize the data to use a traditional scan statistic for count data. There is then a loss of information and the choice of cut-off point is often arbitrary. In this paper, we propose a spatial scan statistic for ordinal data, which allows us to analyse such data incorporating the ordinal structure without making any further assumptions. The test statistic is based on a likelihood ratio test and evaluated using Monte Carlo hypothesis testing. The proposed method is illustrated using prostate cancer grade and stage data from the Maryland Cancer Registry. The statistical power, sensitivity and positive predicted value of the test are examined through a simulation study.", "title": "" } ]
[ { "docid": "5394ca3d404c23a03bb123070855bf3c", "text": "UNLABELLED\nA previously characterized rice hull smoke extract (RHSE) was tested for bactericidal activity against Salmonella Typhimurium using the disc-diffusion method. The minimum inhibitory concentration (MIC) value of RHSE was 0.822% (v/v). The in vivo antibacterial activity of RHSE (1.0%, v/v) was also examined in a Salmonella-infected Balb/c mouse model. Mice infected with a sublethal dose of the pathogens were administered intraperitoneally a 1.0% solution of RHSE at four 12-h intervals during the 48-h experimental period. The results showed that RHSE inhibited bacterial growth by 59.4%, 51.4%, 39.6%, and 28.3% compared to 78.7%, 64.6%, 59.2%, and 43.2% inhibition with the medicinal antibiotic vancomycin (20 mg/mL). By contrast, 4 consecutive administrations at 12-h intervals elicited the most effective antibacterial effect of 75.0% and 85.5% growth reduction of the bacteria by RHSE and vancomycin, respectively. The combination of RHSE and vancomycin acted synergistically against the pathogen. The inclusion of RHSE (1.0% v/w) as part of a standard mouse diet fed for 2 wk decreased mortality of 10 mice infected with lethal doses of the Salmonella. Photomicrographs of histological changes in liver tissues show that RHSE also protected the liver against Salmonella-induced pathological necrosis lesions. These beneficial results suggest that the RHSE has the potential to complement wood-derived smokes as antimicrobial flavor formulations for application to human foods and animal feeds.\n\n\nPRACTICAL APPLICATION\nThe new antimicrobial and anti-inflammatory rice hull derived liquid smoke has the potential to complement widely used wood-derived liquid smokes as an antimicrobial flavor and health-promoting formulation for application to foods.", "title": "" }, { "docid": "48c28572e5eafda1598a422fa1256569", "text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.", "title": "" }, { "docid": "7e6dbb7e6302f9da97ea5f2ba3b2b782", "text": "The purpose of this paper is to investigate some of the main drivers of high unemployment rates in the European Union countries starting from two sources highlighted in the economic literature: the shortfall of the aggregate demand and the increasing labour market mismatches. 
Our analysis is based on a panel database and focuses on two objectives: to measure the long and short-term impact of GDP growth on unemployment over recent years for different categories of labour market participants (young, older and low educated workers) and to evaluate the relationship between mismatches related to skills (educational and occupational) and unemployment. One of the main conclusions is that unemployment rates of young and low educated workers are more responsive to economic growth variations both in the long and short run, while unemployment rates of older workers show a greater capacity of adjustment. In addition, occupational mismatches seem to have a significant long-term impact on the changes in unemployment of all categories of unemployed, whereas the short run effect is rather mixed, varying across countries. One explanation is the fact that during crisis, economy’s structure tends to change more rapidly than labour market and educational system can adapt. * Received: 22-02-2017; accepted: 31-05-2017 1 The research is supported by the Czech Science Foundation, project P402/12/G097 “DYME – Dynamic Models in Economics”. 2 Assistant Professor, Faculty of Economic Cybernetics, Statistics and Informatics, Bucharest Academy of Economic Studies, Piata Romana Square 6, 1st district, Bucharest, 010374 Romania. Scientific affiliation: labour market imbalances, international comparisons, regional competitiveness, panel data models. Phone: +40 21 319 19 00. Fax: +40 21 319 18 99. E-mail: gina.dimian@csie.ase.ro (corresponding author). 3 Full Professor, Faculty of Economic Cybernetics, Statistics and Informatics, Bucharest Academy of Economic Studies, Piata Romana Square 6, 1st district, Bucharest, 010374 Romania. Scientific affiliation: macroeconomics, economic convergence, international relationships, international statistics. Phone: +40 21 319 19 00. Fax: +40 21 319 18 99. E-mail: liviu.begu@csie.ase.ro. 4 Full Professor, Faculty of Informatics and Statistics, University of Economics. W. Churchill Sq. 4, 130 67 Prague 3, Czech Republic. Scientific affiliation: efficiency analysis, multiple-criteria analysis, data mining methods. Phone: +420 2 2409 5403. Fax: +420 2 2409 5423. E-mail: jablon@vse.cz. Personal website: http://webhosting.vse.cz/jablon. Gina Cristina Dimian, Liviu Stelian Begu, Josef Jablonsky • Unemployment and labour... 14 Zb. rad. Ekon. fak. Rij. • 2017 • vol. 35 • no. 1 • 13-44", "title": "" }, { "docid": "5ea560095b752ca8e7fb6672f4092980", "text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.", "title": "" }, { "docid": "68da3e14bac1f8b6f037291df039d1b3", "text": "Suicide by helium inhalation inside a plastic bag has recently been publicized by right-to-die proponents in \"how to\" print and videotape materials. 
This article reports a suicide performed according to this new and highly lethal technique, which is also a potentially undetectable cause of death. Toxicology information could not determine helium inhalation, and drug screening did not reveal data of significance. The cause of death could be determined only by the physical evidence at the scene of death. Helium inhalation can easily be concealed when interested parties remove or alter evidence. To ensure that their deaths are not documented as suicide, some individuals considering assisted suicide may choose helium methods and assistance from helpers. Recent challenges to Oregon's physician-assisted suicide law may increase interest in helium instead of barbiturates for assisted suicide.", "title": "" }, { "docid": "22fe98f01a5379a9ea280c22028da43f", "text": "Linux containers showed great superiority when compared to virtual machines and hypervisors in terms of networking, disk and memory management, start-up and compilation speed, and overall processing performance. In this research, we are questioning whether it is more secure to run services inside Linux containers than running them directly on a host base operating system or not. We used Docker v1.10 to conduct a series of experiments to assess the attack surface of hosts running services inside Docker containers compared to hosts running the same services on the base operating system represented in our paper as Debian Jessie. Our vulnerability assessment shows that using Docker containers increase the attack surface of a given host, not the other way around.", "title": "" }, { "docid": "c3531a47987db261fb9a6bb0bea3c4a3", "text": "We address the problem of making online, parallel query plans fault-tolerant: i.e., provide intra-query fault-tolerance without blocking. We develop an approach that not only achieves this goal but does so through the use of different fault-tolerance techniques at different operators within a query plan. Enabling each operator to use a different fault-tolerance strategy leads to a space of fault-tolerance plans amenable to cost-based optimization. We develop FTOpt, a cost-based fault-tolerance optimizer that automatically selects the best strategy for each operator in a query plan in a manner that minimizes the expected processing time with failures for the entire query. We implement our approach in a prototype parallel query-processing engine. Our experiments demonstrate that (1) there is no single best fault-tolerance strategy for all query plans, (2) often hybrid strategies that mix-and-match recovery techniques outperform any uniform strategy, and (3) our optimizer correctly identifies winning fault-tolerance configurations.", "title": "" }, { "docid": "5959bfe78aaa96e00ebdff6269797cf3", "text": "OBJECTIVE\nMicroablative fractional CO2 laser has been proven to determine tissue remodeling with neoformation of collagen and elastic fibers on atrophic skin. The aim of our study is to evaluate the effects of microablative fractional CO2 laser on postmenopausal women with vulvovaginal atrophy using an ex vivo model.\n\n\nMETHODS\nThis is a prospective ex vivo cohort trial. Consecutive postmenopausal women with vulvovaginal atrophy managed with pelvic organ prolapse surgical operation were enrolled. After fascial plication, the redundant vaginal edge on one side was treated with CO2 laser (SmartXide2; DEKA Laser, Florence, Italy). Five different CO2 laser setup protocols were tested. The contralateral part of the vaginal wall was always used as control. 
Excessive vagina was trimmed and sent for histological evaluation to compare treated and nontreated tissues. Microscopic and ultrastructural aspects of the collagenic and elastic components of the matrix were studied, and a specific image analysis with computerized morphometry was performed. We also considered the fine cytological aspects of connective tissue proper cells, particularly fibroblasts.\n\n\nRESULTS\nDuring the study period, five women were enrolled, and 10 vaginal specimens were finally retrieved. Four different settings of CO2 laser were compared. Protocols were tested twice each to confirm histological findings. Treatment protocols were compared according to histological findings, particularly in maximal depth and connective changes achieved. All procedures were uneventful for participants.\n\n\nCONCLUSIONS\nThis study shows that microablative fractional CO2 laser can produce a remodeling of vaginal connective tissue without causing damage to surrounding tissue.", "title": "" }, { "docid": "8306c40722bb956253c6e7cf112836d7", "text": "Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach of reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q&A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy.", "title": "" }, { "docid": "91d0f12e9303b93521146d4d650a63df", "text": "We utilize the state-of-the-art in deep learning to show that we can learn by example what constitutes humor in the context of a Yelp review. To the best of the authors knowledge, no systematic study of deep learning for humor exists – thus, we construct a scaffolded study. First, we use “shallow” methods such as Random Forests and Linear Discriminants built on top of bag-of-words and word vector features. Then, we build deep feedforward networks on top of these features – in some sense, measuring how much of an effect basic feedforward nets help. Then, we use recurrent neural networks and convolutional neural networks to more accurately model the sequential nature of a review.", "title": "" }, { "docid": "c1235195e9ce4a9db0e22b165915a5ff", "text": "Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give ADAS more time to avoid or prepare for the danger. In this work we propose a vehicular sensor-rich platform and learning algorithms for maneuver anticipation. 
For this purpose we equip a car with cameras, Global Positioning System (GPS), and a computing device to capture the driving context from both inside and outside of the car. In order to anticipate maneuvers, we propose a sensory-fusion deep learning architecture which jointly learns to anticipate and fuse multiple sensory streams. Our architecture consists of Recurrent Neural Networks (RNNs) that use Long Short-Term Memory (LSTM) units to capture long temporal dependencies. We propose a novel training procedure which allows the network to predict the future given only a partial temporal context. We introduce a diverse data set with 1180 miles of natural freeway and city driving, and show that we can anticipate maneuvers 3.5 seconds before they occur in realtime with a precision and recall of 90.5% and 87.4% respectively.", "title": "" }, { "docid": "7ed693c8f8dfa62842304f4c6783af03", "text": "Indian Sign Language (ISL) or Indo-Pakistani Sign Language is possibly the prevalent sign language variety in South Asia used by at least several hundred deaf signers. It is different in the phonetics, grammar and syntax from other country’s sign languages. Since ISL got standardized only recently, there is very little research work that has happened in ISL recognition. Considering the challenges in ISL gesture recognition, a novel method for recognition of static signs of Indian sign language alphabets and numerals for Human Computer Interaction (HCI) has been proposed in this thesis work. The developed algorithm for the hand gesture recognition system in ISL formulates a vision-based approach, using the Two-Dimensional Discrete Cosine Transform (2D-DCT) for image compression and the Self-Organizing Map (SOM) or Kohonen Self Organizing Feature Map (SOFM) Neural Network for pattern recognition purpose, simulated in MATLAB. To design an efficient and user friendly hand gesture recognition system, a GUI model has been implemented. The main advantage of this algorithm is its high-speed processing capability and low computational requirements, in terms of both speed and memory utilization. KeywordsArtificial Neural Network, Hand Gesture Recognition, Human Computer Interaction (HCI), Indian Sign Language (ISL), Kohonen Self Organizing Feature Map (SOFM), Two-Dimensional Discrete Cosine Transform (2D-", "title": "" }, { "docid": "9634e701750984a457189611885b7c81", "text": "A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modeling and designing systems using an extension of the B formalism: Event-B. Based on the idea of refinement, the author’s systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs, and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. 
Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org.", "title": "" }, { "docid": "bb00dcd8b7051b13dc7210c51e6d05b2", "text": "Many different types of unmanned aerial vehicles (UAVs) have been developed to address a variety of applications ranging from searching and mapping to surveillance. However, for complex wide-area surveillance scenarios, where fleets of autonomous UAVs must be deployed to work collectively on a common goal, multiple types of UAVs should be incorporated forming a heterogeneous UAV system. Indeed, the interconnection of two levels of UAVs---one with high altitude fixed-wing UAVs and one with low altitude rotary-wing UAVs---can provide applicability for scenarios which cannot be addressed by either UAV type. This work considers a bi-level flying ad hoc networks (FANETs), in which each UAV is equipped with ad hoc communication capabilities, in which the higher level fixed-wing swarm serves mainly as a communication bridge for the lower level UAV fleets, which conduct precise information sensing. The interconnection of multiple UAV types poses a significant challenge, since each UAV level moves according to its own mobility pattern, which is constrained by the UAV physical properties. Another important challenge is to form network clusters at the lower level, whereby the intra-level links must provide a certain degree of stability to allow a reliable communication within the UAV system. This article proposes a novel mobility model for the low-level UAVs that combines a pheromone-based model with a multi-hop clustering algorithm. The pheromones permit to focus on the least explored areas with the goal to optimize the coverage while the multi-hop clustering algorithm aims at keeping a stable and connected network. The proposed model works online and is fully distributed. The connection stability is evaluated against different measurements such as stability coefficient and volatility. The performance of the proposed model is compared to other state-of-the-art contributions using simulations. Experimental results demonstrate the ability of the proposed mobility model to significantly improve the network stability while having a limited impact on the wide-area coverage.", "title": "" }, { "docid": "86529c87c02f86fda27e80ab7e7f48b5", "text": "We present an approach based on feed-forward neural networks for learning the distribution of textual documents. This approach is inspired by the Neural Autoregressive Distribution Estimator (NADE) model, which has been shown to be a good estimator of the distribution of discrete-valued high-dimensional vectors. In this paper, we present how NADE can successfully be adapted to the case of textual data, retaining from NADE the property that sampling or computing the probability of observations can be done exactly and efficiently. The approach can also be used to learn deep representations of documents that are competitive to those learned by the alternative topic modeling approaches. Finally, we describe how the approach can be combined with a regular neural network N-gram model and substantially improve its performance, by making its learned representation sensitive to the larger, document-specific context.", "title": "" }, { "docid": "806a83d17d242a7fd5272862158db344", "text": "Solar power has become an attractive alternative of electricity energy. 
Solar cells that form the basis of a solar power system are mainly based on multicrystalline silicon. A set of solar cells are assembled and interconnected into a large solar module to offer a large amount of electricity power for commercial applications. Many defects in a solar module cannot be visually observed with the conventional CCD imaging system. This paper aims at defect inspection of solar modules in electroluminescence (EL) images. The solar module charged with electrical current will emit infrared light whose intensity will be darker for intrinsic crystal grain boundaries and extrinsic defects including micro-cracks, breaks and finger interruptions. The EL image can distinctly highlight the invisible defects but also create a random inhomogeneous background, which makes the inspection task extremely difficult. The proposed method is based on independent component analysis (ICA), and involves a learning and a detection stage. The large solar module image is first divided into small solar cell subimages. In the training stage, a set of defect-free solar cell subimages are used to find a set of independent basis images using ICA. In the inspection stage, each solar cell subimage under inspection is reconstructed as a linear combination of the learned basis images. The coefficients of the linear combination are used as the feature vector for classification. Also, the reconstruction error between the test image and its reconstructed image from the ICA basis images is also evaluated for detecting the presence of defects. Experimental results have shown that the image reconstruction with basis images distinctly outperforms the ICA feature extraction approach. It can achieve a mean recognition rate of 93.4% for a set of 80 test samples.", "title": "" }, { "docid": "1b6812231498387f158d24de8669dc27", "text": "The ideas and findings in this report should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange. Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder. Internal use. Permission to reproduce this document and to prepare derivative works from this document for internal use is granted, provided the copyright and \" No Warranty \" statements are included with all reproductions and derivative works. External use. This document may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission. Permission is required for any other external and/or commercial use. a federally funded research and development center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013. Abstract xiii 1 Introduction 1 1.1 Purpose and Structure of this Report 1 1.2 Background 1 1.3 The Strategic Planning Landscape 1", "title": "" }, { "docid": "5654b5a5f0be4a888784bdb1f94440fe", "text": "A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. 
We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.", "title": "" }, { "docid": "8c8120beecf9086f3567083f89e9dfa2", "text": "This thesis studies the problem of product name recognition from short product descriptions. This is an important problem especially with the increasing use of ERP (Enterprise Resource Planning) software at the core of modern business management systems, where the information of business transactions is stored in unstructured data stores. A solution to the problem of product name recognition is especially useful for the intermediate businesses as they are interested in finding potential matches between the items in product catalogs (produced by manufacturers or another intermediate business) and items in the product requests (given by the end user or another intermediate business). In this context the problem of product name recognition is specifically challenging because product descriptions are typically short, ungrammatical, incomplete, abbreviated and multilingual. In this thesis we investigate the application of supervised machine-learning techniques and gazetteer-based techniques to our problem. To approach the problem, we define it as a classification problem where the tokens of product descriptions are classified into I, O and B classes according to the standard IOB tagging scheme. Next we investigate and compare the performance of a set of hybrid solutions that combine machine learning and gazetteer-based approaches. We study a solution space that uses four learning models: linear and non-linear SVC, Random Forest, and AdaBoost. For each solution, we use the same set of features. We divide the features into four categories: token-level features, documentlevel features, gazetteer-based features and frequency-based features. Moreover, we use automatic feature selection to reduce the dimensionality of data; that consequently improves the training efficiency and avoids over-fitting. To be able to evaluate the solutions, we develop a machine learning framework that takes as its inputs a list of predefined solutions (i.e. our solution space) and a preprocessed labeled dataset (i.e. a feature vector X, and a corresponding class label vector Y). It automatically selects the optimal number of most relevant features, optimizes the hyper-parameters of the learning models, trains the learning models, and evaluates the solution set. We believe that our automated machine learning framework can effectively be used as an AutoML framework that automates most of the decisions that have to be made in the design process of a machine learning", "title": "" } ]
scidocsrr
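One negative passage in the record above describes ICA-based defect inspection of solar modules in electroluminescence images: a basis of independent components is learned from defect-free cell sub-images, a test sub-image is reconstructed as a linear combination of that basis, and the reconstruction error flags defects. The sketch below only illustrates that general idea and is not the paper's implementation; the patch size, number of components, and use of scikit-learn's FastICA are assumptions made here.

```python
# Hedged sketch of reconstruction-error defect scoring with an ICA basis.
# Patch size, component count, and the random stand-in data are illustrative
# assumptions, not values from the cited paper.
import numpy as np
from sklearn.decomposition import FastICA

def train_ica_basis(clean_patches, n_components=16, seed=0):
    """clean_patches: (n_samples, n_pixels) flattened defect-free sub-images."""
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    ica.fit(clean_patches)
    return ica

def defect_score(ica, patch):
    """A large residual means structure the defect-free basis cannot explain."""
    coeffs = ica.transform(patch.reshape(1, -1))
    reconstruction = ica.inverse_transform(coeffs).ravel()
    return float(np.linalg.norm(patch - reconstruction))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(200, 32 * 32))   # stand-in for defect-free patches
    test = rng.normal(size=32 * 32)           # stand-in for a test patch
    model = train_ica_basis(clean)
    print("reconstruction error:", defect_score(model, test))
```

In practice a threshold on this score would be calibrated on held-out defect-free cells; the passage itself reports that scoring by reconstruction error outperformed using the ICA coefficients alone as features.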
92042b5e706f024996c4ec0f6bb19ed8
Neural Block Sampling
[ { "docid": "5793cf03753f498a649c417e410c325e", "text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.", "title": "" }, { "docid": "2781e7803bdb57d8519fb61f420439bb", "text": "We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods. We call what we do “compilation of inference” because our method transforms a denotational specification of an inference problem in the form of a probabilistic program written in a universal programming language into a trained neural network denoted in a neural network specification language. When at test time this neural network is fed observational data and executed, it performs approximate inference in the original model specified by the probabilistic program. Our training objective and learning procedure are designed to allow the trained neural network to be used as a proposal distribution in a sequential importance sampling inference engine. We illustrate our method on mixture models and Captcha solving and show significant speedups in the efficiency of inference.", "title": "" }, { "docid": "150e7a6f46e93fc917e43e32dedd9424", "text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.", "title": "" } ]
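The third positive passage above introduces the Monte Carlo method and the building blocks of Markov chain Monte Carlo for probabilistic machine learning. As a generic, hedged illustration of the most basic of those building blocks (not code from any of the cited papers), here is a minimal random-walk Metropolis-Hastings sampler; the two-component Gaussian target and the proposal scale are arbitrary choices for the example.

```python
# Minimal random-walk Metropolis-Hastings sketch; the target density and the
# step size are illustrative assumptions, not taken from the cited papers.
import math
import random

def log_target(x):
    # Unnormalised log-density of an equal-weight mixture of N(-2,1) and N(2,1).
    return math.log(math.exp(-0.5 * (x - 2.0) ** 2) + math.exp(-0.5 * (x + 2.0) ** 2))

def metropolis_hastings(n_steps=10000, step=1.0, x0=0.0, seed=1):
    random.seed(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)           # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal                                  # accept the move
        samples.append(x)
    return samples

if __name__ == "__main__":
    draws = metropolis_hastings()
    print("mean |x| over draws:", sum(abs(v) for v in draws) / len(draws))
```

More elaborate schemes, such as block proposals or the learned neural proposals discussed in the amortised-inference passage, replace the simple Gaussian random walk, but the accept/reject skeleton stays the same.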
[ { "docid": "333b3349cdcb6ddf44c697e827bcfe62", "text": "Harmful cyanobacterial blooms, reflecting advanced eutrophication, are spreading globally and threaten the sustainability of freshwater ecosystems. Increasingly, non-nitrogen (N(2))-fixing cyanobacteria (e.g., Microcystis) dominate such blooms, indicating that both excessive nitrogen (N) and phosphorus (P) loads may be responsible for their proliferation. Traditionally, watershed nutrient management efforts to control these blooms have focused on reducing P inputs. However, N loading has increased dramatically in many watersheds, promoting blooms of non-N(2) fixers, and altering lake nutrient budgets and cycling characteristics. We examined this proliferating water quality problem in Lake Taihu, China's 3rd largest freshwater lake. This shallow, hyper-eutrophic lake has changed from bloom-free to bloom-plagued conditions over the past 3 decades. Toxic Microcystis spp. blooms threaten the use of the lake for drinking water, fisheries and recreational purposes. Nutrient addition bioassays indicated that the lake shifts from P limitation in winter-spring to N limitation in cyanobacteria-dominated summer and fall months. Combined N and P additions led to maximum stimulation of growth. Despite summer N limitation and P availability, non-N(2) fixing blooms prevailed. Nitrogen cycling studies, combined with N input estimates, indicate that Microcystis thrives on both newly supplied and previously-loaded N sources to maintain its dominance. Denitrification did not relieve the lake of excessive N inputs. Results point to the need to reduce both N and P inputs for long-term eutrophication and cyanobacterial bloom control in this hyper-eutrophic system.", "title": "" }, { "docid": "35da724255bbceb859d01ccaa0dec3b1", "text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.", "title": "" }, { "docid": "3e430519b45551f18c3bfbe509782fa0", "text": "In this paper, we compare the performance of several machine learning based approaches for the tasks of detecting algorithmically generated malicious domains and the categorization of domains according to their malware family. The datasets used for model comparison were provided by the shared task on Detecting Malicious Domain names (DMD 2018). Our models ranked first for two out of the four test datasets provided in the competition.", "title": "" }, { "docid": "869f52723b215ba8dc5c4c614b2c79a6", "text": "Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. 
Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.", "title": "" }, { "docid": "52d3d3bf1f29e254cbb89c64f3b0d6b5", "text": "Large projects are increasingly adopting agile development practices, and this raises new challenges for research. The workshop on principles of large-scale agile development focused on central topics in large-scale: the role of architecture, inter-team coordination, portfolio management and scaling agile practices. We propose eight principles for large-scale agile development, and present a revised research agenda.", "title": "" }, { "docid": "4ddbdf0217d13c8b349137f1e59910d6", "text": "In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.", "title": "" }, { "docid": "3d0a6b490a80e79690157a9ed690fdcc", "text": "In this paper we introduce a novel Depth-Aware Video Saliency approach to predict human focus of attention when viewing videos that contain a depth map (RGBD) on a 2D screen. Saliency estimation in this scenario is highly important since in the near future 3D video content will be easily acquired yet hard to display. 
Despite considerable progress in 3D display technologies, most are still expensive and require special glasses for viewing, so RGBD content is primarily viewed on 2D screens, removing the depth channel from the final viewing experience. We train a generative convolutional neural network that predicts the 2D viewing saliency map for a given frame using the RGBD pixel values and previous fixation estimates in the video. To evaluate the performance of our approach, we present a new comprehensive database of 2D viewing eye-fixation ground-truth for RGBD videos. Our experiments indicate that it is beneficial to integrate depth into video saliency estimates for content that is viewed on a 2D display. We demonstrate that our approach outperforms state-of-the-art methods for video saliency, achieving 15% relative improvement.", "title": "" }, { "docid": "61a6efb791fbdabfa92448cf39e17e8c", "text": "This work deals with the design of a wideband microstrip log periodic array operating between 4 and 18 GHz (thus working in C,X and Ku bands). A few studies, since now, have been proposed but they are significantly less performing and usually quite complicated. Our solution is remarkably simple and shows both SWR and gain better than likely structures proposed in the literature. The same antenna can also be used as an UWB antenna. The design has been developed using CST MICROWAVE STUDIO 2009, a general purpose and specialist tool for the 3D electromagnetic simulation of microwave high frequency components.", "title": "" }, { "docid": "4a837ccd9e392f8c7682446d9a3a3743", "text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.", "title": "" }, { "docid": "776688e1b33a5f5b11e0609d8b1b46d2", "text": "Entity resolution (ER) is a process to identify records in information systems, which refer to the same real-world entity. Because in the two recent decades the data volume has grown so large, parallel techniques are called upon to satisfy the ER requirements of high performance and scalability. The development of parallel ER has reached a relatively prosperous stage, and has found its way into several applications. In this work, we first comprehensively survey the state of the art of parallel ER approaches. From the comprehensive overview, we then extract the classification criteria of parallel ER, classify and compare these approaches based on these criteria. Finally, we identify open research questions and challenges and discuss potential solutions and further research potentials in this field.", "title": "" }, { "docid": "4c8285c23e836d68be7f68ec8ba7e29a", "text": "In the age of digital society Social Engineering attacks are very successful and unfortunately users still cannot protect themselves against these threats. Social Engineering is a very complex problem, which makes it difficult to differentiate among vulnerable users. These attacks not only target young users or employees, they select massively, regardless of the users' age. 
Due to the rapid growth of technology and its misuse, everyone is affected by these attacks, everyone is vulnerable to them (Purkait, 2012; Aggarwal et al., 2012). Users are considered the \"weakest link\" of security (Mohebzada et al., 2012; Mitnick and Simon, 2011) and as such, protecting confidential information should be the ultimate goal of all people. However, despite the fact that a number of different strategies exists to educate or train endusers to avoid these attacks, they still do, phishing still succeeds (Dhamija et al., 2006). This is mainly because the existing security awareness trainings, theoretical courses, or frameworks are expected to be equally effective for all users regardless of their age, but experience has shown that this is not true (Alseadoon, 2014). In order for these security trainings to be effective, it is essential that they are composed based on the Social Engineering security weaknesses attributed differently to different generations. Identifying unique characteristics (demographic and personality) of generations, determinants of their vulnerability is what this work aims to do. Then frameworks crafted based on that information (addressing these weaknesses) would be of use and worth implementing. Therefore, taking into consideration the complexity of this problem, this study suggests that there is a need to research it from a broader perspective, adding the \"generation\" element into the study focus to find out if there is indeed any difference in susceptibility among generational cohorts. In order to do so, this research will adapt both qualitative and quantitative methods towards reaching its objectives. Collected-data of users' performance in a phishing assessment are analyzed and psychological translation of results is provided. Thus, the first research question seeks to address what factors determinate endusers vulnerability to Social Engineering, and results from quantitative data (statistical analysis) show that generation is an important element to differentiate potential victims of Social Engineering, whilst computer-efficacy or educational level do not play any noteworthy role in predicting endusers' likelihood of falling for these threats. In consistency with the above elements and previous studies, also gender is shown no potentiality in predicting susceptibility (Parsons et al., 2013). The second research question deems to explain what makes generations differ in susceptibility and this study's findings propose that generation Y personality traits such as consciousness, extraversion and agreeableness are key influencers of their shown vulnerability. Finally, along with establishing strong foundations for future research in studying generations susceptibility to Social Engineering, this thesis employ these findings in proposing a framework aiming to lessen millennial likelihood to Social Engineering victimization. The originality of this study lies on its overall approach: starting with an exhaustive literature review towards identifying factors impacting generations' susceptibility level, then statistically measuring their vulnerability, to finish with a solution proposal crafted to suit the observed generational security weaknesses.", "title": "" }, { "docid": "e769b09a593b68e7d47102046efc6d8d", "text": "BACKGROUND\nExisting research indicates sleep problems to be prevalent in youth with internalizing disorders. 
However, childhood sleep problems are common in the general population and few data are available examining unique relationships between sleep, specific types of anxiety and depressive symptoms among non-clinical samples of children and adolescents.\n\n\nMETHODS\nThe presence of sleep problems was examined among a community sample of children and adolescents (N=175) in association with anxiety and depressive symptoms, age, and gender. Based on emerging findings from the adult literature we also examined associations between cognitive biases and sleep problems.\n\n\nRESULTS\nOverall findings revealed significant associations between sleep problems and both anxiety and depressive symptoms, though results varied by age. Depressive symptoms showed a greater association with sleep problems among adolescents, while anxiety symptoms were generally associated with sleep problems in all youth. Cognitive factors (cognitive errors and control beliefs) linked with anxiety and depression also were associated with sleep problems among adolescents, though these correlations were no longer significant after controlling for internalizing symptoms.\n\n\nCONCLUSIONS\nResults are discussed in terms of their implications for research and treatment of sleep and internalizing disorders in youth.", "title": "" }, { "docid": "c5c205c8a1fdd6f6def3e28b6477ecec", "text": "The growth and popularity of Internet applications has reinforced the need for effective information filtering techniques. The collaborative filtering approach is now a popular choice and has been implemented in many on-line systems. While many researchers have proposed and compared the performance of various collaborative filtering algorithms, one important performance measure has been omitted from the research to date that is the robustness of the algorithm. In essence, robustness measures the power of the algorithm to make good predictions in the presence of noisy data. In this paper, we argue that robustness is an important system characteristic, and that it must be considered from the point-of-view of potential attacks that could be made on a system by malicious users. We propose a definition for system robustness, and identify system characteristics that influence robustness. Several attack strategies are described in detail, and experimental results are presented for the scenarios outlined.", "title": "" }, { "docid": "fda37e6103f816d4933a3a9c7dee3089", "text": "This paper introduces a novel approach to estimate the systolic and diastolic blood pressure ratios (SBPR and DBPR) based on the maximum amplitude algorithm (MAA) using a Gaussian mixture regression (GMR). The relevant features, which clearly discriminate the SBPR and DBPR according to the targeted groups, are selected in a feature vector. The selected feature vector is then represented by the Gaussian mixture model. The SBPR and DBPR are subsequently obtained with the help of the GMR and then mapped back to SBP and DBP values that are more accurate than those obtained with the conventional MAA method.", "title": "" }, { "docid": "8f39143d569e5f57776d1bc349fc5cf3", "text": "Adolescence starts with puberty and ends when individuals attain an independent role in society. Cognitive neuroscience research in the last two decades has improved our understanding of adolescent brain development. The evidence indicates a prolonged structural maturation of grey matter and white matter tracts supporting higher cognitive functions such as cognitive control and social cognition. 
These changes are associated with a greater strengthening and separation of brain networks, both in terms of structure and function, as well as improved cognitive skills. Adolescent-specific sub-cortical reactivity to emotions and rewards, contrasted with their developing self-control skills, are thought to account for their greater sensitivity to the socio-affective context. The present review examines these findings and their implications for training interventions and education.", "title": "" }, { "docid": "f0ced128e23c4f17abc635f88178a6c1", "text": "This paper explores liquidity risk in a system of interconnected financial institutions when these institutions are subject to regulatory solvency constraints. When the market’s demand for illiquid assets is less than perfectly elastic, sales by distressed institutions depress the market price of such assets. Marking to market of the asset book can induce a further round of endogenously generated sales of assets, depressing prices further and inducing further sales. Contagious failures can result from small shocks. We investigate the theoretical basis for contagious failures and quantify them through simulation exercises. Liquidity requirements on institutions can be as effective as capital requirements in forestalling contagious failures. ∗First version. We thank Andy Haldane and Vicky Saporta for their comments during the preparation of this paper. The opinions expressed in this paper are those of the authors, and do not necessarily reflect those of the Central Bank of Chile, or the Bank of England. Please direct any correspondence to Hyun Song Shin, h.s.shin@lse.ac.uk.", "title": "" }, { "docid": "5faa3b23756c87f9e33146d2044e2ab6", "text": "Stabilization of unstable thoracic fractures with transpedicular screws is widely accepted. However, placement of transpedicular screws can cause complications, particularly in the thoracic spine with physiologically small pedicles. Hybrid stabilization, a combination of sublaminar bands and pedicle screws, might reduce the rate of misplaced screws and can be helpful in special anatomic circumstances, such as preexisting scoliosis and osteoporosis. We report about two patients suffering from unstable thoracic fractures, of T5 in one case and T3, T4, and T5 in the other case, with preexisting scoliosis and extremely small pedicles. Additionally, one patient had osteoporosis. Patients received hybrid stabilization with pedicle screws adjacent to the fractured vertebral bodies and sublaminar bands at the level above and below the pedicle screws. No complications occurred. Follow-up was 12 months with clinically uneventful postoperative courses. No signs of implant failure or loss of reduction could be detected. In patients with very small thoracic pedicles, scoliosis, and/or osteoporosis, hybrid stabilization with sublaminar bands and pedicle screws can be a viable alternative to long pedicle screw constructs.", "title": "" }, { "docid": "7c80e65361fcfd91369c3a3490feb36f", "text": "Over the past 35 years, information technology has permeated every business activity. This growing use of information technology promised an unprecedented increase in end-user productivity. Yet this promise is unfulfilled, due primarily to a lack of understanding of end-user behavior. End.user productivity is tied directly to functionality and ease of learning and use. 
Furthermore , system designers lack the necessary guidance and tools to apply effectively what is known about human-computer interaction (HCI) during systems design. Software developers need to expand their focus beyond functional requirements to include the behavioral needs of users. Only when system functions fit actual work and the system is easy to learn and use will the system be adopted by office workers and business professionals. The large, interdisciplinary body of research literature suggest HCI's importance as well as its complexity. This article is the product of an extensive effort to integrate the diverse body of HCI literature into a comprehensible framework that provides guidance to system designers: HCI design is divided into three major divisions: system model, action language, and presentation language. The system model is a conceptual depiction of system objects and functions. The basic premise is that the selection of a good system model provides dtrection for designing action and presentation languages that determine the system's look and feel. Major design recommendations in each division are identified along with current research trends and future research issues.", "title": "" }, { "docid": "5c1d6a2616a54cd8d8316b8d37f0147d", "text": "Cadmium (Cd) is a toxic, nonessential transition metal and contributes a health risk to humans, including various cancers and cardiovascular diseases; however, underlying molecular mechanisms remain largely unknown. Cells transmit information to the next generation via two distinct ways: genetic and epigenetic. Chemical modifications to DNA or histone that alters the structure of chromatin without change of DNA nucleotide sequence are known as epigenetics. These heritable epigenetic changes include DNA methylation, post-translational modifications of histone tails (acetylation, methylation, phosphorylation, etc), and higher order packaging of DNA around nucleosomes. Apart from DNA methyltransferases, histone modification enzymes such as histone acetyltransferase, histone deacetylase, and methyltransferase, and microRNAs (miRNAs) all involve in these epigenetic changes. Recent studies indicate that Cd is able to induce various epigenetic changes in plant and mammalian cells in vitro and in vivo. Since aberrant epigenetics plays a critical role in the development of various cancers and chronic diseases, Cd may cause the above-mentioned pathogenic risks via epigenetic mechanisms. Here we review the in vitro and in vivo evidence of epigenetic effects of Cd. The available findings indicate that epigenetics occurred in association with Cd induction of malignant transformation of cells and pathological proliferation of tissues, suggesting that epigenetic effects may play a role in Cd toxic, particularly carcinogenic effects. The future of environmental epigenomic research on Cd should include the role of epigenetics in determining long-term and late-onset health effects following Cd exposure.", "title": "" }, { "docid": "72f9d32f241992d02990a7a2e9aad9bb", "text": "— Improved methods are proposed for disk drive failure prediction. The SMART (Self Monitoring and Reporting Technology) failure prediction system is currently implemented in disk drives. Its purpose is to predict the near-term failure of an individual hard disk drive, and issue a backup warning to prevent data loss. Two experimentally tests of SMART showed only moderate accuracy at low false alarm rates. 
(A rate of 0.2% of total drives per year implies that 20% of drive returns would be good drives, relative to ≈1% annual failure rate of drives). This requirement for very low false alarm rates is well known in medical diagnostic tests for rare diseases, and methodology used there suggests ways to improve SMART. Two improved SMART algorithms are proposed here. They use the SMART internal drive attribute measurements in present drives. The present warning algorithm based on maximum error thresholds is replaced by distribution-free statistical hypothesis tests. These improved algorithms are computationally simple enough to be implemented in drive microprocessor firmware code. They require only integer sort operations to put several hundred attribute values in rank order. Some tens of these ranks are added up and the SMART warning is issued if the sum exceeds a prestored limit. ACRONYMS: ATA = Standard drive interface, desktop computers; FA = Failure analysis of apparently failed drive; FAR = False alarm rate, 100 times probability value; MVRS = Multivariate rank sum statistical test; NPF = Drive failed, but “No problem found” in FA; RS = Rank sum statistical hypothesis test; R = Sum of ranks of warning set data; Rc = Predict fail if R > Rc critical value; SCSI = Standard drive interface, high-end computers; SMART = “Self monitoring and reporting technology”; WA = Failure warning accuracy (probability). NOTATION: n = Number of reference (old) measurements; m = Number of warning (new) measurements; N = Total ranked measurements (n+m); p = Number of different attributes measured; Q(X) = Normal probability Pr(x>X); RS = Rank sum statistical hypothesis test; R = Sum of ranks of warning set data; Rc = Predict fail if R > Rc critical value.", "title": "" } ]
scidocsrr
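Among the negative passages in the record above, the disk-drive SMART paper replaces fixed attribute thresholds with a distribution-free rank-sum test: old reference readings and recent warning readings are ranked together, the ranks of the recent set are summed, and a failure warning is raised when that sum exceeds a prestored critical value. The snippet below is a hedged sketch of that idea only; the example readings, the tie handling, and the critical value are invented for illustration and are not the paper's parameters.

```python
# Rank-sum style warning check in the spirit of the SMART passage: pool old
# and new attribute readings, rank them, and alarm on a large new-set rank sum.
def rank_sum_warning(reference, warning, critical_value):
    pooled = sorted((value, index) for index, value in
                    enumerate(list(reference) + list(warning)))
    n_ref = len(reference)
    # Rank 1 is the smallest value; ties are broken arbitrarily in this sketch.
    new_ranks = [rank + 1 for rank, (_, index) in enumerate(pooled) if index >= n_ref]
    rank_sum = sum(new_ranks)
    return rank_sum, rank_sum > critical_value

if __name__ == "__main__":
    old_readings = [3, 4, 2, 5, 3, 4, 2, 3]   # illustrative reference window
    new_readings = [7, 6, 8, 5]               # illustrative recent window
    total, alarm = rank_sum_warning(old_readings, new_readings, critical_value=36)
    print("rank sum:", total, "raise warning:", alarm)
```

Only a sort and an integer sum are needed, which is consistent with the passage's point that the test is simple enough for drive firmware; a production version would use proper tie handling and a critical value chosen for the desired false-alarm rate.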
d7da9e1c0ed4fb72b0c98fe5ad17a2d8
Compact Multifrequency Slot Antenna Design Incorporating Embedded Arc-Strip
[ { "docid": "3323474060ba5f1fbbbdcb152c22a6a9", "text": "A compact triple-band microstrip slot antenna applied to WLAN/WiMAX applications is proposed in this letter. This antenna has a simpler structure than other antennas designed for realizing triple-band characteristics. It is just composed of a microstrip feed line, a substrate, and a ground plane on which some simple slots are etched. Then, to prove the validation of the design, a prototype is fabricated and measured. The experimental data show that the antenna can provide three impedance bandwidths of 600 MHz centered at 2.7 GHz, 430 MHz centered at 3.5 GHz, and 1300 MHz centered at 5.6 GHz.", "title": "" } ]
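The positive passage above quotes three measured impedance bandwidths for the triple-band slot antenna: 600 MHz centered at 2.7 GHz, 430 MHz at 3.5 GHz, and 1300 MHz at 5.6 GHz. As a small illustration using only those quoted figures, the script below converts them to fractional bandwidths; it adds no information beyond that arithmetic.

```python
# Fractional bandwidth = absolute bandwidth / centre frequency, computed from
# the three figures quoted in the triple-band slot-antenna passage above.
bands_ghz = [
    (2.7, 0.60),   # (centre frequency, absolute bandwidth) in GHz
    (3.5, 0.43),
    (5.6, 1.30),
]

for centre, bandwidth in bands_ghz:
    fractional = 100.0 * bandwidth / centre
    print(f"{centre:.1f} GHz band: {fractional:.1f}% fractional bandwidth")
```

By this measure the upper band is the widest in relative terms (about 23%), with the 2.7 GHz band close behind (about 22%) and the 3.5 GHz band the narrowest (about 12%).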
[ { "docid": "cf751df3c52306a106fcd00eef28b1a4", "text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.", "title": "" }, { "docid": "06e58f46c989f22037f443ccf38198ce", "text": "Many biological surfaces in both the plant and animal kingdom possess unusual structural features at the micro- and nanometre-scale that control their interaction with water and hence wettability. An intriguing example is provided by desert beetles, which use micrometre-sized patterns of hydrophobic and hydrophilic regions on their backs to capture water from humid air. As anyone who has admired spider webs adorned with dew drops will appreciate, spider silk is also capable of efficiently collecting water from air. Here we show that the water-collecting ability of the capture silk of the cribellate spider Uloborus walckenaerius is the result of a unique fibre structure that forms after wetting, with the ‘wet-rebuilt’ fibres characterized by periodic spindle-knots made of random nanofibrils and separated by joints made of aligned nanofibrils. These structural features result in a surface energy gradient between the spindle-knots and the joints and also in a difference in Laplace pressure, with both factors acting together to achieve continuous condensation and directional collection of water drops around spindle-knots. Submillimetre-sized liquid drops have been driven by surface energy gradients or a difference in Laplace pressure, but until now neither force on its own has been used to overcome the larger hysteresis effects that make the movement of micrometre-sized drops more difficult. By tapping into both driving forces, spider silk achieves this task. Inspired by this finding, we designed artificial fibres that mimic the structural features of silk and exhibit its directional water-collecting ability.", "title": "" }, { "docid": "aa32bff910ce6c7b438dc709b28eefe3", "text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. 
To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science", "title": "" }, { "docid": "a1118a6310736fc36dbc70bd25bd5f28", "text": "Many studies have documented large and persistent productivity differences across producers, even within narrowly defined industries. This paper both extends and departs from the past literature, which focused on technological explanations for these differences, by proposing that demand-side features also play a role in creating the observed productivity variation. The specific mechanism investigated here is the effect of spatial substitutability in the product market. When producers are densely clustered in a market, it is easier for consumers to switch between suppliers (making the market in a certain sense more competitive). Relatively inefficient producers find it more difficult to operate profitably as a result. Substitutability increases truncate the productivity distribution from below, resulting in higher minimum and average productivity levels as well as less productivity dispersion. The paper presents a model that makes this process explicit and empirically tests it using data from U.S. ready-mixed concrete plants, taking advantage of geographic variation in substitutability created by the industry’s high transport costs. The results support the model’s predictions and appear robust. 
Markets with high demand density for ready-mixed concrete—and thus high concrete plant densities—have higher lower-bound and average productivity levels and exhibit less productivity dispersion among their producers.", "title": "" }, { "docid": "bd7c88432eecfcb696462e7e32fc2f32", "text": "Digital Equipment Corporation evaluates global supply chain alternatives and determines worldwide manufacturing and distribution strategy, using the Global Supply Chain Model (GSCM) which recommends a production, distribution, and vendor network. GSCM minimizes cost or weighted cumulative production and distribution times or both subject to meeting estimated demand and restrictions on local content, offset trade, and joint capacity for multiple products, echelons, and time periods. Cost factors include fixed and variable production charges, inventory charges, distribution expenses via multiple modes, taxes, duties, and duty drawback. GSCM is a large mixed-integer linear program that incorporates a global, multi-product bill of materials for supply chains with arbitrary echelon structure and a comprehensive model of integrated global manufacturing and distribution decisions. The supply chain restructuring has saved over $100 million (US),", "title": "" }, { "docid": "20ec78dfbfe5b9709f25bd28e0e66e8d", "text": "BACKGROUND\nElectronic medical records (EMRs) contain vast amounts of data that is of great interest to physicians, clinical researchers, and medical policy makers. As the size, complexity, and accessibility of EMRs grow, the ability to extract meaningful information from them has become an increasingly important problem to solve.\n\n\nMETHODS\nWe develop a standardized data analysis process to support cohort study with a focus on a particular disease. We use an interactive divide-and-conquer approach to classify patients into relatively uniform groups. It is a repetitive process enabling the user to divide the data into homogeneous subsets that can be visually examined, compared, and refined. The final visualization was driven by the transformed data, and user feedback was directed to the corresponding operators, which completed the repetitive process. The output results are shown in a Sankey diagram-style timeline, which is a particular kind of flow diagram for showing factors' states and transitions over time.\n\n\nRESULTS\nThis paper presented a visually rich, interactive web-based application, which could enable researchers to study any cohorts over time by using EMR data. The resulting visualizations help uncover hidden information in the data, compare differences between patient groups, determine critical factors that influence a particular disease, and help direct further analyses. We introduced and demonstrated this tool by using EMRs of 14,567 Chronic Kidney Disease (CKD) patients.\n\n\nCONCLUSIONS\nWe developed a visual mining system to support exploratory data analysis of multi-dimensional categorical EMR data. By using CKD as a model of disease, it was assembled by automated correlational analysis and human-curated visual evaluation. The visualization methods such as Sankey diagram can reveal useful knowledge about the particular disease cohort and the trajectories of the disease over time.", "title": "" }, { "docid": "b3a9ad04e7df1b2250f0a7b625509efd", "text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. 
Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.", "title": "" }, { "docid": "f52cde20377d4b8b7554f9973c220d0a", "text": "A typical method to obtain valuable information is to extract the sentiment or opinion from a message. Machine learning technologies are widely used in sentiment classification because of their ability to “learn” from the training dataset to predict or support decision making with relatively high accuracy. However, when the dataset is large, some algorithms might not scale up well. In this paper, we aim to evaluate the scalability of Naïve Bayes classifier (NBC) in large datasets. Instead of using a standard library (e.g., Mahout), we implemented NBC to achieve fine-grain control of the analysis procedure. A Big Data analyzing system is also design for this study. The result is encouraging in that the accuracy of NBC is improved and approaches 82% when the dataset size increases. We have demonstrated that NBC is able to scale up to analyze the sentiment of millions movie reviews with increasing throughput.", "title": "" }, { "docid": "b80ea8f30ee25eb5d33b8febb578c3b5", "text": "Part-of-speech tagging in Marathi language is a very complex task as Marathi is highly inflectional in nature & free word order language. In this paper we have demonstrated a rule-based Part-of-Speech tagger for Marathi Language. The hand–constructed rules that are learned from corpus and some manual addition after studying the grammar of Marathi language are added and that are used for developing the tagger. Disambiguation is done by analyzing", "title": "" }, { "docid": "43e151ee05922e620e2bbac197357ffd", "text": "Modelling artificial neural networks for accurate time series prediction poses multiple challenges, in particular specifying the network architecture in accordance with the underlying structure of the time series. The data generating processes may exhibit a variety of stochastic or deterministic time series patterns of single or multiple seasonality, trends and cycles, overlaid with pulses, level shifts and structural breaks, all depending on the discrete time frequency in which it is observed. For heterogeneous datasets of time series, such as the 2008 ESTSP competition, a universal methodology is required for automatic network specification across varying data patterns and time frequencies. We propose a fully data driven forecasting methodology that combines filter and wrapper approaches for feature selection, including automatic feature evaluation, construction and transformation. The methodology identifies time series patterns, creates and transforms explanatory variables and specifies multilayer perceptrons for heterogeneous sets of time series without expert intervention. Examples of the valid and reliable performance in comparison to established benchmark methods are shown for a set of synthetic time series and for the ESTSP’08 competition dataset, where the proposed methodology obtained second place. & 2010 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "2a811ac141a9c5fb0cea4b644b406234", "text": "Leadership is a process influence between leaders and subordinates where a leader attempts to influence the behaviour of subordinates to achieve the organizational goals. Organizational success in achieving its goals and objectives depends on the leaders of the organization and their leadership styles. By adopting the appropriate leadership styles, leaders can affect employee job satisfaction, commitment and productivity. Two hundred Malaysian executives working in public sectors voluntarily participated in this study. Two types of leadership styles, namely, transactional and transformational were found to have direct relationships with employees’ job satisfaction. The results showed that transformational leadership style has a stronger relationship with job satisfaction. This implies that transformational leadership is deemed suitable for managing government organizations. Implications of the findings were discussed further.", "title": "" }, { "docid": "a9b4d5fed4cc45a7c9ce7b429d77855e", "text": "In this paper, a cellular-connected unmanned aerial vehicle (UAV) mobile edge computing system is studied where several UAVs are associated to a terrestrial base station (TBS) for computation offloading. To compute the large amount of data bits, a part of computation task is migrated to TBS and the other part is locally handled at UAVs. Our goal is to minimize the total energy consumption of all UAVs by jointly adjusting the bit allocation, power allocation, resource partitioning as well as UAV trajectory under TBS’s energy budget. For deeply comprehending the impact of multi-UAV access strategy on the system performance, four access schemes in the uplink transmission is considered, i.e., time division multiple access, orthogonal frequency division multiple access, one-by-one access and non-orthogonal multiple access. The involved problems under different access schemes are all formulated in non-convex forms, which are difficult to be tackled optimally. To solve this class of problem, the successive convex approximation technique is employed to obtain the suboptimal solutions. The numerical results show that the proposed scheme save significant energy consumption compared with the benchmark schemes.", "title": "" }, { "docid": "22348f1441faa116cce4b05c45848380", "text": "In this paper we propose a method for matching the scales of 3D point clouds. 3D point sets of the same scene obtained by 3D reconstruction techniques usually differ in scale. To match scales, we estimate the ratio of scales of two given 3D point clouds. By performing PCA of spin images over different scales of two point clouds, two sets of cumulative contribution rate curves are generated. Such sets of curves can be considered to characterize the scale of the given 3D point clouds. To find the scale ratio of two point clouds, we register the two sets of curves by using a variant of ICP that estimates the ratio of scales. Simulations with the Stanford bunny and experimental results with 3D reconstructions of artificial and real scenes demonstrate that the ratio of any 3D point clouds can be effectively used for scale matching.", "title": "" }, { "docid": "599e203a8090cc45b6dc2263567f2a5f", "text": "We present an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork. 
Unlike previous techniques, which are mainly guided by colors and normals, our approach is based on light propagation in the scene. This novel type of guidance can distinguish among context-dependent illumination effects, for which artists typically use different stylization techniques, and delivers a look closer to realistic artwork. In addition, we demonstrate that the current state of the art in guided texture synthesis produces artifacts that can significantly decrease the fidelity of the synthesized imagery, and propose an improved algorithm that alleviates them. Finally, we demonstrate our method's effectiveness on a variety of scenes and styles, in applications like interactive shading study or autocompletion.", "title": "" }, { "docid": "95ca78f61a46f6e34edce6210d5e0939", "text": "Wireless sensor networks (WSNs) have recently gained a lot of attention by scientific community. Small and inexpensive devices with low energy consumption and limited computing resources are increasingly being adopted in different application scenarios including environmental monitoring, target tracking and biomedical health monitoring. In many such applications, node localization is inherently one of the system parameters. Localization process is necessary to report the origin of events, routing and to answer questions on the network coverage ,assist group querying of sensors. In general, localization schemes are classified into two broad categories: range-based and range-free. However, it is difficult to classify hybrid solutions as range-based or range-free. In this paper we make this classification easy, where range-based schemes and range-free schemes are divided into two types: fully schemes and hybrid schemes. Moreover, we compare the most relevant localization algorithms and discuss the future research directions for wireless sensor networks localization schemes.", "title": "" }, { "docid": "b226b612db064f720e32e5a7fd9d9dec", "text": "Clustering is a fundamental technique widely used for exploring the inherent data structure in pattern recognition and machine learning. Most of the existing methods focus on modeling the similarity/dissimilarity relationship among instances, such as k-means and spectral clustering, and ignore to extract more effective representation for clustering. In this paper, we propose a deep embedding network for representation learning, which is more beneficial for clustering by considering two constraints on learned representations. We first utilize a deep auto encoder to learn the reduced representations from the raw data. To make the learned representations suitable for clustering, we first impose a locality-persevering constraint on the learned representations, which aims to embed original data into its underlying manifold space. Then, different from spectral clustering which extracts representations from the block diagonal similarity matrix, we apply a group sparsity constraint for the learned representations, and aim to learn block diagonal representations in which the nonzero groups correspond to its cluster. After obtaining the learned representations, we use k-means to cluster them. To evaluate the proposed deep embedding network, we compare its performance with k-means and spectral clustering on three commonly-used datasets. The experiments demonstrate that the proposed method achieves promising performance.", "title": "" }, { "docid": "658ff079f4fc59ee402a84beecd77b55", "text": "Mitochondria are master regulators of metabolism. 
Mitochondria generate ATP by oxidative phosphorylation using pyruvate (derived from glucose and glycolysis) and fatty acids (FAs), both of which are oxidized in the Krebs cycle, as fuel sources. Mitochondria are also an important source of reactive oxygen species (ROS), creating oxidative stress in various contexts, including in the response to bacterial infection. Recently, complex changes in mitochondrial metabolism have been characterized in mouse macrophages in response to varying stimuli in vitro. In LPS and IFN-γ-activated macrophages (M1 macrophages), there is decreased respiration and a broken Krebs cycle, leading to accumulation of succinate and citrate, which act as signals to alter immune function. In IL-4-activated macrophages (M2 macrophages), the Krebs cycle and oxidative phosphorylation are intact and fatty acid oxidation (FAO) is also utilized. These metabolic alterations in response to the nature of the stimulus are proving to be determinants of the effector functions of M1 and M2 macrophages. Furthermore, reprogramming of macrophages from M1 to M2 can be achieved by targeting metabolic events. Here, we describe the role that metabolism plays in macrophage function in infection and immunity, and propose that reprogramming with metabolic inhibitors might be a novel therapeutic approach for the treatment of inflammatory diseases.", "title": "" }, { "docid": "550e19033cb00938aed89eb3cce50a76", "text": "This paper presents a high-gain, wideband 2×2 microstrip array antenna. The microstrip array antenna (MSA) is fabricated on an inexpensive FR4 substrate and placed 1 mm above the ground plane to improve the bandwidth and efficiency of the antenna. A reactive impedance surface (RIS) consisting of a 13×13 array of 4 mm square patches with inter-element spacing of 1 mm is fabricated on the bottom side of the FR4 substrate. The RIS reduces the coupling between the ground plane and the MSA array and therefore increases the efficiency of the antenna. It enhances the bandwidth and gain of the antenna. The RIS also helps in reducing SLL and cross polarization. This MSA array with RIS is placed in a Fabry-Perot cavity (FPC) resonator to enhance the gain of the antenna. 2×2 and 4×4 arrays of square parasitic patches fed by the MSA array are fabricated on an FR4 superstrate which forms the partially reflecting surface of the FPC. The FR4 superstrate layer is supported with the help of dielectric rods at the edges with air at about λ0/2 from the ground plane. A microstrip feed line network is designed and the printed MSA array is fed by a 50 Ω coaxial probe. A VSWR < 2 is obtained over 5.725-6.4 GHz, which covers the 5.725-5.875 GHz ISM WLAN frequency band and the 5.9-6.4 GHz satellite uplink C band. The antenna gain increases from 12 dB to 15.8 dB as 4×4 square parasitic patches are fabricated on the superstrate layer. The gain variation is less than 2 dB over the entire band. The antenna structure provides SLL and cross polarization less than -20 dB, a front-to-back lobe ratio higher than 20 dB and more than 70% antenna efficiency. A prototype structure is realized and tested. The measured results agree with the simulation results. The antenna is a suitable candidate for access points, satellite communication, mobile base station antennas and terrestrial communication systems.", "title": "" }, { "docid": "fefc18d1dacd441bd3be641a8ca4a56d", "text": "This paper proposes a new residual convolutional neural network (CNN) architecture for single image depth estimation.
Compared with existing deep CNN-based methods, our method achieves much better results with fewer training examples and model parameters. The advantages of our method come from the usage of dilated convolution, skip connection architecture and soft-weight-sum inference. Experimental evaluation on the NYU Depth V2 dataset shows that our method outperforms other state-of-the-art methods by a margin.", "title": "" }, { "docid": "19339fa01942ad3bf33270aa1f6ceae2", "text": "This study investigated query formulations by users with Cognitive Search Intents (CSIs), which are users' needs for the cognitive characteristics of documents to be retrieved, e.g., comprehensibility, subjectivity, and concreteness. Our four main contributions are summarized as follows: (i) we proposed an example-based method of specifying search intents to observe query formulations by users without biasing them by presenting a verbalized task description; (ii) we conducted a questionnaire-based user study and found that about half our subjects did not input any keywords representing CSIs, even though they were conscious of CSIs; (iii) our user study also revealed that over 50% of subjects occasionally had experiences with searches with CSIs, while our evaluations demonstrated that the performance of a current Web search engine was much lower when we not only considered users' topical search intents but also CSIs; and (iv) we demonstrated that a machine-learning-based query expansion could improve the performance for some types of CSIs. Our findings suggest users over-adapt to current Web search engines, and create opportunities to estimate CSIs with non-verbal user input.", "title": "" } ]
scidocsrr
e729bbce9851f97c2387ef35d3fcd67a
Robust real-time pupil tracking in highly off-axis images
[ { "docid": "1705ba479a7ff33eef46e0102d4d4dd0", "text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is signi?cantly less time consuming than pure modelbased approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.", "title": "" }, { "docid": "06c0b39b820da9549c72ae48544d096c", "text": "Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond.", "title": "" } ]
[ { "docid": "b35922663b4728c409528675be15d586", "text": "High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.", "title": "" }, { "docid": "9175794d83b5f110fb9f08dc25a264b8", "text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.", "title": "" }, { "docid": "2c7920f53eed99e3a7380ebc036e67a5", "text": "We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program. Our algorithm addresses shortcomings of existing grammar inference algorithms, which both severely overgeneralize and are prohibitively slow. Our implementation, GLADE, leverages the grammar synthesized by our algorithm to fuzz test programs with structured inputs. We show that GLADE substantially increases the incremental coverage on valid inputs compared to two baseline fuzzers.", "title": "" }, { "docid": "63d26f3336960c1d92afbd3a61a9168c", "text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.", "title": "" }, { "docid": "a4e57fe3d24d6eb8be1d0e0659dda58a", "text": "Automated game design has remained a key challenge within the field of Game AI. 
In this paper, we introduce a method for recombining existing games to create new games through a process called conceptual expansion. Prior automated game design approaches have relied on hand-authored or crowdsourced knowledge, which limits the scope and applications of such systems. Our approach instead relies on machine learning to learn approximate representations of games. Our approach recombines knowledge from these learned representations to create new games via conceptual expansion. We evaluate this approach by demonstrating the ability for the system to recreate existing games. To the best of our knowledge, this represents the first machine learning-based automated game design system.", "title": "" }, { "docid": "3ac89f0f4573510942996ae66ef8184c", "text": "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.", "title": "" }, { "docid": "6825d0150a4fb1e1788e50bcc1d178d5", "text": "Severely stressful life events can have a substantial impact on those who experience them. For some, experience with a traumatic life event can leave them confused, withdrawn, depressed, and increasingly vulnerable to the next stressful situation that arises. The clinical literature, for example, has found various stressful life events to be risk factors for the development of depression, anxiety, and in extreme cases, posttraumatic stress disorder (PTSD). For other individuals, a traumatic experience can serve as a catalyst for positive change, a chance to reexamine life priorities or develop strong ties with friends and family. Recent research has explored the immediate and long-term positive effects of similarly severe life events, such as cancer, bereavement, and HIV-infection, to identify the factors and processes that appear to contribute to resilience and growth. These two lines of research, however, have developed largely independent of each other and a number of questions remain to be explored in their integration. For example, do the roots of these apparently divergent patterns lie in the events themselves or in the people who experience them? Do some experiences typically lead to negative outcomes, whereas others contribute to the development of positive changes? What psychological factors appear to moderate these outcomes? How do positive outcomes, such as perceptions of stress-related growth and benefit, relate to measures of negative adjustment? 
To address these questions, we begin with a review of positive outcomes that have been reported in response to stressful life events, such as the perceptions of stressrelated growth and benefit and theories that help to explain these changes. We then", "title": "" }, { "docid": "c05f825b7520423c9ff95a1a8e5d260f", "text": "Accurate detection and tracking of objects is vital for effective video understanding. In previous work, the two tasks have been combined in a way that tracking is based heavily on detection, but the detection benefits marginally from the tracking. To increase synergy, we propose to more tightly integrate the tasks by conditioning the object detection in the current frame on tracklets computed in prior frames. With this approach, the object detection results not only have high detection responses, but also improved coherence with the existing tracklets. This greater coherence leads to estimated object trajectories that are smoother and more stable than the jittered paths obtained without tracklet-conditioned detection. Over extensive experiments, this approach is shown to achieve state-of-the-art performance in terms of both detection and tracking accuracy, as well as noticeable improvements in tracking stability.", "title": "" }, { "docid": "aa2e16e6ed5d2610a567e358807834d4", "text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.", "title": "" }, { "docid": "eed5c66d0302c492f2480a888678d1dc", "text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. 
G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.", "title": "" }, { "docid": "c8c9e542c966a7c474f2a5c8d494ec23", "text": "We present ASPIER -- the first framework that combines software model checking with a standard protocol security model to automatically analyze authentication and secrecy properties of protocol implementations in C. The technical approach extends the iterative abstraction-refinement methodology for software model checking with a domain-specific protocol and symbolic attacker model. We have implemented the ASPIER tool and used it to verify authentication and secrecy properties of a part of an industrial strength protocol implementation -- the handshake in OpenSSL -- for configurations consisting of up to 3 servers and 3 clients. We have also implemented two distinct methods for reasoning about attacker message derivations, and evaluated them in the context of OpenSSL verification. ASPIER detected the \"version-rollback\" vulnerability in OpenSSL 0.9.6c source code and successfully verified the implementation when clients and servers are only willing to run SSL 3.0.", "title": "" }, { "docid": "1026fa138e36ac1ccef81c1660c9dbf9", "text": "The Java®HotSpot Virtual Machine includes a multi-tier compilation system that may invoke a compiler at any time. Lower tiers instrument the program to gather information for the highly optimizing compiler at the top tier, and this compiler bases its optimizations on these profiles. But if the assumptions made by the top-tier compiler are proven wrong (e.g., because the profile does not cover all execution paths), the method is deoptimized: the code generated for the method is discarded and the method is then executed at Tier 0 again. Eventually, after profile information has been gathered, the method is recompiled at the top tier again (this time with less-optimistic assumptions). Users of the system experience such deoptimization cycles (discard, profile, compile) as performance fluctuations and potentially as variations in the system's responsiveness. Unpredictable performance however is problematic in many time-critical environments even if the system is not a hard real-time system.\n A profile cache captures the profile of earlier executions. When the application is executed again, with a fresh VM, the top tier (highly optimizing) compiler can base its decisions on a profile that reflects prior executions and not just the recent history observed during this run. We report in this paper the design and effectiveness of a profile cache for Java applications which is implemented and evaluated as part of the multi-tier compilation system of the HotSpot Java Virtual Machine in OpenJDK version 9. 
For a set of benchmarks, profile caching reduces the number of (re)compilations by up to 23%, the number of deoptimizations by up to 90%, and thus improves performance predictability.", "title": "" }, { "docid": "7dcdf69f47a0a56d437cc8b7ea5352a6", "text": "A wide range of domain-specific languages (DSLs) has been implemented successfully by embedding them in general purpose languages. This paper reviews embedding, and summarizes how two alternative techniques—staged interpreters and templates—can be used to overcome the limitations of embedding. Both techniques involve a form of generative programming. The paper reviews and compares three programming languages that have special support for generative programming. Two of these languages (MetaOCaml and Template Haskell) are research languages, while the third (C++) is already in wide industrial use. The paper identifies several dimensions that can serve as a basis for comparing generative languages.", "title": "" }, { "docid": "4b22eaf527842e0fa41a1cd740ad9b40", "text": "Music transcription is the process of creating a written score of music from an audio recording. Musicians and musicologists use transcription to better understand music that may not have a written form, from improvised jazz solos to traditional folk music. Automatic music transcription introduces signal-processing algorithms to extract pitch and rhythm information from recordings. This speeds up and automates the process of music transcription, which requires musical training and is very time consuming even for experts. This thesis explores the still unsolved problem of automatic music transcription through an in-depth analysis of the problem itself and an overview of different techniques to solve the hardest subtask of music transcription, multiple pitch estimation. It concludes with a close study of a typical multiple pitch estimation algorithm and highlights the challenges that remain unsolved.", "title": "" }, { "docid": "69a01ea46134301abebd6159942c0b52", "text": "This paper proposes a crowd counting method. Crowd counting is difficult because of large appearance changes of a target which caused by density and scale changes. Conventional crowd counting methods generally utilize one predictor (e.g. regression and multi-class classifier). However, such only one predictor can not count targets with large appearance changes well. In this paper, we propose to predict the number of targets using multiple CNNs specialized to a specific appearance, and those CNNs are adaptively selected according to the appearance of a test image. By integrating the selected CNNs, the proposed method has the robustness to large appearance changes. In experiments, we confirm that the proposed method can count crowd with lower counting error than a CNN and integration of CNNs with fixed weights. Moreover, we confirm that each predictor automatically specialized to a specific appearance.", "title": "" }, { "docid": "f9da4bfe6dba0a6ec886758b164cd10b", "text": "Physically based deformable models have been widely embraced by the Computer Graphics community. Many problems outlined in a previous survey by Gibson and Mirtich [GM97] have been addressed, thereby making these models interesting and useful for both offline and real-time applications, such as motion pictures and video games. 
In this paper, we present the most significant contributions of the past decade, which produce such impressive and perceivably realistic animations and simulations: finite element/difference/volume methods, mass-spring systems, meshfree methods, coupled particle systems and reduced deformable models based on modal analysis. For completeness, we also make a connection to the simulation of other continua, such as fluids, gases and melting objects. Since time integration is inherent to all simulated phenomena, the general notion of time discretization is treated separately, while specifics are left to the respective models. Finally, we discuss areas of application, such as elastoplastic deformation and fracture, cloth and hair animation, virtual surgery simulation, interactive entertainment and fluid/smoke animation, and also suggest areas for future research.", "title": "" }, { "docid": "e6555beb963f40c39089959a1c417c2f", "text": "In this paper, we consider the problem of insufficient runtime and memory-space complexities of deep convolutional neural networks for visual emotion recognition. A survey of recent compression methods and efficient neural networks architectures is provided. We experimentally compare the computational speed and memory consumption during the training and the inference stages of such methods as the weights matrix decomposition, binarization and hashing. It is shown that the most efficient optimization can be achieved with the matrices decomposition and hashing. Finally, we explore the possibility to distill the knowledge from the large neural network, if only large unlabeled sample of facial images is available.", "title": "" }, { "docid": "7ce350ec696066026e094687e96fb9d4", "text": "Convergence of communication technologies and innovative product features are expanding the markets for technological products and services. Prior literature on technology acceptance and use has focused on utilitarian belief factors as predictors of rational adoption decisions and subsequent user behavior. This presupposes that consumers’ intentions to use technology are based on functional or utilitarian needs. Using netnographic evidence on iPhone usage, this study suggests that innovative consumers adopt and use new technology for not just utilitarian but also for experiential outcomes. The study presents an interpretive analysis of the consumption behavior of very early iPhone users. Apple introduced iPhone as a revolutionary mobile handset offering integrated features and converged services—a handheld computercum-phone with a touch-screen web browser, a music player, an organizer, a note-taker, and a camera. This revolutionary product opened up new possibilities to meld functional tasks, hedonism, and social signaling. The study suggests that even utilitarian users have hedonic and social factors present in their consumption patterns. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "11510b7d7421aea0f59aee3687a12d04", "text": "In this paper, we present the Cooperative Adaptive Cruise Control (CACC) architecture, which was proposed and implemented by the team from Chalmers University of Technology, Göteborg, Sweden, that joined the Grand Cooperative Driving Challenge (GCDC) in 2011. The proposed CACC architecture consists of the following three main components, which are described in detail: 1) communication; 2) sensor fusion; and 3) control. 
Both simulation and experimental results are provided, demonstrating that the proposed CACC system can drive within a vehicle platoon while minimizing the inter-vehicle spacing within the allowed range of safety distances, tracking a desired speed profile, and attenuating acceleration shockwaves.", "title": "" }, { "docid": "bc7f80192416aa7787657aed1bda3997", "text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets form other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over- fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.", "title": "" } ]
scidocsrr
86eee987d903265e3aaba20107a8b626
The Global Superorganism: An Evolutionary-cybernetic Model of the Emerging Network Society
[ { "docid": "a172c51270d6e334b50dcc6233c54877", "text": "m U biquitous computing enhances computer use by making many computers available throughout the physical environment, while making them effectively invisible to the user. This article explains what is new and different about the computer science involved in ubiquitous computing. First, it provides a brief overview of ubiquitous computing, then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g., chips), network protocols, interaction substrates (e.g., software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science. Since we started this work at Xerox Palo Alto Research Center (PARC) in 1988 a few places have begun work on this possible next-generation computing environment in which each person is continually interacting with hundreds of nearby wirelessly interconnected computers. The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user. To bring computers to this point while retaining their power will require radically new kinds of computers of all sizes and shapes to be available to each person. I call this future world \"Ubiquitous Comput ing\" (Ubicomp) [27]. The research method for ubiquitous computing is standard experimental computer science: the construction of working prototypes of the necessai-y infrastructure in sufficient quantity to debug the viability of the systems in everyday use; ourselves and a few colleagues serving as guinea pigs. This is", "title": "" }, { "docid": "c213dd0989659d413b39e6698eb097cc", "text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the the major transitions in evolution. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.", "title": "" } ]
[ { "docid": "a71d0d3748f6be2adbd48ab7671dd9f8", "text": "Considerable overlap has been identified in the risk factors, comorbidities and putative pathophysiological mechanisms of Alzheimer disease and related dementias (ADRDs) and type 2 diabetes mellitus (T2DM), two of the most pressing epidemics of our time. Much is known about the biology of each condition, but whether T2DM and ADRDs are parallel phenomena arising from coincidental roots in ageing or synergistic diseases linked by vicious pathophysiological cycles remains unclear. Insulin resistance is a core feature of T2DM and is emerging as a potentially important feature of ADRDs. Here, we review key observations and experimental data on insulin signalling in the brain, highlighting its actions in neurons and glia. In addition, we define the concept of 'brain insulin resistance' and review the growing, although still inconsistent, literature concerning cognitive impairment and neuropathological abnormalities in T2DM, obesity and insulin resistance. Lastly, we review evidence of intrinsic brain insulin resistance in ADRDs. By expanding our understanding of the overlapping mechanisms of these conditions, we hope to accelerate the rational development of preventive, disease-modifying and symptomatic treatments for cognitive dysfunction in T2DM and ADRDs alike.", "title": "" }, { "docid": "e776c87ec35d67c6acbdf79d8a5cac0a", "text": "Continuous deployment speeds up the process of existing agile methods, such as Scrum, and Extreme Programming (XP) through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.", "title": "" }, { "docid": "a3da533f428b101c8f8cb0de04546e48", "text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by tradition Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. 
In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.", "title": "" }, { "docid": "7ff7f006cef141fa2662ad502facc8fa", "text": "Commonly it is assumed that the nanocrystalline materials are composed of elements like grains, crystallites, layers, e.g., of a size of ca. 100 nm. (more typically less than 50 nm; often less than 10 nm – in the case of superhard nanocomposite, materials for optoelectronic applications, etc.) at least in one direction. The definition give above limits the size of the structure elements, however it has to be seen only as a theoretical value and doesn’t have any physical importance. Thin films and coatings are applied to structural bulk materials in order to improve the desired properties of the surface, such as corrosion resistance, wear resistance, hardness, friction or required colour, e.g., golden, black or a polished brass-like. The research issues concerning the production of coatings are one of the more important directions of surface engineering development, ensuring the obtainment of coatings of high utility properties in the scope of mechanical characteristics and wear resistance. Giving new utility characteristics to commonly known materials is frequently obtained by laying simple monolayer, multilayer or gradient coatings using PVD methods (Dobrzanski et al., 2005; Lukaszkowicz & Dobrzanski, 2008). While selecting the coating material, we encounter a barrier caused by the fact that numerous properties expected from an ideal coating are impossible to be obtained simultaneously. The application of the nanostructure coatings is seen as the solution of this issue. Nanostructure and particularly nanocomposite coatings deposited by physical vapour deposition or chemical vapour deposition, have gained considerable attention due to their unique physical and chemical properties, e.g. extremely high indentation hardness (40-80 GPa) (Veprek et al., 2006, 2000; Zou et al., 2010), corrosion resistance (Audronis et al., 2008; Lukaszkowicz et al., 2010), excellent high temperature oxidization resistance (Vaz et al., 2000; Voevodin & Zabinski, 2005), as well high abrasion and erosion resistance (Cheng et al., 2010; Polychronopoulou et al., 2009; Veprek & Veprek-Heijman, 2008). In the present work, the emphasis is put on current practices and future trends for nanocomposite thin films and coatings deposited by physical vapour deposition (PVD) and chemical vapour deposition (CVD) techniques. This review will not be so exhaustive as to cover all aspects of such coatings, but the main objective is to give a general sense of what has so far been accomplished and where the field is going.", "title": "" }, { "docid": "fa151d877d387a250caa8d1c1da32a10", "text": "Recently, unikernels have emerged as an exploration of minimalist software stacks to improve the security of applications in the cloud. In this paper, we propose extending the notion of minimalism beyond an individual virtual machine to include the underlying monitor and the interface it exposes. 
We propose unikernel monitors . Each unikernel is bundled with a tiny, specialized monitor that only contains what the unikernel needs both in terms of interface and implementation. Unikernel monitors improve isolation through minimal interfaces, reduce complexity, and boot unikernels quickly. Our initial prototype,ukvm, is less than 5% the code size of a traditional monitor, and boots MirageOS unikernels in as little as 10ms (8× faster than a traditional monitor).", "title": "" }, { "docid": "c1ca7ef76472258c6359111dd4d014d5", "text": "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read/answer their questions. The user experience for question answering suffers from this arrangement. In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. Experimental results show that the proposals are very promising.", "title": "" }, { "docid": "2af4728858b2baa29b13b613f902f644", "text": "Money has been said to change people's motivation (mainly for the better) and their behavior toward others (mainly for the worse). The results of nine experiments suggest that money brings about a self-sufficient orientation in which people prefer to be free of dependency and dependents. Reminders of money, relative to nonmoney reminders, led to reduced requests for help and reduced helpfulness toward others. Relative to participants primed with neutral concepts, participants primed with money preferred to play alone, work alone, and put more physical distance between themselves and a new acquaintance.", "title": "" }, { "docid": "cdcd2a627b1d7d94adc1bfa831667cf7", "text": "Solving mazes is not just a fun pastime: They are prototype models in several areas of science and technology. However, when maze complexity increases, their solution becomes cumbersome and very time consuming. Here, we show that a network of memristors--resistors with memory--can solve such a nontrivial problem quite easily. In particular, maze solving by the network of memristors occurs in a massively parallel fashion since all memristors in the network participate simultaneously in the calculation. The result of the calculation is then recorded into the memristors' states and can be used and/or recovered at a later time. Furthermore, the network of memristors finds all possible solutions in multiple-solution mazes and sorts out the solution paths according to their length. 
Our results demonstrate not only the application of memristive networks to the field of massively parallel computing, but also an algorithm to solve mazes, which could find applications in different fields.", "title": "" }, { "docid": "8c472fd910141f1819fa9ed8e0aaeeb9", "text": "In this work, we contribute a new multi-layer neural network architecture named ONCF to perform collaborative filtering. The idea is to use an outer product to explicitly model the pairwise correlations between the dimensions of the embedding space. In contrast to existing neural recommender models that combine user embedding and item embedding via a simple concatenation or element-wise product, our proposal of using outer product above the embedding layer results in a two-dimensional interaction map that is more expressive and semantically plausible. Above the interaction map obtained by outer product, we propose to employ a convolutional neural network to learn high-order correlations among embedding dimensions. Extensive experiments on two public implicit feedback data demonstrate the effectiveness of our proposed ONCF framework, in particular, the positive effect of using outer product to model the correlations between embedding dimensions in the low level of multi-layer neural recommender model. 1", "title": "" }, { "docid": "ad09fcab0aac68007eac167cafdd3d3c", "text": "We present HARP, a novel method for learning low dimensional embeddings of a graph’s nodes which preserves higherorder structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea, by decomposing a graph in a series of levels, and then embed the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy to improve all of the stateof-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP’s hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on classification tasks on real-world graphs such as DBLP, BlogCatalog, and CiteSeer, where we achieve a performance gain over the original implementations by up to 14% Macro F1.", "title": "" }, { "docid": "e2a1ceadf01443a36af225b225e4d521", "text": "Event detection remains a challenge because of the difficulty of encoding the word semantics in various contexts. Previous approaches have heavily depended on language-specific knowledge and preexisting natural language processing tools. However, not all languages have such resources and tools available compared with English language. A more promising approach is to automatically learn effective features from data, without relying on language-specific resources. In this study, we develop a language-independent neural network to capture both sequence and chunk information from specific contexts and use them to train an event detector for multiple languages without any manually encoded features. Experiments show that our approach can achieve robust, efficient and accurate results for various languages. 
In the ACE 2005 English event detection task, our approach achieved a 73.4% F-score with an average of 3.0% absolute improvement compared with state-of-the-art. Additionally, our experimental results are competitive for Chinese and Spanish.", "title": "" }, { "docid": "93ae39ed7b4d6b411a2deb9967e2dc7d", "text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.", "title": "" }, { "docid": "e307e6af66c94dc2f1edabb18e2ca62d", "text": "OBJECTIVES\nTo determine the prevalence of depressive and high trait anxiety symptoms and substance use, including alcohol and nicotine, in first-year and second-year medical students in Skopje University Medical School, Republic of Macedonia.\n\n\nBACKGROUND\nIt is important to investigate medical students because they are under significant pressure during early years of medical education, a period during which the attitudes and behaviors of physicians develop.\n\n\nMETHODS\nA cross-sectional survey in classroom settings, using an anonymous self-administered questionnaire, was performed in 354 participants (181 first-year, 118 females and 63 males and 173 second-year medical students, 116 females and 57 males) aged 18 to 23 years. The Beck Depression Inventory (BDI) and Taylor Manifest Anxiety Scale (TMAS) were used to determine depressive and high trait anxiety symptoms. BDI scores 17 or higher were categorized as depressive and TMAS scores 16 or higher as high anxiety symptoms. A Student t-test was used for continuous data analysis.\n\n\nRESULTS\nOut of all participants 10.4% had BDI score 17 or higher and 65.5% had TMAS score 16 or higher. Alcohol was the most frequently used substance in both groups. Smoking prevalence was 25%. Benzodiazepines (diazepam, alprazolam) use was 13.1%. Illicit drug use was rare (1.1% in freshmen and 3.6% in juniors) in both groups.\n\n\nCONCLUSIONS\nHigh frequency of manifest high anxiety symptoms and depressive symptoms and benzodiazepine use among Macedonian junior medical students should be taken seriously and a student counseling service offering mental health assistance is necessary (Tab. 3, Ref. 23). Full Text (Free, PDF) www.bmj.sk.", "title": "" }, { "docid": "20ca4823a5bb5388404e509cb558fae9", "text": "Developing learning experiences that facilitate self-actualization and creativity is among the most important goals of our society in preparation for the future. To facilitate deep understanding of a new concept, to facilitate learning, learners must have the opportunity to develop multiple and flexible perspectives. The process of becoming an expert involves failure, as well as the ability to understand failure and the motivation to move onward. Meta-cognitive awareness and personal strategies can play a role in developing an individual’s ability to persevere through failure, and combat other diluting influences. Awareness and reflective technologies can be instrumental in developing a meta-cognitive ability to make conscious and unconscious decisions about engagement that will ultimately enhance learning, expertise, creativity, and self-actualization. 
This paper will review diverse perspectives from psychology, engineering, education, and computer science to present opportunities to enhance creativity, motivation, and self-actualization in learning systems. © 2005 Published by Elsevier Ltd.", "title": "" }, { "docid": "a7f535275801ee4ed9f83369f416c408", "text": "A recent development in text compression is a “block sorting” algorithm which permutes the input text according to a special sort procedure and then processes the permuted text with Move-to-Front and a final statistical compressor. The technique combines good speed with excellent compression performance. This paper investigates the fundamental operation of the algorithm and presents some improvements based on that analysis. Although block sorting is clearly related to previous compression techniques, it appears that it is best described by techniques derived from work by Shannon in 1951 on the prediction and entropy of English text. A simple model is developed which relates the compression to the proportion of zeros after the MTF stage.", "title": "" }, { "docid": "a627db7d9858bd68a34acdcdaf992fab", "text": "In this paper, we propose to conduct anomaly detection across multiple sources to identify objects that have inconsistent behavior across these sources. We assume that a set of objects can be described from various perspectives (multiple information sources). The underlying clustering structure of normal objects is usually shared by multiple sources. However, anomalous objects belong to different clusters when considering different aspects. For example, there exist movies that are expected to be liked by kids by genre, but are liked by grown-ups based on user viewing history. To identify such objects, we propose to compute the distance between different eigen decomposition results of the same object with respect to different sources as its anomalous score. We also give interpretations from the perspectives of constrained spectral clustering and random walks over graph. Experimental results on several UCI as well as DBLP and Movie Lens datasets demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "9fa1b755805d889cff096acf2572f2e1", "text": "Watermarking embeds a secret message into a cover message. In media watermarking the secret is usually a copyright notice and the cover a digital image. Watermarking an object discourages intellectual property theft, or when such theft has occurred, allows us to prove ownership. The Software Watermarking problem can be described as follows. Embed a structure W into a program P such that: W can be reliably located and extracted from P even after P has been subjected to code transformations such as translation, optimization and obfuscation; W is stealthy; W has a high data rate; embedding W into P does not adversely affect the performance of P; and W has a mathematical property that allows us to argue that its presence in P is the result of deliberate actions. In this paper we describe a software watermarking technique in which a dynamic graph watermark is stored in the execution state of a program.
Because of the hardness of pointer alias analysis such watermarks are difficult to attack automatically.", "title": "" }, { "docid": "5cfef434d0d33ac5859bcdb77227d7b7", "text": "The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.", "title": "" }, { "docid": "d0526f6c589dc04284312a83ac5d7fff", "text": "Paper delivered at the International Conference on \" Cluster management in structural policy – International experiences and consequences for Northrhine-Westfalia \" , Duisburg, december 5 th", "title": "" }, { "docid": "33cf6c26de09c7772a529905d9fa6b5c", "text": "Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that it has write latency much higher than read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation.\n This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write, will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to pro-actively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.", "title": "" } ]
scidocsrr
c91aeef553c047b001fb0d7b7e222800
MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network
[ { "docid": "7ec6540b44b23a0380dcb848239ccac4", "text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.", "title": "" }, { "docid": "6a1e614288a7977b72c8037d9d7725fb", "text": "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.", "title": "" }, { "docid": "9d73ff3f8528bb412c585d802873fcb4", "text": "In this work, we introduce a novel interpretation of residual networks showing they are exponential ensembles. This observation is supported by a large-scale lesion study that demonstrates they behave just like ensembles at test time. Subsequently, we perform an analysis showing these ensembles mostly consist of networks that are each relatively shallow. For example, contrary to our expectations, most of the gradient in a residual network with 110 layers comes from an ensemble of very short networks, i.e., only 10-34 layers deep. This suggests that in addition to describing neural networks in terms of width and depth, there is a third dimension: multiplicity, the size of the implicit ensemble. Ultimately, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network – rather, they avoid the problem simply by ensembling many short networks together. This insight reveals that depth is still an open research question and invites the exploration of the related notion of multiplicity.", "title": "" } ]
[ { "docid": "1d7c0e0f0ea70669185394ecc4d6c4d8", "text": "We consider the problem of estimating the discrete state of an aircraft electric system under a distributed control architecture through active sensing. The main idea is to use a set of controllable switches to reconfigure the system in order to gather more information about the unknown state. By adaptively making a sequence of reconfiguration decisions with uncertain outcome, then correlating measurements and prior information to make the next decision, we aim to reduce the uncertainty. A greedy strategy is developed that maximizes the one-step expected uncertainty reduction. By exploiting recent results on adaptive submodularity, we give theoretical guarantees on the worst-case performance of the greedy strategy. We apply the proposed method in a fault detection scenario where the discrete state captures possible faults in various circuit components. In addition, simple abstraction rules are proposed to alleviate state space explosion and to scale up the strategy. Finally, the efficiency of the proposed method is demonstrated empirically on different circuits.", "title": "" }, { "docid": "97d1f0c14edeedd8348058b50fae653b", "text": "A high-efficiency self-shielded microstrip-fed Yagi-Uda antenna has been developed for 60 GHz communications. The antenna is built on a Teflon substrate (εr = 2.2) with a thickness of 10 mils (0.254 mm). A 7-element design results in a measured S11 of <; -10 dB at 56.0 - 66.4 GHz with a gain >; 9.5 dBi at 58 - 63 GHz. The antenna shows excellent performance in free space and in the presence of metal-planes used for shielding purposes. A parametric study is done with metal plane heights from 2 mm to 11 mm, and the Yagi-Uda antenna results in a gain >; 12 dBi at 58 - 63 GHz for h = 5 - 8 mm. A 60 GHz four-element switched-beam Yagi-Uda array is also presented with top and bottom shielding planes, and allows for 180° angular coverage with <; 3 dB amplitude variations. This antenna is ideal for inclusion in complex platforms, such as laptops, for point-to-point communication systems, either as a single element or a switched-beam system.", "title": "" }, { "docid": "ea94a3c561476e88d5ac2640656a3f92", "text": "Point cloud is a basic description of discrete shape information. Parameterization of unorganized points is important for shape analysis and shape reconstruction of natural objects. In this paper we present a new algorithm for global parameterization of an unorganized point cloud and its application to the meshing of the cloud. Our method is guided by principal directions so as to preserve the intrinsic geometric properties. After initial estimation of principal directions, we develop a kNN(k-nearest neighbor) graph-based method to get a smooth direction field. Then the point cloud is cut to be topologically equivalent to a disk. The global parameterization is computed and its gradients align well with the guided direction field. A mixed integer solver is used to guarantee a seamless parameterization across the cut lines. The resultant parameterization can be used to triangulate and quadrangulate the point cloud simultaneously in a fully automatic manner, where the shape of the data is of any genus. & 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "2eab0513e7d381ebd9bafcbffa2a2f83", "text": "This note tries to attempt a sketch of the history of spectral ranking—a general umbrella name for techniques that apply the theory of linear maps (in particular, eigenvalues and eigenvectors) to matrices that do not represent geometric transformations, but rather some kind of relationship between entities. Albeit recently made famous by the ample press coverage of Google’s PageRank algorithm, spectral ranking was devised more than fifty years ago, almost exactly in the same terms, and has been studied in psychology, social sciences, and choice theory. I will try to describe it in precise and modern mathematical terms, highlighting along the way the contributions given by previous scholars.", "title": "" }, { "docid": "e31901738e78728a7376457f7d1acd26", "text": "Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.", "title": "" }, { "docid": "cdafa5aac95ea3e8a7ddfaf412de9ae0", "text": "Uplifts in the Campi Flegrei caldera reach values unsurpassed anywhere in the world (~2 meters). Despite the marked deformation, the release of strain appears delayed. The rock physics analysis of well cores highlights the presence of two horizons, above and below the seismogenic area, underlying a coupled process. The basement is a calc-silicate rock housing hydrothermal decarbonation reactions, which provide lime-rich fluids. The caprock above the seismogenic area has a pozzolanic composition and a fibril-rich matrix that results from lime-pozzolanic reactions. These findings provide evidence for a natural process reflecting that characterizing the cementitious pastes in modern and Roman concrete. 
The formation of fibrous minerals by intertwining filaments confers shear and tensile strength to the caprock, contributing to its ductility and increased resistance to fracture.", "title": "" }, { "docid": "a95f77c59a06b2d101584babc74896fb", "text": "Magnetic wall and ceiling climbing robots have been proposed in many industrial applications where robots must move over ferromagnetic material surfaces. The magnetic circuit design with magnetic attractive force calculation of permanent magnetic wheel plays an important role which significantly affects the system reliability, payload ability and power consumption of the robot. In this paper, a flexible wall and ceiling climbing robot with six permanent magnetic wheels is proposed to climb along the vertical wall and overhead ceiling of steel cargo containers as part of an illegal contraband inspection system. The permanent magnetic wheels are designed to apply to the wall and ceiling climbing robot, whilst finite element method is employed to estimate the permanent magnetic wheels with various wheel rims. The distributions of magnetic flux lines and magnetic attractive forces are compared on both plane and corner scenarios so that the robot can adaptively travel through the convex and concave surfaces of the cargo container. Optimisation of wheel rims is presented to achieve the equivalent magnetic adhesive forces along with the estimation of magnetic ring dimensions in the axial and radial directions. Finally, the practical issues correlated with the applications of the techniques are discussed and the conclusions are drawn with further improvement and prototyping.", "title": "" }, { "docid": "8980ba98c30bb75a8771d3546bf3f27b", "text": "Due to their often unexpected nature, natural and man-made disasters are difficult to monitor and detect for journalists and disaster management response teams. Journalists are increasingly relying on signals from social media to detect such stories in their early stage of development. Twitter, which features a vast network of local news outlets, is a major source of early signal for disaster detection. Journalists who work for global desks often follow these sources via Twitter’s lists, but have to comb through thousands of small-scale or low-impact stories to find events that may be globally relevant. These are events that have a large scope, high impact, or potential geo-political relevance. We propose a model for automatically identifying events from local news sources that may break on a global scale within the next 24 hours. The results are promising and can be used in a predictive setting to help journalists manage their sources more effectively, or in a descriptive manner to analyze media coverage of disasters. Through the feature evaluation process, we also address the question: “what makes a disaster event newsworthy on a global scale?” As part of our data collection process, we have created a list of local sources of disaster/accident news on Twitter, which we have made publicly available.", "title": "" }, { "docid": "c923b40345b614eeffa758d50f78983d", "text": "A 6b converter array operates at a 600MHz clock frequency with input signals up to 600MHz and only 10mW power consumption. The array consists of 8 interleaved successive approximation converters implemented in a 90nm digital CMOS technology.", "title": "" }, { "docid": "c05d94b354b1d3a024a87e64d06245f1", "text": "This paper outlines an innovative game model for learning computational thinking (CT) skills through digital game-play. 
We have designed a game framework where students can practice and develop their skills in CT with little or no programming knowledge. We analyze how this game supports various CT concepts and how these concepts can be mapped to programming constructs to facilitate learning introductory computer programming. Moreover, we discuss the potential benefits of our approach as a support tool to foster student motivation and abilities in problem solving. As initial evaluation, we provide some analysis of feedback from a survey response group of 25 students who have played our game as a voluntary exercise. Structured empirical evaluation will follow, and the plan for that is briefly described.", "title": "" }, { "docid": "4e2bed31e5406e30ae59981fa8395d5b", "text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.", "title": "" }, { "docid": "68689ad05be3bf004120141f0534fd2b", "text": "A group of 156 first year medical students completed measures of emotional intelligence (EI) and physician empathy, and a scale assessing their feelings about a communications skills course component. Females scored significantly higher than males on EI. Exam performance in the autumn term on a course component (Health and Society) covering general issues in medicine was positively and significantly related to EI score but there was no association between EI and exam performance later in the year. High EI students reported more positive feelings about the communication skills exercise. Females scored higher than males on the Health and Society component in autumn, spring and summer exams. Structural equation modelling showed direct effects of gender and EI on autumn term exam performance, but no direct effects other than previous exam performance on spring and summer term performance. EI also partially mediated the effect of gender on autumn term exam performance. These findings provide limited evidence for a link between EI and academic performance for this student group. 
More extensive work on associations between EI, academic success and adjustment throughout medical training would clearly be of interest. © 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "aa19ed0407ded7ff199310c69ed49229", "text": "Due to the revolutionary advances of deep learning achieved in the field of image processing, speech recognition and natural language processing, the deep learning gains much attention. The recommendation task is influenced by the deep learning trend which shows its significant effectiveness and the high-quality of recommendations. The deep learning based recommender models provide a better detention of user preferences, item features and users-items interactions history. In this paper, we provide a recent literature review of researches dealing with deep learning based recommendation approaches which are preceded by a presentation of the main lines of the recommendation approaches and the deep learning techniques. We propose also classification criteria of the different deep learning integration model. Then we finish by presenting the recommendation approach adopted by the most popular video recommendation platform YouTube which is based essentially on deep learning advances. Keywords—Recommender system; deep learning; neural network; YouTube recommendation", "title": "" }, { "docid": "40e8a625c716ebe9e58300836c0db37f", "text": "In this paper, we proposed an Implementing Intelligent Traffic Control for Congestion, Ambulance clearance, and Stolen Vehicle Detection. This system was implemented based on present criteria that tracking three conditions in those one is heavy traffic control and another one is making a root of emergency vehicle like ambulance and VIP vehicle. In this paper we are going to implement a sensor network work which is used to detect the traffic density and also use RFID reader and tags. We use ARM7 system-on-chip to read the RFID tags attached to the vehicles. It counts number of vehicles that passes on a particular path during a specified duration. If the RFID tag read belongs to the stolen vehicles. GSM SIM300 used for message send to the police control room. In addition, when an ambulance approaching the junction, it will communicate the traffic controller in the junction to turn on the green light. This module uses Zigbee modules on CC2500. Keywords— ZigBee, CC2500, GSM, SIM300, ARM-9, ambulance vehicle stolen vehicle, congestion control, traffic junction.", "title": "" }, { "docid": "3cc9d3767cbfac13fcb7d363419eccad", "text": "SpeechPy is an open source Python package that contains speech preprocessing techniques, speech features, and important post-processing operations. It provides most frequent used speech features including MFCCs and filterbank energies alongside with the log-energy of filter-banks.
The aim of the package is to provide researchers with a simple tool for speech feature extraction and processing purposes in applications such as Automatic Speech Recognition and Speaker Verification.", "title": "" }, { "docid": "abc2d0757184f5c50e4f2b3a6dabb56c", "text": "This paper describes the hardware implementation of the RANdom Sample Consensus (RANSAC) algorithm for featured-based image registration applications. The Multiple-Input Signature Register (MISR) and the index register are used to achieve the random sampling effect. The systolic array architecture is adopted to implement the forward elimination step in the Gaussian elimination. The computational complexity in the forward elimination is reduced by sharing the coefficient matrix. As a result, the area of the hardware cost is reduced by more than 50%. The proposed architecture is realized using Verilog and achieves real-time calculation on 30 fps 1024 * 1024 video stream on 100 MHz clock.", "title": "" }, { "docid": "45bd28fbea66930fca36bc20328d6d6f", "text": "Localization is one of the most challenging and important issues in wireless sensor networks (WSNs), especially if cost-effective approaches are demanded. In this paper, we present intensively discuss and analyze approaches relying on the received signal strength indicator (RSSI). The advantage of employing the RSSI values is that no extra hardware (e.g. ultrasonic or infra-red) is needed for network-centric localization. We studied different factors that affect the measured RSSI values. Finally, we evaluate two methods to estimate the distance; the first approach is based on statistical methods. For the second one, we use an artificial neural network to estimate the distance.", "title": "" }, { "docid": "7b6c039783091260cee03704ce9748d8", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.", "title": "" }, { "docid": "8d4fdbdd76085391f2a80022f130459e", "text": "Recently completed whole-genome sequencing projects marked the transition from gene-based phylogenetic studies to phylogenomics analysis of entire genomes. We developed an algorithm MGRA for reconstructing ancestral genomes and used it to study the rearrangement history of seven mammalian genomes: human, chimpanzee, macaque, mouse, rat, dog, and opossum. MGRA relies on the notion of the multiple breakpoint graphs to overcome some limitations of the existing approaches to ancestral genome reconstructions. MGRA also generates the rearrangement-based characters guiding the phylogenetic tree reconstruction when the phylogeny is unknown.", "title": "" }, { "docid": "564322060dee31328da7b3bc3d762f95", "text": "The automatic detection and transcription of musical chords from audio is an established music computing task. The choice of chord profiles and higher-level time-series modelling have received a lot of attention, resulting in methods with an overall performance of more than 70% in the MIREX Chord Detection task 2009. Research on the front end of chord transcription algorithms has often concentrated on finding good chord templates to fit the chroma features. 
In this paper we reverse this approach and seek to find chroma features that are more suitable for usage in a musically-motivated model. We do so by performing a prior approximate transcription using an existing technique to solve non-negative least squares problems (NNLS). The resulting NNLS chroma features are tested by using them as an input to an existing state-of-the-art high-level model for chord transcription. We achieve very good results of 80% accuracy using the song collection and metric of the 2009 MIREX Chord Detection tasks. This is a significant increase over the top result (74%) in MIREX 2009. The nature of some chords makes their identification particularly susceptible to confusion between fundamental frequency and partials. We show that the recognition of these difficult chords in particular is substantially improved by the prior approximate transcription using NNLS.", "title": "" } ]
scidocsrr
ade0904c51f97c1252db3c65f0a3dcb1
Non-parametric Similarity Measures for Unsupervised Texture Segmentation and Image Retrieval
[ { "docid": "662b1ec9e2481df760c19567ce635739", "text": "Semantic versus nonsemantic information icture yourself as a fashion designer needing images of fabrics with a particular mixture of colors, a museum cataloger looking P for artifacts of a particular shape and textured pattern, or a movie producer needing a video clip of a red car-like object moving from right to left with the camera zooming. How do you find these images? Even though today’s technology enables us to acquire, manipulate, transmit, and store vast on-line image and video collections, the search methodologies used to find pictorial information are still limited due to difficult research problems (see “Semantic versus nonsemantic” sidebar). Typically, these methodologies depend on file IDS, keywords, or text associated with the images. And, although powerful, they", "title": "" } ]
[ { "docid": "ab01efad4c65bbed9e4a499844683326", "text": "To achieve good generalization in supervised learning, the training and testing examples are usually required to be drawn from the same source distribution. In this paper we propose a method to relax this requirement in the context of logistic regression. Assuming <i>D<sup>p</sup></i> and <i>D<sup>a</sup></i> are two sets of examples drawn from two mismatched distributions, where <i>D<sup>a</sup></i> are fully labeled and <i>D<sup>p</sup></i> partially labeled, our objective is to complete the labels of <i>D<sup>p</sup>.</i> We introduce an auxiliary variable μ for each example in <i>D<sup>a</sup></i> to reflect its mismatch with <i>D<sup>p</sup>.</i> Under an appropriate constraint the μ's are estimated as a byproduct, along with the classifier. We also present an active learning approach for selecting the labeled examples in <i>D<sup>p</sup>.</i> The proposed algorithm, called \"Migratory-Logit\" or M-Logit, is demonstrated successfully on simulated as well as real data sets.", "title": "" }, { "docid": "d2cbeb1f764b5a574043524bb4a0e1a9", "text": "The latest 6th generation Carrier Stored Trench Gate Bipolar Transistor (CSTBT™) provides state of the art optimization of conduction and switching losses in IGBT modules. Use of low values of resistance in series with the IGBT gate produces low turn-on losses but increases stress on the recovery of the free-wheel diode resulting in higher dv/dt and increased EMI. The latest modules also incorporate new, improved recovery free-wheel diode chips which improve this situation but detailed evaluation of the trade-off between turn-on loss and dv/dt performance is required. This paper describes the evaluation, test results, and a comparative analysis of dv/dt versus turn-on loss as a function of gate drive conditions for the 6th generation IGBT compared to the standard 5th generation module.", "title": "" }, { "docid": "731df77ded13276e7bdb9f67474f3810", "text": "Given a graph <i>G</i> = (<i>V,E</i>) and positive integral vertex weights <i>w</i> : <i>V</i> → N, the <i>max-coloring problem</i> seeks to find a proper vertex coloring of <i>G</i> whose color classes <i>C</i><inf>1,</inf> <i>C</i><inf>2,</inf>...,<i>C</i><inf><i>k</i></inf>, minimize Σ<sup><i>k</i></sup><inf><i>i</i> = 1</inf> <i>max</i><inf>ν∈<i>C</i><inf>i</inf></inf><i>w</i>(ν). This problem, restricted to interval graphs, arises whenever there is a need to design dedicated memory managers that provide better performance than the general purpose memory management of the operating system. Specifically, companies have tried to solve this problem in the design of memory managers for wireless protocol stacks such as GPRS or 3G.Though this problem seems similar to the wellknown dynamic storage allocation problem, we point out fundamental differences. We make a connection between max-coloring and on-line graph coloring and use this to devise a simple 2-approximation algorithm for max-coloring on interval graphs. We also show that a simple first-fit strategy, that is a natural choice for this problem, yields a 10-approximation algorithm. We show this result by proving that the first-fit algorithm for on-line coloring an interval graph <i>G</i> uses no more than 10.<i>x</i>(<i>G</i>) colors, significantly improving the bound of 26.<i>x</i>(<i>G</i>) by Kierstead and Qin (<i>Discrete Math.</i>, 144, 1995). 
We also show that the max-coloring problem is NP-hard.", "title": "" }, { "docid": "2bdb86d8152413a9823f1b0ba6bc9d87", "text": "Robustly detecting keywords in human speech is an important precondition for cognitive systems, which aim at intelligently interacting with users. Conventional techniques for keyword spotting usually show good performance when evaluated on well articulated read speech. However, modeling natural, spontaneous, and emotionally colored speech is challenging for today’s speech recognition systems and thus requires novel approaches with enhanced robustness. In this article, we propose a new architecture for vocabulary independent keyword detection as needed for cognitive virtual agents such as the SEMAINE system. Our word spotting model is composed of a Dynamic Bayesian Network (DBN) and a bidirectional Long Short-Term Memory (BLSTM) recurrent neural net. The BLSTM network uses a self-learned amount of contextual information to provide a discrete phoneme prediction feature for the DBN, which is able to distinguish between keywords and arbitrary speech. We evaluate our Tandem BLSTM-DBN technique on both read speech and spontaneous emotional speech and show that our method significantly outperforms conventional Hidden Markov Model-based approaches for both application scenarios.", "title": "" }, { "docid": "6a9738cbe28b53b3a9ef179091f05a4a", "text": "The study examined the impact of advertising on building brand equity in Zimbabwe’s Tobacco Auction floors. In this study, 100 farmers were selected from 88 244 farmers registered in the four tobacco growing regions of country. A structured questionnaire was used as a tool to collect primary data. A pilot survey with 20 participants was initially conducted to test the reliability of the questionnaire. Results of the pilot study were analysed to test for reliability using SPSS.Results of the study found that advertising affects brand awareness, brand loyalty, brand association and perceived quality. 55% of the respondents agreed that advertising changed their perceived quality on auction floors. A linear regression analysis was performed to predict brand quality as a function of the type of farmer, source of information, competitive average pricing, loyalty, input assistance, service delivery, number of floors, advert mode, customer service, floor reputation and attitude. There was a strong relationship between brand quality and the independent variables as depicted by the regression coefficient of 0.885 and the model fit is perfect at 78.3%. From the ANOVA tables, a good fit was established between advertising and brand equity with p=0.001 which is less than the significance level of 0.05. While previous researches concentrated on the elements of brand equity as suggested by Keller’s brand equity model, this research has managed to extend the body of knowledge on brand equity by exploring the role of advertising. Future research should assess the relationship between advertising and a brand association.", "title": "" }, { "docid": "b893e0321a51a2b06e1d8f2a59a296b6", "text": "Green tea (GT) and green tea extracts (GTE) have been postulated to decrease cancer incidence. In vitro results indicate a possible effect; however, epidemiological data do not support cancer chemoprevention. We have performed a PubMED literature search for green tea consumption and the correlation to the common tumor types lung, colorectal, breast, prostate, esophageal and gastric cancer, with cohorts from both Western and Asian countries. 
We additionally included selected mechanistical studies for a possible mode of action. The comparability between studies was limited due to major differences in study outlines; a meta analysis was thus not possible and studies were evaluated individually. Only for breast cancer could a possible small protective effect be seen in Asian and Western cohorts, whereas for esophagus and stomach cancer, green tea increased the cancer incidence, possibly due to heat stress. No effect was found for colonic/colorectal and prostatic cancer in any country, for lung cancer Chinese studies found a protective effect, but not studies from outside China. Epidemiological studies thus do not support a cancer protective effect. GT as an indicator of as yet undefined parameters in lifestyle, environment and/or ethnicity may explain some of the observed differences between China and other countries.", "title": "" }, { "docid": "d991f2ecffd6ddb045a7917ac5e99011", "text": "Human intervention trials have provided evidence for protective effects of various (poly)phenol-rich foods against chronic disease, including cardiovascular disease, neurodegeneration, and cancer. While there are considerable data suggesting benefits of (poly)phenol intake, conclusions regarding their preventive potential remain unresolved due to several limitations in existing studies. Bioactivity investigations using cell lines have made an extensive use of both (poly)phenolic aglycones and sugar conjugates, these being the typical forms that exist in planta, at concentrations in the low-μM-to-mM range. However, after ingestion, dietary (poly)phenolics appear in the circulatory system not as the parent compounds, but as phase II metabolites, and their presence in plasma after dietary intake rarely exceeds nM concentrations. Substantial quantities of both the parent compounds and their metabolites pass to the colon where they are degraded by the action of the local microbiota, giving rise principally to small phenolic acid and aromatic catabolites that are absorbed into the circulatory system. This comprehensive review describes the different groups of compounds that have been reported to be involved in human nutrition, their fate in the body as they pass through the gastrointestinal tract and are absorbed into the circulatory system, the evidence of their impact on human chronic diseases, and the possible mechanisms of action through which (poly)phenol metabolites and catabolites may exert these protective actions. It is concluded that better performed in vivo intervention and in vitro mechanistic studies are needed to fully understand how these molecules interact with human physiological and pathological processes.", "title": "" }, { "docid": "299d59735ea1170228aff531645b5d4a", "text": "While the economic case for cloud computing is compelling, the security challenges it poses are equally striking. In this work we strive to frame the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions. We examine contemporary and historical perspectives from industry, academia, government, and “black hats”. We argue that few cloud computing security issues are fundamentally new or fundamentally intractable; often what appears “new” is so only relative to “traditional” computing of the past several years. Looking back further to the time-sharing era, many of these problems already received attention. 
On the other hand, we argue that two facets are to some degree new and fundamental to cloud computing: the complexities of multi-party trust considerations, and the ensuing need for mutual auditability.", "title": "" }, { "docid": "a4596f959393557182f0f9f718984e6f", "text": "This paper presents an overview of limitations imposed on the range performance of passive UHF RFID systems by such factors as tag characteristics, propagation environment, and RFID reader parameters", "title": "" }, { "docid": "3d815d0bc4744b36e12252fab58104fd", "text": "Anyone who has signed up for cell phone service, attempted to claim a rebate, or navigated a call center has probably suffered from a company's apparent indifference to what should be its first concern: the customer experiences that culminate in either satisfaction or disappointment and defection. Customer experience is the subjective response customers have to direct or indirect contact with a company. It encompasses every aspect of an offering: customer care, advertising, packaging, features, ease of use, reliability. Customer experience is shaped by customers' expectations, which largely reflect previous experiences. Few CEOs would argue against the significance of customer experience or against measuring and analyzing it. But many don't appreciate how those activities differ from CRM or just how illuminating the data can be. For instance, the majority of the companies in a recent survey believed they have been providing \"superior\" experiences to customers, but most customers disagreed. The authors describe a customer experience management (CEM) process that involves three kinds of monitoring: past patterns (evaluating completed transactions), present patterns (tracking current relationships), and potential patterns (conducting inquiries in the hope of unveiling future opportunities). Data are collected at or about touch points through such methods as surveys, interviews, focus groups, and online forums. Companies need to involve every function in the effort, not just a single customer-facing group. The authors go on to illustrate how a cross-functional CEM system is created. With such a system, companies can discover which customers are prospects for growth and which require immediate intervention.", "title": "" }, { "docid": "2ca724447a205918e8d96a493ae55147", "text": "As the previous chapters emphasized, the human cognition—and the technology necessary to support it—are central to Cyber Situational Awareness. Therefore, this chapter focuses on challenges and approaches to integration of information technology and computational representations of human situation awareness. To illustrate these aspects of CSA, the chapter uses the process of intrusion detection as a key example. We argue that effective development of technologies and processes that produce CAS in a way properly aligned with human cognition calls for cognitive models—dynamic and adaptable computational representations of the cognitive structures and mechanisms involved in developing SA and processing information for decision making. While visualization and machine learning are often seen among the key approaches to enhancing CSA, we point out a number of limitations in their current state of development and applications to CSA. 
The current knowledge gaps in our understanding of cognitive demands in CSA include the lack of a theoretical model of cyber SA within a cognitive architecture; the decision gap, representing learning, experience and dynamic decision making in the cyberspace; and the", "title": "" }, { "docid": "1fa056e87c10811b38277d161c81c2ac", "text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.", "title": "" }, { "docid": "502868464f3d83ef19783cff59e3fb7e", "text": "Most image dehazing algorithms require, for their operation, the atmospheric light vector, A, which describes the ambient light in the scene. Existing methods either rely on user input or follow error-prone assumptions such as the gray-world assumption. In this paper we present a new automatic method for recovering the atmospheric light vector in hazy scenes given a single input image. The method first recovers the vector's orientation, Â = A/∥A∥, by exploiting the abundance of small image patches in which the scene transmission and surface albedo are approximately constant. We derive a reduced formation model that describes the distribution of the pixels inside such patches as lines in RGB space and show how these lines are used for robustly extracting Â. We show that the magnitude of the atmospheric light vector,∥A∥, cannot be recovered using patches of constant transmission. We also show that errors in its estimation results in dehazed images that suffer from brightness biases that depend on the transmission level. This dependency implies that the biases are highly-correlated with the scene and are therefore hard to detect via local image analysis. We address this challenging problem by exploiting a global regularity which we observe in hazy images where the intensity level of the brightest pixels is approximately independent of their transmission value. To exploit this property we derive an analytic expression for the dependence that a wrong magnitude introduces and recover ∥A∥ by minimizing this particular type of dependence. We validate the assumptions of our method through a number of experiments as well as evaluate the expected accuracy at which our procedure estimates A as function of the transmission in the scene. Results show a more successful recovery of the atmospheric light vector compared to existing procedures.", "title": "" }, { "docid": "7ef2f4a771aa0d1724127c97aa21e1ea", "text": "This paper demonstrates the efficient use of Internet of Things for the traditional agriculture. It shows the use of Arduino and ESP8266 based monitored and controlled smart irrigation systems, which is also cost-effective and simple. It is beneficial for farmers to irrigate there land conveniently by the application of automatic irrigation system. This smart irrigation system has pH sensor, water flow sensor, temperature sensor and soil moisture sensor that measure respectively and based on these sensors arduino microcontroller drives the servo motor and pump. 
Arduino received the information and transmitted with ESP8266 Wi-Fi module wirelessly to the website through internet. This transmitted information is monitor and control by using IOT. This enables the remote control mechanism through a secure internet web connection to the user. A website has been prepared which present the actual time values and reference values of various factors needed by crops. Users can control water pumps and sprinklers through the website and keep an eye on the reference values which will help the farmer increase production with quality crops.", "title": "" }, { "docid": "1241bc6b7d3522fe9e285ae843976524", "text": "In many new high performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or even higher percentage of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including use of multiple-threshold cells, long channel devices, input vector design, transistor stacking to switching noise, and sizing with simultaneous threshold and supply voltage assignment.", "title": "" }, { "docid": "f4a703793623890b59a8f7471fc49d0e", "text": "The authors investigate the interplay between answer quality and answer speed across question types in community question-answering sites (CQAs). The research questions addressed are the following: (a) How do answer quality and answer speed vary across question types? (b) How do the relationships between answer quality and answer speed vary across question types? (c) How do the best quality answers and the fastest answers differ in terms of answer quality and answer speed across question types? (d) How do trends in answer quality vary over time across question types? From the posting of 3,000 questions in six CQAs, 5,356 answers were harvested and analyzed. There was a significant difference in answer quality and answer speed across question types, and there were generally no significant relationships between answer quality and answer speed. The best quality answers had better overall answer quality than the fastest answers but generally took longer to arrive. In addition, although the trend in answer quality had been mostly random across all question types, the quality of answers appeared to improve gradually when given time. By highlighting the subtle nuances in answer quality and answer speed across question types, this study is an attempt to explore a territory of CQA research that has hitherto been relatively uncharted.", "title": "" }, { "docid": "672c235a95be02e7bfea7eccca9629e2", "text": "The treatment of rigid and severe scoliosis and kyphoscoliosis is a surgical challenge. 
Presurgical halo-gravity traction (HGT) achieves an increase in curve flexibility, a reduction in neurologic risks through gradual traction on a chronically tethered cord and an improvement in preoperative pulmonary function. However, little is known with respect to the ideal indications for HGT, its appropriate duration, or its efficacy in the treatment of rigid deformities. To investigate the use of HGT in severe deformities, we performed a retrospective review of 45 patients who had severe and rigid scoliosis or kyphoscoliosis. The analysis focused on the impact of HGT on curve flexibility, pulmonary function tests (PFTs), complications and surgical outcomes in a single spine centre. PFTs were used to assess the predicted forced vital capacity (FVC%). The mean age of the sample was 24 ± 14 years. 39 patients had rigid kyphoscoliosis, and 6 had scoliosis. The mean apical rotation was 3.6° ± 1.4°, according to the Nash and Moe grading system. The curve apices were mainly in the thoracic spine. HGT was used preoperatively in all the patients. The mean preoperative scoliosis was 106.1° ± 34.5°, and the mean kyphosis was 90.7° ± 29.7°. The instrumentation used included hybrids and pedicle screw-based constructs. In 18 patients (40%), a posterior concave thoracoplasty was performed. Preoperative PFT data were obtained for all the patients, and 24 patients had ≥3 assessments during the HGT. The difference between the first and the final PFTs during the HGT averaged 7.0 ± 8.2% (p < .001). Concerning the evolution of pulmonary function, 30 patients had complete data sets, with the final PFT performed, on average, 24 months after the index surgery. The mean preoperative FVC% in these patients was 47.2 ± 18%, and the FVC% at follow-up was 44.5 ± 17% (a difference that did not reach statistical significance). The preoperative FVC% was highly predictive of the follow-up FVC% and the response during HGT. The mean flexibility of the scoliosis curve during HGT was only 14.8 ± 11.4%, which was not significantly different from the flexibility measures achieved on bending radiographs or Cotrel traction radiographs. In rigid curves, the Cobb angle difference between the first and final radiographs during HGT was only 8° ± 9° for scoliosis and 7° ± 12° for kyphosis. Concerning surgical outcomes, 13 patients (28.9%) experienced minor and 15 (33.3%) experienced major complications. No permanent neurologic deficits or deaths occurred. Additional surgery was indicated in 12 patients (26.7%), including 7 rib-hump resections. At the final evaluation, 69% of the patients had improved coronal balance, and at a mean follow-up of 33 ± 23.3 months, 39 patients (86.7%) were either satisfied or very satisfied with the overall outcome. The improvement of pulmonary function and the restoration of sagittal and coronal balance are the main goals in the treatment of severe and rigid scoliosis and kyphoscoliosis. A review of the literature showed that HGT is a useful tool for selected patients. Preoperative HGT is indicated in severe curves with moderate to severe pulmonary compromise. HGT should not be expected to significantly improve severe curves without a prior anterior and/or posterior release. 
The data presented in this study can be used in future studies to compare the surgical and pulmonary outcomes of severe and rigid deformities.", "title": "" }, { "docid": "c16499b3945603d04cf88fec7a2c0a85", "text": "Recovering structure and motion parameters given a image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.", "title": "" }, { "docid": "fc1c3291c631562a6d1b34d5b5ccd27e", "text": "There are many methods for making a multicast protocol “reliable.” At one end of the spectrum, a reliable multicast protocol might offer tomicity guarantees, such as all-or-nothing delivery, delivery ordering, and perhaps additional properties such as virtually synchronous addressing. At the other are protocols that use local repair to overcome transient packet loss in the network, offering “best effort” reliability. Yet none of this prior work has treated stability of multicast delivery as a basic reliability property, such as might be needed in an internet radio, television, or conferencing application. This article looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a “bimodal multicast” in reference to its reliability model, which corresponds to a family of bimodal probability distributions. Here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. These confirm that bimodal multicast is reliable, scalable, and that the protocol provides remarkably stable delivery throughput.", "title": "" }, { "docid": "04fe2706a8da54365e4125867613748b", "text": "We consider a sequence of multinomial data for which the probabilities associated with the categories are subject to abrupt changes of unknown magnitudes at unknown locations. When the number of categories is comparable to or even larger than the number of subjects allocated to these categories, conventional methods such as the classical Pearson’s chi-squared test and the deviance test may not work well. Motivated by high-dimensional homogeneity tests, we propose a novel change-point detection procedure that allows the number of categories to tend to infinity. The null distribution of our test statistic is asymptotically normal and the test performs well with finite samples. The number of change-points is determined by minimizing a penalized objective function based on segmentation, and the locations of the change-points are estimated by minimizing the objective function with the dynamic programming algorithm. 
Under some mild conditions, the consistency of the estimators of multiple change-points is established. Simulation studies show that the proposed method performs satisfactorily for identifying change-points in terms of power and estimation accuracy, and it is illustrated with an analysis of a real data set.", "title": "" } ]
scidocsrr
a478b22dc545038859338408b3680625
Single Image Super-resolution via a Lightweight Residual Convolutional Neural Network
[ { "docid": "fb1c9fcea2f650197b79711606d4678b", "text": "Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.", "title": "" } ]
[ { "docid": "0b6c8d79180a4a17d4da661d6ab0b983", "text": "The online social media such as Facebook, Twitter and YouTube has been used extensively during disaster and emergency situation. Despite the advantages offered by these services on supplying information in vague situation by citizen, we raised the issue of spreading misinformation on Twitter by using retweets. Accordingly, in this study, we conduct a user survey (n = 133) to investigate what is the user’s action towards spread message in Twitter, and why user decide to perform retweet on the spread message. As the result of the factor analyses, we extracted 3 factors on user’s action towards spread message which are: 1) Desire to spread the retweet messages as it is considered important, 2) Mark the retweet messages as favorite using Twitter “Favorite” function, and 3) Search for further information about the content of the retweet messages. Then, we further analyze why user decides to perform retweet. The results reveal that user has desire to spread the message which they think is important and the reason why they retweet it is because of the need to retweet, interesting tweet content and the tweet user. The results presented in this paper provide an understanding on user behavior of information diffusion, with the aim to reduce the spread of misinformation using Twitter during emergency situation.", "title": "" }, { "docid": "66805d6819e3c4b5f7c71b7a851c7371", "text": "We consider classification of email messages as to whether or not they contain certain \"email acts\", such as a request or a commitment. We show that exploiting the sequential correlation among email messages in the same thread can improve email-act classification. More specifically, we describe a new text-classification algorithm based on a dependency-network based collective classification method, in which the local classifiers are maximum entropy models based on words and certain relational features. We show that statistically significant improvements over a bag-of-words baseline classifier can be obtained for some, but not all, email-act classes. Performance improvements obtained by collective classification appears to be consistent across many email acts suggested by prior speech-act theory.", "title": "" }, { "docid": "c7fd1c565c5d08a69adb328886251899", "text": "In this paper, a novel PWM dimming solution with the optimal trajectory control for a multichannel constant current (MC3) LLC resonant LED driver is proposed. When PWM dimming is on, the LLC resonant converter operates under the full-load condition. The LED intensity is controlled by the ratio between the on-time and off-time of the PWM dimming signal. To eliminate the dynamic oscillations when the MC3 LLC starts to work from the idle status, the switching pattern is optimized based on the graphic state-trajectory analysis. Thus, the full-load steady state is tracked within the minimum time. Moreover, under low dimming conditions, the LED intensity can be controlled more precisely. Finally, the optimal PWM dimming approach is verified on a 200 W, 4-channel MC3 LLC LED driver prototype.", "title": "" }, { "docid": "14e8006ae1fc0d97e737ff2a5a4d98dd", "text": "Building dialogue systems that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. 
In open-domain human-computer conversation, where the conversational agent is expected to respond to human utterances in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialogue. Our model represents the first attempt to integrating a large commonsense knowledge base into end-toend conversational models. In the retrieval-based scenario, we propose a model to jointly take into account message content and related commonsense for selecting an appropriate response. Our experiments suggest that the knowledgeaugmented models are superior to their knowledge-free counterparts.", "title": "" }, { "docid": "a33348ee1396be9be333eb3be8dadb39", "text": "In the multi-MHz low voltage, high current applications, Synchronous Rectification (SR) is strongly needed due to the forward recovery and the high conduction loss of the rectifier diodes. This paper applies the SR technique to a 10-MHz isolated class-Φ2 resonant converter and proposes a self-driven level-shifted Resonant Gate Driver (RGD) for the SR FET. The proposed RGD can reduce the average on-state resistance and the associated conduction loss of the MOSFET. It also provides precise switching timing for the SR so that the body diode conduction time of the SR FET can be minimized. A 10-MHz prototype with 18 V input, 5 V/2 A output was built to verify the advantage of the SR with the proposed RGD. At full load of 2 A, the SR with the proposed RGD improves the converter efficiency from 80.2% using the SR with the conventional RGD to 82% (an improvement of 1.8%). Compared to the efficiency of 77.3% using the diode rectification, the efficiency improvement is 4.7%.", "title": "" }, { "docid": "96bc9c8fa154d8e6cc7d0486c99b43d5", "text": "A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output. In the ideal case such structures achieve a voltage gain which equals the number of transmission lines used. To achieve maximum efficiency, mismatch and secondary modes must be suppressed. Here we describe a TLT based on parallel plate transmission lines. The chosen geometry results in a high efficiency, due to good matching and minimized secondary modes. A second advantage of this design is that the electric field strength between the conductors is the same throughout the entire TLT. This makes the design suitable for high voltage applications. To investigate the concept of this TLT design, measurements are done on two different TLT designs. One TLT consists of 4 transmission lines, while the other one has 8 lines. Both designs are constructed of DiBond™. This material consists of a flat polyethylene inner core with an aluminum sheet on both sides. Both TLT's have an input impedance of 3.125 Ω. Their output impedances are 50 and 200 Ω, respectively. The measurements show that, on a matched load, this structure achieves a voltage gain factor of 3.9 when using 4 transmission lines and 7.9 when using 8 lines.", "title": "" }, { "docid": "daaa048824f1fa8303a2f4ac95301ccc", "text": "The Internet of Things (IoT) represents a diverse technology and usage with unprecedented business opportunities and risks. 
The Internet of Things is changing the dynamics of security industry & reshaping it. It allows data to be transferred seamlessly among physical devices to the Internet. The growth of number of intelligent devices will create a network rich with information that allows supply chains to assemble and communicate in new ways. The technology research firm Gartner predicts that there will be 26 billion installed units on the Internet of Things (IoT) by 2020[1]. This paper explains the concept of Internet of Things (IoT), its characteristics, explain security challenges, technology adoption trends & suggests a reference architecture for E-commerce enterprise.", "title": "" }, { "docid": "d5b4ba8e3491f4759924be4ceee8f418", "text": "Researchers and practitioners have long regarded procrastination as a self-handicapping and dysfunctional behavior. In the present study, the authors proposed that not all procrastination behaviors either are harmful or lead to negative consequences. Specifically, the authors differentiated two types of procrastinators: passive procrastinators versus active procrastinators. Passive procrastinators are procrastinators in the traditional sense. They are paralyzed by their indecision to act and fail to complete tasks on time. In contrast, active procrastinators are a \"positive\" type of procrastinator. They prefer to work under pressure, and they make deliberate decisions to procrastinate. The present results showed that although active procrastinators procrastinate to the same degree as passive procrastinators, they are more similar to nonprocrastinators than to passive procrastinators in terms of purposive use of time, control of time, self-efficacy belief, coping styles, and outcomes including academic performance. The present findings offer a more sophisticated understanding of procrastination behavior and indicate a need to reevaluate its implications for outcomes of individuals.", "title": "" }, { "docid": "6b6b4de917de527351939c3493581275", "text": "Several studies have used the Edinburgh Postnatal Depression Scale (EPDS), developed to screen new mothers, also for new fathers. This study aimed to further contribute to this knowledge by comparing assessment of possible depression in fathers and associated demographic factors by the EPDS and the Gotland Male Depression Scale (GMDS), developed for \"male\" depression screening. The study compared EPDS score ≥10 and ≥12, corresponding to minor and major depression, respectively, in relation to GMDS score ≥13. At 3-6 months after child birth, a questionnaire was sent to 8,011 fathers of whom 3,656 (46%) responded. The detection of possibly depressed fathers by EPDS was 8.1% at score ≥12, comparable to the 8.6% detected by the GMDS. At score ≥10, the proportion detected by EPDS increased to 13.3%. Associations with possible risk factors were analyzed for fathers detected by one or both scales. A low income was associated with depression in all groups. Fathers detected by EPDS alone were at higher risk if they had three or more children, or lower education. Fathers detected by EPDS alone at score ≥10, or by both scales at EPDS score ≥12, more often were born in a foreign country. Seemingly, the EPDS and the GMDS are associated with different demographic risk factors. The EPDS score appears critical since 5% of possibly depressed fathers are excluded at EPDS cutoff 12. 
These results suggest that neither scale alone is sufficient for depression screening in new fathers, and that the decision of EPDS cutoff is crucial.", "title": "" }, { "docid": "49575576bc5a0b949c81b0275cbc5f41", "text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.", "title": "" }, { "docid": "bd4ac40e4b9016f6b969ac9b8bfedc15", "text": "The Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol used to exchange reachability information between Autonomous Systems in the global Internet. BGP is a path-vector protocol that allows each Autonomous System to override distance-based metrics with policy-based metrics when choosing best routes. Varadhan et al. [18] have shown that it is possible for a group of Autonomous Systems to independently define BGP policies that together lead to BGP protocol oscillations that never converge on a stable routing. One approach to addressing this problem is based on static analysis of routing policies to determine if they are safe. We explore the worst-case complexity for convergence-oriented static analysis of BGP routing policies. We present an abstract model of BGP and use it to define several global sanity conditions on routing policies that are related to BGP convergence/divergence. For each condition we show that the complexity of statically checking it is either NP-complete or NP-hard.", "title": "" }, { "docid": "3bdc67068b0726904c161cdc763fb6a8", "text": "Vibrant online communities are in constant flux. As members join and depart, the interactional norms evolve, stimulating further changes to the membership and its social dynamics. Linguistic change --- in the sense of innovation that becomes accepted as the norm --- is essential to this dynamic process: it both facilitates individual expression and fosters the emergence of a collective identity.\n We propose a framework for tracking linguistic change as it happens and for understanding how specific users react to these evolving norms. By applying this framework to two large online communities we show that users follow a determined two-stage lifecycle with respect to their susceptibility to linguistic change: a linguistically innovative learning phase in which users adopt the language of the community followed by a conservative phase in which users stop changing and the evolving community norms pass them by.\n Building on this observation, we show how this framework can be used to detect, early in a user's career, how long she will stay active in the community. 
Thus, this work has practical significance for those who design and maintain online communities. It also yields new theoretical insights into the evolution of linguistic norms and the complex interplay between community-level and individual-level linguistic change.", "title": "" }, { "docid": "7e982cdcc53f63b2cf04d0409409afc4", "text": "Progress over the past decades in proton-conducting materials has generated a variety of polyelectrolytes and microporous polymers. However, most studies are still based on a preconception that large pores eventually cause simply flow of proton carriers rather than efficient conduction of proton ions, which precludes the exploration of large-pore polymers for proton transport. Here, we demonstrate proton conduction across mesoporous channels in a crystalline covalent organic framework. The frameworks are designed to constitute hexagonally aligned, dense, mesoporous channels that allow for loading of N-heterocyclic proton carriers. The frameworks achieve proton conductivities that are 2-4 orders of magnitude higher than those of microporous and non-porous polymers. Temperature-dependent and isotopic experiments revealed that the proton transport in these channels is controlled by a low-energy-barrier hopping mechanism. Our results reveal a platform based on porous covalent organic frameworks for proton conduction.", "title": "" }, { "docid": "28c03f6fb14ed3b7d023d0983cb1e12b", "text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "title": "" }, { "docid": "835fd7a4410590a3d848222eb3159aeb", "text": "Modularity in organizations can facilitate the creation and development of dynamic capabilities. Paradoxically, however, modular management can also stifle the strategic potential of such capabilities by conflicting with the horizontal integration of units. We address these issues through an examination of how modular management of information technology (IT), project teams and front-line personnel in concert with knowledge management (KM) interventions influence the creation and development of dynamic capabilities at a large Asia-based call center. Our findings suggest that a full capitalization of the efficiencies created by modularity may be closely linked to the strategic sense making abilities of senior managers to assess the long-term business value of the dominant designs available in the market. 
Drawing on our analysis we build a modular management-KM-dynamic capabilities model, which highlights the evolution of three different levels of dynamic capabilities and also suggests an inherent complementarity between modular and integrated approaches. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4096499f4e34f6c1f0c3bb0bb63fb748", "text": "A detailed examination of evolving traffic characteristics, operator requirements, and network technology trends suggests a move away from nonblocking interconnects in data center networks (DCNs). As a result, recent efforts have advocated oversubscribed networks with the capability to adapt to traffic requirements on-demand. In this paper, we present the design, implementation, and evaluation of OSA, a novel Optical Switching Architecture for DCNs. Leveraging runtime reconfigurable optical devices, OSA dynamically changes its topology and link capacities, thereby achieving unprecedented flexibility to adapt to dynamic traffic patterns. Extensive analytical simulations using both real and synthetic traffic patterns demonstrate that OSA can deliver high bisection bandwidth (60%-100% of the nonblocking architecture). Implementation and evaluation of a small-scale functional prototype further demonstrate the feasibility of OSA.", "title": "" }, { "docid": "809392d489af5e1f8e85a9ad8a8ba9e0", "text": "Although a large number of ion channels are now believed to be regulated by phosphoinositides, particularly phosphoinositide 4,5-bisphosphate (PIP2), the mechanisms involved in phosphoinositide regulation are unclear. For the TRP superfamily of ion channels, the role and mechanism of PIP2 modulation has been especially difficult to resolve. Outstanding questions include: is PIP2 the endogenous regulatory lipid; does PIP2 potentiate all TRPs or are some TRPs inhibited by PIP2; where does PIP2 interact with TRP channels; and is the mechanism of modulation conserved among disparate subfamilies? We first addressed whether the PIP2 sensor resides within the primary sequence of the channel itself, or, as recently proposed, within an accessory integral membrane protein called Pirt. Here we show that Pirt does not alter the phosphoinositide sensitivity of TRPV1 in HEK-293 cells, that there is no FRET between TRPV1 and Pirt, and that dissociated dorsal root ganglion neurons from Pirt knock-out mice have an apparent affinity for PIP2 indistinguishable from that of their wild-type littermates. We followed by focusing on the role of the C terminus of TRPV1 in sensing PIP2. Here, we show that the distal C-terminal region is not required for PIP2 regulation, as PIP2 activation remains intact in channels in which the distal C-terminal has been truncated. Furthermore, we used a novel in vitro binding assay to demonstrate that the proximal C-terminal region of TRPV1 is sufficient for PIP2 binding. Together, our data suggest that the proximal C-terminal region of TRPV1 can interact directly with PIP2 and may play a key role in PIP2 regulation of the channel.", "title": "" }, { "docid": "34118709a36ba09a822202753cbff535", "text": "Our healthcare sector daily collects a huge data including clinical examination, vital parameters, investigation reports, treatment follow-up and drug decisions etc. But very unfortunately it is not analyzed and mined in an appropriate way. 
The Health care industry collects the huge amounts of health care data which unfortunately are not “mined” to discover hidden information for effective decision making for health care practitioners. Data mining refers to using a variety of techniques to identify suggest of information or decision making knowledge in database and extracting these in a way that they can put to use in areas such as decision support , Clustering ,Classification and Prediction. This paper has developed a Computer-Based Clinical Decision Support System for Prediction of Heart Diseases (CCDSS) using Naïve Bayes data mining algorithm. CCDSS can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, spO2,chest pain type, heart rate, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. CCDSS is Webbased, user-friendly, scalable, reliable and expandable. It is implemented on the PHPplatform. Keywords—Computer-Based Clinical Decision Support System(CCDSS), Heart disease, Data mining, Naïve Bayes.", "title": "" }, { "docid": "ebf0152b3138ecd1e2a54f52a5a02462", "text": "Organizations are increasingly interested in classifying texts or parts thereof into categories, as this enables more effective use of their information. Manual procedures for text classification work well for up to a few hundred documents. However, when the number of documents is larger, manual procedures become laborious, time-consuming, and potentially unreliable. Techniques from text mining facilitate the automatic assignment of text strings to categories, making classification expedient, fast, and reliable, which creates potential for its application in organizational research. The purpose of this article is to familiarize organizational researchers with text mining techniques from machine learning and statistics. We describe the text classification process in several roughly sequential steps, namely training data preparation, preprocessing, transformation, application of classification techniques, and validation, and provide concrete recommendations at each step. To help researchers develop their own text classifiers, the R code associated with each step is presented in a tutorial. The tutorial draws from our own work on job vacancy mining. We end the article by discussing how researchers can validate a text classification model and the associated output.", "title": "" }, { "docid": "99ffaa3f845db7b71a6d1cbc62894861", "text": "There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade and dedicated to documents of historical interest.", "title": "" } ]
scidocsrr
27f27d042745d9ad327fc7818105e44d
High performance GaN single-chip frontend for compact X-band AESA systems
[ { "docid": "f4d1a3530cb84b2efa9d5a2a63e66d2f", "text": "Gallium-Nitride technology is known for its high power density and power amplifier designs, but is also very well suited to realize robust receiver components. This paper presents the design and measurement of a robust AlGaN/GaN Low Noise Amplifier and Transmit/Receive Switch MMIC. Two versions of both MMICs have been designed in the Alcatel-Thales III-V lab AlGaN/GaN microstrip technology. One chipset version operates at X-band and the second also shows wideband performance. Input power handling of >46 dBm for the switch and >41 dBm for the LNA have been measured.", "title": "" } ]
[ { "docid": "f384b2db44cc662336096d691cabd80c", "text": "OBJECTIVES\nWe compare positioning with orthotic therapy in 298 consecutive infants referred for correction of head asymmetry.\n\n\nSTUDY DESIGN\nWe evaluated 176 infants treated with repositioning, 159 treated with helmets, and 37 treated with initial repositioning followed by helmet therapy when treatment failed. We compared reductions in diagonal difference (RDD) between repositioning and cranial orthotic therapy. Helmets were routinely used for infants older than 6 months with DD >1 cm.\n\n\nRESULTS\nFor infants treated with repositioning at a mean age of 4.8 months, the mean RDD was 0.55 cm (from an initial mean DD of 1.05 cm). For infants treated with cranial orthotics at a mean age of 6.6 months, the mean RDD was 0.71 cm (from an initial mean DD of 1.13 cm).\n\n\nCONCLUSIONS\nInfants treated with orthotics were older and required a longer length of treatment (4.2 vs 3.5 months). Infants treated with orthosis had a mean final DD closer to the DD in unaffected infants (0.3 +/- 0.1 cm), orthotic therapy was more effective than repositioning (61% decrease versus 52% decrease in DD), and early orthosis was significantly more effective than later orthosis (65% decrease versus 51% decrease in DD).", "title": "" }, { "docid": "f2ae9fda60bd3aaf0f60682f8ca12b55", "text": "This paper describes an investigation of a high-voltage spark gap in the conditions of subnanosecond switching. The high-voltage pulsed generator “Sinus” is used to charge a coaxial line loaded to a high-pressure spark gap. Typical charging time for the coaxial line is in a range from 1 to 2 ns, maximum charging voltage is up to 250 kV, and a range of pressures for the gap is from 2 to 9 MPa. The theoretical models of the switching process on subnanosecond time scale are examined. A general equation for temporal behavior of the gap voltage, which is applicable for the avalanche model and the Rompe-Weitzel model has been obtained. It is revealed that the approach based on the avalanche model offers a possibility to describe the switching process only at extremely high overvoltages. The Rompe-Weitzel model demonstrates a good agreement with the experimental data both for the conditions of static breakdown and for the regime of a high overvolted gap. Comparison of experimental and calculated voltage waveforms has made it possible to estimate an empirical constant in the Rompe-Weitzel model (the price of ionization). This constant is varied from 420 to 920 eV, depending on the initial electric field in the gap.", "title": "" }, { "docid": "034cff2d59e2ad3926530e6f0cb8cd71", "text": "Aim of the study: To find out the various clinical types of cutaneous warts in immuno-competent and immuno-compromised patients (human immuno defiency virus infected, renal transplant recipients), and its histopathological aspects. Materials and Methods: This study was conducted in dermatology department of a tertiary hospital and a total of three hundred patients were screened for this study in each groups. Detailed clinical history and examination, skin biopsy, HIV status, immunosuppressant used and period of follow up since transplant (for renal transplant patients) were taken for all the patients. Results: Among the 300 cases in immuno-competent patients, verruca vulgaris was observed in 157 (52%), plantar warts in 57 (19%), anogenital warts in 42 (14%) and plane warts in 21 (7%) respectively. 
Among the 300 HIV patients, ano-genital warts were observed in 35 (11.6%) and palmo-plantar warts in 1 patient respectively. Among the 300 patients screened in renal transplant patients, verruca vulgaris was seen in 4 (0.01%) patients, followed by 1(0.003%) plane wart and 1 (0.003%) filiform wart. Conclusion: Verruca vulgaris was the most common type in immuno-competent patients and ano-genital warts in immuno-compromised patients. In renal transplant recipients, the decreased incidence of warts in this study may be due to the difference in skin type, immunosuppressant used, and geographical location which needs further evaluation in a", "title": "" }, { "docid": "93f1ee5523f738ab861bcce86d4fc906", "text": "Semantic role labeling (SRL) is one of the basic natural language processing (NLP) problems. To this date, most of the successful SRL systems were built on top of some form of parsing results (Koomen et al., 2005; Palmer et al., 2010; Pradhan et al., 2013), where pre-defined feature templates over the syntactic structure are used. The attempts of building an end-to-end SRL learning system without using parsing were less successful (Collobert et al., 2011). In this work, we propose to use deep bi-directional recurrent network as an end-to-end system for SRL. We take only original text information as input feature, without using any syntactic knowledge. The proposed algorithm for semantic role labeling was mainly evaluated on CoNLL-2005 shared task and achieved F1 score of 81.07. This result outperforms the previous state-of-the-art system from the combination of different parsing trees or models. We also obtained the same conclusion with F1 = 81.27 on CoNLL2012 shared task. As a result of simplicity, our model is also computationally efficient that the parsing speed is 6.7k tokens per second. Our analysis shows that our model is better at handling longer sentences than traditional models. And the latent variables of our model implicitly capture the syntactic structure of a sentence.", "title": "" }, { "docid": "cbc9e0641caea9af6d75a94de26e09df", "text": "At present, spatio-temporal action detection in the video is still a challenging problem, considering the complexity of the background, the variety of the action or the change of the viewpoint in the unconstrained environment. Most of current approaches solve the problem via a two-step processing: first detecting actions at each frame; then linking them, which neglects the continuity of the action and operates in an offline and batch processing manner. In this paper, we attempt to build an online action detection model that introduces the spatio-temporal coherence existed among action regions when performing action category inference and position localization. Specifically, we seek to represent the spatio-temporal context pattern via establishing an encoder-decoder model based on the convolutional recurrent network. The model accepts a video snippet as input and encodes the dynamic information of the action in the forward pass. During the backward pass, it resolves such information at each time instant for action detection via fusing the current static or motion cue. Additionally, we propose an incremental action tube generation algorithm, which accomplishes action bounding-boxes association, action label determination and the temporal trimming in a single pass. Our model takes in the appearance, motion or fused signals as input and is tested on two prevailing datasets, UCF-Sports and UCF-101. 
The experiment results demonstrate the effectiveness of our method which achieves a performance superior or comparable to compared existing approaches.", "title": "" }, { "docid": "4f64b2b2b50de044c671e3d0d434f466", "text": "Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performances , while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with a particular concern given to recent developments. It is conceived as a tutorial organizing in a comprehensive framework current approaches and practices. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior. Motion analysis is one of the main tasks of computer vision. From an applicative viewpoint, the information brought by the dynamical behavior of observed objects or by the movement of the camera itself is a decisive element for the interpretation of observed phenomena. The motion characterizations can be extremely variable among the large number of application domains. Indeed, one can be interested in tracking objects, quantifying deformations, retrieving dominant motion, detecting abnormal behaviors, and so on. The most low-level characterization is the estimation of a dense motion field, corresponding to the displacement of each pixel, which is called optical flow. Most high-level motion analysis tasks employ optical flow as a fundamental basis upon which more semantic interpretation is built. Optical flow estimation has given rise to a tremendous quantity of works for 35 years. If a certain continuity can be found since the seminal works of [120,170], a number of methodological innovations have progressively changed the field and improved performances. Evaluation benchmarks and applicative domains have followed this progress by proposing new challenges allowing methods to face more and more difficult situations in terms of motion discontinuities, large displacements, illumination changes or computational costs. Despite great advances, handling these issues in a unique method still remains an open problem. Comprehensive surveys of optical flow literature were carried out in the nineties [21,178,228]. More recently, reviewing works have focused on variational approaches [264], benchmark results [13], specific applications [115], or tutorials restricted to a certain subset of methods [177,260]. However, covering all the main estimation approaches and including recent developments in a comprehensive classification is still lacking in the optical flow field. This survey …", "title": "" }, { "docid": "ace2fa767a14ee32f596256ebdf9554f", "text": "Computing systems have steadily evolved into more complex, interconnected, heterogeneous entities. Ad-hoc techniques are most often used in designing them. Furthermore, researchers and designers from both academia and industry have focused on vertical approaches to emphasizing the advantages of one specific feature such as fault tolerance, security or performance. Such approaches led to very specialized computing systems and applications. 
Autonomic systems, as an alternative approach, can control and manage themselves automatically with minimal intervention by users or system administrators. This paper presents an autonomic framework in developing and implementing autonomic computing services and applications. Firstly, it shows how to apply this framework to autonomically manage the security of networks. Then an approach is presented to develop autonomic components from existing legacy components such as software modules/applications or hardware resources (router, processor, server, etc.). Experimental evaluation of the prototype shows that the system can be programmed dynamically to enable the components to operate autonomously.", "title": "" }, { "docid": "19cb14825c6654101af1101089b66e16", "text": "Critical infrastructures, such as power grids and transportation systems, are increasingly using open networks for operation. The use of open networks poses many challenges for control systems. The classical design of control systems takes into account modeling uncertainties as well as physical disturbances, providing a multitude of control design methods such as robust control, adaptive control, and stochastic control. With the growing level of integration of control systems with new information technologies, modern control systems face uncertainties not only from the physical world but also from the cybercomponents of the system. The vulnerabilities of the software deployed in the new control system infrastructure will expose the control system to many potential risks and threats from attackers. Exploitation of these vulnerabilities can lead to severe damage as has been reported in various news outlets [1], [2]. More recently, it has been reported in [3] and [4] that a computer worm, Stuxnet, was spread to target Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes.", "title": "" }, { "docid": "eb6675c6a37aa6839fa16fe5d5220cfb", "text": "In this paper, we propose an efficient method to detect the underlying structures in data. The same as RANSAC, we randomly sample MSSs (minimal size samples) and generate hypotheses. Instead of analyzing each hypothesis separately, the consensus information in all hypotheses is naturally fused into a hypergraph, called random consensus graph, with real structures corresponding to its dense subgraphs. The sampling process is essentially a progressive refinement procedure of the random consensus graph. Due to the huge number of hyperedges, it is generally inefficient to detect dense subgraphs on random consensus graphs. To overcome this issue, we construct a pairwise graph which approximately retains the dense subgraphs of the random consensus graph. The underlying structures are then revealed by detecting the dense subgraphs of the pair-wise graph. Since our method fuses information from all hypotheses, it can robustly detect structures even under a small number of MSSs. The graph framework enables our method to simultaneously discover multiple structures. Besides, our method is very efficient, and scales well for large scale problems. 
Extensive experiments illustrate the superiority of our proposed method over previous approaches, achieving several orders of magnitude speedup along with satisfactory accuracy and robustness.", "title": "" }, { "docid": "1d6e23fedc5fa51b5125b984e4741529", "text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.", "title": "" }, { "docid": "d196fad248811b1d3f7f8d4d11d3b83b", "text": "Recent developments in telecommunications have allowed drawing new paradigms, including the Internet of Everything, to provide services by the interconnection of different physical devices enabling the exchange of data to enrich and automate people’s daily activities; and Fog computing, which is an extension of the well-known Cloud computing, bringing tasks to the edge of the network exploiting characteristics such as lower latency, mobility support, and location awareness. Combining these paradigms opens a new set of possibilities for innovative services and applications; however, it also brings a new complex scenario that must be efficiently managed to properly fulfill the needs of the users. In this scenario, the Fog Orchestrator component is the key to coordinate the services in the middle of Cloud computing and Internet of Everything. In this paper, key challenges in the development of the Fog Orchestrator to support the Internet of Everything are identified, including how they affect the tasks that a Fog service Orchestrator should perform. Furthermore, different service Orchestrator architectures for the Fog are explored and analyzed in order to identify how the previously listed challenges are being tackled. Finally, a discussion about the open challenges, technological directions, and future of the research on this subject is presented.", "title": "" }, { "docid": "d2d134363fc993d68194e770c338b301", "text": "The demand for coal has been on the rise in modern society. With the number of opencast coal mines decreasing, it has become increasingly difficult to find coal. Low efficiencies and high casualty rates have always been problems in the process of coal exploration due to complicated geological structures in coal mining areas. 
Therefore, we propose a new exploration technology for coal that uses satellite images to explore and monitor opencast coal mining areas. First, we collected bituminous coal and lignite from the Shenhua opencast coal mine in China in addition to non-coal objects, including sandstones, soils, shales, marls, vegetation, coal gangues, water, and buildings. Second, we measured the spectral data of these objects through a spectrometer. Third, we proposed a multilayer extreme learning machine algorithm and constructed a coal classification model based on that algorithm and the spectral data. The model can assist in the classification of bituminous coal, lignite, and non-coal objects. Fourth, we collected Landsat 8 satellite images for the coal mining areas. We divided the image of the coal mine using the constructed model and correctly described the distributions of bituminous coal and lignite. Compared with the traditional coal exploration method, our method manifested an unparalleled advantage and application value in terms of its economy, speed, and accuracy.", "title": "" }, { "docid": "f2cdaf0198077253d9c0738cabab367a", "text": "In this paper, we present an approach for automatically creating a Combinatory Categorial Grammar (CCG) treebank from a dependency treebank for the Subject-Object-Verb language Hindi. Rather than a direct conversion from dependency trees to CCG trees, we propose a two stage approach: a language independent generic algorithm first extracts a CCG lexicon from the dependency treebank. A deterministic CCG parser then creates a treebank of CCG derivations. We also discuss special cases of this generic algorithm to handle linguistic phenomena specific to Hindi. In doing so we extract different constructions with long-range dependencies like coordinate constructions and non-projective dependencies resulting from constructions like relative clauses, noun elaboration and verbal modifiers.", "title": "" }, { "docid": "75abbacacec7a018fadf4829d1a3084d", "text": "BACKGROUND\nThe fertilizer use efficiency (FUE) of agricultural crops is generally low, which results in poor crop yields and low economic benefits to farmers. Among the various approaches used to enhance FUE, impregnation of mineral fertilizers with plant growth-promoting bacteria (PGPB) is attracting worldwide attention. The present study was aimed to improve growth, yield and nutrient use efficiency of wheat by bacterially impregnated mineral fertilizers.\n\n\nRESULTS\nResults of the pot study revealed that impregnation of diammonium phosphate (DAP) and urea with PGPB was helpful in enhancing the growth, yield, photosynthetic rate, nitrogen use efficiency (NUE) and phosphorus use efficiency (PUE) of wheat. However, the plants treated with F8 type DAP and urea, prepared by coating a slurry of PGPB (Bacillus sp. strain KAP6) and compost on DAP and urea granules at the rate of 2.0 g 100 g-1 fertilizer, produced better results than other fertilizer treatments. In this treatment, growth parameters including plant height, root length, straw yield and root biomass significantly (P ≤ 0.05) increased from 58.8 to 70.0 cm, 41.2 to 50.0 cm, 19.6 to 24.2 g per pot and 1.8 to 2.2 g per pot, respectively. The same treatment improved grain yield of wheat by 20% compared to unimpregnated DAP and urea (F0). 
Likewise, the maximum increase in photosynthetic rate, grain NP content, grain NP uptake, NUE and PUE of wheat were also recorded with F8 treatment.\n\n\nCONCLUSION\nThe results suggest that the application of bacterially impregnated DAP and urea is highly effective for improving growth, yield and FUE of wheat. © 2017 Society of Chemical Industry.", "title": "" }, { "docid": "aec14ffcc8e2f2cea1e00fd6f0a0d425", "text": "BACKGROUND\nOne of the reasons women with macromastia chose to undergo a breast reduction is to relieve their complaints of back, neck, and shoulder pain. We hypothesized that changes in posture after surgery may be the reason for the pain relief and that patient posture may correlate with symptomatic macromastia and may serve as an objective measure for complaints. The purpose of our study was to evaluate the effect of reduction mammaplasty on the posture of women with macromastia.\n\n\nMETHODS\nA prospective controlled study at a university medical center. Forty-two patients that underwent breast reduction were studied before surgery and an average of 4.3 years following surgery. Thirty-seven healthy women served as controls. Standardized lateral photos were taken. The inclination angle of the back was measured. Regression analysis was performed for the inclination angle.\n\n\nRESULTS\nPreoperatively, the mean inclination angle was 1.61 degrees ventrally; this diminished postoperatively to 0.72 degrees ventrally. This change was not significant (P-value=0.104). In the control group that angle was 0.28 degrees dorsally. Univariate regression analysis revealed that the inclination was dependent on body mass index (BMI) and having symptomatic macromastia; on multiple regression it was only dependent on BMI.\n\n\nCONCLUSIONS\nThe inclination angle of the back in breast reduction candidates is significantly different from that of controls; however, this difference is small and probably does not account for the symptoms associated with macromastia. Back inclination should not be used as a surrogate \"objective\" measure for symptomatic macromastia.", "title": "" }, { "docid": "4a8482a9567475267a9a1b3d3cd5a7f0", "text": "We present in this paper a simple, yet efficient convolutional neural network (CNN) architecture for robust audio event recognition. Opposing to deep CNN architectures with multiple convolutional and pooling layers topped up with multiple fully connected layers, the proposed network consists of only three layers: convolutional, pooling, and softmax layer. Two further features distinguish it from the deep architectures that have been proposed for the task: varying-size convolutional filters at the convolutional layer and 1-max pooling scheme at the pooling layer. In intuition, the network tends to select the most discriminative features from the whole audio signals for recognition. Our proposed CNN not only shows state-of-the-art performance on the standard task of robust audio event recognition but also outperforms other deep architectures up to 4.5% in terms of recognition accuracy, which is equivalent to 76.3% relative error reduction.", "title": "" }, { "docid": "98cebe058fccdf7ec799dfc95afd2e78", "text": "An intuitionistic fuzzy set, characterized by a membership function and a non-membership function, is a generalization of fuzzy set. 
In this paper, based on score function and accuracy function, we introduce a method for the comparison between two intuitionistic fuzzy values and then develop some aggregation operators, such as the intuitionistic fuzzy weighted averaging operator, intuitionistic fuzzy ordered weighted averaging operator, and intuitionistic fuzzy hybrid aggregation operator, for aggregating intuitionistic fuzzy values and establish various properties of these operators.", "title": "" }, { "docid": "a741a386cdbaf977468782c1971c8d86", "text": "There is a trend that, virtually everyone, ranging from big Web companies to traditional enterprisers to physical science researchers to social scientists, is either already experiencing or anticipating unprecedented growth in the amount of data available in their world, as well as new opportunities and great untapped value. This paper reviews big data challenges from a data management respective. In particular, we discuss big data diversity, big data reduction, big data integration and cleaning, big data indexing and query, and finally big data analysis and mining. Our survey gives a brief overview about big-data-oriented research and problems.", "title": "" }, { "docid": "0cd9577750b6195c584e55aac28cc2ba", "text": "The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, incentives are becoming as important to dependability as technical design. The new field provides valuable insights not just into ‘security’ topics such as privacy, bugs, spam, and phishing, but into more general areas such as system dependability (the design of peer-to-peer systems and the optimal balance of effort by programmers and testers), and policy (particularly digital rights management). This research program has been starting to spill over into more general security questions (such as law-enforcement strategy), and into the interface between security and sociology. Most recently it has started to interact with psychology, both through the psychology-and-economics tradition and in response to phishing. The promise of this research program is a novel framework for analyzing information security problems – one that is both principled and effective.", "title": "" } ]
scidocsrr
eaa855dcfd8fbda651d2287a26470b1b
SEMANTIC WEB MINING FOR INTELLIGENT WEB PERSONALIZATION
[ { "docid": "bed9bdf4d4965610b85378f2fdbfab2a", "text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.", "title": "" } ]
[ { "docid": "553d7f8c6b4c04349b65379e1e6cb0d8", "text": "Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well.", "title": "" }, { "docid": "b3a6bc6036376d33ef78896f21778a21", "text": "Document clustering has many important applications in the area of data mining and information retrieval. Many existing document clustering techniques use the “bag-of-words” model to represent the content of a document. However, this representation is only effective for grouping related documents when these documents share a large proportion of lexically equivalent terms. In other words, instances of synonymy between related documents are ignored, which can reduce the effectiveness of applications using a standard full-text document representation. To address this problem, we present a new approach for clustering scientific documents, based on the utilization of citation contexts. A citation context is essentially the text surrounding the reference markers used to refer to other scientific works. We hypothesize that citation contexts will provide relevant synonymous and related vocabulary which will help increase the effectiveness of the bag-of-words representation. In this paper, we investigate the power of these citation-specific word features, and compare them with the original document’s textual representation in a document clustering task on two collections of labeled scientific journal papers from two distinct domains: High Energy Physics and Genomics. We also compare these text-based clustering techniques with a link-based clustering algorithm which determines the similarity between documents based on the number of co-citations, that is in-links represented by citing documents and out-links represented by cited documents. Our experimental results indicate that the use of citation contexts, when combined with the vocabulary in the full-text of the document, is a promising alternative means of capturing critical topics covered by journal articles. More specifically, this document representation strategy when used by the clustering algorithm investigated in this paper, outperforms both the full-text clustering approach and the link-based clustering technique on both scientific journal datasets.", "title": "" }, { "docid": "1a86b10556b5e38823bbb1aadb5fb378", "text": "The advances in the field of machine learning using neuromorphic systems have paved the pathway for extensive research on possibilities of hardware implementations of neural networks. Various memristive technologies such as oxide-based devices, spintronics, and phase change materials have been explored to implement the core functional units of neuromorphic systems, namely the synaptic network, and the neuronal functionality, in a fast and energy efficient manner. 
However, various nonidealities in the crossbar implementations of the synaptic arrays can significantly degrade performance of neural networks, and hence, impose restrictions on feasible crossbar sizes. In this paper, we build mathematical models of various nonidealities that occur in crossbar implementations such as source resistance, neuron resistance, and chip-to-chip device variations and analyze their impact on the classification accuracy of a fully connected network (FCN) and convolutional neural network (CNN) trained with Backpropagation algorithm. We show that a network trained under ideal conditions can suffer accuracy degradation as large as 59.84% for FCNs and 62.4% for CNNs when implemented on nonideal crossbars for relevant nonideality ranges. This severely constrains the sizes for crossbars. As a solution, we propose a technology aware training algorithm, which incorporates the mathematical models of the nonidealities in the backpropagation algorithm. We demonstrate that our proposed methodology achieves significant recovery of testing accuracy within 1.9% of the ideal accuracy for FCNs and 1.5% for CNNs. We further show that our proposed training algorithm can potentially allow the use of significantly larger crossbar arrays of sizes 784 × 500 for FCNs and 4096 × 512 for CNNs with a minor or no tradeoff in accuracy.", "title": "" }, { "docid": "c194e9c91d4a921b42ddacfc1d5a214f", "text": "Smartphone applications' energy efficiency is vital, but many Android applications suffer from serious energy inefficiency problems. Locating these problems is labor-intensive and automated diagnosis is highly desirable. However, a key challenge is the lack of a decidable criterion that facilitates automated judgment of such energy problems. Our work aims to address this challenge. We conducted an in-depth study of 173 open-source and 229 commercial Android applications, and observed two common causes of energy problems: missing deactivation of sensors or wake locks, and cost-ineffective use of sensory data. With these findings, wepropose an automated approach to diagnosing energy problems in Android applications. Our approach explores an application's state space by systematically executing the application using Java PathFinder (JPF). It monitors sensor and wake lock operations to detect missing deactivation of sensors and wake locks. It also tracks the transformation and usage of sensory data and judges whether they are effectively utilized by the application using our state-sensitive data utilization metric. In this way, our approach can generate detailed reports with actionable information to assist developers in validating detected energy problems. We built our approach as a tool, GreenDroid, on top of JPF. Technically, we addressed the challenges of generating user interaction events and scheduling event handlers in extending JPF for analyzing Android applications. We evaluated GreenDroid using 13 real-world popular Android applications. GreenDroid completed energy efficiency diagnosis for these applications in a few minutes. It successfully located real energy problems in these applications, and additionally found new unreported energy problems that were later confirmed by developers.", "title": "" }, { "docid": "647d93e8fce72c5669dcc9cf7a9c255c", "text": "Scientific evidence based on neuroimaging approaches over the last decade has demonstrated the efficacy of physical activity improving cognitive health across the human lifespan. 
Aerobic fitness spares age-related loss of brain tissue during aging, and enhances functional aspects of higher order regions involved in the control of cognition. More active or higher fit individuals are capable of allocating greater attentional resources toward the environment and are able to process information more quickly. These data are suggestive that aerobic fitness enhances cognitive strategies enabling to respond effectively to an imposed challenge with a better yield in task performance. In turn, animal studies have shown that exercise has a benevolent action on health and plasticity of the nervous system. New evidence indicates that exercise exerts its effects on cognition by affecting molecular events related to the management of energy metabolism and synaptic plasticity. An important instigator in the molecular machinery stimulated by exercise is brain-derived neurotrophic factor, which acts at the interface of metabolism and plasticity. Recent studies show that exercise collaborates with other aspects of lifestyle to influence the molecular substrates of cognition. In particular, select dietary factors share similar mechanisms with exercise, and in some cases they can complement the action of exercise. Therefore, exercise and dietary management appear as a noninvasive and effective strategy to counteract neurological and cognitive disorders.", "title": "" }, { "docid": "ac8dd0134ce110e8a662f7f9ded9f5c0", "text": "In this paper, we present a data acquisition and analysis framework for materials-to-devices processes, named 4CeeD, that focuses on the immense potential of capturing, accurately curating, correlating, and coordinating materials-to-devices digital data in a real-time and trusted manner before fully archiving and publishing them for wide access and sharing. In particular, 4CeeD consists of novel services: a curation service for collecting data from microscopes and fabrication instruments, curating, and wrapping of data with extensive metadata in real-time and in a trusted manner, and a cloud-based coordination service for storing data, extracting meta-data, analyzing and finding correlations among the data. Our evaluation results show that our novel cloud framework can help researchers significantly save time and cost spent on experiments, and is efficient in dealing with high-volume and fast-changing workload of heterogeneous types of experimental data.", "title": "" }, { "docid": "c898f6186ff15dff41dcb7b3376b975d", "text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. 
Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.", "title": "" }, { "docid": "00801556f47ccd22804a81babd53dca7", "text": "BACKGROUND\nFood product reformulation is seen as one among several tools to promote healthier eating. Reformulating the recipe for a processed food, e.g. reducing the fat, sugar or salt content of the foods, or increasing the content of whole-grains, can help the consumers to pursue a healthier life style. In this study, we evaluate the effects on calorie sales of a 'silent' reformulation strategy, where a retail chain's private-label brands are reformulated to a lower energy density without making specific claims on the product.\n\n\nMETHODS\nUsing an ecological study design, we analyse 52 weeks' sales data - enriched with data on products' energy density - from a Danish retail chain. Sales of eight product categories were studied. Within each of these categories, specific products had been reformulated during the 52 weeks data period. Using econometric methods, we decompose the changes in calorie turnover and sales value into direct and indirect effects of product reformulation.\n\n\nRESULTS\nFor all considered products, the direct effect of product reformulation was a reduction in the sale of calories from the respective product categories - between 0.5 and 8.2%. In several cases, the reformulation led to indirect substitution effects that were counterproductive with regard to reducing calorie turnover. However, except in two insignificant cases, these indirect substitution effects were dominated by the direct effect of the reformulation, leading to net reductions in calorie sales between -3.1 and 7.5%. For all considered product reformulations, the reformulation had either positive, zero or very moderate negative effects on the sales value of the product category to which the reformulated product belonged.\n\n\nCONCLUSIONS\nBased on these findings, 'silent' reformulation of retailer's private brands towards lower energy density seems to contribute to lowering the calorie intake in the population (although to a moderate extent) with moderate losses in retailer's sales revenues.", "title": "" }, { "docid": "9493fa9f3749088462c1af7b34d9cfc9", "text": "Computer vision assisted diagnostic systems are gaining popularity in different healthcare applications. This paper presents a video analysis and pattern recognition framework for the automatic grading of vertical suspension tests on infants during the Hammersmith Infant Neurological Examination (HINE). The proposed vision-guided pipeline applies a color-based skin region segmentation procedure followed by the localization of body parts before feature extraction and classification. After constrained localization of lower body parts, a stick-diagram representation is used for extracting novel features that correspond to the motion dynamic characteristics of the infant's leg movements during HINE. This set of pose features generated from such a representation includes knee angles and distances between knees and hills. Finally, a time-series representation of the feature vector is used to train a Hidden Markov Model (HMM) for classifying the grades of the HINE tests into three predefined categories. Experiments are carried out by testing the proposed framework on a large number of vertical suspension test videos recorded at a Neuro-development clinic. 
The automatic grading results obtained from the proposed method matches the scores of experts at an accuracy of 74%.", "title": "" }, { "docid": "3ae6440666a5ea56dee2000991a50444", "text": "Flexible medical robots can improve surgical procedures by decreasing invasiveness and increasing accessibility within the body. Using preoperative images, these robots can be designed to optimize a procedure for a particular patient. To minimize invasiveness and maximize biocompatibility, the actuation units of flexible medical robots should be placed fully outside the patient's body. In this letter, we present a novel, compact, lightweight, modular actuation, and control system for driving a class of these flexible robots, known as concentric tube robots. A key feature of the design is the use of three-dimensional printed waffle gears to enable compact control of two degrees of freedom within each module. We measure the precision and accuracy of a single actuation module and demonstrate the ability of an integrated set of three actuation modules to control six degrees of freedom. The integrated system drives a three-tube concentric tube robot to reach a final tip position that is on average less than 2 mm from a given target. In addition, we show a handheld manifestation of the device and present its potential applications.", "title": "" }, { "docid": "3378680ac3eddfde464e1be5ee6986e6", "text": "Boundaries between formal and informal learning settings are shaped by influences beyond learners’ control. This can lead to the proscription of some familiar technologies that learners may like to use from some learning settings. This contested demarcation is not well documented. In this paper, we introduce the term ‘digital dissonance’ to describe this tension with respect to learners’ appropriation of Web 2.0 technologies in formal contexts. We present the results of a study that explores learners’ inand out-of-school use of Web 2.0 and related technologies. The study comprises two data sources: a questionnaire and a mapping activity. The contexts within which learners felt their technologies were appropriate or able to be used are also explored. Results of the study show that a sense of ‘digital dissonance’ occurs around learners’ experience of Web 2.0 activity in and out of school. Many learners routinely cross institutionally demarcated boundaries, but the implications of this activity are not well understood by institutions or indeed by learners themselves. More needs to be understood about the transferability of Web 2.0 skill sets and ways in which these can be used to support formal learning.", "title": "" }, { "docid": "fc431a3c46bdd4fa4ad83b9af10c0922", "text": "The importance of the kidney's role in glucose homeostasis has gained wider understanding in recent years. Consequently, the development of a new pharmacological class of anti-diabetes agents targeting the kidney has provided new treatment options for the management of type 2 diabetes mellitus (T2DM). Sodium glucose co-transporter type 2 (SGLT2) inhibitors, such as dapagliflozin, canagliflozin, and empagliflozin, decrease renal glucose reabsorption, which results in enhanced urinary glucose excretion and subsequent reductions in plasma glucose and glycosylated hemoglobin concentrations. Modest reductions in body weight and blood pressure have also been observed following treatment with SGLT2 inhibitors. 
SGLT2 inhibitors appear to be generally well tolerated, and have been used safely when given as monotherapy or in combination with other oral anti-diabetes agents and insulin. The risk of hypoglycemia is low with SGLT2 inhibitors. Typical adverse events appear to be related to the presence of glucose in the urine, namely genital mycotic infection and lower urinary tract infection, and are more often observed in women than in men. Data from long-term safety studies with SGLT2 inhibitors and from head-to-head SGLT2 inhibitor comparator studies are needed to fully determine their benefit-risk profile, and to identify any differences between individual agents. However, given current safety and efficacy data, SGLT2 inhibitors may present an attractive option for T2DM patients who are failing with metformin monotherapy, especially if weight is part of the underlying treatment consideration.", "title": "" }, { "docid": "d0bb735eadd569508827d9a55ff492f5", "text": "The emergence of social media has had a significant impact on how people communicate and socialize. Teens use social media to make and maintain social connections with friends and build their reputation. However, the way of analyzing the characteristics of teens in social media has mostly relied on ethnographic accounts or quantitative analyses with small datasets. This paper shows the possibility of detecting age information in user profiles by using a combination of textual and facial recognition methods and presents a comparative study of 27K teens and adults in Instagram. Our analysis highlights that (1) teens tend to post fewer photos but highly engage in adding more tags to their own photos and receiving more Likes and comments about their photos from others, and (2) to post more selfies and express themselves more than adults, showing a higher sense of self-representation. We demonstrate the application of our novel method that shows clear trends of age differences as well as substantiates previous insights in social media.", "title": "" }, { "docid": "2cd905573be23462b5768e2dcdf8847b", "text": "Identity verification is an increasingly important process in our daily lives. Whether we need to use our own equipment or to prove our identity to third parties in order to use services or gain access to physical places, we are constantly required to declare our identity and prove our claim. Traditional authentication methods fall into two categories: proving that you know something (i.e., password-based authentication) and proving that you own something (i.e., token-based authentication). These methods connect the identity with an alternate and less rich representation, for instance a password, that can be lost, stolen, or shared. A solution to these problems comes from biometric recognition systems. Biometrics offers a natural solution to the authentication problem, as it contributes to the construction of systems that can recognize people by the analysis of their anatomical and/or behavioral characteristics. With biometric systems, the representation of the identity is something that is directly derived from the subject, therefore it has properties that a surrogate representation, like a password or a token, simply cannot have (Jain et al. (2006; 2004); Prabhakar et al. (2003)). The strength of a biometric system is determined mainly by the trait that is used to verify the identity. Plenty of biometric traits have been studied and some of them, like fingerprint, iris and face, are nowadays used in widely deployed systems. 
Today, one of the most important research directions in the field of biometrics is the characterization of novel biometric traits that can be used in conjunction with other traits, to limit their shortcomings or to enhance their performance. The aim of this chapter is to introduce the reader to the usage of heart sounds for biometric recognition, describing the strengths and the weaknesses of this novel trait and analyzing in detail the methods developed so far and their performance. The usage of heart sounds as physiological biometric traits was first introduced in Beritelli & Serrano (2007), in which the authors proposed and started exploring this idea. Their system is based on the frequency analysis, by means of the Chirp z-Transform (CZT), of the sounds produced by the heart during the closure of the mitral tricuspid valve and during the closure of the aortic pulmonary valve. These sounds, called S1 and S2, are extracted from the input 11", "title": "" }, { "docid": "9070c149fba6467b1c9abd44865ad9f7", "text": "The World Wide Web has intensely evolved a novel way for people to express their views and opinions about different topics, trends and issues. The user-generated content present on different mediums such as internet forums, discussion groups, and blogs serves a concrete and substantial base for decision making in various fields such as advertising, political polls, scientific surveys, market prediction and business intelligence. Sentiment analysis relates to the problem of mining the sentiments from online available data and categorizing the opinion expressed by an author towards a particular entity into at most three preset categories: positive, negative and neutral. In this paper, firstly we present the sentiment analysis process to classify highly unstructured data on Twitter. Secondly, we discuss various techniques to carryout sentiment analysis on Twitter data in detail. Moreover, we present the parametric comparison of the discussed techniques based on our identified parameters.", "title": "" }, { "docid": "c428c35e7bd0a2043df26d5e2995f8eb", "text": "Cryptocurrencies like Bitcoin and the more recent Ethereum system allow users to specify scripts in transactions and contracts to support applications beyond simple cash transactions. In this work, we analyze the extent to which these systems can enforce the correct semantics of scripts. We show that when a script execution requires nontrivial computation effort, practical attacks exist which either waste miners' computational resources or lead miners to accept incorrect script results. These attacks drive miners to an ill-fated choice, which we call the verifier's dilemma, whereby rational miners are well-incentivized to accept unvalidated blockchains. We call the framework of computation through a scriptable cryptocurrency a consensus computer and develop a model that captures incentives for verifying computation in it. We propose a resolution to the verifier's dilemma which incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify. Finally we discuss two distinct, practical implementations of our consensus computer in real cryptocurrency networks like Ethereum.", "title": "" }, { "docid": "8b05f1d48e855580a8b0b91f316e89ab", "text": "The demand for improved service delivery requires new approaches and attitudes from local government. 
Implementation of knowledge sharing practices in local government is one of the critical processes that can help to establish learning organisations. The main purpose of this paper is to investigate how knowledge management systems can be used to improve the knowledge sharing culture among local government employees. The study used an inductive research approach which included a thorough literature review and content analysis. The technology-organisation-environment theory was used as the theoretical foundation of the study. Making use of critical success factors, the study advises how existing knowledge sharing practices can be supported and how new initiatives can be developed, making use of a knowledge management system. The study recommends that local government must ensure that knowledge sharing practices and initiatives are fully supported and promoted by top management.", "title": "" }, { "docid": "922c0a315751c90a11b018547f8027b2", "text": "We propose a model for the recently discovered Θ+ exotic KN resonance as a novel kind of a pentaquark with an unusual color structure: a 3c ud diquark, coupled to 3c uds̄ triquark in a relative P -wave. The state has J P = 1/2+, I = 0 and is an antidecuplet of SU(3)f . A rough mass estimate of this pentaquark is close to experiment.", "title": "" }, { "docid": "c858d0fd00e7cc0d5ee38c49446264f4", "text": "Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research. However, the majority of works aim to adopt and assess methods that have been shown to be effective in other domains, while there is still a great need for more original research focusing on music primarily and utilising musical knowledge and insight. The goal of this paper is to boost the interest of beginners by providing a comprehensive tutorial and reducing the barriers to entry into deep learning for MIR. We lay out the basic principles and review prominent works in this hard to navigate field. We then outline the network structures that have been successful in MIR problems and facilitate the selection of building blocks for the problems at hand. Finally, guidelines for new tasks and some advanced topics in deep learning are discussed to stimulate new research in this fascinating field.", "title": "" }, { "docid": "39be1d73b84872b0ae1d61bbd0fc96f8", "text": "Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.", "title": "" } ]
scidocsrr
f4a4ad1971056cb6f51932600c1196d1
AprilTag 2: Efficient and robust fiducial detection
[ { "docid": "3acb0ab9f20e1efece96a2414a9c9c8c", "text": "Artificial markers are successfully adopted to solve several vision tasks, ranging from tracking to calibration. While most designs share the same working principles, many specialized approaches exist to address specific application domains. Some are specially crafted to boost pose recovery accuracy. Others are made robust to occlusion or easy to detect with minimal computational resources. The sheer amount of approaches available in recent literature is indeed a statement to the fact that no silver bullet exists. Furthermore, this is also a hint to the level of scholarly interest that still characterizes this research topic. With this paper we try to add a novel option to the offer, by introducing a general purpose fiducial marker which exhibits many useful properties while being easy to implement and fast to detect. The key ideas underlying our approach are three. The first one is to exploit the projective invariance of conics to jointly find the marker and set a reading frame for it. Moreover, the tag identity is assessed by a redundant cyclic coded sequence implemented using the same circular features used for detection. Finally, the specific design and feature organization of the marker are well suited for several practical tasks, ranging from camera calibration to information payload delivery.", "title": "" } ]
[ { "docid": "0943628b72cff16fd50affa40e98d360", "text": "The aim of image captioning is to generate captions by machine to describe image contents. Despite many efforts, generating discriminative captions for images remains non-trivial. Most traditional approaches imitate the language structure patterns, thus tend to fall into a stereotype of replicating frequent phrases or sentences and neglect unique aspects of each image. In this work, we propose an image captioning framework with a self-retrieval module as training guidance, which encourages generating discriminative captions. It brings unique advantages: (1) the self-retrieval guidance can act as a metric and an evaluator of caption discriminativeness to assure the quality of generated captions. (2) The correspondence between generated captions and images are naturally incorporated in the generation process without human annotations, and hence our approach could utilize a large amount of unlabeled images to boost captioning performance with no additional annotations. We demonstrate the effectiveness of the proposed retrievalguided method on COCO and Flickr30k captioning datasets, and show its superior captioning performance with more discriminative captions.", "title": "" }, { "docid": "4d56abf003caaa11e5bef74a14bd44e0", "text": "The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call \"web spam\", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. Therefore, search engines have a strong incentive to weed out spam web pages from their index.We propose that some spam web pages can be identified through statistical analysis: Certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam.This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.", "title": "" }, { "docid": "9d33565dbd5148730094a165bb2e968f", "text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. 
The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.", "title": "" }, { "docid": "798cd7ebdd234cb62b32d963fdb51af0", "text": "The use of frontal sinus radiographs in positive identification has become an increasingly applied and accepted technique among forensic anthropologists, radiologists, and pathologists. From an evidentiary standpoint, however, it is important to know whether frontal sinus radiographs are a reliable method for confirming or rejecting an identification, and standardized methods should be applied when making comparisons. The purpose of the following study is to develop an objective, standardized comparison method, and investigate the reliability of that method. Elliptic Fourier analysis (EFA) was used to assess the variation in 808 outlines of frontal sinuses by calculating likelihood ratios and posterior probabilities from EFA coefficients. Results show that using EFA coefficient comparison to estimate the probability of a correct identification is a reliable technique, and EFA comparison of frontal sinus outlines is recommended when it may be necessary to provide quantitative substantiation for a forensic identification based on these structures.", "title": "" }, { "docid": "0618529a20e00174369a05077294de5b", "text": "In this paper we present a case study of the steps leading up to the extraction of the spam bot payload found within a backdoor rootkit known as Backdoor.Rustock.B or Spam-Mailbot.c. Following the extraction of the spam module we focus our analysis on the steps necessary to decrypt the communications between the command and control server and infected hosts. Part of the discussion involves a method to extract the encryption key from within the malware binary and use that to decrypt the communications. The result is a better understanding of an advanced botnet communications scheme.", "title": "" }, { "docid": "1a2f2e75691e538c867b6ce58591a6a5", "text": "Despite the profusion of NIALM researches and products using complex algorithms, addressing the market for low cost, compact, real-time and effective NIALM smart meters is still a challenge. This paper talks about the design of a NIALM smart meter for home appliances, with the ability to self-detect and disaggregate most home appliances. In order to satisfy the compact, real-time, low price requirements and to solve the challenge in slow transient and multi-state appliances, two algorithms are used: the CUSUM to improve the event detection and the Genetic Algorithm (GA) for appliance disaggregation. Evaluation of these algorithms has been done according to public NIALM REDD data set [6]. They are now in first stage of architecture design using Labview FPGA methodology. KeywordsNIALM, CUSUM, Genetic Algorithm, K-mean, classification, smart meter, FPGA.", "title": "" }, { "docid": "cdac5244050d0127273b8a845129257a", "text": "Existing sentence regression methods for extractive summarization usually model sentence importance and redundancy in two separate processes. They first evaluate the importance f(s) of each sentence s and then select sentences to generate a summary based on both the importance scores and redundancy among sentences. 
In this paper, we propose to model importance and redundancy simultaneously by directly evaluating the relative importance f(s|S) of a sentence s given a set of selected sentences S. Specifically, we present a new framework to conduct regression with respect to the relative gain of s given S calculated by the ROUGE metric. Besides the single sentence features, additional features derived from the sentence relations are incorporated. Experiments on the DUC 2001, 2002 and 2004 multi-document summarization datasets show that the proposed method outperforms state-of-the-art extractive summarization approaches.", "title": "" }, { "docid": "8b6116105914e3d912d4594b875e443b", "text": "Patients with neuropathic pain (NP) are challenging to manage and evidence-based clinical recommendations for pharmacologic management are needed. Systematic literature reviews, randomized clinical trials, and existing guidelines were evaluated at a consensus meeting. Medications were considered for recommendation if their efficacy was supported by at least one methodologically-sound, randomized clinical trial (RCT) demonstrating superiority to placebo or a relevant comparison treatment. Recommendations were based on the amount and consistency of evidence, degree of efficacy, safety, and clinical experience of the authors. Available RCTs typically evaluated chronic NP of moderate to severe intensity. Recommended first-line treatments include certain antidepressants (i.e., tricyclic antidepressants and dual reuptake inhibitors of both serotonin and norepinephrine), calcium channel alpha2-delta ligands (i.e., gabapentin and pregabalin), and topical lidocaine. Opioid analgesics and tramadol are recommended as generally second-line treatments that can be considered for first-line use in select clinical circumstances. Other medications that would generally be used as third-line treatments but that could also be used as second-line treatments in some circumstances include certain antiepileptic and antidepressant medications, mexiletine, N-methyl-D-aspartate receptor antagonists, and topical capsaicin. Medication selection should be individualized, considering side effects, potential beneficial or deleterious effects on comorbidities, and whether prompt onset of pain relief is necessary. To date, no medications have demonstrated efficacy in lumbosacral radiculopathy, which is probably the most common type of NP. Long-term studies, head-to-head comparisons between medications, studies involving combinations of medications, and RCTs examining treatment of central NP are lacking and should be a priority for future research.", "title": "" }, { "docid": "be852bd342e8051c01fdac3f9de9dbd3", "text": "Dimensional sentiment analysis aims to recognize continuous numerical values in multiple dimensions such as the valencearousal (VA) space. Compared to the categorical approach that focuses on sentiment classification such as binary classification (i.e., positive and negative), the dimensional approach can provide more fine-grained sentiment analysis. This study proposes a regional CNN-LSTM model consisting of two parts: regional CNN and LSTM to predict the VA ratings of texts. Unlike a conventional CNN which considers a whole text as input, the proposed regional CNN uses an individual sentence as a region, dividing an input text into several regions such that the useful affective information in each region can be extracted and weighted according to their contribution to the VA prediction. 
Such regional information is sequentially integrated across regions using LSTM for VA prediction. By combining the regional CNN and LSTM, both local (regional) information within sentences and long-distance dependency across sentences can be considered in the prediction process. Experimental results show that the proposed method outperforms lexicon-based, regression-based, and NN-based methods proposed in previous studies.", "title": "" }, { "docid": "eee48e3e78f630a78c3b7e666503d849", "text": "Few psychological concepts evoke simultaneously as much fascination and misunderstanding as psychopathic personality , or psychopathy. Typically, individuals with psychopathy are misconceived as fundamentally different from the rest of humanity and as inalterably dangerous. Popular portrayals of \" psychopaths \" are diverse and conflicting, ranging from uncommonly impulsive and violent criminal offenders to corporate figures who callously and skillfully manuever their way to the highest rungs of the social ladder. Despite this diversity of perspectives, a single well-validated measure of psychopathy, the Psychopathy Checklist-Revised (PCL-R; Hare, 1991; 2003), has come to dominate clinical and legal practice over recent years. The items of the PCL-R cover two basic content domains—an interpersonal-affective domain that encompasses core traits such as callousness and manipulativeness and an antisocial domain that entails disinhibition and chronic antisocial behavior. In most Western countries, the PCL-R and its derivatives are routinely applied to inform legal decisions about criminal offenders that hinge upon issues of dangerousness and treatability. In fact, clinicians in many cases choose the PCL-R over other, purpose-built risk-assessment tools to inform their opinions about what sentence offenders should receive, whether they should be indefinitely incarcerated as a \" dangerous offender \" or \" sexually violent predator, \" or whether they should be transferred from juvenile to adult court. The PCL-R has played an extraordinarily generative role in research and practice over the past three decades—so much so, that concerns have been raised that the measure has become equated in many minds with the psychopathy construct itself (Skeem & Cooke 2010a). Equating a measure with a construct may impede scientific progress because it disregards the basic principle that measures always imperfectly operationalize constructs and that our understanding of a construct is ever-evolving (Cronbach & Meehl, 1955). In virtually any domain, the construct-validation process is an incremental one that entails shifts in conceptualization and measurement at successive points in the process of clarifying the nature and boundaries of a hypothetical entity. Despite the predominance of the PCL-R measurement model in recent years, vigorous scientific debates have continued regarding what psychopathy is and what it is not. Should adaptive, positive-adjustment features (on one hand) and criminal and antisocial behaviors (on the other) be considered essential features of the construct? Are anxious and emotionally reactive people that are identified as psychopaths by the PCL-R and other measures truly psychopathic? More fundamentally , is psychopathy a unitary entity (i.e., a global syndrome …", "title": "" }, { "docid": "19100853a7f0f4d519e0a5513a83aa08", "text": "The authors explain how to perform software inspections to locate defects. They present metrics for inspection and examples of its effectiveness. 
The authors contend, on the basis of their experiences and those reported in the literature, that inspections can detect and eliminate faults more cheaply than testing.", "title": "" }, { "docid": "e35f6f4e7b6589e992ceeccb4d25c9f1", "text": "One of the key success factors of lending organizations in general and banks in particular is the assessment of borrower credit worthiness in advance during the credit evaluation process. Credit scoring models have been applied by many researchers to improve the process of assessing credit worthiness by differentiating between prospective loans on the basis of the likelihood of repayment. Thus, credit scoring is a very typical Data Mining (DM) classification problem. Many traditional statistical and modern computational intelligence techniques have been presented in the literature to tackle this problem. The main objective of this paper is to describe an experiment of building suitable Credit Scoring Models (CSMs) for the Sudanese banks. Two commonly discussed data mining classification techniques are chosen in this paper namely: Decision Tree (DT) and Artificial Neural Networks (ANN). In addition Genetic Algorithms (GA) and Principal Component Analysis (PCA) are also applied as feature selection techniques. In addition to a Sudanese credit dataset, German credit dataset is also used to evaluate these techniques. The results reveal that ANN models outperform DT models in most cases. Using GA as a feature selection is more effective than PCA technique. The highest accuracy of German data set (80.67%) and Sudanese credit scoring models (69.74%) are achieved by a hybrid GA-ANN model. Although DT and its hybrid models (PCA-DT, GA-DT) are outperformed by ANN and its hybrid models (PCA-ANN, GA-ANN) in most cases, they produced interpretable loan granting decisions.", "title": "" }, { "docid": "3f657657a24c03038bd402498b7abddd", "text": "We propose a system for real-time animation of eyes that can be interactively controlled in a WebGL enabled device using a small number of animation parameters, including gaze. These animation parameters can be obtained using traditional keyframed animation curves, measured from an actor's performance using off-the-shelf eye tracking methods, or estimated from the scene observed by the character, using behavioral models of human vision. We present a model of eye movement, that includes not only movement of the globes, but also of the eyelids and other soft tissues in the eye region. The model includes formation of expression wrinkles in soft tissues. To our knowledge this is the first system for real-time animation of soft tissue movement around the eyes based on gaze input.", "title": "" }, { "docid": "96d2e884c65205ef458214594f8b64f5", "text": "The weak methods occur pervasively in AI systems and may form the basic methods for all intelligent systems. The purpose of this paper is to characterize the weak methods and to explain how and why they arise in intelligent systems. We propose an organization, called a universal weak method that provides functionality of all the weak methods.* A universal weak method is an organizational scheme for knowledge that produces the appropriate search behavior given the available task-domain knowledge. We present a problem solving architecture, called SOAR, in which we realize a universal weak method. We then demonstrate the universal weak method with a variety of weak methods on a set of tasks.
This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No: 3597, monitored by the Air Force Avionics Laboratory Under Contract F33515-78-C-155L. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.", "title": "" }, { "docid": "cfb06477edaa39f53b1b892cdfc1621a", "text": "This paper presents ray casting as the methodological basis for a CAD/CAM solid modeling system. Solid objects are modeled by combining primitive solids, such as blocks and cylinders, using the set operators union, intersection, and difference. To visualize and analyze the composite solids modeled, virtual light rays are cast as probes. By virtue of its simplicity, ray casting is reliable and extensible. The most difficult mathematical problem is finding line-surface intersection points. So surfaces such as planes, quadrics, tori, and probably even parametric surface patches may bound the primitive solids. The adequacy and efficiency of ray casting are issues addressed here. A fast picture generation capability for interactive modeling is the biggest challenge. New methods are presented, accompanied by sample pictures and CPU times, to meet the challenge.", "title": "" }, { "docid": "a583b48a8eb40a9e88a5137211f15bce", "text": "The deterioration of cancellous bone structure due to aging and disease is characterized by a conversion from plate elements to rod elements. Consequently the terms \"rod-like\" and \"plate-like\" are frequently used for a subjective classification of cancellous bone. In this work a new morphometric parameter called Structure Model Index (SMI) is introduced, which makes it possible to quantify the characteristic form of a three-dimensionally described structure in terms of the amount of plates and rods composing the structure. The SMI is calculated by means of three-dimensional image analysis based on a differential analysis of the triangulated bone surface. For an ideal plate and rod structure the SMI value is 0 and 3, respectively, independent of the physical dimensions. For a structure with both plates and rods of equal thickness the value lies between 0 and 3, depending on the volume ratio of rods and plates. The SMI parameter is evaluated by examining bone biopsies from different skeletal sites. The bone samples were measured three-dimensionally with a micro-CT system. Samples with the same volume density but varying trabecular architecture can uniquely be characterized with the SMI. Furthermore the SMI values were found to correspond well with the perceived structure type.", "title": "" }, { "docid": "c66d556686c60af51f007ec36c29bd38", "text": "The main question we try to answer in this work is whether it is feasible to employ super-resolution (SR) algorithms to increase the spatial resolution of endoscopic high-definition (HD) images in order to reveal new details which may have got lost due to the limited endoscope magnification of the HD endoscope used (e.g. mucosal structures). For this purpose we compare the quality achieved of different SR methods. This is done on standard test images as well as on images obtained from endoscopic video frames. We also investigate whether compression artifacts have a noticeable effect on the SR results.
We show that, due to several limitations in case of endoscopic videos, we are not consistently able to achieve a higher visual quality when using SR algorithms instead of bicubic interpolation.", "title": "" }, { "docid": "28c1416fd464af8543e6486339e1a483", "text": "In today’s global competitive marketplace, there is intense pressure for manufacturing industries to continuously reduce and eliminate costly, unscheduled downtime and unexpected breakdowns. With the advent of Internet and tether-free technologies, companies necessitate dramatic changes in transforming traditional ‘‘fail and fix (FAF)’’ maintenance practices to a ‘‘predict and prevent (PAP)’’ e-maintenance methodology. Emaintenance addresses the fundamental needs of predictive intelligence tools to monitor the degradation rather than detecting the faults in a networked environment and, ultimately to optimize asset utilization in the facility. This paper introduces the emerging field of e-maintenance and its critical elements. Furthermore, performance assessment and prediction tools are introduced for continuous assessment and prediction of a particular product’s performance, ultimately enable proactive maintenance to prevent machine from breakdowns. Recent advances on intelligent prognostic technologies and tools are discussed. Several case studies are introduced to validate these developed technologies and tools. # 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f398eee40f39acd2c2955287ccbb4924", "text": "One of the ultimate goals of natural language processing (NLP) systems is understanding the meaning of what is being transmitted, irrespective of the medium (e.g., written versus spoken) or the form (e.g., static documents versus dynamic dialogues). Although much work has been done in traditional language domains such as speech and static written text, little has yet been done in the newer communication domains enabled by the Internet, e.g., online chat and instant messaging. This is in part due to the fact that there are no annotated chat corpora available to the broader research community. The purpose of this research is to build a chat corpus, tagged with lexical (token part-of-speech labels), syntactic (post parse tree), and discourse (post classification) information. Such a corpus can then be used to develop more complex, statistical-based NLP applications that perform tasks such as author profiling, entity identification, and social network analysis.", "title": "" }, { "docid": "3e6e1125fcdd9757206d0d3f25039e0d", "text": "Native Language Identification, or NLI, is the task of automatically classifying the L1 of a writer based solely on his or her essay written in another language. This problem area has seen a spike in interest in recent years as it can have an impact on educational applications tailored towards non-native speakers of a language, as well as authorship profiling. While there has been a growing body of work in NLI, it has been difficult to compare methodologies because of the different approaches to pre-processing the data, different sets of languages identified, and different splits of the data used. In this shared task, the first ever for Native Language Identification, we sought to address the above issues by providing a large corpus designed specifically for NLI, in addition to providing an environment for systems to be directly compared. In this paper, we report the results of the shared task. 
A total of 29 teams from around the world competed across three different sub-tasks.", "title": "" } ]
scidocsrr
6c90f85299bda47769ee1e09c960f67f
Modeling and design solutions to overcome warpage challenge for fan-out wafer level packaging (FO-WLP) technology
[ { "docid": "4a5151d07c31cc2d36c0c24423801312", "text": "A wafer level package (WLP) that has a flip chip form and uses thin-film redistribution with solder bumps to connect the package to the printed wiring board directly is discussed in this paper. A liquid molding compound is used for the encapsulation process. Since the thickness of the fan-out WLP is smaller than that in a traditional integrated circuit (IC) package, the fan-out WLP induces more serious warpage. Warpage plays an important role during the IC encapsulation process, and too large a warpage would not let the package proceed to the next manufacturing process. This paper uses an approach that considers both cure- and thermal-induced shrinkages during the encapsulation process to predict the amount of warpage. Cure-induced shrinkage is described by the pressure-volume-temperature-cure (PVTC) equation of the liquid compound. The thermally induced shrinkage is described by the coefficients of thermal expansion of the component materials. The liquid compound properties are obtained by various techniques, such as cure kinetics by differential scanning calorimetry- and cure-induced shrinkage by a PVTC testing machine. These experimental data are used to formulate the PVTC equation. A fan-out WLP is first simulated, and the simulation results are verified with experiments. It is shown that an approach that considers both thermal and cure/compressibility effects can better predict the amount of warpage for the fan-out WLP. The PVTC equation is successfully implemented, and it is verified that warpage is governed by both thermal and cure shrinkages. The amount of warpage after molding could be accurately predicted with this methodology. Simulation results show that cure shrinkage of the liquid compound is the dominant factor responsible for package warpage after encapsulation.", "title": "" }, { "docid": "051403aac652a3372e94520fbc642520", "text": "Fan-out Wafer Level Packaging (FOWLP) is one of the latest packaging trends in microelectronics. Mold embedding for this technology is currently done on wafer level up to 12\"/300 mm diameter. For higher productivity and therewith lower costs larger mold embedding form factors are forecasted for the near future. Following the wafer level approach then the next step will be a reconfigured wafer size of 450 mm. An alternative option would be leaving the wafer shape and moving to panel sizes leading to Fan-out Panel Level Packaging (FOPLP). Sizes for the panel could range up to 24\"×18\" or even larger. For reconfigured mold embedding, compression mold processes are used in combination with liquid, granular or sheet compound. As a process alternative also lamination as used e.g. in PCB manufacturing can be taken into account.=Within this paper the evaluation of panel level compression molding with a target form factor of 24”*18” / 610×457 mm2 is described. The large panel size equals a typical PCB manufacturing full format and is selected to achieve process compatibility with cost efficient PCB processes. Here not only conventional compression molding is considered but also the new process compression mold lamination is introduced as a tool-less mold alternative. Panel level molding is compared to 8” and 12” wafer molding as well as to low cost PCB 24”×18” lamination focusing on manufacturing challenges, high volume capability and estimated cost. Technological focus of this study will be the evaluation of liquid, granular and sheet molding compound. 
This includes thorough material analysis regarding the process relevant material properties such as reactivity or viscosity. One key process step for homogeneous large area embedding is material application before compression molding. Where sheet compounds already deliver a uniform material layer, the application of liquid and granular compound must be optimized and adapted for a homogeneous distribution without flow marks, knit lines and incomplete fills. Hence, dispense patterns of liquid and granular molding compounds are studied to achieve high yield and reliable mold embedding. In addition, applicable thickness ranges, total thickness variations, void risks and warpage will be investigated for the different material types. The overall process flow will be demonstrated for selected compression mold variants resulting in a 24”×18” / 610×457 mm2 FOPLP using PCB based redistribution layer (RDL) as a low cost alternative to thin film technology. For PCB based RDLs a resin coated copper sheet (RCC) is laminated on the reconfigured wafer or panel, respectively. Micro vias are drilled through the RCC layer to the die pads and electrically connected by Cu plating. The final process step is the etching of Cu lines using laser direct imaging (LDI) techniques for maskless patterning. All process steps are carried out on full format 24”×18” / 610×457 mm2.", "title": "" } ]
[ { "docid": "b5c5f9ded838acd14a84d88fd7d53016", "text": "In this paper, a new control algorithm is proposed to achieve optimal dynamic performance for dc-to-dc converters under a load current change and for a given set of circuit parameters, such as the output inductor, output capacitor, switching frequency, input voltage, and output voltage. Using the concept of capacitor charge balance, the proposed algorithm predicts the optimal transient response for a dc-to-dc converter during a large signal load current change. During steady state operation, conventional current mode proportional-integral-derivative (PID) is used. During large signal transient conditions, the new control algorithm takes over. The equations needed to calculate the transient time and the required duty cycle series are presented. By using the proposed algorithm, the optimal transient performances, including the smallest output voltage overshoot/undershoot and the shortest recovery time, is achieved. In addition, since the large signal dynamic response of the power converter is successfully predicted, the large signal stability is guaranteed. Experimental results show that the proposed method produces superior dynamic performance over a conventional current mode PID controller.", "title": "" }, { "docid": "4d69fbb950ffe534ace5fdbcc2951f0c", "text": "In this paper we introduce a novel single-document summarization method based on a hidden semi-Markov model. This model can naturally model single-document summarization as the optimization problem of selecting the best sequence from among the sentences in the input document under the given objective function and knapsack constraint. This advantage makes it possible for sentence selection to take the coherence of the summary into account. In addition our model can also incorporate sentence compression into the summarization process. To demonstrate the effectiveness of our method, we conduct an experimental evaluation with a large-scale corpus consisting of 12,748 pairs of a document and its reference. The results show that our method significantly outperforms the competitive baselines in terms of ROUGE evaluation, and the linguistic quality of summaries is also improved. Our method successfully mimicked the reference summaries, about 20 percent of the summaries generated by our method were completely identical to their references. Moreover, we show that large-scale training samples are quite effective for training a summarizer.", "title": "" }, { "docid": "b911c86e5672f9a669e25c7771076d24", "text": "This paper discusses an implementation of Extended Kalman filter (EKF) in performing Simultaneous Localization and Mapping (SLAM). The implementation is divided into software and hardware phases. The software implementation applies EKF using Python on a library dataset to produce a map of the supposed environment. The result was verified against the original map and found to be relatively accurate with minor inaccuracies. In the hardware implementation stage, real life data was gathered from an indoor environment via a laser range finder and a pair of wheel encoders placed on a mobile robot. The resulting map shows at least five marked inaccuracies but the overall form is passable.", "title": "" }, { "docid": "57cbffa039208b85df59b7b3bc1718d5", "text": "This paper provides an in-depth analysis of the technological and social factors that led to the successful adoption of groupware by a virtual team in a educational setting. 
Drawing on a theoretical framework based on the concept of technological frames, we conducted an action research study to analyse the chronological sequence of events in groupware adoption. We argue that groupware adoption can be conceptualised as a three-step process of expanding and aligning individual technological frames towards groupware. The first step comprises activities that bring knowledge of new technological opportunities to the participants. The second step involves facilitating the participants to articulate and evaluate their work practices and their use of technology. The third and final step deals with the participants' commitment to, and practical enactment of, groupware technology. The alignment of individual technological frames requires the articulation and re-evaluation of experience with collaborative practice and with the use of technology. One of the key findings is that this activity cannot take place at the outset of groupware adoption.", "title": "" }, { "docid": "902aab15808014d55a9620bcc48621f5", "text": "Software developers are always looking for ways to boost their effectiveness and productivity and perform complex jobs more quickly and easily, particularly as projects have become increasingly large and complex. Programmers want to shed unneeded complexity and outdated methodologies and move to approaches that focus on making programming simpler and faster. With this in mind, many developers are increasingly using dynamic languages such as JavaScript, Perl, Python, and Ruby. Although software experts disagree on the exact definition, a dynamic language basically enables programs that can change their code and logical structures at runtime, adding variable types, module names, classes, and functions as they are running. These languages frequently are interpreted and generally check typing at runtime", "title": "" }, { "docid": "1f2f6aab0e3c813392ecab46cdc171b5", "text": "Theory of mind (ToM) refers to the ability to represent one's own and others' cognitive and affective mental states. Recent imaging studies have aimed to disentangle the neural networks involved in cognitive as opposed to affective ToM, based on clinical observations that the two can functionally dissociate. Due to large differences in stimulus material and task complexity findings are, however, inconclusive. Here, we investigated the neural correlates of cognitive and affective ToM in psychologically healthy male participants (n = 39) using functional brain imaging, whereby the same set of stimuli was presented for all conditions (affective, cognitive and control), but associated with different questions prompting either a cognitive or affective ToM inference. Direct contrasts of cognitive versus affective ToM showed that cognitive ToM recruited the precuneus and cuneus, as well as regions in the temporal lobes bilaterally. Affective ToM, in contrast, involved a neural network comprising prefrontal cortical structures, as well as smaller regions in the posterior cingulate cortex and the basal ganglia. Notably, these results were complemented by a multivariate pattern analysis (leave one study subject out), yielding a classifier with an accuracy rate of more than 85% in distinguishing between the two ToM-conditions. The regions contributing most to successful classification corresponded to those found in the univariate analyses.
The study contributes to the differentiation of neural patterns involved in the representation of cognitive and affective mental states of others.", "title": "" }, { "docid": "ec9eb309dd9d6f72bd7286580e75d36d", "text": "This paper describes SONDY, a tool for analysis of trends and dynamics in online social network data. SONDY addresses two audiences: (i) end-users who want to explore social activity and (ii) researchers who want to experiment and compare mining techniques on social data. SONDY helps end-users like media analysts or journalists understand social network users interests and activity by providing emerging topics and events detection as well as network analysis functionalities. To this end, the application proposes visualizations such as interactive time-lines that summarize information and colored user graphs that reflect the structure of the network. SONDY also provides researchers an easy way to compare and evaluate recent techniques to mine social data, implement new algorithms and extend the application without being concerned with how to make it accessible. In the demo, participants will be invited to explore information from several datasets of various sizes and origins (such as a dataset consisting of 7,874,772 messages published by 1,697,759 Twitter users during a period of 7 days) and apply the different functionalities of the platform in real-time.", "title": "" }, { "docid": "6b19893324e4012a622c0250436e1ab3", "text": "Nowadays, email is one of the fastest ways to conduct communications through sending out information and attachments from one to another. Individuals and organizations are all benefit the convenience from email usage, but at the same time they may also suffer the unexpected user experience of receiving spam email all the time. Spammers flood the email servers and send out mass quantity of unsolicited email to the end users. From a business perspective, email users have to spend time on deleting received spam email which definitely leads to the productivity decrease and cause potential loss for organizations. Thus, how to detect the email spam effectively and efficiently with high accuracy becomes a significant study. In this study, data mining will be utilized to process machine learning by using different classifiers for training and testing and filters for data preprocessing and feature selection. It aims to seek out the optimal hybrid model with higher accuracy or base on other metric’s evaluation. The experiment results show accuracy improvement in email spam detection by using hybrid techniques compared to the single classifiers used in this research. The optimal hybrid model provides 93.00% of accuracy and 7.80% false positive rate for email spam detection.", "title": "" }, { "docid": "e08df000f02cdb073d3509e826b5b8eb", "text": "Although gold is the subject of one of the most ancient themes of investigation in science, its renaissance now leads to an exponentially increasing number of publications, especially in the context of emerging nanoscience and nanotechnology with nanoparticles and self-assembled monolayers (SAMs). We will limit the present review to gold nanoparticles (AuNPs), also called gold colloids. AuNPs are the most stable metal nanoparticles, and they present fascinating aspects such as their assembly of multiple types involving materials science, the behavior of the individual particles, size-related electronic, magnetic and optical properties (quantum size effect), and their applications to catalysis and biology. 
Their promises are in these fields as well as in the bottom-up approach of nanotechnology, and they will be key materials and building block in the 21st century. Whereas the extraction of gold started in the 5th millennium B.C. near Varna (Bulgaria) and reached 10 tons per year in Egypt around 1200-1300 B.C. when the marvelous statue of Touthankamon was constructed, it is probable that “soluble” gold appeared around the 5th or 4th century B.C. in Egypt and China. In antiquity, materials were used in an ecological sense for both aesthetic and curative purposes. Colloidal gold was used to make ruby glass", "title": "" }, { "docid": "63de2448edead6e16ef2bc86c3acd77b", "text": "In traditional topic models such as LDA, a word is generated by choosing a topic from a collection. However, existing topic models do not identify different types of topics in a document, such as topics that represent the content and topics that represent the sentiment. In this paper, our goal is to discover such different types of topics, if they exist. We represent our model as several parallel topic models (called topic factors), where each word is generated from topics from these factors jointly. Since the latent membership of the word is now a vector, the learning algorithms become challenging. We show that using a variational approximation still allows us to keep the algorithm tractable. Our experiments over several datasets show that our approach consistently outperforms many classic topic models while also discovering fewer, more meaningful, topics.", "title": "" }, { "docid": "9c2c74da1e0f5ea601e50f257015c5b3", "text": "We present a new lock-based algorithm for concurrent manipulation of a binary search tree in an asynchronous shared memory system that supports search, insert and delete operations. Some of the desirable characteristics of our algorithm are: (i) a search operation uses only read and write instructions, (ii) an insert operation does not acquire any locks, and (iii) a delete operation only needs to lock up to four edges in the absence of contention. Our algorithm is based on an internal representation of a search tree and it operates at edge-level (locks edges) rather than at node-level (locks nodes); this minimizes the contention window of a write operation and improves the system throughput. Our experiments indicate that our lock-based algorithm outperforms existing algorithms for a concurrent binary search tree for medium-sized and larger trees, achieving up to 59% higher throughput than the next best algorithm.", "title": "" }, { "docid": "9fa05fdcaeb09d881e4bcc7e92cf8311", "text": "A new broadband in-phase power divider based on multilayer technology is presented. A simple design procedure is developed for the proposed multilayer power divider. An S-band four-way multilayer power divider was designed and measured. The simulated results are compared with the measured data, and good agreement is reported. The measured 15 dB return loss bandwidth is demonstrated to be about 72%, and its phase difference between the output signals is less than 38.", "title": "" }, { "docid": "44eb03a8f6c785acdfe2d1db28a503b9", "text": "Big Data refers to data volumes in the range of Exabyte (10^18) and beyond. Such volumes exceed the capacity of current on-line storage and processing systems. With characteristics like volume, velocity and variety big data throws challenges to the traditional IT establishments.
Computer assisted innovation, real time data analytics, customer-centric business intelligence, industry wide decision making and transparency are possible advantages, to mention few, of Big Data. There are many issues with Big Data that warrant quality assessment methods. The issues are pertaining to storage and transport, management, and processing. This paper throws light into the present state of quality issues related to Big Data. It provides valuable insights that can be used to leverage Big Data science activities.", "title": "" }, { "docid": "9280eb309f7a6274eb9d75d898768f56", "text": "In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables combined with sparsity of the data makes the event classification problem particularly challenging. Most state-of-art approaches address this either by designing hand-engineered features or breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enables classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.", "title": "" }, { "docid": "b1e5edf2e102cd08dfec421ad5de03a1", "text": "In this paper we describe the computational and architectural requirements for systems which support real-time multimodal interaction with an embodied conversational character. We argue that the three primary design drivers are real-time multithreaded entrainment, processing of both interactional and propositional information, and an approach based on a functional understanding of human face-to-face conversation. We then present an architecture which meets these requirements and an initial conversational character that we have developed who is capable of increasingly sophisticated multimodal input and output in a limited", "title": "" }, { "docid": "d80070cf7ab3d3e75c2da1525e59be67", "text": "This paper presents for the first time the analysis and experimental validation of a six-slot four-pole synchronous reluctance motor with nonoverlapping fractional slot-concentrated windings. The machine exhibits high torque density and efficiency due to its high fill factor coils with very short end windings, facilitated by a segmented stator and bobbin winding of the coils. These advantages are coupled with its inherent robustness and low cost. The topology is presented as a logical step forward in advancing synchronous reluctance machines that have been universally wound with a sinusoidally distributed winding. The paper presents the motor design, performance evaluation through finite element studies and validation of the electromagnetic model, and thermal specification through empirical testing. It is shown that high performance synchronous reluctance motors can be constructed with single tooth wound coils, but considerations must be given regarding torque quality and the d-q axis inductances.", "title": "" }, { "docid": "c9b0954503fa8b6309a0736ac1a5cb62", "text": "Rigid Point Cloud Registration (PCReg) refers to the problem of finding the rigid transformation between two sets of point clouds. 
This problem is particularly important due to the advances in new 3D sensing hardware, and it is challenging because neither the correspondence nor the transformation parameters are known. Traditional local PCReg methods (e.g., ICP) rely on local optimization algorithms, which can get trapped in bad local minima in the presence of noise, outliers, bad initializations, etc. To alleviate these issues, this paper proposes Inverse Composition Discriminative Optimization (ICDO), an extension of Discriminative Optimization (DO), which learns a sequence of update steps from synthetic training data that search the parameter space for an improved solution. Unlike DO, ICDO is object-independent and generalizes even to unseen shapes. We evaluated ICDO on both synthetic and real data, and show that ICDO can match the speed and outperform the accuracy of state-of-the-art PCReg algorithms.", "title": "" }, { "docid": "d51ddec1ea405d9bde56f3b3b6baefc7", "text": "Background. Inconsistent data exist about the role of probiotics in the treatment of constipated children. The aim of this study was to investigate the effectiveness of probiotics in childhood constipation. Materials and Methods. In this placebo controlled trial, fifty-six children aged 4-12 years with constipation received randomly lactulose plus Protexin or lactulose plus placebo daily for four weeks. Stool frequency and consistency, abdominal pain, fecal incontinence, and weight gain were studied at the beginning, after the first week, and at the end of the 4th week in both groups. Results. Forty-eight patients completed the study. At the end of the fourth week, the frequency and consistency of defecation improved significantly (P = 0.042 and P = 0.049, resp.). At the end of the first week, fecal incontinence and abdominal pain improved significantly in intervention group (P = 0.030 and P = 0.017, resp.) but, at the end of the fourth week, this difference was not significant (P = 0.125 and P = 0.161, resp.). A significant weight gain was observed at the end of the 1st week in the treatment group. Conclusion. This study showed that probiotics had a positive role in increasing the frequency and improving the consistency at the end of 4th week.", "title": "" }, { "docid": "32b12ea15bea5eef932f2bfd97db7120", "text": "By studying capabilities inherent in the nerve proper and carefully considering patient complaints and limitations, the surgeon-therapist team may be able to guide patients through a restorative phase via nerve gliding techniques. Nerve symptoms must be heeded when employing rehabilitation techniques. Rather than encouraging the patient to push beyond nerve pain either proximally or distally, the patient is instructed to perform exercises in positions that enhance nerve gliding in a slow, controlled manner. \"Tincture of time\" is prescribed as the patient advances to a less symptomatic level of function.", "title": "" }, { "docid": "0b507193ca68d05a3432a9e735df5d95", "text": "Capturing image with defocused background by using a large aperture is a widely used technique in digital single-lens reflex (DSLR) camera photography. It is also desired to provide this function to smart phones. In this paper, a new algorithm is proposed to synthesize such an effect for a single portrait image. The foreground portrait is detected using a face prior based salient object detection algorithm. Then with an improved gradient domain guided image filter, the details in the foreground are enhanced while the background pixels are blurred. 
In this way, the background objects are defocused and thus the foreground objects are emphasized. The resultant image looks similar to image captured using a camera with a large aperture. The proposed algorithm can be adopted in smart phones, especially for the front cameras of smart phones.", "title": "" } ]
scidocsrr
658940a1b0e552d4819f2a4cd8d4fdf2
Multi-client/Multi-server split architecture
[ { "docid": "17c4ad36c7e97097d783382d7450279c", "text": "Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, large server farms or mirroring are used, both of which are expensive. An inexpensive alternative are peer-to-peer based replication systems, where users who retrieve the file, act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected on a five months long period that involved thousands of peers. We assess the performance of the algorithms used in BitTorrent through several metrics. Our conclusions indicate that BitTorrent is a realistic and inexpensive alternative to the classical server-based content distribution.", "title": "" } ]
[ { "docid": "bce13771747a367febb7688874ccca10", "text": "Recent research on highly distributed control methods has produced a series of new philosophies based on negotiation, which bring together the process engineering with computer science. Among these control philosophies, the ones based on Multi-agent Systems (MAS) have become especially relevant to address complex tasks and to support distributed decision making in asset management, manufacturing, and logistics. However, these MAS models have the drawback of an excessive dependence on up-to-date field information. In this work, a theoretical and experimental MAS, called MAS-DUO, is presented to test new strategies for managing handling operations supported by feedback coming from radio frequency identification (RFID) systems. These strategies have been based on a new distributed organization model to enforce the idea of division between physical elements and information and communication technologies (ICT) in the product scheduling control. This division in two platforms simplifies the design, the development, and the validation of the MAS, allowing an abstraction and preserving the independency between platforms. The communication between both platforms is based on sharing the parameters of the Markov reward function. This function is mainly made up of the field information coming from the RFID readers incorporated as the internal beliefs of the agent. The proposed MAS have been deployed on the Ciudad Real Central Airport in Spain in order to dimension the ground handling resources.", "title": "" }, { "docid": "f3ec01232e9ce081d5684df997d3db54", "text": "The present study used a behavioral version of an anti-saccade task, called the 'faces task', developed by [Bialystok, E., Craik, F. I. M., & Ryan, J. (2006). Executive control in a modified anti-saccade task: Effects of aging and bilingualism. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 1341-1354] to isolate the components of executive functioning responsible for previously reported differences between monolingual and bilingual children and to determine the generality of these differences by comparing bilinguals in two cultures. Three components of executive control were investigated: response suppression, inhibitory control, and cognitive flexibility. Ninety children, 8-years old, belonged to one of three groups: monolinguals in Canada, bilinguals in Canada, and bilinguals in India. The bilingual children in both settings were faster than monolinguals in conditions based on inhibitory control and cognitive flexibility but there was no significant difference between groups in response suppression or on a control condition that did not involve executive control. The children in the two bilingual groups performed equivalently to each other and differently from the monolinguals on all measures in which there were group differences, consistent with the interpretation that bilingualism is responsible for the enhanced executive control. These results contribute to understanding the mechanism responsible for the reported bilingual advantages by identifying the processes that are modified by bilingualism and establishing the generality of these findings across bilingual experiences. 
They also contribute to theoretical conceptions of the components of executive control and their development.", "title": "" }, { "docid": "e86ad4e9b61df587d9e9e96ab4eb3978", "text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.", "title": "" }, { "docid": "1158e01718dd8eed415dd5b3513f4e30", "text": "Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately, and rely on hand-crafted visual feature from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of multi-scale input layer, U-shape convolutional network, side-output layer, and multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple level receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. For improving the segmentation performance further, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation result on ORIGA data set. Simultaneously, the proposed method also obtains the satisfactory glaucoma screening performances with calculated CDR value on both ORIGA and SCES datasets.", "title": "" }, { "docid": "79b26ac97deb39c4de11a87604003f26", "text": "This paper presents a novel wheel-track-Leg hybrid Locomotion Mechanism that has a compact structure. Compared to most robot wheels that have a rigid round rim, the transformable wheel with a flexible rim can switch to track mode for higher efficiency locomotion on swampy terrain or leg mode for better over-obstacle capability on rugged road. In detail, the wheel rim of this robot is cut into four end-to-end circles to make it capable of transforming between a round circle with a flat ring (just like “O” and “∞”) to change the contact type between transformable wheels with the ground. The transformation principle and constraint conditions between different locomotion modes are explained. The driving methods and locomotion strategies on various terrains of the robot are analyzed. Meanwhile, an initial experiment is conducted to verify the design.", "title": "" }, { "docid": "86005973a0d524320fb3c7be4b2be516", "text": "There is increasing evidence that user characteristics can have a significant impact on visualization effectiveness, suggesting that visualizations could be designed to better fit each user’s specific needs. Most studies to date, however, have looked at static visualizations. 
Studies considering interactive visualizations have only looked at a limited number of user characteristics, and consider either low-level tasks (e.g., value retrieval), or high-level tasks (in particular: discovery), but not both. This paper contributes to this line of work by looking at the impact of a large set of user characteristics on user performance with interactive visualizations, for both low and high-level tasks. We focus on interactive visualizations that support decision making, exemplified by a visualization known as Value Charts. We include in the study two versions of ValueCharts that differ in terms of layout, to ascertain whether layout mediates the impact of individual differences and could be considered as a form of personalization. Our key findings are that (i) performance with low and high-level tasks is affected by different user characteristics, and (ii) users with low visual working memory perform better with a horizontal layout. We discuss how these findings can inform the provision of personalized support to visualization processing.", "title": "" }, { "docid": "c3e63d82514b9e9b1cc172ea34f7a53e", "text": "Deep Learning is one of the next big things in Recommendation Systems technology. The past few years have seen the tremendous success of deep neural networks in a number of complex machine learning tasks such as computer vision, natural language processing and speech recognition. After its relatively slow uptake by the recommender systems community, deep learning for recommender systems became widely popular in 2016.\n We believe that a tutorial on the topic of deep learning will do its share to further popularize the topic. Notable recent application areas are music recommendation, news recommendation, and session-based recommendation. The aim of the tutorial is to encourage the application of Deep Learning techniques in Recommender Systems, to further promote research in deep learning methods for Recommender Systems.", "title": "" }, { "docid": "2fde565affe47df370054c0643b17929", "text": "KnowNER is a multilingual Named Entity Recognition (NER) system that leverages different degrees of external knowledge. A novel modular framework divides the knowledge into four categories according to the depth of knowledge they convey. Each category consists of a set of features automatically generated from different information sources (such as a knowledge-base, a list of names or document-specific semantic annotations) and is used to train a conditional random field (CRF). Since those information sources are usually multilingual, KnowNER can be easily trained for a wide range of languages. In this paper, we show that the incorporation of deeper knowledge systematically boosts accuracy and compare KnowNER with state-of-the-art NER approaches across three languages (i.e., English, German and Spanish) performing amongst state-of-the art systems in all of them.", "title": "" }, { "docid": "44a6cfa975745624ae4bebec17702d2a", "text": "OBJECTIVE\nTo evaluate the performance of the International Ovarian Tumor Analysis (IOTA) ADNEX model in the preoperative discrimination between benign ovarian (including tubal and para-ovarian) tumors, borderline ovarian tumors (BOT), Stage I ovarian cancer (OC), Stage II-IV OC and ovarian metastasis in a gynecological oncology center in Brazil.\n\n\nMETHODS\nThis was a diagnostic accuracy study including 131 women with an adnexal mass invited to participate between February 2014 and November 2015. 
Before surgery, pelvic ultrasound examination was performed and serum levels of tumor marker CA 125 were measured in all women. Adnexal masses were classified according to the IOTA ADNEX model. Histopathological diagnosis was the gold standard. Receiver-operating characteristics (ROC) curve analysis was used to determine the diagnostic accuracy of the model to classify tumors into different histological types.\n\n\nRESULTS\nOf 131 women, 63 (48.1%) had a benign ovarian tumor, 16 (12.2%) had a BOT, 17 (13.0%) had Stage I OC, 24 (18.3%) had Stage II-IV OC and 11 (8.4%) had ovarian metastasis. The area under the ROC curve (AUC) was 0.92 (95% CI, 0.88-0.97) for the basic discrimination between benign vs malignant tumors using the IOTA ADNEX model. Performance was high for the discrimination between benign vs Stage II-IV OC, BOT vs Stage II-IV OC and Stage I OC vs Stage II-IV OC, with AUCs of 0.99, 0.97 and 0.94, respectively. Performance was poor for the differentiation between BOT vs Stage I OC and between Stage I OC vs ovarian metastasis with AUCs of 0.64.\n\n\nCONCLUSION\nThe majority of adnexal masses in our study were classified correctly using the IOTA ADNEX model. On the basis of our findings, we would expect the model to aid in the management of women with an adnexal mass presenting to a gynecological oncology center. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.", "title": "" }, { "docid": "a2246533e2973193586e2a3c8e672c10", "text": "Krill Herd (KH) optimization algorithm was recently proposed based on herding behavior of krill individuals in the nature for solving optimization problems. In this paper, we develop Standard Krill Herd (SKH) algorithm and propose Fuzzy Krill Herd (FKH) optimization algorithm which is able to dynamically adjust the participation amount of exploration and exploitation by looking the progress of solving the problem in each step. In order to evaluate the proposed FKH algorithm, we utilize some standard benchmark functions and also Inventory Control Problem. Experimental results indicate the superiority of our proposed FKH optimization algorithm in comparison with the standard KH optimization algorithm.", "title": "" }, { "docid": "a3ace9ac6ae3f3d2dd7e02bd158a5981", "text": "The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms. Thesis Supervisor: David R. Karger Title: Associate Professor", "title": "" }, { "docid": "be2e96a37e48c0ca187639c8a6d6a15b", "text": "Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. 
It can appear as if there is a ‘‘wisdom of nature’’ which we ignore at our peril. Sometimes the belief in nature’s wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifest as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris, or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in ‘‘nature knows best’’ attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.", "title": "" }, { "docid": "e462c0cfc1af657cb012850de1b7b717", "text": "ASSOCIATIONS BETWEEN PHYSICAL ACTIVITY, PHYSICAL FITNESS, AND FALLS RISK IN HEALTHY OLDER INDIVIDUALS Christopher Deane Vaughan Old Dominion University, 2016 Chair: Dr. John David Branch Objective: The purpose of this study was to assess relationships between objectively measured physical activity, physical fitness, and the risk of falling. Methods: A total of n=29 subjects completed the study, n=15 male and n=14 female age (mean±SD)= 70± 4 and 71±3 years, respectively. In a single testing session, subjects performed pre-post evaluations of falls risk (Short-from PPA) with a 6-minute walking intervention between the assessments. The falls risk assessment included tests of balance, knee extensor strength, proprioception, reaction time, and visual contrast. The sub-maximal effort 6-minute walking task served as an indirect assessment of cardiorespiratory fitness. Subjects traversed a walking mat to assess for variation in gait parameters during the walking task. Additional center of pressure (COP) balance measures were collected via forceplate during the falls risk assessments. Subjects completed a Modified Falls Efficacy Scale (MFES) falls confidence survey. Subjects’ falls histories were also collected. Subjects wore hip mounted accelerometers for a 7-day period to assess time spent in moderate to vigorous physical activity (MVPA). Results: Males had greater body mass and height than females (p=0.001, p=0.001). Males had a lower falls risk than females at baseline (p=0.043) and post-walk (p=0.031). MFES scores were similar among all subjects (Median = 10). Falls history reporting revealed; fallers (n=8) and non-fallers (n=21). No significant relationships were found between main outcome measures of MVPA, cardiorespiratory fitness, or falls risk. Fallers had higher knee extensor strength than non-fallers at baseline (p=0.028) and post-walk (p=0.011). Though not significant (p=0.306), fallers spent 90 minutes more time in MVPA than non-fallers (427.8±244.6 min versus 335.7±199.5). Variations in gait and COP variables were not significant. Conclusions: This study found no apparent relationship between objectively measured physical activity, indirectly measured cardiorespiratory fitness, and falls risk.", "title": "" }, { "docid": "6b0349726d029403279ab32355bf74d4", "text": "This paper is about tracking an extended object or a group target, which gives rise to a varying number of measurements from different measurement sources. For this purpose, the shape of the target is tracked in addition to its kinematics. 
The target extent is modeled with a new approach called Random Hypersurface Model (RHM) that assumes varying measurement sources to lie on scaled versions of the shape boundaries. In this paper, a star-convex RHM is introduced for tracking star-convex shape approximations of targets. Bayesian inference for star-convex RHMs is performed by means of a Gaussian-assumed state estimator allowing for an efficient recursive closed-form measurement update. Simulations demonstrate the performance of this approach for typical extended object and group tracking scenarios.", "title": "" }, { "docid": "e55fdc146f334c9257e5b2a3e9f2d2d9", "text": "Customer churn prediction models aim to detect customers with a high propensity to attrite. Predictive accuracy, comprehensibility, and justifiability are three key aspects of a churn prediction model. An accurate model permits to correctly target future churners in a retention marketing campaign, while a comprehensible and intuitive rule-set allows to identify the main drivers for customers to churn, and to develop an effective retention strategy in accordance with domain knowledge. This paper provides an extended overview of the literature on the use of data mining in customer churn prediction modeling. It is shown that only limited attention has been paid to the comprehensibility and the intuitiveness of churn prediction models. Therefore, two novel data mining techniques are applied to churn prediction modeling, and benchmarked to traditional rule induction techniques such as C4.5 and RIPPER. Both AntMiner+ and ALBA are shown to induce accurate as well as comprehensible classification rule-sets. AntMiner+ is a high performing data mining technique based on the principles of Ant Colony Optimization that allows to include domain knowledge by imposing monotonicity constraints on the final rule-set. ALBA on the other hand combines the high predictive accuracy of a non-linear support vector machine model with the comprehensibility of the rule-set format. The results of the benchmarking experiments show that ALBA improves learning of classification techniques, resulting in comprehensible models with increased performance. AntMiner+ results in accurate, comprehensible, but most importantly justifiable models, unlike the other modeling techniques included in this study. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f6e8c34656a40fd7b97c3e84d6ba8ebb", "text": "We propose a novel approach to fully automatic lesion boundary detection in ultrasound breast images. The novelty of the proposed work lies in the complete automation of the manual process of initial Region-of-Interest (ROI) labeling and in the procedure adopted for the subsequent lesion boundary detection. Histogram equalization is initially used to preprocess the images followed by hybrid filtering and multifractal analysis stages. Subsequently, a single valued thresholding segmentation stage and a rule-based approach is used for the identification of the lesion ROI and the point of interest that is used as the seed-point. Next, starting from this point an Isotropic Gaussian function is applied on the inverted, original ultrasound image. The lesion area is then separated from the background by a thresholding segmentation stage and the initial boundary is detected via edge detection. Finally to further improve and refine the initial boundary, we make use of a state-of-the-art active contour method (i.e. gradient vector flow (GVF) snake model). 
We provide results that include judgments from expert radiologists on 360 ultrasound images proving that the final boundary detected by the proposed method is highly accurate. We compare the proposed method with two existing stateof-the-art methods, namely the radial gradient index filtering (RGI) technique of Drukker et. al. and the local mean technique proposed by Yap et. al., in proving the proposed method’s robustness and accuracy.", "title": "" }, { "docid": "242a79e9e0d38c5dbd2e87d109566b6e", "text": "Δ9-Tetrahydrocannabinol (THC) is the main active constituent of cannabis. In recent years, the average THC content of some cannabis cigarettes has increased up to approximately 60 mg per cigarette (20% THC cigarettes). Acute cognitive and psychomotor effects of THC among recreational users after smoking cannabis cigarettes containing such high doses are unknown. The objective of this study was to study the dose–effect relationship between the THC dose contained in cannabis cigarettes and cognitive and psychomotor effects for THC doses up to 69.4 mg (23%). This double-blind, placebo-controlled, randomised, four-way cross-over study included 24 non-daily male cannabis users (two to nine cannabis cigarettes per month). Participants smoked four cannabis cigarettes containing 0, 29.3, 49.1 and 69.4 mg THC on four exposure days. The THC dose in smoked cannabis was linearly associated with a slower response time in all tasks (simple reaction time, visuo-spatial selective attention, sustained attention, divided attention and short-term memory tasks) and motor control impairment in the motor control task. The number of errors increased significantly with increasing doses in the short-term memory and the sustained attention tasks. Some participants showed no impairment in motor control even at THC serum concentrations higher than 40 ng/mL. High feeling and drowsiness differed significantly between treatments. Response time slowed down and motor control worsened, both linearly, with increasing THC doses. Consequently, cannabis with high THC concentrations may be a concern for public health and safety if cannabis smokers are unable to titrate to a high feeling corresponding to a desired plasma THC level.", "title": "" }, { "docid": "74e15be321ec4e2d207f3331397f0399", "text": "Interoperability has been a basic requirement for the modern information systems environment for over two decades. How have key requirements for interoperability changed over that time? How can we understand the full scope of interoperability issues? What has shaped research on information system interoperability? What key progress has been made? This chapter provides some of the answers to these questions. In particular, it looks at different levels of information system interoperability, while reviewing the changing focus of interoperability research themes, past achievements and new challenges in the emerging global information infrastructure (GII). It divides the research into three generations, and discusses some of achievements of the past. Finally, as we move from managing data to information, and in future knowledge, the need for achieving semantic interoperability is discussed and key components of solutions are introduced. 
Data and information interoperability has gained increasing attention for several reasons, including: • excellent progress in interconnection afforded by the Internet, Web and distributed computing infrastructures, leading to easy access to a large number of independently created and managed information sources of broad variety;", "title": "" }, { "docid": "19700a52f05178ea1c95d576f050f57d", "text": "With the progress of mobile devices and wireless broadband, a new eMarket platform, termed spatial crowdsourcing is emerging, which enables workers (aka crowd) to perform a set of spatial tasks (i.e., tasks related to a geographical location and time) posted by a requester. In this paper, we study a version of the spatial crowd-sourcing problem in which the workers autonomously select their tasks, called the worker selected tasks (WST) mode. Towards this end, given a worker, and a set of tasks each of which is associated with a location and an expiration time, we aim to find a schedule for the worker that maximizes the number of performed tasks. We first prove that this problem is NP-hard. Subsequently, for small number of tasks, we propose two exact algorithms based on dynamic programming and branch-and-bound strategies. Since the exact algorithms cannot scale for large number of tasks and/or limited amount of resources on mobile platforms, we also propose approximation and progressive algorithms. We conducted a thorough experimental evaluation on both real-world and synthetic data to compare the performance and accuracy of our proposed approaches.", "title": "" }, { "docid": "08f01d6ca1d57c78fe0df0eea4c0457e", "text": "Controlling multiple devices while driving steals drivers' attention from the road and is becoming the cause of accidents in 1 out of 3 cases. Many research efforts are being dedicated to design, manufacture and test Human-Machine Interfaces that allow operating car devices without distracting the drivers' attention. A complete system for controlling the infotainment equipment through hand gestures is explained in this paper. The system works with a visible-infrared camera mounted on the ceiling of the car and pointing to the shift-stick area, and is based in a combination of some new and some well-known computer vision algorithms. The system has been tested by 23 volunteers on a car simulator and a real vehicle and the results show that the users slightly prefer this system to an equivalent one based on a touch-screen interface.", "title": "" } ]
scidocsrr
0896f13373e75f47a5835d85b757c131
A wireless smart-shoe system for gait assistance
[ { "docid": "1667c7e872bac649051bb45fc85e9921", "text": "Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biometric identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions-all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.", "title": "" } ]
[ { "docid": "8093101949a96d27082712ce086bf11f", "text": "Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7–7.6% relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.", "title": "" }, { "docid": "fab439f694dad00c66cab42526fcaa70", "text": "The nature of consciousness, the mechanism by which it occurs in the brain, and its ultimate place in the universe are unknown. We proposed in the mid 1990's that consciousness depends on biologically 'orchestrated' coherent quantum processes in collections of microtubules within brain neurons, that these quantum processes correlate with, and regulate, neuronal synaptic and membrane activity, and that the continuous Schrödinger evolution of each such process terminates in accordance with the specific Diósi-Penrose (DP) scheme of 'objective reduction' ('OR') of the quantum state. This orchestrated OR activity ('Orch OR') is taken to result in moments of conscious awareness and/or choice. The DP form of OR is related to the fundamentals of quantum mechanics and space-time geometry, so Orch OR suggests that there is a connection between the brain's biomolecular processes and the basic structure of the universe. Here we review Orch OR in light of criticisms and developments in quantum biology, neuroscience, physics and cosmology. We also introduce a novel suggestion of 'beat frequencies' of faster microtubule vibrations as a possible source of the observed electro-encephalographic ('EEG') correlates of consciousness. We conclude that consciousness plays an intrinsic role in the universe.", "title": "" }, { "docid": "69f853b90b837211e24155a2f55b9a95", "text": "We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2 , for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-ofthe-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. 
Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.", "title": "" }, { "docid": "7d3642cc1714951ccd9ec1928a340d81", "text": "Electrical fuse (eFUSE) has become a popular choice to enable memory redundancy, chip identification and authentication, analog device trimming, and other applications. We will review the evolution and applications of electrical fuse solutions for 180 nm to 45 nm technologies at IBM, and provide some insight into future uses in 32 nm technology and beyond with the eFUSE as a building block for the autonomic chip of the future.", "title": "" }, { "docid": "ed2ac159196ce7cf79eb8ee1c258d3f8", "text": "To uncover regulatory mechanisms in Hedgehog (Hh) signaling, we conducted genome-wide screens to identify positive and negative pathway components and validated top hits using multiple signaling and differentiation assays in two different cell types. Most positive regulators identified in our screens, including Rab34, Pdcl, and Tubd1, were involved in ciliary functions, confirming the central role for primary cilia in Hh signaling. Negative regulators identified included Megf8, Mgrn1, and an unannotated gene encoding a tetraspan protein we named Atthog. The function of these negative regulators converged on Smoothened (SMO), an oncoprotein that transduces the Hh signal across the membrane. In the absence of Atthog, SMO was stabilized at the cell surface and concentrated in the ciliary membrane, boosting cell sensitivity to the ligand Sonic Hedgehog (SHH) and consequently altering SHH-guided neural cell-fate decisions. Thus, we uncovered genes that modify the interpretation of morphogen signals by regulating protein-trafficking events in target cells.", "title": "" }, { "docid": "0b4f44030a922ba2c970c263583e8465", "text": "BACKGROUND\nSmoking remains one of the few potentially preventable factors associated with low birthweight, preterm birth and perinatal death.\n\n\nOBJECTIVES\nTo assess the effects of smoking cessation programs implemented during pregnancy on the health of the fetus, infant, mother, and family.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Tobacco Addiction Group trials register (July 2003), MEDLINE (January 2002 to July 2003), EMBASE (January 2002 to July 2003), PsychLIT (January 2002 to July 2003), CINAHL (January 2002 to July 2003), and AUSTHEALTH (January 2002 to 2003). We contacted trial authors to locate additional unpublished data. We handsearched references of identified trials and recent obstetric journals.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised trials of smoking cessation programs implemented during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nFour reviewers assessed trial quality and extracted data independently.\n\n\nMAIN RESULTS\nThis review included 64 trials. Fifty-one randomised controlled trials (20,931 women) and six cluster-randomised trials (over 7500 women) provided data on smoking cessation and/or perinatal outcomes. Despite substantial variation in the intensity of the intervention and the extent of reminders and reinforcement through pregnancy, there was an increase in the median intensity of both 'usual care' and interventions over time. There was a significant reduction in smoking in the intervention groups of the 48 trials included: (relative risk (RR) 0.94, 95% confidence interval (CI) 0.93 to 0.95), an absolute difference of six in 100 women continuing to smoke. 
The 36 trials with validated smoking cessation had a similar reduction (RR 0.94, 95% CI 0.92 to 0.95). Smoking cessation interventions reduced low birthweight (RR 0.81, 95% CI 0.70 to 0.94) and preterm birth (RR 0.84, 95% CI 0.72 to 0.98), and there was a 33 g (95% CI 11 g to 55 g) increase in mean birthweight. There were no statistically significant differences in very low birthweight, stillbirths, perinatal or neonatal mortality but these analyses had very limited power. One intervention strategy, rewards plus social support (two trials), resulted in a significantly greater smoking reduction than other strategies (RR 0.77, 95% CI 0.72 to 0.82). Five trials of smoking relapse prevention (over 800 women) showed no statistically significant reduction in relapse.\n\n\nREVIEWERS' CONCLUSIONS\nSmoking cessation programs in pregnancy reduce the proportion of women who continue to smoke, and reduce low birthweight and preterm birth. The pooled trials have inadequate power to detect reductions in perinatal mortality or very low birthweight.", "title": "" }, { "docid": "60664c058868f08a67d14172d87a4756", "text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.", "title": "" }, { "docid": "055a7be9623e794168b858e41bceaabd", "text": "Lexical Pragmatics is a research field that tries to give a systematic and explanatory account of pragmatic phenomena that are connected with the semantic underspecification of lexical items. Cases in point are the pragmatics of adjectives, systematic polysemy, the distribution of lexical and productive causatives, blocking phenomena, the interpretation of compounds, and many phenomena presently discussed within the framework of Cognitive Semantics. The approach combines a constrained-based semantics with a general mechanism of conversational implicature. The basic pragmatic mechanism rests on conditions of updating the common ground and allows to give a precise explication of notions as generalized conversational implicature and pragmatic anomaly. The fruitfulness of the basic account is established by its application to a variety of recalcitrant phenomena among which its precise treatment of Atlas & Levinson's Qand I-principles and the formalization of the balance between informativeness and efficiency in natural language processing (Horn's division of pragmatic labor) deserve particular mention. 
The basic mechanism is subsequently extended by an abductive reasoning system which is guided by subjective probability. The extended mechanism turned out to be capable of giving a principled account of lexical blocking, the pragmatics of adjectives, and systematic polysemy.", "title": "" }, { "docid": "591e4719cadd8b9e6dfda932856fffce", "text": "Over the last two decades, multiple classifier system (MCS) or classifier ensemble has shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there are lots of literatures covering the MCS approaches, there is a lack of a comprehensive literature review which presents an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensemble. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms are analyzed and evaluated by multi-source remotely sensed images, including high spatial resolution image (QuickBird), hyperspectral image (OMISII) and multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and diversity measures play an active role for the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research, algorithm enhancement and facilitate knowledge accumulation of MCS in remote sensing community.", "title": "" }, { "docid": "32b04b91bc796a082fb9c0d4c47efbf9", "text": "Intell Sys Acc Fin Mgmt. 2017;24:49–55. Summary A two‐step system is presented to improve prediction of telemarketing outcomes and to help the marketing management team effectively manage customer relationships in the banking industry. In the first step, several neural networks are trained with different categories of information to make initial predictions. In the second step, all initial predictions are combined by a single neural network to make a final prediction. Particle swarm optimization is employed to optimize the initial weights of each neural network in the ensemble system. Empirical results indicate that the two‐ step system presented performs better than all its individual components. In addition, the two‐ step system outperforms a baseline one where all categories of marketing information are used to train a single neural network. As a neural networks ensemble model, the proposed two‐step system is robust to noisy and nonlinear data, easy to interpret, suitable for large and heterogeneous marketing databases, fast and easy to implement.", "title": "" }, { "docid": "a7d3c1a4089d55461f9c74a345883f63", "text": "Robots that can easily interact with humans and move through natural environments are becoming increasingly essential as assistive devices in the home, office and hospital. These machines need to be safe, effective, and easy to control. One strategy towards accomplishing these goals is to build the robots using soft and flexible materials to make them much more approachable and less likely to damage their environment. A major challenge is that comparatively little is known about how best to design, fabricate and control deformable machines. Here we describe the design, fabrication and control of a novel soft robotic platform (Softworms) as a modular device for research, education and public outreach. 
These robots are inspired by recent neuromechanical studies of crawling and climbing by larval moths and butterflies (Lepidoptera, caterpillars). Unlike most soft robots currently under development, the Softworms do not rely on pneumatic or fluidic actuators but are electrically powered and actuated using either shape-memory alloy microcoils or motor tendons, and they can be modified to accept other muscle-like actuators such as electroactive polymers. The technology is extremely versatile, and different designs can be quickly and cheaply fabricated by casting elastomeric polymers or by direct 3D printing. Softworms can crawl, inch or roll, and they are steerable and even climb steep inclines. Softworms can be made in any shape but here we describe modular and monolithic designs requiring little assembly. These modules can be combined to make multi-limbed devices. We also describe two approaches for controlling such highly deformable structures using either model-free state transition-reward matrices or distributed, mechanically coupled oscillators. In addition to their value as a research platform, these robots can be developed for use in environmental, medical and space applications where cheap, lightweight and shape-changing deformable robots will provide new performance capabilities.", "title": "" }, { "docid": "cff44da2e1038c8e5707cdde37bc5461", "text": "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.", "title": "" }, { "docid": "9c1591e811b5983167606728cac2331d", "text": "Persuasive games and gamified systems are effective tools for motivating behavior change using various persuasive strategies. Research has shown that tailoring these systems can increase their efficacy. However, there is little knowledge on how game-based persuasive systems can be tailored to individuals of various personality traits. 
To advance research in this area, we conducted a large-scale study of 660 participants to investigate how different personalities respond to various persuasive strategies that are used in persuasive health games and gamified systems. Our results reveal that people's personality traits play a significant role in the perceived persuasiveness of different strategies. Conscientious people tend to be motivated by goal setting, simulation, self-monitoring and feedback; people who are more open to experience are more likely to be demotivated by rewards, competition, comparison, and cooperation. We contribute to the CHI community by offering design guidelines for tailoring persuasive games and gamified designs to a particular group of personalities.", "title": "" }, { "docid": "9bcf45278e391a6ab9a0b33e93d82ea9", "text": "Non-orthogonal multiple access (NOMA) is a potential enabler for the development of 5G and beyond wireless networks. By allowing multiple users to share the same time and frequency, NOMA can scale up the number of served users, increase spectral efficiency, and improve user-fairness compared to existing orthogonal multiple access (OMA) techniques. While single-cell NOMA has drawn significant attention recently, much less attention has been given to multi-cell NOMA. This article discusses the opportunities and challenges of NOMA in a multi-cell environment. As the density of base stations and devices increases, inter-cell interference becomes a major obstacle in multi-cell networks. As such, identifying techniques that combine interference management approaches with NOMA is of great significance. After discussing the theory behind NOMA, this article provides an overview of the current literature and discusses key implementation and research challenges, with an emphasis on multi-cell NOMA.", "title": "" }, { "docid": "f73affe1a0bfe7ae12d91feca82a95c3", "text": "Given the deluge of multimedia content that is becoming available over the Internet, it is increasingly important to be able to effectively examine and organize these large stores of information in ways that go beyond browsing or collaborative filtering. In this paper we review previous work on audio and video processing, and define the task of Topic-Oriented Multimedia Summarization (TOMS) using natural language generation: given a set of automatically extracted features from a video (such as visual concepts and ASR transcripts) a TOMS system will automatically generate a paragraph of natural language (\"a recounting\"), which summarizes the important information in a video belonging to a certain topic area, and provides explanations for why a video was matched and retrieved. We see this as a first step towards systems that will be able to discriminate visually similar, but semantically different videos, compare two videos and provide textual output or summarize a large number of videos at once. In this paper, we introduce our approach of solving the TOMS problem. We extract visual concept features and ASR transcription features from a given video, and develop a template-based natural language generation system to produce a textual recounting based on the extracted features. 
We also propose possible experimental designs for continuously evaluating and improving TOMS systems, and present results of a pilot evaluation of our initial system.", "title": "" }, { "docid": "ee0d89ccd67acc87358fa6dd35f6b798", "text": "Lessons learned from developing four graph analytics applications reveal good research practices and grand challenges for future research. The application domains include electric-power-grid analytics, social-network and citation analytics, text and document analytics, and knowledge domain analytics.", "title": "" }, { "docid": "f7bc1678e45157246bd1cac50fe33aa0", "text": "Histopathologic diagnosis of tubal intraepithelial carcinoma (TIC) has emerged as a significant challenge in the last few years. The avoidance of pitfalls in the diagnosis of TIC is crucial if a better understanding of its natural history and outcome is to be achieved. Herein, we present a case of a 52-year-old woman who underwent a risk-reducing salpingo-oophorectomy procedure. Histologic examination of a fallopian tube demonstrated a focus of atypical epithelial proliferation, which was initially considered to be a TIC. Complete study of the case indicated that the focus was, in fact, papillary syncytial metaplasia of tubal mucosal endometriosis. Papillary syncytial metaplasia may resemble TIC and should be considered in cases of proliferative lesions of the tubal epithelium.", "title": "" }, { "docid": "6f5700dde97988b8bd95cd58956febfc", "text": "The prolapse of one or several pelvic organs is a condition that has been known by medicine since its early days, and different therapeutic approaches have been proposed and accepted. But one of the main problems concerning the prolapse of pelvic organs is the need for a universal, clear and reliable staging method.Because the prolapse has been known and recognized as a disease for more than one hundred years, so are different systems proposed for its staging. But none has proved itself to respond to all the requirements of the medical community, so the vast majority were seen coming and going, failing to become the single most useful system for staging in pelvic organ prolapse (POP).The latest addition to the group of staging systems is the POP-Q system, which is becoming increasingly popular with specialists all over the world, because, although is not very simple as a concept, it helps defining the features of a prolapse at a level of completeness not reached by any other system to date. In this vision, the POP-Q system may reach the importance and recognition of the TNM system use in oncology.This paper briefly describes the POP-Q system, by comparison with other staging systems, analyzing its main features and the concept behind it.", "title": "" }, { "docid": "4bfb389e1ae2433f797458ff3fe89807", "text": "Many if not most markets with network externalities are two-sided. To succeed, platforms in industries such as software, portals and media, payment systems and the Internet, must “get both sides of the market on board ”. Accordingly, platforms devote much attention to their business model, that is to how they court each side while making money overall. The paper builds a model of platform competition with two-sided markets. 
It unveils the determinants of price allocation and end-user surplus for different governance structures (profit-maximizing platforms and not-for-profit joint undertakings), and compares the outcomes with those under an integrated monopolist and a Ramsey planner.", "title": "" }, { "docid": "8b6832586f5ec4706e7ace59101ea487", "text": "We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.", "title": "" } ]
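A minimal sketch of the triple-scoring step described in the QA passage directly above: combine an entity-mention similarity with a relation-pattern similarity and keep the top-scoring KB triple. The toy similarity function and the tiny knowledge base are illustrative assumptions; the passage itself uses learned convolutional similarity models rather than the placeholder below.

```python
# Sketch of scoring KB triples by combining two similarity scores.
# sim() is a toy stand-in for the CNN-based similarity models described above.

def sim(a, b):
    """Toy character-overlap similarity in [0, 1] (placeholder for a learned model)."""
    a, b = set(a.lower()), set(b.lower())
    return len(a & b) / max(len(a | b), 1)

def answer(entity_mention, relation_pattern, kb_triples):
    """Return the object of the best-scoring (subject, relation, object) triple."""
    best = max(
        kb_triples,
        key=lambda t: sim(entity_mention, t[0]) * sim(relation_pattern, t[1]),
    )
    return best[2]

kb = [("barack obama", "place of birth", "honolulu"),
      ("barack obama", "spouse", "michelle obama")]
print(answer("obama", "where was X born", kb))  # -> honolulu
```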
scidocsrr
96a27f00414afb5d10de8ef79a9dfbc4
Semantic Tagging of Mathematical Expressions
[ { "docid": "83fabef0cead9453d8081f834a08d868", "text": "1. SYSTEM OVERVIEW Researchers working in technical disciplines wishing to search for information related to a particular mathematical expression cannot effectively do so with a text-based search engine unless they know appropriate text keywords. To overcome this difficulty, we demonstrate a math-aware search engine, which extends the capability of existing text search engines to search mathematical content.", "title": "" }, { "docid": "0bf3c08b71fedd629bdc584c3deeaa34", "text": "Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary.", "title": "" } ]
[ { "docid": "adb64a513ab5ddd1455d93fc4b9337e6", "text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.", "title": "" }, { "docid": "86e646b845384d3cfbb146075be5c02a", "text": "Content-Based Image Retrieval (CBIR) has become one of the most active research areas in the past few years. Many visual feature representations have been explored and many systems built. While these research e orts establish the basis of CBIR, the usefulness of the proposed approaches is limited. Speci cally, these e orts have relatively ignored two distinct characteristics of CBIR systems: (1) the gap between high level concepts and low level features; (2) subjectivity of human perception of visual content. This paper proposes a relevance feedback based interactive retrieval approach, which e ectively takes into account the above two characteristics in CBIR. During the retrieval process, the user's high level query and perception subjectivity are captured by dynamically updated weights based on the user's relevance feedback. The experimental results show that the proposed approach greatly reduces the user's e ort of composing a query and captures the user's information need more precisely.", "title": "" }, { "docid": "9ffaf53e8745d1f7f5b7ff58c77602c6", "text": "Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches.", "title": "" }, { "docid": "2e7dd876af56a4698d3e79d3aa5f2eff", "text": "Although there are numerous aetiologies for coccygodynia described in the medical literature, precoccygeal epidermal inclusion cyst presenting as a coccygodynia has not been reported. We report a 30-year-old woman with intractable coccygodynia. Magnetic resonance imaging showed a circumscribed precoccygeal cystic lesion. The removed cyst was pearly-white in appearance and contained cheesy material. 
Histological evaluation established the diagnosis of epidermal inclusion cyst with mild nonspecific inflammation. The patient became asymptomatic and remained so at two years follow-up. This report suggests that precoccygeal epidermal inclusion cyst should be considered as one of the differential diagnosis of coccygodynia. Our experience suggests that patients with intractable coccygodynia should have a magnetic resonance imaging to rule out treatable causes of coccygodynia.", "title": "" }, { "docid": "1c34abb0e212034a5fb96771499f1ee3", "text": "Facial expression recognition is a useful feature in modern human computer interaction (HCI). In order to build efficient and reliable recognition systems, face detection, feature extraction and classification have to be robustly realised. Addressing the latter two issues, this work proposes a new method based on geometric and transient optical flow features and illustrates their comparison and integration for facial expression recognition. In the authors’ method, photogrammetric techniques are used to extract three-dimensional (3-D) features from every image frame, which is regarded as a geometric feature vector. Additionally, optical flow-based motion detection is carried out between consecutive images, what leads to the transient features. Artificial neural network and support vector machine classification results demonstrate the high performance of the proposed method. In particular, through the use of 3-D normalisation and colour information, the proposed method achieves an advanced feature representation for the accurate and robust classification of facial expressions.", "title": "" }, { "docid": "dd06708ab6f67287e213bdb7b4436491", "text": "Here we present the design of a passive-dynamics based, fully autonomous, 3-D, bipedal walking robot that uses simple control, consumes little energy, and has human-like morphology and gait. Design aspects covered here include the freely rotating hip joint with angle bisecting mechanism; freely rotating knee joints with latches; direct actuation of the ankles with a spring, release mechanism, and reset motor; wide feet that are shaped to aid lateral stability; and the simple control algorithm. The biomechanics context of this robot is discussed in more detail in [1], and movies of the robot walking are available at Science Online and http://www.tam.cornell.edu/~ruina/powerwalk.html. This robot adds evidence to the idea that passive-dynamic approaches might help design walking robots that are simpler, more efficient and easier to control.", "title": "" }, { "docid": "966fa8e8eaf66201494633e582e11a31", "text": "This paper describes the development of a noninvasive blood pressure measurement (NIBP) device based on the oscillometric principle. The device is composed of an arm cuff, an air-pumping motor, a solenoid valve, a pressure transducer, and a 2×16 characters LCD display module and a microcontroller which acts as the central controller and processor for the hardware. In the development stage, an auxiliary instrumentation for signal acquisition and digital signal processing using LabVIEW, which is also known as virtual instrument (VI), is incorporated for learning and experimentation purpose. Since the most problematic part of metrological evaluation of an oscillometric NIBP system is in the proprietary algorithms of determining systolic blood pressure (SBP) and diastolic blood pressure (DBP), the amplitude algorithm is used. 
The VI is a useful tool for studying data acquisition and signal processing to determine SBP and DBP from the maximum of the oscillations envelope. The knowledge from VI procedures is then adopted into a stand alone NIBP device. SBP and DBP are successfully obtained using the circuit developed for the NIBP device. The work done is a proof of design concept that requires further refinement.", "title": "" }, { "docid": "c481baeab2091672c044c889b1179b1f", "text": "Our research is based on an innovative approach that integrates computational thinking and creative thinking in CS1 to improve student learning performance. Referencing Epstein's Generativity Theory, we designed and deployed a suite of creative thinking exercises with linkages to concepts in computer science and computational thinking, with the premise that students can leverage their creative thinking skills to \"unlock\" their understanding of computational thinking. In this paper, we focus on our study on differential impacts of the exercises on different student populations. For all students there was a linear \"dosage effect\" where completion of each additional exercise increased retention of course content. The impacts on course grades, however, were more nuanced. CS majors had a consistent increase for each exercise, while non-majors benefited more from completing at least three exercises. It was also important for freshmen to complete all four exercises. We did find differences between women and men but cannot draw conclusions.", "title": "" }, { "docid": "a67df1737ca4e5cb41fe09ccb57c0e88", "text": "Generation of electricity from solar energy has gained worldwide acceptance due to its abundant availability and eco-friendly nature. Even though the power generated from solar looks to be attractive, its availability is subjected to variation owing to many factors such as change in irradiation, temperature, shadow etc. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) methods was the subject of study in the recent past. Among many methods proposed, Hill Climbing and Incremental Conductance MPPT methods were popular in reaching Maximum Power under constant irradiation. However, these methods show large steady state oscillations around MPP and poor dynamic performance when subjected to change in environmental conditions. On the other hand, bio-inspired algorithms showed excellent characteristics when dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations. Hence, in this paper an attempt is made by applying modifications to the Particle Swarm Optimization technique, with emphasis on initial value selection, for Maximum Power Point Tracking. The key features of this method include the ability to track the global peak power accurately under change in environmental condition with almost zero steady state oscillations, faster dynamic response and easy implementation. Systematic evaluation has been carried out for different partial shading conditions and finally the results obtained are compared with existing methods. In addition, simulation results are validated via a built-in hardware prototype. Nomenclature: IPV, current source; Rs, series resistance; Rp, parallel resistance; VD, diode voltage; ID, diode current; I0, leakage current; Vmpp, voltage at maximum power point; Voc, open circuit voltage; Impp, current at maximum power point; Isc, short circuit current; Vmpn, nominal maximum power point voltage at 1000 W/m2; Npp, number of parallel PV modules; Nss, number of series PV modules; w, weight factor; c1, c2, acceleration factors; pbest, personal best position; gbest, global best position; Vt, thermal voltage; K, Boltzmann constant; T, temperature; q, electron charge; Ns, number of cells in series; Vocn, nominal open circuit voltage at 1000 W/m2; G, irradiation; Gn, nominal irradiation; Kv, voltage temperature coefficient; dT, difference in temperature; RLmin, minimum value of load at output; RLmax, maximum value of load at output; Rin, internal resistance of the PV module; RPVmin, minimum reflective impedance of PV array; RPVmax, maximum reflective impedance of PV array; R, equivalent output load resistance. Introduction: Ever growing energy demand by mankind and the limited availability of resources remain a major challenge to the power sector industry. The need for renewable energy resources has been augmented in large scale and aroused due to its huge availability and pollution free operation. Among the various renewable energy resources, solar energy has gained worldwide recognition because of its minimal maintenance, zero noise and reliability. Because of the aforementioned advantages, solar energy has been widely used for various applications, but not limited to, such as megawatt scale power plants, water pumping, solar home systems, communication satellites, space vehicles and reverse osmosis plants [1]. However, power generation using solar energy still remains uncertain, despite all the efforts, due to various factors such as poor conversion efficiency, high installation cost and reduced power output under varying environmental conditions. Further, the characteristics of solar PV are non-linear in nature, imposing constraints on solar power generation. Therefore, to maximize the power output from solar PV and to enhance the operating efficiency of the solar photovoltaic system, Maximum Power Point Tracking (MPPT) algorithms are essential [2]. Various MPPT algorithms [3–5] have been investigated and reported in the literature and the most popular ones are Fractional Open Circuit Voltage [6–8], Fractional Short Circuit Current [9–11], Perturb and Observe (P&O) [12–17], Incremental Conductance (Inc. Cond.) [18–22], and the Hill Climbing (HC) algorithm [23–26]. In the fractional open circuit voltage and fractional short circuit current methods, performance depends on an approximate linear correlation between Vmpp, Voc and Impp, Isc values. However, the above relation is not practically valid; hence, the exact value of the Maximum Power Point (MPP) cannot be assured. The Perturb and Observe (P&O) method works with voltage perturbation based on present and previous operating power values. Regardless of its simple structure, its efficiency principally depends on the tradeoff between the tracking speed and the steady state oscillations in the region of MPP [15]. The Incremental Conductance (Inc. Cond.) algorithm works on the principle of comparing ratios of incremental conductance with instantaneous conductance and it has a similar disadvantage as that of the P&O method [20,21]. The HC method works alike P&O but it is based on the perturbation of the duty cycle of the power converter. All these traditional methods have the following disadvantages in common: reduced efficiency and steady state oscillations around MPP. Realizing the above stated drawbacks, various researchers have worked on applying certain Artificial Intelligence (AI) techniques like Neural Networks (NN) [27,28] and Fuzzy Logic Control (FLC) [29,30]. However, these techniques require periodic training, an enormous volume of data for training, computational complexity and large memory capacity. Application of the aforementioned MPPT methods for centralized/string PV systems is limited as they fail to track the global peak power under partial shading conditions. In addition, multiple peaks occur in the P-V curve under partial shading conditions, in which the unique peak point, i.e., the global power peak, should be attained. However, when conventional MPPT techniques are used under such conditions, they usually get trapped in one of the local power peaks, drastically lowering the search efficiency. Hence, to improve the MPP tracking efficiency of conventional methods under partial shading conditions, certain modifications have been proposed in Ref. [31]. Some used a two stage approach to track the MPP [32]. In the first stage, a wide search is performed which ensures that the operating point is moved closer to the global peak, which is further fine-tuned in the second stage to reach the global peak value. Even though tracking efficiency has improved, the method still fails to find the global maximum under all conditions. Another interesting approach is improving the Fibonacci search method for global MPP tracking [33]. Alike the two stage method, this one also suffers from the same drawback that it does not guarantee accurate MPP tracking under all shaded conditions [34]. Yet another unique formulation combining the DIRECT search method with P&O was put forward for global MPP tracking in Ref. [35]. Even though it is rendered effective, it is very complex and increases the computational burden. In the recent past, bio-inspired algorithms like GA, PSO and ACO have drawn considerable researchers' attention for MPPT application, since they ensure a sufficient class of accuracy while dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations [32,36–38]. Further, these methods offer various advantages such as computational simplicity, easy implementation and faster response. Among those methods, the PSO method is largely discussed and widely used for solar MPPT due to the fact that it has a simple structure, system independency, high adaptability and a lesser number of tuning parameters. Further, in the PSO method, particles are allowed to move in random directions and the best values are evolved based on pbest and gbest values. This exploration process is very suitable for MPPT application. To improve the search efficiency of the conventional PSO method, authors have proposed modifications to the existing algorithm. In Ref. [39], the authors have put forward an additional perception capability for the particles in the search space so that best solutions are evolved with higher accuracy than PSO. However, details on implementation under partial shading conditions are not discussed; further, this method is only applicable when the entire module receives uniform insolation. The traditional PSO method is modified in Ref. [40] by introducing equations for velocity update and inertia. Even though the method showed better performance, the use of extra coefficients in the conventional PSO search limits its advantage and increases the computational burden of the algorithm. Another approach", "title": "" }, { "docid": "253fb54d00d50a407452fff881390ba1", "text": "In this work, we investigate the effects of the cascade architecture of dilated convolutions and the deep network architecture of multi-resolution input images on the accuracy of semantic segmentation. We show that a cascade of dilated convolutions is not only able to efficiently capture larger context without increasing computational costs, but can also improve the localization performance. In addition, the deep network architecture for multi-resolution input images increases the accuracy of semantic segmentation by aggregating multi-scale contextual information. Furthermore, our fully convolutional neural network is coupled with a model of fully connected conditional random fields to further remove isolated false positives and improve the prediction along object boundaries. We present several experiments on two challenging image segmentation datasets, showing substantial improvements over strong baselines.", "title": "" }, { "docid": "d961bd734577dad36588f883e56c3a5d", "text": "This paper proposes a Makespan and Reliability based approach, a static scheduling strategy for distributed real time embedded systems that aims to optimize the Makespan and the reliability of an application. This scheduling problem is NP-hard and we rely on a heuristic algorithm to obtain approximate solutions efficiently.
Two contributions have to be outlined: first, a hierarchical cooperation between heuristics that treats the objectives alternately, and second, an Adaptation Module that improves solution exploration by extending the search space. This results in a set of compromise solutions, offering the designer the possibility to make choices in line with his or her needs. The method was tested and experimental results are provided.", "title": "" }, { "docid": "6b214fdd60a1a4efe27258c2ab948086", "text": "Ambient Assisted Living (AAL) aims to create innovative technical solutions and services to support independent living among older adults, improve their quality of life and reduce the costs associated with health and social care. AAL systems provide health monitoring through sensor based technologies to preserve health and functional ability and facilitate social support for the ageing population. Human activity recognition (HAR) is an enabler for the development of robust AAL solutions, especially in safety critical environments. Therefore, HAR models applied within this domain (e.g. for fall detection or for providing contextual information to caregivers) need to be accurate to assist in developing reliable support systems. In this paper, we evaluate three machine learning algorithms, namely Support Vector Machine (SVM), a hybrid of Hidden Markov Models (HMM) and SVM (SVM-HMM) and Artificial Neural Networks (ANNs), applied on a dataset collected between the elderly and their caregiver counterparts. Detected activities will later serve as inputs to a bidirectional activity awareness system for increasing social connectedness. Results show high classification performances for all three algorithms. Specifically, the SVM-HMM hybrid demonstrates the best classification performance. In addition to this, we make our dataset publicly available for use by the machine learning community.", "title": "" }, { "docid": "f201e043022b02ecd763e2b6d751d21b", "text": "The paper presents an efficient and reliable approach to automatic people segmentation, tracking and counting, designed for a system with an overhead mounted (zenithal) camera. Upon the initial block-wise background subtraction, k-means clustering is used to enable the segmentation of single persons in the scene. The number of people in the scene is estimated as the maximal number of clusters with acceptable inter-cluster separation. Tracking of segmented people is addressed as a problem of dynamic cluster assignment between two consecutive frames and it is solved in a greedy fashion. Systems for people counting are applied to people surveillance and management and lately within ambient intelligence solutions. Experimental results suggest that the proposed method is able to achieve very good results in terms of counting accuracy and execution speed.", "title": "" }, { "docid": "e3b91b1133a09d7c57947e2cd85a17c7", "text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short of executing complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level.
Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.", "title": "" }, { "docid": "b420be5b34185e4604f22b038a605c92", "text": "Computer networks are inherently social networks, linking people, organizations, and knowledge. They are social institutions that should not be studied in isolation but as integrated into everyday lives. The proliferation of computer networks has facilitated a deemphasis on group solidarities at work and in the community and afforded a turn to networked societies that are loosely bounded and sparsely knit. The Internet increases people's social capital, increasing contact with friends and relatives who live nearby and far away. New tools must be developed to help people navigate and find knowledge in complex, fragmented, networked societies.", "title": "" }, { "docid": "29414157d8054f80db977a2b90992a23", "text": "Scatter search and its generalized form called path relinking are evolutionary methods that have recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s for combining decision rules and problem constraints, these methods use strategies for combining solution vectors that have proved effective for scheduling, routing, financial product design, neural network training, optimizing simulation and a variety of other problem areas. These approaches can be implemented in multiple ways, and offer numerous alternatives for exploiting their basic ideas. We identify a template for scatter search and path relinking methods that provides a convenient and \"user friendly\" basis for their implementation. The overall design can be summarized by a small number of key steps, leading to versions of scatter search and path relinking that are fully specified upon providing a handful of subroutines. Illustrative forms of these subroutines are described that make it possible to create methods for a wide range of optimization problems. Highlights of these components include new diversification generators for zero-one and permutation problems (extended by a mapping-byobjective technique that handles additional classes of problems), together with processes to avoid generating or incorporating duplicate solutions at various stages (related to the avoidance of cycling in tabu search) and a new method for creating improved solutions. *** UPDATED AND EXTENDED: February 1998 *** Previous version appeared in Lecture Notes in Computer Science, 1363, J.K. Hao, E. Lutton, E. Ronald, M. Schoenauer, D. Snyers (Eds.), 13-54, 1997. This research was supported in part by the Air Force Office of Scientific Research Grant #F49620-97-1-0271.", "title": "" }, { "docid": "d7780a122b51adc30f08eeb13af78bd1", "text": "Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed on an analysis environment. 
Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the \"wear and tear\" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks like, to how realistic its past use looks like, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.", "title": "" }, { "docid": "efed670ac36ee6f4e084755b4b408467", "text": "In a variety of problem domains, it has been observed that the aggregate opinions of groups are often more accurate than those of the constituent individuals, a phenomenon that has been termed the \"wisdom of the crowd.\" Yet, perhaps surprisingly, there is still little consensus on how generally the phenomenon holds, how best to aggregate crowd judgements, and how social influence affects estimates. We investigate these questions by taking a meta wisdom of crowds approach. With a distributed team of over 100 student researchers across 17 institutions in the United States and India, we develop a large-scale online experiment to systematically study the wisdom of crowds effect for 1,000 different tasks in 50 subject domains. These tasks involve various types of knowledge (e.g., explicit knowledge, tacit knowledge, and prediction), question formats (e.g., multiple choice and point estimation), and inputs (e.g., text, audio, and video). To examine the effect of social influence, participants are randomly assigned to one of three different experiment conditions in which they see varying degrees of information on the responses of others. In this ongoing project, we are now preparing to recruit participants via Amazon?s Mechanical Turk.", "title": "" }, { "docid": "9409922d01a00695745939b47e6446a0", "text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. 
We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.", "title": "" }, { "docid": "8d7467bf868d3a75821aa8f4f7513312", "text": "Search on PCs has become less efficient than searching the Web due to the increasing amount of stored data. In this paper we present an innovative Desktop search solution, which relies on extracted metadata, context information as well as additional background information for improving Desktop search results. We also present a practical application of this approach — the extensible Beagle toolbox. To prove the validity of our approach, we conducted a series of experiments. By comparing our results against the ones of a regular Desktop search solution — Beagle — we show an improved quality in search and overall performance.", "title": "" } ]
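For reference, the pbest/gbest velocity-and-position update that the PSO-based MPPT passage earlier in this list builds on is, in its textbook form, the rule sketched below. The inertia and acceleration constants and the toy power function are illustrative assumptions, not the modified algorithm that the cited paper proposes.

```python
import random

# Textbook PSO update (inertia weight w, acceleration factors c1/c2,
# personal best pbest and global best gbest). pv_power() is a toy stand-in
# for the measured PV power at a given duty cycle.

def pv_power(duty):
    return -(duty - 0.63) ** 2 + 1.0

def pso_step(x, v, pbest, gbest, w=0.4, c1=1.5, c2=1.5):
    r1, r2 = random.random(), random.random()
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

positions = [0.2, 0.5, 0.8]          # duty cycles of three particles
velocities = [0.0, 0.0, 0.0]
pbest = positions[:]
gbest = max(positions, key=pv_power)
for i in range(3):
    positions[i], velocities[i] = pso_step(positions[i], velocities[i],
                                           pbest[i], gbest)
    if pv_power(positions[i]) > pv_power(pbest[i]):
        pbest[i] = positions[i]
gbest = max(pbest, key=pv_power)
print(round(gbest, 3))
```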
scidocsrr
bedac5851c99b9c9dcd146c536e0be4e
Single Image Dehazing via Multi-scale Convolutional Neural Networks
[ { "docid": "a620202abaa0f11d2d324b05a29986dd", "text": "Haze is an atmospheric phenomenon that significantly degrades the visibility of outdoor scenes. This is mainly due to the atmosphere particles that absorb and scatter the light. This paper introduces a novel single image approach that enhances the visibility of such degraded images. Our method is a fusion-based strategy that derives from two original hazy image inputs by applying a white balance and a contrast enhancing procedure. To blend effectively the information of the derived inputs to preserve the regions with good visibility, we filter their important features by computing three measures (weight maps): luminance, chromaticity, and saliency. To minimize artifacts introduced by the weight maps, our approach is designed in a multiscale fashion, using a Laplacian pyramid representation. We are the first to demonstrate the utility and effectiveness of a fusion-based technique for dehazing based on a single degraded image. The method performs in a per-pixel fashion, which is straightforward to implement. The experimental results demonstrate that the method yields results comparative to and even better than the more complex state-of-the-art techniques, having the advantage of being appropriate for real-time applications.", "title": "" }, { "docid": "c5427ac777eaa3ecf25cb96a124eddfe", "text": "One source of difficulties when processing outdoor images is the presence of haze, fog or smoke which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with other is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the possibility to handle both color images or gray level images since the ambiguity between the presence of fog and the objects with low color saturation is solved by assuming only small objects can have colors with low saturation. The algorithm is controlled only by a few parameters and consists in: atmospheric veil inference, image restoration and smoothing, tone mapping. A comparative study and quantitative evaluation is proposed with a few other state of the art algorithms which demonstrates that similar or better quality results are obtained. Finally, an application is presented to lane-marking extraction in gray level images, illustrating the interest of the approach.", "title": "" }, { "docid": "9323c74e39a677c28d1c082b12e1f587", "text": "Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Restoring the true scene colors (clear day image) from a single image of a weather-degraded scene remains a challenging task due to the inherent ambiguity between scene albedo and depth. In this paper, we introduce a novel probabilistic method that fully leverages natural statistics of both the albedo and depth of the scene to resolve this ambiguity. Our key idea is to model the image with a factorial Markov random field in which the. scene albedo and depth are. two statistically independent latent layers. We. 
show that we may exploit natural image and depth statistics as priors on these hidden layers and factorize a single foggy image via a canonical Expectation Maximization algorithm with alternating minimization. Experimental results show that the proposed method achieves more accurate restoration compared to state-of-the-art methods that focus on only recovering scene albedo or depth individually.", "title": "" } ]
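Both positive passages above reason about the standard single-image haze formation model. A compact sketch of that model and of the inversion step applied once the transmission and atmospheric light have been estimated (by whatever means) is given below; the numeric values are illustrative and are not taken from either paper.

```python
import numpy as np

# Standard hazy-image formation model used in single-image dehazing:
#   I(x) = J(x) * t(x) + A * (1 - t(x))
# where I is the observed image, J the scene radiance, t the transmission
# (related to depth) and A the atmospheric light ("veil"). Given estimates
# of t and A, the scene radiance is recovered by inverting the model.

def dehaze(I, t, A, t_min=0.1):
    t = np.clip(t, t_min, 1.0)            # avoid blow-up in dense haze
    return (I - A) / t[..., None] + A     # J = (I - A) / t + A

I = np.full((2, 2, 3), 0.8)               # a flat, hazy gray patch
t = np.full((2, 2), 0.5)                  # assumed transmission map
A = 0.9                                   # assumed atmospheric light
print(dehaze(I, t, A)[0, 0])              # recovered radiance ~0.7 per channel
```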
[ { "docid": "3ec66ecce10b58d6338db0c5d694bd66", "text": "In this communication, we have proposed a compact planar antenna for isotropic radiation pattern in a wide operating band. The proposed antenna consists of four sequential rotated L-shaped monopoles that are fed by a compact uniform sequential-phase (SP) feeding network with equal amplitude and incremental 90 ° phase delay. Based on the rotated field method, a full spatial coverage with gain deviation less than 6 dB is achieved in a wide operating band from 2.3 to 2.61 GHz, also with well-impedance matching. A prototype of the proposed antenna has been built and tested. The measured results, including the reflection coefficient, gain, and radiation patterns, are analyzed and compared with the simulated results.", "title": "" }, { "docid": "e7a260bfb238d8b4f147ac9c2a029d1d", "text": "The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-pro t purposes provided that: • a full bibliographic reference is made to the original source • a link is made to the metadata record in DRO • the full-text is not changed in any way The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.", "title": "" }, { "docid": "1d0352fc3237e8605c2d3a04237befa5", "text": "This paper discusses the investigation of bandwidth enhancement for microstrip rectangular patch antenna by putting up multiple slots etched on the patch. The technique is proposed in overcoming the nature characteristic of microstrip patch antenna which has narrowband bandwidth response. Here, the antenna is designed to work at center frequency around 1.6GHz for GPS application. The patch of antenna is fed by using a microstrip transmission line feeding network extended from a center pin of 500hm SMA connector. The number of slots etched on the patch is set to be 13 which are separated 3mm each other. The 13 slots which have different length for each are arranged in parallel with the feeding line. To shows the feasibility of proposed technique, the characteristics of microstrip patch antenna with multiple slots are compared to the conventional microstrip patch antenna. Both antennas are implemented using FR-4 Epoxy dielectric substrates with the dimension of 55mm × 80mm and the thickness of 1.6mm. From the characterization, although there are some slight differences of simulation and measurement results, in general the microstrip patch antenna with multiple slots demonstrates bandwidth enhancement up to 98.3% and 70.8% for simulated result and measured result, respectively.", "title": "" }, { "docid": "ada35607fa56214e5df8928008735353", "text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. 
Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.", "title": "" }, { "docid": "7c287295e022480314d8a2627cd12cef", "text": "The causal role of human papillomavirus infections in cervical cancer has been documented beyond reasonable doubt. The association is present in virtually all cervical cancer cases worldwide. It is the right time for medical societies and public health regulators to consider this evidence and to define its preventive and clinical implications. A comprehensive review of key studies and results is presented.", "title": "" }, { "docid": "fc9061348b46fc1bf7039fa5efcbcea1", "text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.", "title": "" }, { "docid": "30fe7d1b5d273a5ae53365f00ab84749", "text": "data-rich samples are essential for future research. But obtaining and storing these samples is not as straightforward as many researchers think. A cross the world, freezers and cabinet shelves are full of human samples. Biobanks — collections of biological material set aside for research — vary tremendously in size, scope and focus. Samples can be collected from the general population, from patients who have had surgery or a biopsy and from people who have recently died. Some collections date back decades. The Aboriginal genome, for instance, was sequenced from a lock of hair originally given to British ethnolo-gist Alfred Cort Haddon in the 1920s; he criss-crossed the world gathering samples that are now housed at the University of Cambridge, UK. Most collections contain dried or frozen blood, but tissues such as eye, brain and nail are also held. 
Some biobanks address different questions from others: a population-based biobank that collects dried blood and health data may be used to determine the genetic risk factors for breast cancer, whereas a disease biobank that collects tumour samples might be used to reveal different molecular forms of breast cancer. The number of tissue samples in US banks alone was estimated at more than 300 million at the turn of the century and is increasing by 20 million a year, according to a report 1 from the research organization RAND Corporation in Santa Monica, California. Those numbers are probably an underestimate, says Allison Hubel, director of the Biopreservation Core Resource at the University of Minnesota in Minneapolis. But many scientists still say they cannot obtain enough samples. A 2011 survey 2 of more than 700 cancer researchers found that 47% had trouble finding samples of sufficient quality. Because of this, 81% have reported limiting the scope of their work, and 60% said they question the findings of their studies. Whereas researchers would once have inspected biological specimens under a microscope or measured only a handful of chemical constituents, or analytes, now they want to profile hundreds of molecules, including DNA, RNA, proteins and metabolites. The Larger biobanks have invested in automated storage and retrieval systems to track samples and ensure that they are maintained at a constant temperature.", "title": "" }, { "docid": "764840c288985e0257413c94205d2bf2", "text": "Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.", "title": "" }, { "docid": "55de949330351b4a70fad626805df6b6", "text": "The 21164 is a new quad-issue, superscalar Alpha microprocessor that executes 1.2 billion instructions per second. The 300-MHz, 0.5-μm CMOS chip delivers an estimated 345/505 SPECint32/SPECfp92 performance. The design's high clock rate, low operational latency, and high-throughput/nonblocking memory systems contribute to this performance.", "title": "" }, { "docid": "f2e9083262c2680de3cf756e7960074a", "text": "Social commerce is a new development in e-commerce generated by the use of social media to empower customers to interact on the Internet. The recent advancements in ICTs and the emergence of Web 2.0 technologies along with the popularity of social media and social networking sites have seen the development of new social platforms. 
These platforms facilitate the use of social commerce. Drawing on literature from marketing and information systems (IS), the author proposes a new model to develop our understanding of social commerce using a PLS-SEM methodology to test the model. Results show that Web 2.0 applications are attracting individuals to have interactions as well as generate content on the Internet. Consumers use social commerce constructs for these activities, which in turn increase the level of trust and intention to buy. Implications, limitations, discussion, and future research directions are discussed at the end of the paper. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d2712a4e774b0c49988bb2aecbeedca9", "text": "Elucidating the binding mode of carboxylate-containing ligands to gold nanoparticles (AuNPs) is crucial to understand their stabilizing role. A detailed picture of the three-dimensional structure and coordination modes of citrate, acetate, succinate and glutarate to AuNPs is obtained by 13C and 23Na solid-state NMR in combination with computational modelling and electron microscopy. The binding between the carboxylates and the AuNP surface is found to occur in three different modes. These three modes are simultaneously present at low citrate to gold ratios, while a monocarboxylate monodentate (1κO1) mode is favoured at high citrate:gold ratios. The surface AuNP atoms are found to be predominantly in the zero oxidation state after citrate coordination, although trace amounts of Auδ+ are observed. 23Na NMR experiments show that Na+ ions are present near the gold surface, indicating that carboxylate binding occurs as a 2e- L-type interaction for each oxygen atom involved. This approach has broad potential to probe the binding of a variety of ligands to metal nanoparticles.", "title": "" }, { "docid": "d1bd01a4760f08ebe3557557327108b4", "text": "This paper investigates the performance of our recently proposed precoding multiuser (MU) MIMO system in indoor visible light communications (VLC). The transmitted data of decentralized users are transmitted by light-emitting-diode (LED) arrays after precoding in a transmitter, by which the MU interference is eliminated. Thus, the complexity of user terminals could be reduced, which results in the reduction of power consumption. The limitation of the block diagonalization precoding algorithm in VLC systems is investigated. The corresponding solution by utilizing optical detectors with different fields of view (FOV) is derived, and the impact of FOV on the proposed system is also analyzed. In this paper, we focus on BER and signal-to-noise-ratio performances of the proposed system with the consideration of the mobility of user terminals. Simulation results show that the majority of the indoor region can achieve 100 Mb/s at a BER of 10-6 when a single LED chip's power is larger than 10 mW.", "title": "" }, { "docid": "1a0e65754fa4d88325e1360a292d4e5f", "text": "Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache.
Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning features/components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained convolutional neural network (CNN). Then, we utilize a branched fully CNN for learning structural and textural representations, respectively. In addition, we design a sorted matching mean square error metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across data set without additional training.", "title": "" }, { "docid": "bbb80fe02979c9c3091abcbf096b7dac", "text": "This paper gives the details of typical network security model using static VLAN technology and TACACS+ AAA server, also discusses security issues, and their risks, caused by different types of vulnerability, threat, attacks and misconfiguration for VLAN technology, and how prevent it and protect the network by using TACACS+ AAA server, switch and router.", "title": "" }, { "docid": "33468c214408d645651871bd8018ed82", "text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.", "title": "" }, { "docid": "40fd577cdff0e5c769127c91a3053fee", "text": "Information Technology (IT) projects have a reputation of not delivering business requirements. Historical challenges like meeting cost, quality, and timeline targets remain despite the extensive experience most organizations have managing projects of all sizes. The profession continues to have high profile failures that make headlines, such as the recent healthcare.gov initiative. This research provides literary sources on agile methodology that can be used to help improve project processes and outcomes.", "title": "" }, { "docid": "475039af64d5357305059cbf1f08228b", "text": "The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine, and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. 
In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of Linear Logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine. PACS: 03.67.Lx, 02.10.-v, 02.70.-c", "title": "" }, { "docid": "a33147bd85b4ecf4f2292e4406abfc26", "text": "Accident detection systems help reduce fatalities stemming from car accidents by decreasing the response time of emergency responders. Smartphones and their onboard sensors (such as GPS receivers and accelerometers) are promising platforms for constructing such systems. This paper provides three contributions to the study of using smartphone-based accident detection systems. First, we describe solutions to key issues associated with detecting traffic accidents, such as preventing false positives by utilizing mobile context information and polling onboard sensors to detect large accelerations. Second, we present the architecture of our prototype smartphone-based accident detection system and empirically analyze its ability to resist false positives as well as its capabilities for accident reconstruction. Third, we discuss how smartphone-based accident detection can reduce overall traffic congestion and increase the preparedness of emergency responders.", "title": "" }, { "docid": "a105e6bc9a3446603959dac61ab50065", "text": "Recent work has examined infrastructure-mediated sensing as a practical, low-cost, and unobtrusive approach to sensing human activity in the physical world. This approach is based on the idea that human activities (e.g., running a dishwasher, turning on a reading light, or walking through a doorway) can be sensed by their manifestations in an environment's existing infrastructures (e.g., a home's water, electrical, and HVAC infrastructures). This paper presents HydroSense, a low-cost and easily-installed single-point sensor of pressure within a home's water infrastructure. HydroSense supports both identification of activity at individual water fixtures within a home (e.g., a particular toilet, a kitchen sink, a particular shower) as well as estimation of the amount of water being used at each fixture. We evaluate our approach using data collected in ten homes. Our algorithms successfully identify fixture events with 97.9% aggregate accuracy and can estimate water usage with error rates that are comparable to empirical studies of traditional utility-supplied water meters. Our results both validate our approach and provide a basis for future improvements.", "title": "" }, { "docid": "b7914e542be8aeb5755106525916e86d", "text": "Waymo's self-driving cars contain a broad set of technologies that enable our cars to sense the vehicle surroundings, perceive and understand what is happening in the vehicle vicinity, and determine the safe and efficient actions that the vehicle should take. Many of these technologies are rooted in advanced semiconductor technologies, e.g. faster transistors that enable more compute or low noise designs that enable the faintest sensor signals to be perceived. This paper summarizes a few areas where semiconductor technologies have proven to be fundamentally enabling to self-driving capabilities. 
The paper also lays out some of the challenges facing advanced semiconductors in the automotive context, as well as some of the opportunities for future innovation.", "title": "" } ]
scidocsrr
608c993bdc4472f0fa55812c0a9c6345
Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" } ]
[ { "docid": "6a61106a92b45ed837bbf45110f92eea", "text": "Since its introduction in 2000 in the time-triggered programming language Giotto, the Logical Execution Time (LET) paradigm has evolved from a highly controversial idea to a well-understood principle of real-time programming. This chapter provides an easy-to-read overview of LET programming languages and runtime systems as well as some LET-inspired models of computation. The presentation is intuitive, by example, citing the relevant literature including more formal treatment of the material for reference.", "title": "" }, { "docid": "17c54cad1666e22db0c5dd9c81d43b8b", "text": "With the prevalence of e-commence websites and the ease of online shopping, consumers are embracing huge amounts of various options in products. Undeniably, shopping is one of the most essential activities in our society and studying consumer’s shopping behavior is important for the industry as well as sociology and psychology. Not surprisingly, one of the most popular e-commerce categories is clothing business. There arises the needs for analysis of popular and attractive clothing features which could further boost many emerging applications, such as clothing recommendation and advertising. In this work, we design a novel system that consists of three major components: 1) exploring and organizing a large-scale clothing dataset from a online shopping website, 2) pruning and extracting images of best-selling products in clothing item data and user transaction history, and 3) utilizing a machine learning based approach to discovering clothing attributes as the representative and discriminative characteristics of popular clothing style elements. Through the experiments over a large-scale online clothing dataset, we demonstrate the effectiveness of our proposed system, and obtain useful insights on clothing consumption trends and profitable clothing features.", "title": "" }, { "docid": "d7ac0414b269202015d29ddaaa4bd436", "text": "Mobile manipulation tasks in shopfloor logistics require robots to grasp objects from various transport containers such as boxes and pallets. In this paper, we present an efficient processing pipeline that detects and localizes boxes and pallets in RGB-D images. Our method is based on edges in both the color image and the depth image and uses a RANSAC approach for reliably localizing the detected containers. Experiments show that the proposed method reliably detects and localizes both container types while guaranteeing low processing times.", "title": "" }, { "docid": "9d210dc8bc48e4ff9bf72c260f169ada", "text": "We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identiied by a determin-istic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. 
We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time algorithms and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.", "title": "" }, { "docid": "740d130948c25d5cd2027645bab151a9", "text": "Ahstract-The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation. This paper presents the design of our custom-built, cost-effective, Cartesian robot system Cartman, which won first place in the competition finals by stowing 14 (out of 16) and picking all 9 items in 27 minutes, scoring a total of 272 points. We highlight our experience-centred design methodology and key aspects of our system that contributed to our competitiveness. We believe these aspects are crucial to building robust and effective robotic systems.", "title": "" }, { "docid": "bfd97b5576873345b0474a645ccda1d6", "text": "We present a direct monocular visual odometry system which runs in real-time on a smartphone. Being a direct method, it tracks and maps on the images themselves instead of extracted features such as keypoints. New images are tracked using direct image alignment, while geometry is represented in the form of a semi-dense depth map. Depth is estimated by filtering over many small-baseline, pixel-wise stereo comparisons. This leads to significantly less outliers and allows to map and use all image regions with sufficient gradient, including edges. We show how a simple world model for AR applications can be derived from semi-dense depth maps, and demonstrate the practical applicability in the context of an AR application in which simulated objects can collide with real geometry.", "title": "" }, { "docid": "a6fffd709fdc90e8135881c925e05c1c", "text": "A field of spoken dialog systems is a rapidly growing research area because the performance improvement of speech technologies motivates the possibility of building systems that a human can easily operate in order to access useful information via spoken languages. Among the components in a spoken dialog system, the dialog management plays major roles such as discourse analysis, database access, error handling, and system action prediction. This survey covers design issues and recent approaches to the dialog management techniques for modeling the dialogs. We also explain the user simulation techniques for automatic evaluation of spoken dialog systems.", "title": "" }, { "docid": "c4e94803ae52dbbf4ac58831ff381467", "text": "Dynamic Adaptive Streaming over HTTP (DASH) is broadly deployed on the Internet for live and on-demand video streaming services. Recently, a new version of HTTP was proposed, named HTTP/2. One of the objectives of HTTP/2 is to improve the end-user perceived latency compared to HTTP/1.1. HTTP/2 introduces the possibility for the server to push resources to the client. This paper focuses on using the HTTP/2 protocol and the server push feature to reduce the start-up delay in a DASH streaming session. In addition, the paper proposes a new approach for video adaptation, which consists in estimating the bandwidth, using WebSocket (WS) over HTTP/2, and in making partial adaptation on the server side. Obtained results show that, using the server push feature and WebSocket layered over HTTP/2 allow faster loading time and faster convergence to the nominal state. 
Proposed solution is studied in the context of a direct client-server HTTP/2 connection. Intermediate caches are not considered in this study.", "title": "" }, { "docid": "20dd21215f9dc6bd125b2af53500614d", "text": "In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target language paraphrases produced through word and phrase alignment a number of alternative reference sentences are constructed automatically for each candidate translation. The method produces lexical and lowlevel syntactic paraphrases that are relevant to the domain in hand, does not use external knowledge resources, and can be combined with a variety of automatic MT evaluation system.", "title": "" }, { "docid": "035feb63adbe5f83b691e8baf89629cc", "text": "In this article we study the problem of document image representation based on visual features. We propose a comprehensive experimental study that compares three types of visual document image representations: (1) traditional so-called shallow features, such as the RunLength and the Fisher-Vector descriptors, (2) deep features based on Convolutional Neural Networks, and (3) features extracted from hybrid architectures that take inspiration from the two previous ones. We evaluate these features in several tasks ( i.e. classification, clustering, and retrieval) and in different setups ( e.g. domain transfer) using several public and in-house datasets. Our results show that deep features generally outperform other types of features when there is no domain shift and the new task is closely related to the one used to train the model. However, when a large domain or task shift is present, the Fisher-Vector shallow features generalize better and often obtain the best results.", "title": "" }, { "docid": "1162833be969a71b3d9b837d7e6f4464", "text": "RaineR WaseR1,2* and Masakazu aono3,4 1Institut für Werkstoffe der Elektrotechnik 2, RWTH Aachen University, 52056 Aachen, Germany 2Institut für Festkörperforschung/CNI—Center of Nanoelectronics for Information Technology, Forschungszentrum Jülich, 52425 Jülich, Germany 3Nanomaterials Laboratories, National Institute for Material Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan 4ICORP/Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan *e-mail: r.waser@fz-juelich.de", "title": "" }, { "docid": "5fcda05ef200cd326ecb9c2412cf50b3", "text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). 
Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.", "title": "" }, { "docid": "4753ea589bd7dd76d3fb08ba8dce65ff", "text": "Frequent Patterns are very important in knowledge discovery and data mining process such as mining of association rules, correlations etc. Prefix-tree based approach is one of the contemporary approaches for mining frequent patterns. FP-tree is a compact representation of transaction database that contains frequency information of all relevant Frequent Patterns (FP) in a dataset. Since the introduction of FP-growth algorithm for FP-tree construction, three major algorithms have been proposed, namely AFPIM, CATS tree, and CanTree, that have adopted FP-tree for incremental mining of frequent patterns. All of the three methods perform incremental mining by processing one transaction of the incremental database at a time and updating it to the FP-tree of the initial (original) database. Here in this paper we propose a novel method to take advantage of FP-tree representation of incremental transaction database for incremental mining. We propose “Batch Incremental Tree (BIT)” algorithm to merge two small consecutive duration FP-trees to obtain a FP-tree that is equivalent of FP-tree obtained when the entire database is processed at once from the beginning of the first duration", "title": "" }, { "docid": "6b6ae482da118cff7b1351694582ef35", "text": "In this thesis, we propose a unified framework for the pragmatics and the semantics of agent communication. Pragmatics deals with the way agents use communicative acts when conversing. It is related to the dynamics of agent interactions and to the way of connecting individual acts while building complete conversations. Semantics is interested in the meaning of these acts. It lays down the foundation for a concise and unambiguous meaning of agent messages. This framework aims at solving three main problems of agent communication: 1The absence of a link between the pragmatics and the semantics. 2The inflexibility of current agent communication protocols. 3The verification of agent communication mechanisms. The main contributions of this thesis are: 1A formal pragmatic approach based on social commitments and arguments. 2A new agent communication formalism called Commitment and Argument Network. 3A logical model defining the semantics of the elements used in the pragmatic approach. 4A tableau-based model checking technique for the verification of a kind of flexible protocols called dialogue game protocols. 5A new persuasion dialogue game protocol. The main idea of our pragmatic approach is that agent communication is considered as actions that agents perform on social commitments and arguments. The dynamics of agent conversation is represented by this notion of actions and by the evolution of these commitments and arguments. 
Our Commitment and Argument Network formalism based on this approach provides an external representation of agent communication dynamics. We argue that this formalism helps agents to participate in conversations in a flexible way because they can reason about their communicative acts using their argumentation systems and the current state of the conversation. Our logical model is a model-theoretic semantics for the pragmatic approach. It defines the meaning of the different communicative acts that we use in our pragmatic approach. It also expresses the meaning of some important speech acts and it captures the semantics of defeasible arguments. This logical model allows us to establish the link between the semantics and the pragmatics of agent communication. We address the problem of verifying dialogue game protocols using a tableau-based model checking technique. These protocols are specified in terms of our logical model. We argue that our model checking algorithm provides a technique, not only to verify if the dialogue game protocol satisfies a given property, but also if this protocol respects the underlying semantics of the communicative acts.", "title": "" }, { "docid": "a819deacd526107ae6370a031caa4e77", "text": "Historical and cultural building conservation is nowadays limited to high value buildings and only receive financial support when serious issues arise. HeritageCARE project is willing to shift this mentality, towards \"prevention is better than the cure\". New technologies are able to support such motivation by improving building inspector work during their inspection routine. In that regard, mixed reality will be explored to offer easy and efficient interaction while remaining focused on the task at hand. Moreover, integration with historical building information model will ensure capture of localized data and easier exchange with stakeholders in case of renovation projects. This paper presents an architecture for a mixed reality application working towards this goal and a first prototype.", "title": "" }, { "docid": "8654f6e707c77a0f46ee993f1f27a287", "text": "The DeepQ tricorder device developed by HTC from 2013 to 2016 was entered in the Qualcomm Tricorder XPRIZE competition and awarded the second prize in April 2017. This paper presents DeepQ»s three modules powered by artificial intelligence: symptom checker, optical sense, and vital sense. We depict both their initial design and ongoing enhancements.", "title": "" }, { "docid": "f06cf2892c85fc487d50c17a87061a0d", "text": "Decision-making invokes two fundamental axes of control: affect or valence, spanning reward and punishment, and effect or action, spanning invigoration and inhibition. We studied the acquisition of instrumental responding in healthy human volunteers in a task in which we orthogonalized action requirements and outcome valence. Subjects were much more successful in learning active choices in rewarded conditions, and passive choices in punished conditions. Using computational reinforcement-learning models, we teased apart contributions from putatively instrumental and Pavlovian components in the generation of the observed asymmetry during learning. Moreover, using model-based fMRI, we showed that BOLD signals in striatum and substantia nigra/ventral tegmental area (SN/VTA) correlated with instrumentally learnt action values, but with opposite signs for go and no-go choices. Finally, we showed that successful instrumental learning depends on engagement of bilateral inferior frontal gyrus. 
Our behavioral and computational data showed that instrumental learning is contingent on overcoming inherent and plastic Pavlovian biases, while our neuronal data showed this learning is linked to unique patterns of brain activity in regions implicated in action and inhibition respectively.", "title": "" }, { "docid": "89f7a2ddca32772a31a61d3276b4a0a7", "text": "This paper describes the design and implementation of control unit of a 16-bit processor that is implemented in Spartan-II FPGA device. The CPU (Central Processing Unit) is the “brain” of the computer. Its function is to execute the programs stored in the main memory by fetching their instructions, examining them, and executing them one after another. The CPU is composed of several distinct parts, like data path, control path and memory units. For operating the data path CONTROL UNIT is needed to generate the control signals automatically at each clock cycle. The proposed architecture illustrates behavioral and structural description styles of a 16-bit", "title": "" }, { "docid": "70e48e533c5ee411a013999f33e4ae3e", "text": "Traditional sentiment analysis mainly considers binary classifications of reviews, but in many real-world sentiment classification problems, nonbinary review ratings are more useful. This is especially true when consumers wish to compare two products, both of which are not negative. Previous work has addressed this problem by extracting various features from the review text for learning a predictor. Since the same word may have different sentiment effects when used by different reviewers on different products, we argue that it is necessary to model such reviewer and product dependent effects in order to predict review ratings more accurately. In this paper, we propose a novel learning framework to incorporate reviewer and product information into the text based learner for rating prediction. The reviewer, product and text features are modeled as a three-dimension tensor. Tensor factorization techniques can then be employed to reduce the data sparsity problems. We perform extensive experiments to demonstrate the effectiveness of our model, which has a significant improvement compared to state of the art methods, especially for reviews with unpopular products and inactive reviewers.", "title": "" } ]
scidocsrr
88133fc009fd35f3a7f47df4ce9ad01c
Design Activity Framework for Visualization Design
[ { "docid": "ed4dcf690914d0a16d2017409713ea5f", "text": "We argue that HCI has emerged as a design-oriented field of research, directed at large towards innovation, design, and construction of new kinds of information and interaction technology. But the understanding of such an attitude to research in terms of philosophical, theoretical, and methodological underpinnings seems however relatively poor within the field. This paper intends to specifically address what design 'is' and how it is related to HCI. First, three candidate accounts from design theory of what design 'is' are introduced; the conservative, the romantic, and the pragmatic. By examining the role of sketching in design, it is found that the designer becomes involved in a necessary dialogue, from which the design problem and its solution are worked out simultaneously as a closely coupled pair. In conclusion, it is proposed that we need to acknowledge, first, the role of design in HCI conduct, and second, the difference between the knowledge-generating Design-oriented Research and the artifact-generating conduct of Research-oriented Design.", "title": "" } ]
[ { "docid": "aa2b1a8d0cf511d5862f56b47d19bc6a", "text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:", "title": "" }, { "docid": "381ce2a247bfef93c67a3c3937a29b5a", "text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.", "title": "" }, { "docid": "ccf8e1f627af3fe1327a4fa73ac12125", "text": "One of the most common needs in manufacturing plants is rejecting products not coincident with the standards as anomalies. Accurate and automatic anomaly detection improves product reliability and reduces inspection cost. Probabilistic models have been employed to detect test samples with lower likelihoods as anomalies in unsupervised manner. Recently, a probabilistic model called deep generative model (DGM) has been proposed for end-to-end modeling of natural images and already achieved a certain success. However, anomaly detection of machine components with complicated structures is still challenging because they produce a wide variety of normal image patches with low likelihoods. For overcoming this difficulty, we propose unregularized score for the DGM. As its name implies, the unregularized score is the anomaly score of the DGM without the regularization terms. The unregularized score is robust to the inherent complexity of a sample and has a smaller risk of rejecting a sample appearing less frequently but being coincident with the standards.", "title": "" }, { "docid": "ca29896e6adcd09ebcb6456d1b7678fe", "text": "Causation looms large in legal and moral reasoning. People construct causal models of the social and physical world to understand what has happened, how and why, and to allocate responsibility and blame. This chapter explores people’s commonsense notion of causation, and shows how it underpins moral and legal judgments. As a guiding framework it uses the causal model framework (Pearl, 2000) rooted in structural models and counterfactuals, and shows how it can resolve many of the problems that beset standard butfor analyses. 
It argues that legal concepts of causation are closely related to everyday causal reasoning, and both are tailored to the practical concerns of responsibility attribution. Causal models are also critical when people evaluate evidence, both in terms of the stories they tell to make sense of evidence, and the methods they use to assess its credibility and reliability.", "title": "" }, { "docid": "64de73be55c4b594934b0d1bd6f47183", "text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.", "title": "" }, { "docid": "ba6873627b976fa1a3899303b40eae3c", "text": "Most plant seeds are dispersed in a dry, mature state. If these seeds are non-dormant and the environmental conditions are favourable, they will pass through the complex process of germination. In this review, recent progress made with state-of-the-art techniques including genome-wide gene expression analyses that provided deeper insight into the early phase of seed germination, which includes imbibition and the subsequent plateau phase of water uptake in which metabolism is reactivated, is summarized. The physiological state of a seed is determined, at least in part, by the stored mRNAs that are translated upon imbibition. Very early upon imbibition massive transcriptome changes occur, which are regulated by ambient temperature, light conditions, and plant hormones. The hormones abscisic acid and gibberellins play a major role in regulating early seed germination. The early germination phase of Arabidopsis thaliana culminates in testa rupture, which is followed by the late germination phase and endosperm rupture. An integrated view on the early phase of seed germination is provided and it is shown that it is characterized by dynamic biomechanical changes together with very early alterations in transcript, protein, and hormone levels that set the stage for the later events. Early seed germination thereby contributes to seed and seedling performance important for plant establishment in the natural and agricultural ecosystem.", "title": "" }, { "docid": "0f56b99bc1d2c9452786c05242c89150", "text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. 
Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.", "title": "" }, { "docid": "0b9dd779ada0ed128c95822f647e0a00", "text": "In this paper, we propose a bigram based supervised method for extractive document summarization in the integer linear programming (ILP) framework. For each bigram, a regression model is used to estimate its frequency in the reference summary. The regression model uses a variety of indicative features and is trained discriminatively to minimize the distance between the estimated and the ground truth bigram frequency in the reference summary. During testing, the sentence selection problem is formulated as an ILP problem to maximize the bigram gains. We demonstrate that our system consistently outperforms the previous ILP method on different TAC data sets, and performs competitively compared to the best results in the TAC evaluations. We also conducted various analysis to show the impact of bigram selection, weight estimation, and ILP setup.", "title": "" }, { "docid": "e5667a65bc628b93a1d5b0e37bfb8694", "text": "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving things rather than stuff. 
The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.", "title": "" }, { "docid": "e0fc5dabbc57100a1c726703e82be706", "text": "In this paper, we examined the effects of financial news on Ho Chi Minh Stock Exchange (HoSE) and we tried to predict the direction of VN30 Index after the news articles were published. In order to do this study, we got news articles from three big financial websites and we represented them as feature vectors. Recently, researchers have used machine learning technique to integrate with financial news in their prediction model. Actually, news articles are important factor that influences investors in a quick way so it is worth considering the news impact on predicting the stock market trends. Previous works focused only on market news or on the analysis of the stock quotes in the past to predict the stock market behavior in the future. We aim to build a stock trend prediction model using both stock news and stock prices of VN30 index that will be applied in Vietnam stock market while there has been a little focus on using news articles to predict the stock direction. Experiment results show that our proposed method achieved high accuracy in VN30 index trend prediction.", "title": "" }, { "docid": "1f0dbec4f21549780d25aa81401494c6", "text": "Parallel scientific applications require high-performanc e I/O support from underlying file systems. A comprehensive understanding of the expected workload is t herefore essential for the design of high-performance parallel file systems. We re-examine the w orkload characteristics in parallel computing environments in the light of recent technology ad vances and new applications. We analyze application traces from a cluster with hundreds o f nodes. On average, each application has only one or two typical request sizes. Large requests fro m several hundred kilobytes to several megabytes are very common. Although in some applications, s mall requests account for more than 90% of all requests, almost all of the I/O data are transferre d by large requests. All of these applications show bursty access patterns. More than 65% of write req uests have inter-arrival times within one millisecond in most applications. By running the same be nchmark on different file models, we also find that the write throughput of using an individual out p t file for each node exceeds that of using a shared file for all nodes by a factor of 5. This indicate s that current file systems are not well optimized for file sharing.", "title": "" }, { "docid": "b622c27ba400e349d2b1ad40c7fc90e1", "text": "In this work we examine the feasibility of quantitatively characterizing some aspects of security. In particular, we investigate if it is possible to predict the number of vulnerabilities that can potentially be present in a software system but may not have been found yet. We use several major operating systems as representatives of complex software systems. The data on vulnerabilities discovered in these systems are analyzed. 
We examine the results to determine if the density of vulnerabilities in a program is a useful measure. We also address the question about what fraction of software defects are security related, i.e., are vulnerabilities. We examine the dynamics of vulnerability discovery hypothesizing that it may lead us to an estimate of the magnitude of the undiscovered vulnerabilities still present in the system. We consider the vulnerability discovery rate to see if models can be developed to project future trends. Finally, we use the data for both commercial and opensource systems to determine whether the key observations are generally applicable. Our results indicate that the values of vulnerability densities fall within a range of values, just like the commonly used measure of defect density for general defects. Our examination also reveals that it is possible to model the vulnerability discovery using a logistic model that can sometimes be approximated by a linear model. a 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6386c0ef0d7cc5c33e379d9c4c2ca019", "text": "BACKGROUND\nEven after negative sentinel lymph node biopsy (SLNB) for primary melanoma, patients who develop in-transit (IT) melanoma or local recurrences (LR) can have subclinical regional lymph node involvement.\n\n\nSTUDY DESIGN\nA prospective database identified 33 patients with IT melanoma/LR who underwent technetium 99m sulfur colloid lymphoscintigraphy alone (n = 15) or in conjunction with lymphazurin dye (n = 18) administered only if the IT melanoma/LR was concurrently excised.\n\n\nRESULTS\nSeventy-nine percent (26 of 33) of patients undergoing SLNB in this study had earlier removal of lymph nodes in the same lymph node basin as the expected drainage of the IT melanoma or LR at the time of diagnosis of their primary melanoma. Lymphoscintography at time of presentation with IT melanoma/LR was successful in 94% (31 of 33) cases, and at least 1 sentinel lymph node was found intraoperatively in 97% (30 of 31) cases. The SLNB was positive in 33% (10 of 30) of these cases. Completion lymph node dissection was performed in 90% (9 of 10) of patients. Nine patients with negative SLNB and IT melanoma underwent regional chemotherapy. Patients in this study with a positive sentinel lymph node at the time the IT/LR was mapped had a considerably shorter time to development of distant metastatic disease compared with those with negative sentinel lymph nodes.\n\n\nCONCLUSIONS\nIn this study, we demonstrate the technical feasibility and clinical use of repeat SLNB for recurrent melanoma. Performing SLNB cannot only optimize local, regional, and systemic treatment strategies for patients with LR or IT melanoma, but also appears to provide important prognostic information.", "title": "" }, { "docid": "2838311a22810aa2b5e0747e06d87a9b", "text": "To build a fashion recommendation system, we need to help users retrieve fashionable items that are visually similar to a particular query, for reasons ranging from searching alternatives (i.e., substitutes), to generating stylish outfits that are visually consistent, among other applications. In domains like clothing and accessories, such considerations are particularly paramount as the visual appearance of items is a critical feature that guides users’ decisions. However, existing systems like Amazon and eBay still rely mainly on keyword search and recommending loosely consistent items (e.g. 
based on co-purchasing or browsing data), without an interface that makes use of visual information to serve the above needs. In this paper, we attempt to fill this gap by designing and implementing an image-based query system, called Fashionista, which provides a graphical interface to help users efficiently explore those items that are not only visually similar to a given query, but which are also fashionable, as determined by visually-aware recommendation approaches. Methodologically, Fashionista learns a low-dimensional visual space as well as the evolution of fashion trends from large corpora of binary feedback data such as purchase histories of Women’s Clothing & Accessories from Amazon, which we use for this demonstration.", "title": "" }, { "docid": "775969c0c6ad9224cdc9b73706cb5b4f", "text": "This paper discusses how hot carrier injection (HCI) can be exploited to create a trojan that will cause hardware failures. The trojan is produced not via additional logic circuitry but by controlled scenarios that maximize and accelerate the HCI effect in transistors. These scenarios range from manipulating the manufacturing process to varying the internal voltage distribution. This new type of trojan is difficult to test due to its gradual hardware degradation mechanism. This paper describes the HCI effect, detection techniques and discusses the possibility for maliciously induced HCI trojans.", "title": "" }, { "docid": "47f1d6df5ec3ff30d747fb1fcbc271a7", "text": "a r t i c l e i n f o Experimental studies routinely show that participants who play a violent game are more aggressive immediately following game play than participants who play a nonviolent game. The underlying assumption is that nonviolent games have no effect on aggression, whereas violent games increase it. The current studies demonstrate that, although violent game exposure increases aggression, nonviolent video game exposure decreases aggressive thoughts and feelings (Exp 1) and aggressive behavior (Exp 2). When participants assessed after a delay were compared to those measured immediately following game play, violent game players showed decreased aggressive thoughts, feelings and behavior, whereas nonviolent game players showed increases in these outcomes. Experiment 3 extended these findings by showing that exposure to nonviolent puzzle-solving games with no expressly prosocial content increases prosocial thoughts, relative to both violent game exposure and, on some measures, a no-game control condition. Implications of these findings for models of media effects are discussed. A major development in mass media over the last 25 years has been the advent and rapid growth of the video game industry. From the earliest arcade-based console games, video games have been immediately and immensely popular, particularly among young people and their subsequent introduction to the home market only served to further elevate their prevalence (Gentile, 2009). Given their popularity, social scientists have been concerned with the potential effects of video games on those who play them, focusing particularly on games with violent content. While a large percentage of games have always involved the destruction of enemies, recent advances in technology have enabled games to become steadily more realistic. Coupled with an increase in the number of adult players, these advances have enabled the development of games involving more and more graphic violence. 
Over the past several years, the majority of best-selling games have involved frequent and explicit acts of violence as a central gameplay theme (Smith, Lachlan, & Tamborini, 2003). A video game is essentially a simulated experience. Virtually every major theory of human aggression, including social learning theory, predicts that repeated simulation of antisocial behavior will produce an increase in antisocial behavior (e.g., aggression) and a decrease in prosocial behavior (e.g., helping) outside the simulated environment (i.e., in \" real life \"). In addition, an increase in the perceived realism of the simulation is posited to increase the strength of negative effects (Gentile & Anderson, 2003). Meta-analyses …", "title": "" }, { "docid": "72a5db33e2ba44880b3801987b399c3d", "text": "Over the last decade, the ever increasing world-wide demand for early detection of breast cancer at many screening sites and hospitals has resulted in the need of new research avenues. According to the World Health Organization (WHO), an early detection of cancer greatly increases the chances of taking the right decision on a successful treatment plan. The Computer-Aided Diagnosis (CAD) systems are applied widely in the detection and differential diagnosis of many different kinds of abnormalities. Therefore, improving the accuracy of a CAD system has become one of the major research areas. In this paper, a CAD scheme for detection of breast cancer has been developed using deep belief network unsupervised path followed by back propagation supervised path. The construction is back-propagation neural network with Liebenberg Marquardt learning function while weights are initialized from the deep belief network path (DBN-NN). Our technique was tested on the Wisconsin Breast Cancer Dataset (WBCD). The classifier complex gives an accuracy of 99.68% indicating promising results over previously-published studies. The proposed system provides an effective classification model for breast cancer. In addition, we examined the architecture at several train-test partitions. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "318a4af201ed3563443dcbe89c90b6b4", "text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread the complete IT landscape. Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks are becoming more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand identify and mitigate potential DDoS attacks on business networks. 
The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation", "title": "" }, { "docid": "12be3f9c1f02ad3f26462ab841a80165", "text": "Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).", "title": "" }, { "docid": "0b9ae0bf6f6201249756d87a56f0005e", "text": "To reduce energy consumption and wastage, effective energy management at home is key and an integral part of the future Smart Grid. In this paper, we present the design and implementation of Green Home Service (GHS) for home energy management. Our approach addresses the key issues of home energy management in Smart Grid: a holistic management solution, improved device manageability, and an enabler of Demand-Response. We also present the scheduling algorithms in GHS for smart energy management and show the results in simulation studies.", "title": "" } ]
scidocsrr
57aeb2bf04b233be2e5223d44d313e4a
Collaborative Index Embedding for Image Retrieval
[ { "docid": "80f88101ea4d095a0919e64b7db9cadb", "text": "The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets.", "title": "" }, { "docid": "64cfa1478cb77087fe205e3786fc99b8", "text": "In state-of-the-art image retrieval systems, an image is represented by a bag of visual words obtained by quantizing high-dimensional local image descriptors, and scalable schemes inspired by text retrieval are then applied for large scale image indexing and retrieval. Bag-of-words representations, however: 1) reduce the discriminative power of image features due to feature quantization; and 2) ignore geometric relationships among visual words. Exploiting such geometric constraints, by estimating a 2D affine transformation between a query image and each candidate image, has been shown to greatly improve retrieval precision but at high computational cost. In this paper we present a novel scheme where image features are bundled into local groups. Each group of bundled features becomes much more discriminative than a single feature, and within each group simple and robust geometric constraints can be efficiently enforced. Experiments in Web image search, with a database of more than one million images, show that our scheme achieves a 49% improvement in average precision over the baseline bag-of-words approach. Retrieval performance is comparable to existing full geometric verification approaches while being much less computationally expensive. When combined with full geometric verification we achieve a 77% precision improvement over the baseline bag-of-words approach, and a 24% improvement over full geometric verification alone.", "title": "" } ]
[ { "docid": "5f5960cf7621f95687cbbac48dfdb0c5", "text": "We present the first controller that allows our small hexapod robot, RHex, to descend a wide variety of regular sized, “real-world” stairs. After selecting one of two sets of trajectories, depending on the slope of the stairs, our open-loop, clock-driven controllers require no further operator input nor task level feedback. Energetics for stair descent is captured via specific resistance values and compared to stair ascent and other behaviors. Even though the algorithms developed and validated in this paper were developed for a particular robot, the basic motion strategies, and the phase relationships between the contralateral leg pairs are likely applicable to other hexapod robots of similar size as well.", "title": "" }, { "docid": "b5cce2a39a51108f9191bdd3516646ca", "text": "The aim of component technology is the replacement of large monolithic applications with sets of smaller software components, whose particular functionality and interoperation can be adapted to users’ needs. However, the adaptation mechanisms of component software are still limited. Most proposals concentrate on adaptations that can be achieved either at compile time or at link time. Current support for dynamic component adaptation, i.e. unanticipated, incremental modifications of a component system at run-time, is not sufficient. This paper proposes object-based inheritance (also known as delegation) as a complement to purely forwarding-based object composition. It presents a typesafe integration of delegation into a class-based object model and shows how it overcomes the problems faced by forwarding-based component interaction, how it supports independent extensibility of components and unanticipated, dynamic component adaptation.", "title": "" }, { "docid": "5b392df7f03046bb8c15c8bdaa5a811f", "text": "The inefficiency of separable wavelets in representing smooth edges has led to a great interest in the study of new 2-D transformations. The most popular criterion for analyzing these transformations is the approximation power. Transformations with near-optimal approximation power are useful in many applications such as denoising and enhancement. However, they are not necessarily good for compression. Therefore, most of the nearly optimal transformations such as curvelets and contourlets have not found any application in image compression yet. One of the most promising schemes for image compression is the elegant idea of directional wavelets (DIWs). While these algorithms outperform the state-of-the-art image coders in practice, our theoretical understanding of them is very limited. In this paper, we adopt the notion of rate-distortion and calculate the performance of the DIW on a class of edge-like images. Our theoretical analysis shows that if the edges are not “sharp,” the DIW will compress them more efficiently than the separable wavelets. It also demonstrates the inefficiency of the quadtree partitioning that is often used with the DIW. To solve this issue, we propose a new partitioning scheme called megaquad partitioning. Our simulation results on real-world images confirm the benefits of the proposed partitioning algorithm, promised by our theoretical analysis.", "title": "" }, { "docid": "e5d523d8a1f584421dab2eeb269cd303", "text": "In this paper, we propose a novel appearance-based method for person re-identification, that condenses a set of frames of the same individual into a highly informative signature, called Histogram Plus Epitome, HPE. 
It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content, via histograms representation, and on the presence of recurrent local patches, via epitome estimation. The matching of HPEs provides optimal performances against low resolution, occlusions, pose and illumination variations, defining novel state-of-the-art results on all the datasets considered.", "title": "" }, { "docid": "d3d8b000ba5ac3a79e3b2a97be1561a7", "text": "Software Defined Networking (SDN) is seen as one way to solve some problems of the Internet including security, managing complexity, multi-casting, load balancing, and energy efficiency. SDN is an architectural paradigm that separates the control plane of a networking device (e.g., a switch / router) from its data plane, making it feasible to control, monitor, and manage a network from a centralized node (the SDN controller). However, today there exists many SDN controllers including POX, FloodLight, and OpenDaylight. The question is, which of the controllers is to be selected and used? To find out the answer to this question, a decision making template is proposed in this paper to help researchers choose the SDN controller that best fits their needs. The method works as follows; first, several existing open-source controllers are analyzed to collect their properties. For selecting the suitable controller based on the derived requirements (for example, a “Java” interface must be provided by the controller), a matching mechanism is used to compare the properties of the controllers with the requirements. Additionally, for selecting the best controller based on optional requirements (for example, GUI will be extremely preferred over the age of the controller), a Multi-Criteria Decision Making (MCDM) method named Analytic Hierarchy Process (AHP) has been adapted by a monotonic interpolation / extrapolation mechanism which maps the values of the properties to a value in a pre-defined scale. By using the adapted AHP, the topmost five controllers have been compared and “Ryu” is selected to be the best controller based on our requirements.", "title": "" }, { "docid": "4702fceea318c326856cc2a7ae553e1f", "text": "The Institute of Medicine identified “timeliness” as one of six key “aims for improvement” in its most recent report on quality. Yet patient delays remain prevalent, resulting in dissatisfaction, adverse clinical consequences, and often, higher costs. This tutorial describes several areas in which patients routinely experience significant and potentially dangerous delays and presents operations research (OR) models that have been developed to help reduce these delays, often at little or no cost. I also describe the difficulties in developing and implementing models as well as the factors that increase the likelihood of success. Finally, I discuss the opportunities, large and small, for using OR methodologies to significantly impact practices and policies that will affect timely access to healthcare.", "title": "" }, { "docid": "e07f3799aa19444aedce7e7c8351bcd4", "text": "Diversity in the genetic profile between individuals and specific ethnic groups affects nutrient requirements, metabolism and response to nutritional and dietary interventions. Indeed, individuals respond differently to lifestyle interventions (diet, physical activity, smoking, etc.). 
The sequencing of the human genome and subsequent increased knowledge regarding human genetic variation is contributing to the emergence of personalized nutrition. These advances in genetic science are raising numerous questions regarding the mode that precision nutrition can contribute solutions to emerging problems in public health, by reducing the risk and prevalence of nutrition-related diseases. Current views on personalized nutrition encompass omics technologies (nutrigenomics, transcriptomics, epigenomics, foodomics, metabolomics, metagenomics, etc.), functional food development and challenges related to legal and ethical aspects, application in clinical practice, and population scope, in terms of guidelines and epidemiological factors. In this context, precision nutrition can be considered as occurring at three levels: (1) conventional nutrition based on general guidelines for population groups by age, gender and social determinants; (2) individualized nutrition that adds phenotypic information about the person's current nutritional status (e.g. anthropometry, biochemical and metabolic analysis, physical activity, among others), and (3) genotype-directed nutrition based on rare or common gene variation. Research and appropriate translation into medical practice and dietary recommendations must be based on a solid foundation of knowledge derived from studies on nutrigenetics and nutrigenomics. A scientific society, such as the International Society of Nutrigenetics/Nutrigenomics (ISNN), internationally devoted to the study of nutrigenetics/nutrigenomics, can indeed serve the commendable roles of (1) promoting science and favoring scientific communication and (2) permanently working as a 'clearing house' to prevent disqualifying logical jumps, correct or stop unwarranted claims, and prevent the creation of unwarranted expectations in patients and in the general public. In this statement, we are focusing on the scientific aspects of disciplines covering nutrigenetics and nutrigenomics issues. Genetic screening and the ethical, legal, social and economic aspects will be dealt with in subsequent statements of the Society.", "title": "" }, { "docid": "86826e10d531b8d487fada7a5c151a41", "text": "Feature selection is an important preprocessing step in data mining. Mutual information-based feature selection is a kind of popular and effective approaches. In general, most existing mutual information-based techniques are greedy methods, which are proven to be efficient but suboptimal. In this paper, mutual information-based feature selection is transformed into a global optimization problem, which provides a new idea for solving feature selection problems. Firstly, a single-objective feature selection algorithm combining relevance and redundancy is presented, which has well global searching ability and high computational efficiency. Furthermore, to improve the performance of feature selection, we propose a multi-objective feature selection algorithm. The method can meet different requirements and achieve a tradeoff among multiple conflicting objectives. On this basis, a hybrid feature selection framework is adopted for obtaining a final solution. We compare the performance of our algorithm with related methods on both synthetic and real datasets. 
Simulation results show the effectiveness and practicality of the proposed method.", "title": "" }, { "docid": "2b9c449164dce6261a8a6363b37d8c8e", "text": "Anterior component separation (ACS) with external oblique release for ventral hernia repair has a recurrence rate up to 32 %. Hernia recurrence after prior ACS represents a complex surgical challenge. In this context, we report our experience utilizing posterior component separation with transversus abdominis muscle release (PCS/TAR) and retromuscular mesh reinforcement. Patients with a history of recurrent hernia following ACS repaired with PCS/TAR were retrospectively identified from prospective databases collected at two large academic institutions. Patient demographics, hernia characteristics (using CT scan) and outcomes were evaluated. Twenty-nine patients with a history of ACS developed 22 (76 %) midline, 3 (10 %) lateral and 4 (14 %) concomitant recurrences. Contamination was present in 11 (38 %) of cases. All were repaired utilizing a PCS/TAR with retromuscular mesh placement (83 % synthetic, 17 % biologic) and fascial closure. Wound morbidity consisted of 13 (45 %) surgical site occurrences including 8 (28 %) surgical site infections. Five (17 %) patients required 90-day readmission, and two (7 %) were related to wound morbidity. One organ space infection with frank spillage of stool resulted in the only instance of mesh excision. This case also represents the only instance of recurrence (3 %) with a mean follow-up of 11 (range 3–36) months. Patients with a history of an ACS who develop a recurrence represent a challenging clinical scenario with limited options for surgical repair. A PCS/TAR hernia repair achieves acceptable outcomes and may in fact be the best approach available.", "title": "" }, { "docid": "9760e3676a7df5e185ec35089d06525e", "text": "This paper examines the sufficiency of existing e-Learning standards for facilitating and supporting the introduction of adaptive techniques in computer-based learning systems. To that end, the main representational and operational requirements of adaptive learning environments are examined and contrasted against current eLearning standards. The motivation behind this preliminary analysis is attainment of: interoperability between adaptive learning systems; reuse of adaptive learning materials; and, the facilitation of adaptively supported, distributed learning activities.", "title": "" }, { "docid": "f6feb6789c0c9d2d5c354e73d2aaf9ad", "text": "In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to intregrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As means of demonstration, we show how to register MR images of brains and natural pictures of faces using minimal amount of code. 
SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https://github.com/kaspermarstal/SimpleElastix.", "title": "" }, { "docid": "034713fa057b206703d9bcffd9efccd4", "text": "Researchers may describe different aspects of past scientific publications in their publications and the descriptions may keep changing in the evolution of science. The diverse and changing descriptions (i.e., citation context) on a publication characterize the impact and contributions of the past publication. In this article, we aim to provide an approach to understanding the changing and complex roles of a publication characterized by its citation context. We described a method to represent the publications’ dynamic roles in science community in different periods as a sequence of vectors by training temporal embedding models. The temporal representations can be used to quantify how much the roles of publications changed and interpret how they changed. Our study in the biomedical domain shows that our metric on the changes of publications’ roles is stable over time at the population level but significantly distinguish individuals. We also show the interpretability of our methods by a concrete example. Conference Topic Indicators, Methods and techniques, Act of citations, in-text citations and Content Citation Analysis", "title": "" }, { "docid": "06107b781329d004deb228e100d33d2d", "text": "This manuscript examines the measurement instrument developed from the ability model of EI (Mayer and Salovey, 1997), the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer, Salovey and Caruso, 2002). The four subtests, scoring methods, psychometric properties, reliability, and factor structure of the MSCEIT are discussed, with a special focus on the discriminant, convergent, predictive, and incremental validity of the test. The authors review associations between MSCEIT scores and important outcomes such as academic performance, cognitive processes, psychological well-being, depression, anxiety, prosocial and maladaptive behavior, and leadership and organizational behavior. Findings regarding the low correlations between MSCEIT scores and self-report measures of EI also are presented. In the conclusion the authors' provide potential directions for future research on emotional intelligence.", "title": "" }, { "docid": "32ce2215040d6315f1442719b0fc353a", "text": "Introduction. Internal nasal valve incompetence (INVI) has been treated with various surgical methods. Large, single surgeon case series are lacking, meaning that the evidence supporting a particular technique has been deficient. We present a case series using alar batten grafts to reconstruct the internal nasal valve, all performed by the senior author. Methods. Over a 7-year period, 107 patients with nasal obstruction caused by INVI underwent alar batten grafting. Preoperative assessment included the use of nasal strips to evaluate symptom improvement. Visual analogue scale (VAS) assessment of nasal blockage (NB) and quality of life (QOL) both pre- and postoperatively were performed and analysed with the Wilcoxon signed rank test. Results. Sixty-seven patients responded to both pre- and postoperative questionnaires. Ninety-one percent reported an improvement in NB and 88% an improvement in QOL. The greatest improvement was seen at 6 months (median VAS 15 mm and 88 mm resp., with a P value of <0.05 for both). 
Nasal strips were used preoperatively and are a useful tool in predicting patient operative success in both NB and QOL (odds ratio 2.15 and 2.58, resp.). Conclusions. Alar batten graft insertion as a single technique is a valid technique in treating INVI and produces good outcomes.", "title": "" }, { "docid": "504cb4e0f2b054f4e0b90fd7d9ab2253", "text": "A monolithic radio frequency power amplifier for 1.9- 2.6 GHz has been realized in a 0.25 µm SiGe-bipolar technology. The balanced 2-stage push-pull power amplifier uses two on-chip transformers as input-balun and for interstage matching and is operating down to supply voltages as low as 1 V. A microstrip line balun acts as output matching network. At 1 V, 1.5 V, 2 V supply voltages output powers of 20 dBm, 23.5 dBm, 26 dBm are achieved at 2.45 GHz. The respective power added efficiency is 36%, 49.5%, 53%. The small-signal gain is 33 dB.", "title": "" }, { "docid": "9c887109d71605053ecb1732a1989a35", "text": "In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.", "title": "" }, { "docid": "ea959ccd4eb6b6ac1d2acd2bfde7c633", "text": "This paper proposes a mixed-initiative feature engineering approach using explicit knowledge captured in a knowledge graph complemented by a novel interactive visualization method. Using the explicitly captured relations and dependencies between concepts and their properties, feature engineering is enabled in a semi-automatic way. Furthermore, the results (and decisions) obtained throughout the process can be utilized for refining the features and the knowledge graph. Analytical requirements can then be conveniently captured for feature engineering -- enabling integrated semantics-driven data analysis and machine learning.", "title": "" }, { "docid": "56dabbcf36d734211acc0b4a53f23255", "text": "Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology’s (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. 
The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. & 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f43aef1428a2c481fc97a25c17f4bdb4", "text": "It is thought by cognitive scientists and typographers alike, that lower-case text is more legible than upper-case. Yet lower-case letters are, on average, smaller in height and width than upper-case characters, which suggests an upper-case advantage. Using a single unaltered font and all upper-, all lower-, and mixed-case text, we assessed size thresholds for words and random strings, and reading speeds for text with normal and visually impaired participants. Lower-case thresholds were roughly 0.1 log unit higher than upper. Reading speeds were higher for upper- than for mixed-case text at sizes twice acuity size; at larger sizes, the upper-case advantage disappeared. Results suggest that upper-case is more legible than the other case styles, especially for visually-impaired readers, because smaller letter sizes can be used than with the other case styles, with no diminution of legibility.", "title": "" }, { "docid": "095dd4efbb23bc91b72dea1cd1c627ab", "text": "Cell-cell communication is critical across an assortment of physiological and pathological processes. Extracellular vesicles (EVs) represent an integral facet of intercellular communication largely through the transfer of functional cargo such as proteins, messenger RNAs (mRNAs), microRNA (miRNAs), DNAs and lipids. EVs, especially exosomes and shed microvesicles, represent an important delivery medium in the tumour micro-environment through the reciprocal dissemination of signals between cancer and resident stromal cells to facilitate tumorigenesis and metastasis. An important step of the metastatic cascade is the reprogramming of cancer cells from an epithelial to mesenchymal phenotype (epithelial-mesenchymal transition, EMT), which is associated with increased aggressiveness, invasiveness and metastatic potential. There is now increasing evidence demonstrating that EVs released by cells undergoing EMT are reprogrammed (protein and RNA content) during this process. This review summarises current knowledge of EV-mediated functional transfer of proteins and RNA species (mRNA, miRNA, long non-coding RNA) between cells in cancer biology and the EMT process. An in-depth understanding of EVs associated with EMT, with emphasis on molecular composition (proteins and RNA species), will provide fundamental insights into cancer biology.", "title": "" } ]
scidocsrr
e9ca5c76db76105bcbde6adade74d2d9
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery
[ { "docid": "07ef2766f22ac6c5b298e3f833cd88b5", "text": "A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combining additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicle and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e. per search patter flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and cars ~80% although the overall episodic object detection rate for each flight pattern exceeds 90%.", "title": "" }, { "docid": "4f511a669a510153aa233d90da4e406a", "text": "In many visual surveillance applications the task of person detection and localization can be solved easier by using thermal long-wave infrared (LWIR) cameras which are less affected by changing illumination or background texture than visual-optical cameras. Especially in outdoor scenes where usually only few hot spots appear in thermal infrared imagery, humans can be detected more reliably due to their prominent infrared signature. We propose a two-stage person recognition approach for LWIR images: (1) the application of Maximally Stable Extremal Regions (MSER) to detect hot spots instead of background subtraction or sliding window and (2) the verification of the detected hot spots using a Discrete Cosine Transform (DCT) based descriptor and a modified Random Naïve Bayes (RNB) classifier. The main contributions are the novel modified RNB classifier and the generality of our method. We achieve high detection rates for several different LWIR datasets with low resolution videos in real-time. While many papers in this topic are dealing with strong constraints such as considering only one dataset, assuming a stationary camera, or detecting only moving persons, we aim at avoiding such constraints to make our approach applicable with moving platforms such as Unmanned Ground Vehicles (UGV).", "title": "" } ]
[ { "docid": "9c707afc8a0312ebab0ebd1b7fcb4c47", "text": "This paper develops analytical principles for torque ripple reduction in interior permanent magnet (IPM) synchronous machines. The significance of slot harmonics and the benefits of stators with odd number of slots per pole pair are highlighted. Based on these valuable analytical insights, this paper proposes coordination of the selection of stators with odd number of slots per pole pair and IPM rotors with multiple layers of flux barriers in order to reduce torque ripple. The effectiveness of using stators with odd number of slots per pole pair in reducing torque ripple is validated by applying a finite-element-based Monte Carlo optimization method to four IPM machine topologies, which are combinations of two stator topologies (even or odd number of slots per pole pair) and two IPM rotor topologies (one- or two-layer). It is demonstrated that the torque ripple can be reduced to less than 5% by selecting a stator with an odd number of slots per pole pair and the IPM rotor with optimized barrier configurations, without using stator/rotor skewing or rotor pole shaping.", "title": "" }, { "docid": "8e19813c7257c8d8d73867b9a4f9fa8d", "text": "Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities.", "title": "" }, { "docid": "523983cad60a81e0e6694c8d90ab9c3d", "text": "Cognition and comportment are subserved by interconnected neural networks that allow high-level computational architectures including parallel distributed processing. 
Cognitive problems are not resolved by a sequential and hierarchical progression toward predetermined goals but instead by a simultaneous and interactive consideration of multiple possibilities and constraints until a satisfactory fit is achieved. The resultant texture of mental activity is characterized by almost infinite richness and flexibility. According to this model, complex behavior is mapped at the level of multifocal neural systems rather than specific anatomical sites, giving rise to brain-behavior relationships that are both localized and distributed. Each network contains anatomically addressed channels for transferring information content and chemically addressed pathways for modulating behavioral tone. This approach provides a blueprint for reexploring the neurological foundations of attention, language, memory, and frontal lobe function.", "title": "" }, { "docid": "c052f693b65a0f3189fc1e9f4df11162", "text": "In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries. It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.", "title": "" }, { "docid": "11ce5f5e7c6249165ba2a5d8c3249c9f", "text": "BACKGROUND & AIMS\nHepatitis C virus (HCV) infection is a significant global health issue that leads to 350,000 preventable deaths annually due to associated cirrhosis and hepatocellular carcinoma (HCC). Immigrants and refugees (migrants) originating from intermediate/high HCV endemic countries are likely at increased risk for HCV infection due to HCV exposure in their countries of origin. The aim of this study was to estimate the HCV seroprevalence of the migrant population living in low HCV prevalence countries.\n\n\nMETHODS\nFour electronic databases were searched from database inception until June 17, 2014 for studies reporting the prevalence of HCV antibodies among migrants. Seroprevalence estimates were pooled with a random-effect model and were stratified by age group, region of origin and migration status and a meta-regression was modeled to explore heterogeneity.\n\n\nRESULTS\nData from 50 studies representing 38,635 migrants from all world regions were included. The overall anti-HCV prevalence (representing previous and current infections) was 1.9% (95% CI, 1.4-2.7%, I2 96.1). Older age and region of origin, particularly Sub-Saharan Africa, Asia, and Eastern Europe were the strongest predictors of HCV seroprevalence. 
The estimated HCV seroprevalence of migrants from these regions was >2% and is higher than that reported for most host populations.\n\n\nCONCLUSION\nAdult migrants originating from Asia, Sub-Saharan Africa and Eastern Europe are at increased risk for HCV and may benefit from targeted HCV screening.", "title": "" }, { "docid": "8e2006ca72dbc6be6592e21418b7f3ba", "text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.", "title": "" }, { "docid": "458d93cc710417f22ccf5fdd1c6c0a71", "text": "This paper presents a low-profile planar dipole antenna with omnidirectional radiation pattern and filtering response. The proposed antenna consists of a microstrip-to-slotline transition structure as the feeding network and a planar dipole as the radiator. Filtering response is obtained by adding nonradiative elements, including a coupled U-shaped microstrip line and two I-shaped slots, to the feeding network. Within the operating passband, the added nonradiative elements do not work, and thus the in-band radiation performance of the dipole antenna is nearly not affected. However, at the side stopbands, the added elements resonate and prevent the signal passing through the feeding network to the dipole antenna, suppressing the out-of-band radiation significantly. As a result, both satisfactory filtering and radiation performances are obtained. For demonstration, an omnidirectional filtering dipole antenna is implemented. Single-band bandpass filtering responses in both the reflection coefficient and realized gain are obtained. The measured in-band gain is ~2.5 dBi, whereas the out-of-band radiation suppression is more than 15 dB.", "title": "" }, { "docid": "115ed03ccee62fafc1606e6f6fdba1ce", "text": "High voltage SF6 circuit breaker must meet the breaking requirement for large short-circuit current, and ensure absence of breakdown after breaking small current. A 126kV high voltage SF6 circuit breaker was used as the research object in this paper. Based on the calculation results of non-equilibrium arc plasma material parameters, the distribution of pressure, temperature and density were calculated during the breaking progress. The electric field distribution was calculated in the course of flow movement, considering the influence of space charge on dielectric voltage. The change rule of the dielectric recovery progress was given based on the stream theory. The dynamic breakdown test circuit was built to measure the values of breakdown voltage under different open distance. The simulation results and experimental data are analyzed and the results show that: 1) Dielectric recovery speed (175kV/ms) is significantly faster than the voltage recovery rate (37.7kV/ms) during the arc extinguishing process. 
2) The shorter the small current arcing time, the smaller the breakdown margin, so it is necessary to keep the arcing time longer than 0.5ms to ensure a large breakdown margin. 3) The calculated results are in good agreement with the experimental results. Since the breakdown voltage is less than the TRV in some test points, restrike may occur within 0.5ms after breaking, so arc extinguishment should be avoided in this time range.", "title": "" }, { "docid": "09f19a5e4751dc3ee4aa38817aafd3cf", "text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013", "title": "" }, { "docid": "14dc4a684d4c9ea310ae8b8b47dee3f6", "text": "Computational models in psychology are precise, fully explicit scientific hypotheses. Over the past 15 years, probabilistic modeling of human cognition has yielded quantitative theories of a wide variety of reasoning and learning phenomena. Recently, Marcus and Davis (2013) critique several examples of this work, using these critiques to question the basic validity of the probabilistic approach. Contra the broad rhetoric of their article, the points made by Marcus and Davis—while useful to consider—do not indicate systematic problems with the probabilistic modeling enterprise. Computational models in psychology are precise, fully explicit scientific hypotheses. Probabilistic models in particular formalize hypotheses about the beliefs of agents—their knowledge and assumptions about the world—using the structured collection of probabilities referred to as priors, likelihoods, etc. The probability calculus then describes inferences that can be drawn by combining these beliefs with new evidence, without the need to commit to a process-level explanation of how these inferences are performed (Marr, 1982). Over the past 15 years, probabilistic modeling of human cognition has yielded quantitative theories of a wide variety of phenomena (Tenenbaum, Kemp, Griffiths, & Goodman, 2011). Marcus and Davis (2013, henceforth, M&D) critique several examples of this work, using these critiques to question the basic validity of the probabilistic models approach, based on the existence of alternative models and potentially inconsistent data. Contra the broad rhetoric of their article, the points made by M&D—while useful to consider—do not indicate systematic problems with the probabilistic modeling enterprise. Several objections stem from a fundamental confusion about the status of optimality in probabilistic modeling, which has been discussed in responses to other critiques (see: Griffiths, Chater, Norris, & Pouget, 2012; Frank, 2013). Briefly: an optimal analysis is not the optimal analysis for a task or domain. Different probabilistic models instantiate different psychological hypotheses. Optimality provides a bridging assumption between these hypotheses and human behavior; one that can be re-examined or overturned as the data warrant. Model selection. M&D argue that individual probabilistic models require a host of potentially problematic modeling choices. Indeed, probabilistic models are created via a series of choices concerning priors, likelihoods, response functions, etc. Each of these choices embodies a proposal about cognition, and these proposals will often be wrong. The identification of model assumptions that result in a mismatch to empirical data allows these assumptions to be replaced or refined. 
Systematic iteration to achieve a better model is part of the normal progress of science. But if choices are made post-hoc, a model can be overfit to the particulars of the empirical data. M&D suggest that certain of our models suffer from this issue. For instance, they show that data on pragmatic inference (Frank & Goodman, 2012) are inconsistent with an alternative variant of the proposed model that uses a hard-max rather than a soft-max function, and ask whether the choice of soft-max was dependent on the data. The soft-max rule is foundational in economics, decision-theory, and cognitive psychology (Luce, 1959, 1977), and we first selected it for this problem based on a completely independent set of experiments (Frank, Goodman, Lai, & Tenenbaum, 2009). So it's hard to see how a claim of overfitting is warranted here. Modelers must balance unification with exploration of model assumptions across tasks, but this issue is a general one for all computational work, and does not constitute a systematic problem with the probabilistic approach. Task selection. M&D suggested that probabilistic modelers report results on only the narrow range of tasks on which their models succeed. But their critique focused on a few high-profile, short reports that represented our first attempts to engage with important domains of cognition. Such papers necessarily have less in-depth engagement with empirical data than more extensive and mature work, though they also exemplify the applicability of probabilistic modeling to domains previously viewed as too complex for quantitative approaches. There is broader empirical adequacy to probabilistic models of cognition than M&D imply. If M&D had surveyed the literature they would have found substantial additional evidence for the models they reviewed—and more has accrued since their critique. For example, M&D critiqued Griffiths and Tenenbaum's (2006) analysis of everyday predictions for failing to provide independent assessments of the contributions of priors and likelihoods, precisely what was done in several later and much longer papers (Griffiths & Tenenbaum, 2011; Lewandowsky, Griffiths, & Kalish, 2009). They similarly critiqued the particular tasks selected by Battaglia, Hamrick, and Tenenbaum (2013) without discussing the growing literature testing similar “noisy Newtonian” models on other phenomena (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2012; Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2014; Sanborn, Mansinghka, & Griffiths, 2013; Smith, Dechter, Tenenbaum, & Vul, 2013; Téglás et al., 2011). Smith, Battaglia, and Vul (2013) even directly address exactly the challenge M&D posed regarding classic findings of errors in physical intuitions. In other domains, such as concept learning and inductive inference, where there is an extensive experimental tradition, probabilistic models have engaged with diverse empirical data collected by multiple labs over many years (e.g. Goodman, Tenenbaum, Feldman, & Griffiths, 2008; Kemp & Tenenbaum, 2009). M&D also insinuate empirical problems that they do not test. For instance, in criticizing the choice of dependent measure used by Frank and Goodman (2012), they posit that a forced-choice task would yield a qualitatively different pattern (discrete rather than graded responding). In fact, a forced-choice version of the task produces graded patterns of responding across a wide variety of conditions (Stiller, Goodman, & Frank, 2011, 2014; Vogel, Emilsson, Frank, Jurafsky, & Potts, 2014). Conclusions. 
We agree with M&D that there are real and important challenges for probabilistic models of cognition, as there will be for any approach to modeling a system as complex as the human mind. To us, the most pressing challenges include understanding the relationship to lower levels of psychological analysis and neural implementation, integrating additional formal tools, clarifying the philosophical status of the models, extending to new domains of cognition, and, yes: engaging with additional empirical data in the current domains while unifying specific model choices into broader principles. As M&D state, “ultimately, the Bayesian approach should be seen as a useful tool”—one that we believe has already proven its robustness and relevance by allowing us to form and test quantitatively accurate psychological hypotheses.", "title": "" }, { "docid": "9b40db1e69a3ad1cc2a1289791e82ae1", "text": "As a nascent area of study, gamification has attracted the interest of researchers in several fields, but such researchers have scarcely focused on creating a theoretical foundation for gamification research. Gamification involves using gamelike features in non-game contexts to motivate users and improve performance outcomes. As a boundary-spanning subject by nature, gamification has drawn the interest of scholars from diverse communities, such as information systems, education, marketing, computer science, and business administration. To establish a theoretical foundation, we need to clearly define and explain gamification in comparison with similar concepts and areas of research. Likewise, we need to define the scope of the domain and develop a research agenda that explicitly considers theory's important role. In this review paper, we set forth the pre-theoretical structures necessary for theory building in this area. Accordingly, we engaged an interdisciplinary group of discussants to evaluate and select the most relevant theories for gamification. Moreover, we developed exemplary research questions to help create a research agenda for gamification. We conclude that using a multi-theoretical perspective in creating a research agenda should help and encourage IS researchers to take a lead role in this promising and emerging area.", "title": "" }, { "docid": "e3d0a58ddcffabb26d5e059d3ae6b370", "text": "HCI ( Human Computer Interaction ) studies the ways humans use digital or computational machines, systems or infrastructures. The study of the barriers encountered when users interact with the various interfaces is critical to improving their use, as well as their experience. Access and information processing is carried out today from multiple devices (computers, tablets, phones... ) which is essential to maintain a multichannel consistency. This complexity increases with environments in which we do not have much experience as users, where interaction with the machine is a challenge even in phases of research: virtual reality environments, augmented reality, or viewing and handling of large amounts of data, where the simplicity and ease of use are critical.", "title": "" }, { "docid": "575208e6df214fa4378fa18be48af51d", "text": "A parser based on logic programming language (DCG) has very useful features; perspicuity, power, generality and so on. However, it does have some drawbacks in which it cannot deal with CFG with left recursive rules, for example. To overcome these drawbacks, a Bottom-Up parser embedded in Prolog (BUP) has been developed. 
In BUP, CFG rules are translated into Prolog clauses which work as a bottom-up left corner parser with top-down expectation. BUP is augmented by introducing a “link” relation to reduce the size of a search space. Furthermore, BUP can be revised to maintain partial parsing results to avoid computational duplication. A BUP translator and a BUP tracer which support the development of grammar rules are described.", "title": "" }, { "docid": "9e3263866208bbc6a9019b3c859d2a66", "text": "A residual network (or ResNet) is a standard deep neural net architecture, with state-of-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer's output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.", "title": "" }, { "docid": "597f097d5206fc259224b905d4d20e20", "text": "We present here a QT database designed for evaluation of algorithms that detect waveform boundaries in the ECG. The database consists of 105 fifteen-minute excerpts of two-channel ECG Holter recordings, chosen to include a broad variety of QRS and ST-T morphologies. Waveform boundaries for a subset of beats in these recordings have been manually determined by expert annotators using an interactive graphic display to view both signals simultaneously and to insert the annotations. Examples of each morphology were included in this subset of annotated beats; at least 30 beats in each record, 3622 beats in all, were manually annotated in the database. In 11 records, two independent sets of annotations have been included, to allow inter-observer variability studies. The QT Database is available on a CD-ROM in the format previously used for the MIT-BIH Arrhythmia Database and the European ST-T Database, from which some of the recordings in the QT Database have been obtained.", "title": "" }, { "docid": "11bcb70c341366c170452e8dc77eb07a", "text": "Industrial software systems are known to be used for performing critical tasks in numerous fields. Faulty conditions in such systems can cause system outages that could lead to losses. In order to prevent potential system faults, it is important that anomalous conditions that lead to these faults are detected effectively. Nevertheless, the high complexity of the system components makes anomaly detection a high dimensional machine learning problem. 
This paper presents the application of a deep learning neural network known as Variational Autoencoder (VAE), as the solution to this problem. We show that, when used in an unsupervised manner, VAE outperforms the well-known clustering technique DBSCAN. Moreover, this paper shows that higher recall can be achieved using the semi-supervised one class learning of VAE, which uses only the normal data to train the model. Additionally, we show that one class learning of VAE outperforms semi-supervised one class SVM when training data consist of only a very small amount of anomalous samples. When a tree based ensemble technique is adopted for feature selection, the obtained results evidently demonstrate that the performance of the VAE is highly positively correlated with the selected feature set.", "title": "" }, { "docid": "e91dd3f9e832de48a27048a0efa1b67a", "text": "Smart Home technology is the future of residential related technology which is designed to deliver and distribute number of services inside and outside the house via networked devices in which all the different applications & the intelligence behind them are integrated and interconnected. These smart devices have the potential to share information with each other given the permanent availability to access the broadband internet connection. Hence, Smart Home Technology has become part of IoT (Internet of Things). In this work, a home model is analyzed to demonstrate an energy efficient IoT based smart home. Several Multiphysics simulations were carried out focusing on the kitchen of the home model. A motion sensor with a surveillance camera was used as part of the home security system. Coupled with the home light and HVAC control systems, the smart system can remotely control the lighting and heating or cooling when an occupant enters or leaves the kitchen.", "title": "" }, { "docid": "aba638a83116131a62dcce30a7470252", "text": "A general method is proposed to automatically generate a DfT solution aiming at the detection of catastrophic faults in analog and mixed-signal integrated circuits. The approach consists in modifying the topology of the circuit by pulling up (down) nodes and then probing differentiating node voltages. The method generates a set of optimal hardware implementations addressing the multi-objective problem such that the fault coverage is maximized and the silicon overhead is minimized. The new method was applied to a real-case industrial circuit, demonstrating a nearly 100 percent coverage at the expense of an area increase of about 5 percent.", "title": "" }, { "docid": "d911ccb1bbb761cbfee3e961b8732534", "text": "This paper presents a study on SIFT (Scale Invariant Feature transform) which is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. There are various applications of SIFT that includes object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.", "title": "" }, { "docid": "6ec4c9e6b3e2a9fd4da3663a5b21abcd", "text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. 
Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.", "title": "" } ]
scidocsrr
724e378cb74f9d72d6c7f3e6c6814af8
A 12 bit 50 MS/s CMOS Nyquist A/D Converter With a Fully Differential Class-AB Switched Op-Amp
[ { "docid": "2c5f0763b6c4888babc04af50bb89aaf", "text": "A 1.8-V 14-b 12-MS/s pseudo-differential pipeline analog-to-digital converter (ADC) using a passive capacitor error-averaging technique and a nested CMOS gain-boosting technique is described. The converter is optimized for low-voltage low-power applications by applying an optimum stage-scaling algorithm at the architectural level and an opamp and comparator sharing technique at the circuit level. Prototyped in a 0.18-/spl mu/m 6M-1P CMOS process, this converter achieves a peak signal-to-noise plus distortion ratio (SNDR) of 75.5 dB and a 103-dB spurious-free dynamic range (SFDR) without trimming, calibration, or dithering. With a 1-MHz analog input, the maximum differential nonlinearity is 0.47 LSB and the maximum integral nonlinearity is 0.54 LSB. The large analog bandwidth of the front-end sample-and-hold circuit is achieved using bootstrapped thin-oxide transistors as switches, resulting in an SFDR of 97 dB when a 40-MHz full-scale input is digitized. The ADC occupies an active area of 10 mm/sup 2/ and dissipates 98 mW.", "title": "" } ]
[ { "docid": "c14b9f8bb1fe8914ca4a07742b476824", "text": "This paper presents an improved TIQ comparator based 3-bit Flash ADC. A modification is suggested for improvement in switching power consumption by eliminating halt stage from 2PASC based TIQ comparator used for ADC. It has been found that switching power consumption is reduced in comparison with other types of FLASH ADC. Switching power dissipation of 27.35pW at 1MHz input signal frequency for 3-bit flash ADC is obtained. The simulation has been carried out at TSMC 180nm Technology in LTspice.", "title": "" }, { "docid": "4be24e1990a6002864bf5c26e75f43a2", "text": "This paper seeks to answer three questions. First, which drives the success of a platform, installed base, platform quality or consumer expectations? Second, when does a monopoly emerge in a platform-based market? Finally, when is a platform-based market socially efficient? We analyze a dynamic model where an entrant with superior quality competes with an incumbent platform, and examine long-run market outcomes. We find that the answers to these questions depend critically on two parameters: the strength of indirect network effects and consumers’ discount factor of future applications. In addition, contrary to the popular belief that indirect network effects protect incumbents and are the source of market inefficiency, we find that under certain conditions, indirect network effects could enhance entrants’ quality advantage and market outcomes hence could be more efficient with stronger indirect network effects. We empirically examine the competition between the Xbox and PlayStation 2 consoles. We find that Xbox has a small quality advantage over PlayStation 2. In addition, the strength of indirect network effects and consumers’ discount factor in this market are within the range in which platform success is driven by quality advantage and the market is potentially efficient. Counterfactual experiments suggest that PlayStation 2 could have driven Xbox out of the market had the strength of indirect network effects more than doubled or had consumers’ discount factor increased by fifty percent.", "title": "" }, { "docid": "a2845e100c20153f19e32b4e713ebbaa", "text": "The efficiency of the ground penetrating radar (GPR) system significantly depends on the antenna performance as signal has to propagate through lossy and inhomogeneous media. In this research work a resistively loaded compact Bow-tie antenna which can operate through a wide bandwidth of 4.1 GHz is proposed. The sharp corners of the slot antenna are rounded so as to minimize the end-fire reflections. The proposed antenna employs a resistive loading technique through a thin sheet of graphite to attain the ultra-wide bandwidth. The simulated results obtained from CST Microwave Studio v14 and HFSS v14 show a good amount of agreement for the antenna performance parameters. The proposed antenna has potential to apply for the GPR applications as it provides improved radiation efficiency, enhanced bandwidth, gain, directivity and reduced end-fire reflections.", "title": "" }, { "docid": "d5c4e44514186fa1d82545a107e87c94", "text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. 
This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that a sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into a gray image and then translated into a binary image. Finally, morphological filtering is done to remove noise.", "title": "" }, { "docid": "8e1b10ebb48b86ce151ab44dc0473829", "text": "Cuckoo Search (CS) is a new metaheuristic algorithm. It is being used for solving optimization problems. It was developed in 2009 by Xin-She Yang and Suash Deb. The uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Topics from Cuckoo Hashing to Modified CS have also been discussed in this paper. CS is also validated using some test functions. After that, CS performance is compared with that of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. Finally, the experimental results are discussed and directions for future research are proposed. Index terms: Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.", "title": "" }, { "docid": "95af5413e04341770887a74faa7c8405", "text": "Two experiments investigate the effects of language comprehension on affordances. Participants read a sentence composed of either an observation or an action verb (Look at/Grasp) followed by an object name. They had to decide whether the visual object following the sentence was the same as the one mentioned in the sentence. Objects graspable with either a precision or a power grip were presented in an orientation affording action (canonical) or not. Action sentences were faster than observation sentences, and power grip objects were faster than precision grip objects. Moreover, faster RTs were obtained when orientation afforded action. Results indicate that the simulation activated during language comprehension leads to the formation of a \"motor prototype\" of the object. This motor prototype encodes information on temporary/canonical and stable affordances (e.g., orientation, size), which can be possibly referred to different cognitive and neural systems (dorsal, ventral systems).", "title": "" }, { "docid": "7cad8fccadff2d8faa8a372c6237469e", "text": "In the spirit of the tremendous success of deep Convolutional Neural Networks as generic feature extractors from images, we propose Timenet: a multilayered recurrent neural network (RNN) trained in an unsupervised manner to extract features from time series. Fixed-dimensional vector representations or embeddings of variable-length sentences have been shown to be useful for a variety of document classification tasks. Timenet is the encoder network of an auto-encoder based on sequence-to-sequence models that transforms varying length time series to fixed-dimensional vector representations. Once Timenet is trained on diverse sets of time series, it can then be used as a generic off-the-shelf feature extractor for time series. We train Timenet on time series from 24 datasets belonging to various domains from the UCR Time Series Classification Archive, and then evaluate embeddings from Timenet for classification on 30 other datasets not used for training the Timenet.
We observe that a classifier learnt over the embeddings obtained from a pre-trained Timenet yields significantly better performance compared to (i) a classifier learnt over the embeddings obtained from the encoder network of a domain-specific auto-encoder, as well as (ii) a nearest neighbor classifier based on the well-known and effective Dynamic Time Warping (DTW) distance measure. We also observe that a classifier trained on embeddings from Timenet give competitive results in comparison to a DTW-based classifier even when using significantly smaller set of labeled training data, providing further evidence that Timenet embeddings are robust. Finally, t-SNE visualizations of Timenet embeddings show that time series from different classes form well-separated clusters.", "title": "" }, { "docid": "d69b8c991e66ff274af63198dba2ee01", "text": "Nowadays, there are two significant tendencies, how to process the enormous amount of data, big data, and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper firstly makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then this paper studies the relevance between big data and green metrics and proposes two new metrics, effective energy efficiency and effective resource efficiency in order to bring new views and potentials of green metrics for the future times of big data.", "title": "" }, { "docid": "41f386c9cab08ce2d265ba6522b5c5d5", "text": "Fascioliasis is a zoonosis actually considered as a foodborne trematode disease priority by the World Health Organization. Our study presents three cases of F. hepatica infection diagnosed by direct, indirect and/or imaging diagnostic techniques, showing the need of the combined use of them. In order to overcome some difficulties of the presently available methods we show for the first time the application of molecular tools to improve human fascioliasis diagnosis by employing a PCR protocol based on a repetitive element as target sequence. In conclusion, diagnosis of human fascioliasis has to be carried out by the combination of diagnostic techniques that allow the detection of infection in different disease phases, different epidemiological situations and known/new transmission patterns in the actual scenario.", "title": "" }, { "docid": "bf407b28b40b8c5e770c81dc46a3efa1", "text": "Now a days, Permanent Magnet Synchronous Motor (PMSM) is designed not only to be more powerful but also with lower mass and lower moment of inertia. Due to its high power density and smaller size, PMSM has in recent years evolved as the preferred solution for speed and position control drives on machine tools and robots. One of the efficient control strategies of PMSM is VectorControl (or Field oriented control).The rotor position is necessary to achieve the vector control drive system of Permanent Magnet Synchronous Motor. In this project, the resolver sensor detecting the rotor position of PMSM is focused. The outstanding features of this sensor are its robust structure and noise insensitivity. The resolver algorithm is proposed and implemented in the vector control drive system of PMSM. 
The proposed scheme has to be verified by simulation and using MATLAB/SIMULINK.", "title": "" }, { "docid": "d46b144c83298544efa7ee2fd956aa43", "text": "This study aimed to measure the correlations between reading strategies, learning styles and reading comprehension of the Saudi EFL college learners' English reading comprehension. This study used a survey and two IELTS reading passages that vary in difficulty levels. The purpose was to show how two different reading strategies affect EFL students' reading comprehension. The study further examines the correlations between learning styles and reading strategies, and whether this affects the students' comprehension in a sample of seventy-five EFL Saudi college students enrolled in the English Department. Participants were randomly assigned to two groups: an oral reading group (n = 37) and a silent reading group (n = 38). The learning strategies were 'visual learner' and 'auditory learner', with three performance grades, 'low', 'average' and 'high'; while the reading methods were 'oral' and 'silent'. The findings showed that the variation of reading strategies, namely oral reading versus silent reading strategies, did not produce any statistically significant differences on EFL learners' reading comprehension. Findings also showed that high visual learners did not perform significantly differently from the silent reading group or the oral reading group. There were no statistically significant differences between silent reading participants and oral reading participants in their performance on either text from the IELTS. More detailed findings were also presented and discussed against a background of prior research. Pedagogical implications were drawn, and recommendations for further research were proposed.", "title": "" }, { "docid": "8a8b33eabebb6d53d74ae97f8081bf7b", "text": "Social networks are inevitable part of modern life. A class of social networks is those with both positive (friendship or trust) and negative (enmity or distrust) links. Ranking nodes in signed networks remains a hot topic in computer science. In this manuscript, we review different ranking algorithms to rank the nodes in signed networks, and apply them to the sign prediction problem. Ranking scores are used to obtain reputation and optimism, which are used as features in the sign prediction problem. Reputation of a node shows patterns of voting towards the node and its optimism demonstrates how optimistic a node thinks about others. To assess the performance of different ranking algorithms, we apply them on three signed networks including Epinions, Slashdot and Wikipedia. In this paper, we introduce three novel ranking algorithms for signed networks and compare their ability in predicting signs of edges with already existing ones. We use logistic regression as the predictor and the reputation and optimism values for the trustee and trustor as features (that are obtained based on different ranking algorithms). We find that ranking algorithms resulting in correlated ranking scores, leads to almost the same prediction accuracy. Furthermore, our analysis identifies a number of ranking algorithms that result in higher prediction accuracy compared to others.", "title": "" }, { "docid": "36b310b4fcd58c54879ebcddb537eafe", "text": "Semantic similarity of text plays an important role in many NLP tasks. It requires using both local information like lexical semantics and structural information like syntactic structures. 
Recent progress in word representation provides good resources for lexical semantics, and advances in natural language analysis tools make it possible to efficiently generate syntactic and semantic annotations. However, how to combine them to capture the semantics of text is still an open question. Here, we propose a new alignment-based approach to learn semantic similarity. It uses a hybrid representation, attributed relational graphs, to encode lexical, syntactic and semantic information. Alignment of two such graphs combines local and structural information to support similarity estimation. To improve alignment, we introduced structural constraints inspired by a cognitive theory of similarity and analogy. Usually only similarity labels are given in training data and the true alignments are unknown, so we address the learning problem using two approaches: alignment as feature extraction and alignment as latent variable. Our approach is evaluated on the paraphrase identification task and achieved results competitive with the state-of-theart.", "title": "" }, { "docid": "3ce6c3b6a23e713bf9af419ce2d7ded3", "text": "Two measures of financial performance that are being applied increasingly in investor-owned and not-for-profit healthcare organizations are market value added (MVA) and economic value added (EVA). Unlike traditional profitability measures, both MVA and EVA measures take into account the cost of equity capital. MVA is most appropriate for investor-owned healthcare organizations and EVA is the best measure for not-for-profit organizations. As healthcare financial managers become more familiar with MVA and EVA and understand their potential, these two measures may become more widely accepted accounting tools for assessing the financial performance of investor-owned and not-for-profit healthcare organizations.", "title": "" }, { "docid": "79a8281500227799d18d4f841af08795", "text": "Fluctuating power is of serious concern in grid connected wind systems and energy storage systems are being developed to help alleviate this. This paper describes how additional energy storage can be provided within the existing wind turbine system by allowing the turbine speed to vary over a wider range. It also addresses the stability issue due to the modified control requirements. A control algorithm is proposed for a typical doubly fed induction generator (DFIG) arrangement and a simulation model is used to assess the ability of the method to smooth the output power. The disadvantage of the method is that there is a reduction in energy capture relative to a maximum power tracking algorithm. This aspect is evaluated using a typical turbine characteristic and wind profile and is shown to decrease by less than 1%. In contrast power fluctuations at intermediate frequency are reduced by typically 90%.", "title": "" }, { "docid": "f0ea632dbed03f3dd02ec75eca8fe363", "text": "The existing cell phone certification process uses a plastic model of the head called the Specific Anthropomorphic Mannequin (SAM), representing the top 10% of U.S. military recruits in 1989 and greatly underestimating the Specific Absorption Rate (SAR) for typical mobile phone users, especially children. A superior computer simulation certification process has been approved by the Federal Communications Commission (FCC) but is not employed to certify cell phones. In the United States, the FCC determines maximum allowed exposures. 
Many countries, especially European Union members, use the \"guidelines\" of International Commission on Non-Ionizing Radiation Protection (ICNIRP), a non governmental agency. Radiofrequency (RF) exposure to a head smaller than SAM will absorb a relatively higher SAR. Also, SAM uses a fluid having the average electrical properties of the head that cannot indicate differential absorption of specific brain tissue, nor absorption in children or smaller adults. The SAR for a 10-year old is up to 153% higher than the SAR for the SAM model. When electrical properties are considered, a child's head's absorption can be over two times greater, and absorption of the skull's bone marrow can be ten times greater than adults. Therefore, a new certification process is needed that incorporates different modes of use, head sizes, and tissue properties. Anatomically based models should be employed in revising safety standards for these ubiquitous modern devices and standards should be set by accountable, independent groups.", "title": "" }, { "docid": "6dd3764687fa2f319b3162694be9fd62", "text": "Pump, compressor and fan systems often have a notable energy savings potential, which can be identified by monitoring their operation for instance by a frequency converter and model-based estimation methods. In such cases, sensorless determination of the system operating state relies on the accurate estimation of the motor rotational speed and shaft torque, which is commonly available in vector- and direct-torque-controlled frequency converters. However, frequency converter manufacturers seldom publish the expected estimation accuracies for the rotational speed and shaft torque. In this paper, the accuracy of these motor estimates is studied by laboratory measurements for a vector-controlled frequency converter both in the steady and dynamical states. In addition, the effect of the flux optimization feature on the estimation accuracy is studied. Finally, the impact of erroneous motor estimates on the flow rate estimation is demonstrated in the paper.", "title": "" }, { "docid": "65f2651ec987ece0de560d9ac65e06a8", "text": "This paper describes neural network models that we prepared for the author profiling task of PAN@CLEF 2017. In previous PAN series, statistical models using a machine learning method with a variety of features have shown superior performances in author profiling tasks. We decided to tackle the author profiling task using neural networks. Neural networks have recently shown promising results in NLP tasks. Our models integrate word information and character information with multiple neural network layers. The proposed models have marked joint accuracies of 64–86% in the gender identification and the language variety identification of four languages.", "title": "" }, { "docid": "1fad899c589b70f65fac2cce2b814ffd", "text": "This paper proposes a model and an architecture for designing intelligent tutoring system using Bayesian Networks. The design model of an intelligent tutoring system is directed towards the separation between the domain knowledge and the tutor shell. The architecture is composed by a user model, a knowledge base, an adaptation module, a pedagogical module and a presentation module. Bayesian Networks are used to assess user’s state of knowledge and preferences, in order to suggest pedagogical options and recommend future steps in the tutor. The proposed architecture is implemented in the Internet, enabling its use as an e-learning tool. 
An example of an intelligent tutoring system is shown for illustration purposes.", "title": "" } ]
scidocsrr
2699e85e05b760fba206729117a912db
Segmentation Accuracy for Offline Arabic Handwritten Recognition Based on Bounding Box Algorithm
[ { "docid": "cb26bb277afc6d521c4c5960b35ed77d", "text": "We propose a novel algorithm for the segmentation and prerecognition of offline handwritten Arabic text. Our character segmentation method over-segments each word, and then removes extra breakpoints using knowledge of letter shapes. On a test set of 200 images, 92.3% of the segmentation points were detected correctly, with 5.1% instances of over-segmentation. The prerecognition component annotates each detected letter with shape information, to be used for recognition in future work.", "title": "" } ]
[ { "docid": "ebc77c29a8f761edb5e4ca588b2e6fb5", "text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.", "title": "" }, { "docid": "603729787583f2c8e06444719275f82f", "text": "This study intended to explore the development of self-regulation in a flipped classroom setting. Problem based learning activities were carried out in flipped classrooms to promote self-regulation. A total of 30 undergraduate students from Mechatronic department participated in the study. Self-regulation skills were discussed through students’ and the instructor’s experiences including their opinions and behaviours. Qualitative data was collected with an observation form, discussion messages and interviews with selected participants. As a result, in terms of self-regulated learning, the goal setting and planning, task strategies and help seeking skills of the students were high in the face to face learning designed with problem based activities through flipped classroom model, their goal setting and planning, task strategies and help seeking skills were appeared moderately. In the home sessions, environment structuring, goal setting and planning skills were developed in high level while task strategies, help seeking, time management, monitoring, selfefficacy and self-evaluation skills were moderate and monitoring skills was lower. Consequently, it is hoped that the study may provide some suggestions for using problem based activities in flipped learning.", "title": "" }, { "docid": "b90329f58038c62e7d82cbb1d1030a23", "text": "W 2.0 provides gathering places for Internet users in blogs, forums, and chat rooms. These gathering places leave footprints in the form of colossal amounts of data regarding consumers’ thoughts, beliefs, experiences, and even interactions. In this paper, we propose an approach for firms to explore online user-generated content and “listen” to what customers write about their and their competitors’ products. Our objective is to convert the user-generated content to market structures and competitive landscape insights. The difficulty in obtaining such market-structure insights from online user-generated content is that consumers’ postings are often not easy to syndicate. To address these issues, we employ a text-mining approach and combine it with semantic network analysis tools. We demonstrate this approach using two cases—sedan cars and diabetes drugs—generating market-structure perceptual maps and meaningful insights without interviewing a single consumer. We compare a market structure based on user-generated content data with a market structure derived from more traditional sales and survey-based data to establish validity and highlight meaningful differences.", "title": "" }, { "docid": "6d97dd3dfd09df7637127395e170246a", "text": "Localization Results Face Landmark Localization: Dataset ESR SDM ERT LBF cGPRT DDN (Ours) 300-W 7.58 7.52 6.40 6.32 5.71 5.65 Table 1: Mean relative error (%) on 300W. Human Body Part Localization: Method Head Shoulder Elbow Wrist Hip Knee Ankle Mean Pishchulin et al. 87.2 56.7 46.7 38.0 61.0 57.5 52.7 57.1 Tompson et al. 
90.6 79.2 67.9 63.4 69.5 71.0 64.2 72.0 Chen & Yuille 91.8 78.2 71.8 65.5 73.3 70.2 63.4 73.4 DDN (Ours) 87.2 88.2 82.4 76.3 91.4 85.8 78.7 84.3 Table 2: PCK at 0.2 on LSP dataset. Bird Part Localization: α Methods Ba Be By Bt Cn Fo Le Ll Lw Na Re h 0.02 Ning et al. 9.4 12.7 8.2 9.8 12.2 13.2 11.3 7.8 6.7 11.5 12.5 Ours 18.8 12.8 14.2 15.9 15.9 16.2 20.3 7.1 8.3 13.8 19.7 0.05 Ning et al. 46.8 62.5 40.7 45.1 59.8 63.7 66.3 33.7 31.7 54.3 63.8 Ours 66.4 49.2 56.4 60.4 61.0 60.0 66.9 32.3 35.8 53.1 66.3 Table 3: PCK at 0.02 and 0.05 on CUB200-2011.", "title": "" }, { "docid": "50a5ff2fdfac4f15f1b16c964b000717", "text": "A variety of news recommender systems based on different strategies have been proposed to provide news personalization services for online news readers. However, little research work has been reported on utilizing the implicit \"social\" factors (i.e., the potential influential experts in news reading community) among news readers to facilitate news personalization. In this paper, we investigate the feasibility of integrating content-based methods, collaborative filtering and information diffusion models by employing probabilistic matrix factorization techniques. We propose PRemiSE, a novel Personalized news Recommendation framework via implicit Social Experts, in which the opinions of potential influencers on virtual social networks extracted from implicit feedbacks are treated as auxiliary resources for recommendation. Empirical results demonstrate the efficacy and effectiveness of our method, particularly, on handling the so-called cold-start problem.", "title": "" }, { "docid": "2b6b1fef68dede7066dddb4b111e1828", "text": "Collecting labeling information of time-to-event analysis is naturally very time consuming, i.e., one has to wait for the occurrence of the event of interest, which may not always be observed for every instance. By taking advantage of censored instances, survival analysis methods internally consider more samples than standard regression methods, which partially alleviates this data insufficiency problem. Whereas most existing survival analysis models merely focus on a single survival prediction task, when there are multiple related survival prediction tasks, we may benefit from the tasks relatedness. Simultaneously learning multiple related tasks, multi-task learning (MTL) provides a paradigm to alleviate data insufficiency by bridging data from all tasks and improves generalization performance of all tasks involved. Even though MTL has been extensively studied, there is no existing work investigating MTL for survival analysis. In this paper, we propose a novel multi-task survival analysis framework that takes advantage of both censored instances and task relatedness. Specifically, based on two common used task relatedness assumptions, i.e., low-rank assumption and cluster structure assumption, we formulate two concrete models, COX-TRACE and COX-cCMTL, under the proposed framework, respectively. We develop efficient algorithms and demonstrate the performance of the proposed multi-task survival analysis models on the The Cancer Genome Atlas (TCGA) dataset. Our results show that the proposed approaches can significantly improve the prediction performance in survival analysis and can also discover some inherent relationships among different cancer types.", "title": "" }, { "docid": "7fc3dfcc8fa43c36938f41877a65bed7", "text": "We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. 
Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar modelbased approaches and competes with state-of-the art approaches that require real pose-annotated images. 1", "title": "" }, { "docid": "2ee0647fd07ad5cb2bb881cea1081d89", "text": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks.", "title": "" }, { "docid": "54d54094acea1900e183144d32b1910f", "text": "A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities.\n In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.", "title": "" }, { "docid": "402f48eb841ccff79ad85d29a842a19a", "text": "The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol’ variance decomposition, and fast probability integration. 
Desirable features of Monte Carlo analysis in conjunction with Latin hypercube sampling are described in discussions of the following topics: (i) properties of random, stratified and Latin hypercube sampling, (ii) comparisons of random and Latin hypercube sampling, (iii) operations involving Latin hypercube sampling (i.e., correlation control, reweighting of samples to incorporate changed distributions, replicated sampling to test reproducibility of results), (iv) uncertainty analysis (i.e., cumulative distribution functions, complementary cumulative distribution functions, box plots), (v) sensitivity analysis (i.e., scatterplots, regression analysis, correlation analysis, rank transformations, searches for nonrandom patterns), and (vi) analyses involving stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty.", "title": "" }, { "docid": "092bf4ee1626553206ee9b434cda957b", "text": "Table of contents: Introduction; Methods; Procedure; Inclusion and exclusion criteria; Data extraction and quality assessment; Results; Included studies; Quality of included articles; Excluded studies; Fig. 1 CONSORT 2010 Flow Diagram; Table 1: Primary studies; Table 2: Secondary studies; Discussion; Conclusion; Acknowledgements; References; Appendix.", "title": "" }, { "docid": "9409882dd0cf21ef9eddd7681811bd9f", "text": "Recently, the Particle Swarm Optimization (PSO) technique has gained much attention in the field of time series forecasting. Although PSO trained Artificial Neural Networks (ANNs) performed reasonably well in stationary time series forecasting, their effectiveness in tracking the structure of non-stationary data (especially those which contain trends or seasonal patterns) is yet to be justified. In this paper, we have trained neural networks with two types of PSO (Trelea1 and Trelea2) for forecasting seasonal time series data.
To assess their performances, experiments are conducted on three well-known real world seasonal time series. Obtained forecast errors in terms of three common performance measures, viz. MSE, MAE and MAPE for each dataset are compared with those obtained by the Seasonal ANN (SANN) model, trained with a standard backpropagation algorithm. Comparisons demonstrate that training with PSO-Trelea1 and PSO-Trelea2 produced significantly better results than the standard backpropagation rule.", "title": "" }, { "docid": "3b88cd186023cc5d4a44314cdb521d0e", "text": "RATIONALE, AIMS AND OBJECTIVES\nThis article aims to provide evidence to guide multidisciplinary clinical practitioners towards successful initiation and long-term maintenance of oral feeding in preterm infants, directed by the individual infant maturity.\n\n\nMETHOD\nA comprehensive review of primary research, explorative work, existing guidelines, and evidence-based opinions regarding the transition to oral feeding in preterm infants was studied to compile this document.\n\n\nRESULTS\nCurrent clinical hospital practices are described and challenged and the principles of cue-based feeding are explored. \"Traditional\" feeding regimes use criteria, such as the infant's weight, gestational age and being free of illness, and even caregiver intuition to initiate or delay oral feeding. However, these criteria could compromise the infant and increase anxiety levels and frustration for parents and caregivers. Cue-based feeding, opposed to volume-driven feeding, lead to improved feeding success, including increased weight gain, shorter hospital stay, fewer adverse events, without increasing staff workload while simultaneously improving parents' skills regarding infant feeding. Although research is available on cue-based feeding, an easy-to-use clinical guide for practitioners could not be found. A cue-based infant feeding regime, for clinical decision making on providing opportunities to support feeding success in preterm infants, is provided in this article as a framework for clinical reasoning.\n\n\nCONCLUSIONS\nCue-based feeding of preterm infants requires care providers who are trained in and sensitive to infant cues, to ensure optimal feeding success. An easy-to-use clinical guideline is presented for implementation by multidisciplinary team members. This evidence-based guideline aims to improve feeding outcomes for the newborn infant and to facilitate the tasks of nurses and caregivers.", "title": "" }, { "docid": "63da0b3d1bc7d6aedd5356b8cdf67b24", "text": "This paper concentrated on a new application of Deep Neural Network (DNN) approach. The DNN, also widely known as Deep Learning(DL), has been the most popular topic in research community recently. Through the DNN, the original data set can be represented in a new feature space with machine learning algorithms, and intelligence models may have the chance to obtain a better performance in the “learned” feature space. Scientists have achieved encouraging results by employing DNN in some research fields, including Computer Vision, Speech Recognition, Natural Linguistic Programming and Bioinformation Processing. However, as an approach mainly functioned for learning features, DNN is reasonably believed to be a more universal approach: it may have the potential in other data domains and provide better feature spaces for other type of problems. In this paper, we present some initial investigations on applying DNN to deal with the time series problem in meteorology field. 
In our research, we apply DNN to process the massive weather data involving millions of atmosphere records provided by The Hong Kong Observatory (HKO)1. The obtained features are employed to predict the weather change in the next 24 hours. The results show that the DNN is able to provide a better feature space for weather data sets, and DNN is also a potential tool for the feature fusion of time series problems.", "title": "" }, { "docid": "719ca13e95b9b4a1fc68772746e436d9", "text": "The increased chance of deception in computer-mediated communication and the potential risk of taking action based on deceptive information calls for automatic detection of deception. To achieve the ultimate goal of automatic prediction of deception, we selected four common classification methods and empirically compared their performance in predicting deception. The deception and truth data were collected during two experimental studies. The results suggest that all of the four methods were promising for predicting deception with cues to deception. Among them, neural networks exhibited consistent performance and were robust across test settings. The comparisons also highlighted the importance of selecting important input variables and removing noise in an attempt to enhance the performance of classification methods. The selected cues offer both methodological and theoretical contributions to the body of deception and information systems research.", "title": "" }, { "docid": "20d754528009ebce458eaa748312b2fe", "text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.", "title": "" }, { "docid": "be1bfd488f90deca658937dd20ee0915", "text": "This research examined the effects of hands-free cell phone conversations on simulated driving. The authors found that these conversations impaired driver's reactions to vehicles braking in front of them. The authors assessed whether this impairment could be attributed to a withdrawal of attention from the visual scene, yielding a form of inattention blindness. Cell phone conversations impaired explicit recognition memory for roadside billboards. Eye-tracking data indicated that this was due to reduced attention to foveal information. This interpretation was bolstered by data showing that cell phone conversations impaired implicit perceptual memory for items presented at fixation. The data suggest that the impairment of driving performance produced by cell phone conversations is mediated, at least in part, by reduced attention to visual inputs.", "title": "" }, { "docid": "8d13a4f52c9a72a2f53b6633f7fb4053", "text": "The hippocampal-entorhinal system encodes a map of space that guides spatial navigation. Goal-directed behaviour outside of spatial navigation similarly requires a representation of abstract forms of relational knowledge. 
This information relies on the same neural system, but it is not known whether the organisational principles governing continuous maps may extend to the implicit encoding of discrete, non-spatial graphs. Here, we show that the human hippocampal-entorhinal system can represent relationships between objects using a metric that depends on associative strength. We reconstruct a map-like knowledge structure directly from a hippocampal-entorhinal functional magnetic resonance imaging adaptation signal in a situation where relationships are non-spatial rather than spatial, discrete rather than continuous, and unavailable to conscious awareness. Notably, the measure that best predicted a behavioural signature of implicit knowledge and blood oxygen level-dependent adaptation was a weighted sum of future states, akin to the successor representation that has been proposed to account for place and grid-cell firing patterns.", "title": "" }, { "docid": "e4cfcd8bd577fc04480c62bbc6e94a41", "text": "Background and Objective: Binaural interaction component has been seen to be effective in assessing the binaural interaction process in normal hearing individuals. However, there is a lack of literature regarding the effects of SNHL on the Binaural Interaction Component of ABR. Hence, it is necessary to study binaural interaction occurs at the brainstem when there is an associated hearing impairment. Methods: Three groups of participants in the age range of 30 to 55 years were taken for study i.e. one control group and two experimental groups (symmetrical and asymmetrical hearing loss). The binaural interaction component was determined by subtracting the binaurally evoked auditory potentials from the sum of the monaural auditory evoked potentials: BIC= [{left monaural + right monaural)-binaural}. The latency and amplitude of V peak was estimated for click evoked ABR for monaural and binaural recordings. Results: One way ANOVA revealed a significant difference for binaural interaction component in terms of latency between different groups. One-way ANOVA also showed no significant difference seen between the three different groups in terms of amplitude. Conclusion: The binaural interaction component of auditory brainstem response can be used to evaluate the binaural interaction in symmetrical and asymmetrical hearing loss. This will be helpful to circumvent the effect of peripheral hearing loss in binaural processing of the auditory system. Additionally the test does not require any behavioral cooperation from the client, hence can be administered easily.", "title": "" }, { "docid": "abf3e75c6f714e4c2e2a02f9dd00117b", "text": "Recent work has shown that collaborative filter-based recommender systems can be improved by incorporating side information, such as natural language reviews, as a way of regularizing the derived product representations. Motivated by the success of this approach, we introduce two different models of reviews and study their effect on collaborative filtering performance. While the previous state-of-the-art approach is based on a latent Dirichlet allocation (LDA) model of reviews, the models we explore are neural network based: a bag-of-words product-of-experts model and a recurrent neural network. We demonstrate that the increased flexibility offered by the product-of-experts model allowed it to achieve state-of-the-art performance on the Amazon review dataset, outperforming the LDA-based approach. 
However, interestingly, the greater modeling power offered by the recurrent neural network appears to undermine the model's ability to act as a regularizer of the product representations.", "title": "" } ]
scidocsrr
572c36b6d0375eaae2531b3efe40d4df
A Multidimensional Critical State Analysis for Detecting Intrusions in SCADA Systems
[ { "docid": "763a343484dd8b9deb6522e7220e58c1", "text": "Modern critical infrastructures are continually exposed to new threats due to the vulnerabilities and architectural weaknesses introduced by the extensive use of information and communications technologies (ICT). Of particular significance are the vulnerabilities in the communication protocols used in supervisory control and data acquisition (SCADA) systems that are commonly employed to control industrial processes. This paper presents the results of our research on the impact of traditional ICT malware on SCADA systems. In addition, it discusses the potential damaging effects of computer malware created for SCADA", "title": "" } ]
[ { "docid": "5a7324f328a7b5db8c3cb1cc9b606cbc", "text": "We consider a multiple-block separable convex programming problem, where the objective function is the sum of m individual convex functions without overlapping variables, and the constraints are linear, aside from side constraints. Based on the combination of the classical Gauss–Seidel and the Jacobian decompositions of the augmented Lagrangian function, we propose a partially parallel splitting method, which differs from existing augmented Lagrangian based splitting methods in the sense that such an approach simplifies the iterative scheme significantly by removing the potentially expensive correction step. Furthermore, a relaxation step, whose computational cost is negligible, can be incorporated into the proposed method to improve its practical performance. Theoretically, we establish global convergence of the new method in the framework of proximal point algorithm and worst-case nonasymptotic O(1/t) convergence rate results in both ergodic and nonergodic senses, where t counts the iteration. The efficiency of the proposed method is further demonstrated through numerical results on robust PCA, i.e., factorizing from incomplete information of an B Junfeng Yang jfyang@nju.edu.cn Liusheng Hou houlsheng@163.com Hongjin He hehjmath@hdu.edu.cn 1 School of Mathematics and Information Technology, Key Laboratory of Trust Cloud Computing and Big Data Analysis, Nanjing Xiaozhuang University, Nanjing 211171, China 2 Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China 3 Department of Mathematics, Nanjing University, Nanjing 210093, China", "title": "" }, { "docid": "c91cc6de1e26d9ac9b5ba03ba67fa9b9", "text": "As in most of the renewable energy sources it is not possible to generate high voltage directly, the study of high gain dc-dc converters is an emerging area of research. This paper presents a high step-up dc-dc converter based on current-fed Cockcroft-Walton multiplier. This converter not only steps up the voltage gain but also eliminates the use of high frequency transformer which adds to cost and design complexity. N-stage Cockcroft-Walton has been utilized to increase the voltage gain in place of a transformer. This converter also provides dual input operation, interleaved mode and maximum power point tracking control (if solar panel is used as input). This converter is utilized for resistive load and a pulsed power supply and the effect is studied in high voltage application. Simulation has been performed by designing a converter of 450 W, 400 V with single source and two stage of Cockcroft-Walton multiplier and interleaved mode of operation is performed. Design parameters as well as simulation results are presented and verified in this paper.", "title": "" }, { "docid": "c4f706ff9ceb514e101641a816ba7662", "text": "Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classi€cation systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. 
In this representation instances from the same class are close to each other while instances from different classes are further apart, resulting in statistically significant improvement when compared to other approaches on three datasets from two different domains.", "title": "" }, { "docid": "d055bc9f5c7feb9712ec72f8050f5fd8", "text": "An intelligent observer looks at the world and sees not only what is, but what is moving and what can be moved. In other words, the observer sees how the present state of the world can transform in the future. We propose a model that predicts future images by learning to represent the present state and its transformation given only a sequence of images. To do so, we introduce an architecture with a latent state composed of two components designed to capture (i) the present image state and (ii) the transformation between present and future states, respectively. We couple this latent state with a recurrent neural network (RNN) core that predicts future frames by transforming past states into future states by applying the accumulated state transformation with a learned operator. We describe how this model can be integrated into an encoder-decoder convolutional neural network (CNN) architecture that uses weighted residual connections to integrate representations of the past with representations of the future. Qualitatively, our approach generates image sequences that are stable and capture realistic motion over multiple predicted frames, without requiring adversarial training. Quantitatively, our method achieves prediction results comparable to state-of-the-art results on standard image prediction benchmarks (Moving MNIST, KTH, and UCF101).", "title": "" }, { "docid": "92ec1f93124ddfa1faa1d7a3ab371935", "text": "We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. For enabling this, we establish new formulations of EA variation operators, crossover and mutation, that we adapt to work on semantic networks. The algorithm employs commonsense reasoning to ensure all operations preserve the meaningfulness of the networks, using ConceptNet and WordNet knowledge bases. The algorithm can be classified as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics as it was introduced by Dawkins; and (2) this is different from existing MA, where the word “memetic” has been used as a synonym for local refinement after global optimization. For evaluating the approach, we introduce an analogical similarity-based fitness measure that is computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.", "title": "" }, { "docid": "01be341cfcfe218896c795d769c66e69", "text": "This letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over frequency selective fading (FSF) channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure.
Simulation results verify the good performance of the proposed solution.", "title": "" }, { "docid": "ac6e7d8ee24d6e38765d43c85106b237", "text": "The drivers behind microplastic (up to 5mm in diameter) consumption by animals are uncertain and impacts on foundational species are poorly understood. We investigated consumption of weathered, unfouled, biofouled, pre-production and microbe-free National Institute of Standards plastic by a scleractinian coral that relies on chemosensory cues for feeding. Experiment one found that corals ingested many plastic types while mostly ignoring organic-free sand, suggesting that plastic contains phagostimulants. Experiment two found that corals ingested more plastic that wasn't covered in a microbial biofilm than plastics that were biofilmed. Additionally, corals retained ~8% of ingested plastic for 24h or more and retained particles appeared stuck in corals, with consequences for energetics, pollutant toxicity and trophic transfer. The potential for chemoreception to drive plastic consumption in marine taxa has implications for conservation.", "title": "" }, { "docid": "d39843f342646e4d338ab92bb7391d76", "text": "In this paper, a double-axis planar micro-fluxgate magnetic sensor and its front-end circuitry are presented. The ferromagnetic core material, i.e., the Vitrovac 6025 X, has been deposited on top of the coils with the dc-magnetron sputtering technique, which is a new type of procedure with respect to the existing solutions in the field of fluxgate sensors. This procedure allows us to obtain a core with the good magnetic properties of an amorphous ferromagnetic material, which is typical of a core with 25-μm thickness, but with a thickness of only 1 μm, which is typical of an electrodeposited core. The micro-Fluxgate has been realized in a 0.5-μm CMOS process using copper metal lines to realize the excitation coil and aluminum metal lines for the sensing coil, whereas the integrated interface circuitry for exciting and reading out the sensor has been realized in a 0.35-μm CMOS technology. Applying a triangular excitation current of 18 mA peak at 100 kHz, the magnetic sensitivity achieved is about 10 LSB/μT [using a 13-bit analog-to-digital converter (ADC)], which is suitable for detecting the Earth's magnetic field (±60 μT), whereas the linearity error is 3% of the full scale. The maximum angle error of the sensor evaluating the Earth's magnetic field is 2°. The power consumption of the sensor is about 13.7 mW. The total power consumption of the system is about 90 mW.", "title": "" }, { "docid": "49e52c99226766f626dca492fd22ce70", "text": "Recurrent neural networks (RNNs) have shown excellent performance in processing sequence data. However, they are both complex and memory intensive due to their recursive nature. These limitations make RNNs difficult to embed on mobile devices requiring real-time processes with limited hardware resources. To address the above issues, we introduce a method that can learn binary and ternary weights during the training phase to facilitate hardware implementations of RNNs. As a result, using this approach replaces all multiply-accumulate operations by simple accumulations, bringing significant benefits to custom hardware in terms of silicon area and power consumption. On the software side, we evaluate the performance (in terms of accuracy) of our method using long short-term memories (LSTMs) on various sequential models including sequence classification and language modeling.
We demonstrate that our method achieves competitive results on the aforementioned tasks while using binary/ternary weights during the runtime. On the hardware side, we present custom hardware for accelerating the recurrent computations of LSTMs with binary/ternary weights. Ultimately, we show that LSTMs with binary/ternary weights can achieve up to 12× memory saving and 10× inference speedup compared to the full-precision implementation on an ASIC platform.", "title": "" }, { "docid": "ce8f000fa9a9ec51b8b2b63e98cec5fb", "text": "The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are 1) the use of well-established motor competences as control paradigms, 2) high-dimensional features from 128-channel electroencephalogram (EEG), and 3) advanced machine learning techniques. As reported earlier, our experiments demonstrate that very high information transfer rates can be achieved using the readiness potential (RP) when predicting the laterality of upcoming left- versus right-hand movements in healthy subjects. A more recent study showed that the RP similarly accompanies phantom movements in arm amputees, but the signal strength decreases with longer loss of the limb. In a complementary approach, oscillatory features are used to discriminate imagined movements (left hand versus right hand versus foot). In a recent feedback study with six healthy subjects with no or very little experience with BCI control, three subjects achieved an information transfer rate above 35 bits per minute (bpm), and further two subjects above 24 and 15 bpm, while one subject could not achieve any BCI control. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials even when compared to results with very well-trained subjects operating other BCI systems.", "title": "" }, { "docid": "eff89cfd6056509c13eb8ce8463f8d30", "text": "Bioactivity of oregano methanolic extracts and essential oils is well known. Nonetheless, reports using aqueous extracts are scarce, mainly decoction or infusion preparations used for therapeutic applications. Herein, the antioxidant and antibacterial activities, and phenolic compounds of the infusion, decoction and hydroalcoholic extract of oregano were evaluated and compared. The antioxidant activity is related with phenolic compounds, mostly flavonoids, since decoction presented the highest concentration of flavonoids and total phenolic compounds, followed by infusion and hydroalcoholic extract. The samples were effective against gram-negative and gram-positive bacteria. It is important to address that the hydroalcoholic extract showed the highest efficacy against Escherichia coli. This study demonstrates that the decoction could be used for antioxidant purposes, while the hydroalcoholic extract could be incorporated in formulations for antimicrobial features. Moreover, the use of infusion/decoction can avoid the toxic effects showed by oregano essential oil, widely reported for its antioxidant and antimicrobial properties.", "title": "" }, { "docid": "a70f90ce39e1c3fc771412ca87adbad1", "text": "The concept of death has evolved as technology has progressed. This has forced medicine and society to redefine its ancient cardiorespiratory centred diagnosis to a neurocentric diagnosis of death. The apparent consensus about the definition of death has not yet appeased all controversy. 
Ethical, moral and religious concerns continue to surface and include a prevailing malaise about possible expansions of the definition of death to encompass the vegetative state or about the feared bias of formulating criteria so as to facilitate organ transplantation.", "title": "" }, { "docid": "1c367cad26436a059e56d000ac0db3c4", "text": "We propose a goal-driven web navigation as a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site, which is represented as a graph consisting of web pages as nodes and hyperlinks as directed edges, to find a web page in which a query appears. The agent is required to have sophisticated high-level reasoning based on natural languages and efficient sequential decision making capability to succeed. We release a software tool, called WebNav, that automatically transforms a website into this goal-driven web navigation task, and as an example, we make WikiNav, a dataset constructed from the English Wikipedia containing approximately 5 million articles and more than 12 million queries for training. We evaluate two different agents based on neural networks on the WikiNav and provide the human performance. Our results show the difficulty of the task for both humans and machines. With this benchmark, we expect faster progress in developing artificial agents with natural language understanding and planning skills.", "title": "" }, { "docid": "98f8994f1ad9315f168878ff40c29afc", "text": "OBJECTIVE\nSuicide remains a major global public health issue for young people. The reach and accessibility of online and social media-based interventions herald a unique opportunity for suicide prevention. To date, the large body of research into suicide prevention has been undertaken atheoretically. This paper provides a rationale and theoretical framework (based on the interpersonal theory of suicide), and draws on our experiences of developing and testing online and social media-based interventions.\n\n\nMETHOD\nThe implementation of three distinct online and social media-based intervention studies, undertaken with young people at risk of suicide, are discussed. We highlight the ways that these interventions can serve to bolster social connectedness in young people, and outline key aspects of intervention implementation and moderation.\n\n\nRESULTS\nInsights regarding the implementation of these studies include careful protocol development mindful of risk and ethical issues, establishment of suitably qualified teams to oversee development and delivery of the intervention, and utilisation of key aspects of human support (i.e., moderation) to encourage longer-term intervention engagement.\n\n\nCONCLUSIONS\nOnline and social media-based interventions provide an opportunity to enhance feelings of connectedness in young people, a key component of the interpersonal theory of suicide. Our experience has shown that such interventions can be feasibly and safely conducted with young people at risk of suicide. Further studies, with controlled designs, are required to demonstrate intervention efficacy.", "title": "" }, { "docid": "a3b1e2499142514614a7ab01d1227827", "text": "In this paper, we propose a simple but robust scheme to detect denial of service attacks (including distributed denial of service attacks) by monitoring the increase of new IP addresses. 
Unlike previous proposals for bandwidth attack detection schemes which are based on monitoring the traffic volume, our scheme is very effective for highly distributed denial of service attacks. Our scheme exploits an inherent feature of DDoS attacks, which makes it hard for the attacker to counter this detection scheme by changing their attack signature. Our scheme uses a sequential nonparametric change point detection method to improve the detection accuracy without requiring a detailed model of normal and attack traffic. We demonstrate that we can achieve high detection accuracy on a range of different network packet traces.", "title": "" }, { "docid": "97ec541daef17eb4ff0772e34ee4de48", "text": "Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.", "title": "" }, { "docid": "a61b2fc98a6754ede38865479a2d0b6f", "text": "Virtualization is a hot topic in the technology world. The technology enables a single computer to run multiple operating systems simultaneously. It lets companies use a single server for multiple tasks that would normally have to run on multiple servers, each running a different OS. Now, vendors are releasing products based on two lightweight virtualization approaches that also let a single operating system run several instances of the same OS or different OSs. However, today's new virtualization approaches do not try to emulate an entire hardware environment, as traditional virtualization does. They thus require fewer CPU and memory resources, which is why the technology is called \"lightweight\" virtualization. However, lightweight virtualization still faces several barriers to widespread adoption.", "title": "" }, { "docid": "be36c00e3545e66c4a09dac3ad7fa893", "text": "The fewest-line axial map, often simply referred to as the `axial map', is one of the primary tools of space syntax. Its natural language definition has allowed researchers to draw consistent maps that present a concise description of architectural space; it has been established that graph measures obtained from the map are useful for the analysis of pedestrian movement patterns and activities related to such movement: for example, the location of services or of crime. However, the definition has proved difficult to translate into formal language by mathematicians and algorithmic implementers alike. This has meant that space syntax has been criticised for a lack of rigour in the definition of one of its fundamental representations. Here we clarify the original definition of the fewest-line axial map and show that it can be implemented algorithmically. 
We show that the original definition leads to maps similar to those currently drawn by hand, and we demonstrate that the differences between the two may be accounted for in terms of the detail of the algorithm used. We propose that the analytical power of the axial map in empirical studies derives from the efficient representation of key properties of the spatial configuration that it captures. DOI:10.1068/b31097 [Figure 1 caption: The original hand-drawn axial map of Gassin, France, vectorised from figure 28 in Hillier and Hanson (1984, page 91), with detail of `stringy' (axial) and `beady' (convex) extensions of a point, after figure 27.] who examined the relationship of whole buildings in urban plans. Although these methods, and later ones (including others introduced by Hillier and Hanson themselves) are similar in that they construct a graph of spatial components, it is the axial map that has captured the imagination of many architects, designers, and urban planners, and become the mainstay of space syntax tools. Graph measures obtained from axial maps have been used to analyse successfully the effect of configuration of space on pedestrian movement in urban areas (Hillier et al, 1993; Peponis et al, 1989), traffic flows (Penn et al, 1998), crime distribution (Hillier and Shu, 2000), and land values (Desyllas, 2000), amongst many others (see de Holanda, 1999; Hanson, 2003; Peponis et al, 2001; UCL, 1997). Extensive research into axial maps has also led to their considerable usage in architectural and planning practice in the United Kingdom (and also elsewhere), particularly related to pedestrianisation as, for example, in the recent remodelling of London's Trafalgar Square (Space Syntax Limited, 2003). The axial map was introduced after observation of real systems and experimentation with generative algorithms. Hillier and Hanson (1984) noted that urban space in particular seems to comprise two fundamental elements – `stringiness' and `beadiness' – such that the space of the systems tends to resemble beads on a string (see inset in figure 1). They write: ``We can define `stringiness' as being to do with the extension of space in one dimension, whereas `beadiness' is to do with the extension of space in two dimensions'' (page 91). Hence, the epistemology of their methodology involves the investigation of how space is constructed in terms of configurations of interconnected beads and strings. To this end, they develop a more formal definition of the elements in which strings become `axial lines' and beads `convex spaces'. The definition they give is one that is easily understood by human researchers, but which, it has transpired, is difficult to translate into a computational approach: ``An axial map of the open space structure of the settlement will be the least set of [axial] lines which pass through each convex space and makes all axial links'' (Hillier and Hanson, 1984, pages 91–92). The term axial line is defined as the longest line (1) that can be drawn through an arbitrary point in the spatial configuration (see inset figure 1). Similarly, the term convex space is a `fully fat' convex polygon around a point (see inset figure 1). To `make all axial links' is to ensure that all axial lines are connected together if they possibly can be. However, it has recently been pointed out that, for a computational implementation, this apparently rigorous definition contains a problem (Batty and Rana, 2004; Ratti, 2004): the set of axial lines that fulfil these criteria cannot be precisely defined. 
Hillier and Hanson have, in fact, become victim to their own precision. As a researcher recently remarked online: ``do geographers spend so long defining road centre lines [as the space syntax community spends debating axial lines]?'' The answer is, of course not. The map of the open space is a cartographer's artefact, and includes only features that he or she considers important. The basis for a road-centre line is that it should simply follow the centre of the road. If one then asks questions such as `at exactly what point is a road considered a road and not a path?' we descend into an ultimately pointless debate. Batty and Rana (2004) sidestep this debate, and suggest that the definition of an axial line be broadened to include a range of differently specified sets of lines to be studied for their own interest. However, this approach [Footnote (1): Sometimes the `longest visibility line' is referred to; however, the axial line as Hillier and Hanson define it is a purely geometrical entity.]", "title": "" }, { "docid": "df67916f4a7aefb9efbde1af0de417b5", "text": "In this paper we present an approach to music genre classification which converts an audio signal into spectrograms and extracts texture features from these time-frequency images which are then used for modeling music genres in a classification system. The texture features are based on Local Binary Pattern, a structural texture operator that has been successful in recent image classification research. Experiments are performed with two well-known datasets: the Latin Music Database (LMD), and the ISMIR 2004 dataset. The proposed approach takes into account some different zoning mechanisms to perform local feature extraction. Results obtained with and without local feature extraction are compared. We compare the performance of texture features with that of commonly used audio content based features (i.e. from the MARSYAS framework), and show that texture features always outperforms the audio content based features. We also compare our results with results from the literature. On the LMD, the performance of our approach reaches about 82.33%, above the best result obtained in the MIREX 2010 competition on that dataset. On the ISMIR 2004 database, the best result obtained is about 80.65%, i.e. below the best result on that dataset found in the literature. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9a2cfa65fe07d99b354e6f772282ff13", "text": "Destiny is, to date, the most expensive digital game ever released with a total operating budget of over half a billion US dollars. It stands as one of the main examples of AAA titles, the term used for the largest and most heavily marketed game productions in the games industry. Destiny is a blend of a shooter game and massively multi-player online game, and has attracted dozens of millions of players. As a persistent game title, predicting retention and churn in Destiny is crucial to the running operations of the game, but prediction has not been attempted for this type of game in the past. In this paper, we present a discussion of the challenge of predicting churn in Destiny, evaluate the area under curve (ROC) of behavioral features, and use Hidden Markov Models to develop a churn prediction model for the game.", "title": "" } ]
scidocsrr
0d74eef95a725a278b617c8c2670b3de
Anonymity Properties of the Bitcoin P2P Network
[ { "docid": "49911f2cf2d6dbef9545c1cb56648128", "text": "Bitcoin is a digital currency which relies on a distributed set of miners to mint coins and on a peer-to-peer network to broadcast transactions. The identities of Bitcoin users are hidden behind pseudonyms (public keys) which are recommended to be changed frequently in order to increase transaction unlinkability.\n We present an efficient method to deanonymize Bitcoin users, which allows to link user pseudonyms to the IP addresses where the transactions are generated. Our techniques work for the most common and the most challenging scenario when users are behind NATs or firewalls of their ISPs. They allow to link transactions of a user behind a NAT and to distinguish connections and transactions of different users behind the same NAT. We also show that a natural countermeasure of using Tor or other anonymity services can be cut-off by abusing anti-DoS countermeasures of the Bitcoin network. Our attacks require only a few machines and have been experimentally verified. The estimated success rate is between 11% and 60% depending on how stealthy an attacker wants to be. We propose several countermeasures to mitigate these new attacks.", "title": "" }, { "docid": "f2c6a7f205f1aa6b550418cd7e93f7d2", "text": "This paper addresses the problem of a single rumor source detection with multiple observations, from a statistical point of view of a spreading over a network, based on the susceptible-infectious model. For tree networks, multiple sequential observations for one single instance of rumor spreading cannot improve over the initial snapshot observation. The situation dramatically improves for multiple independent observations. We propose a unified inference framework based on the union rumor centrality, and provide explicit detection performance for degree-regular tree networks. Surprisingly, even with merely two observations, the detection probability at least doubles that of a single observation, and further approaches one, i.e., reliable detection, with increasing degree. This indicates that a richer diversity enhances detectability. For general graphs, a detection algorithm using a breadth-first search strategy is also proposed and evaluated. Besides rumor source detection, our results can be used in network forensics to combat recurring epidemic-like information spreading such as online anomaly and fraudulent email spams.", "title": "" } ]
[ { "docid": "62e900f89427e4b97f64919a3cb0d537", "text": "This paper introduces the SpamBayes classification engine and outlines the most important features and techniques which contribute to its success. The importance of using the indeterminate ‘unsure’ classification produced by the chi-squared combining technique is explained. It outlines a Robinson/Woodhead/Peters technique of ‘tiling’ unigrams and bigrams to produce better results than relying solely on either or other methods of using both unigrams and bigrams. It discusses methods of training the classifier, and evaluates the success of different methods. The paper focuses on highlighting techniques that might aid other classification systems rather than attempting to demonstrate the effectiveness of the SpamBayes classification engine.", "title": "" }, { "docid": "b42f3575dad9615a40f491291661e7c5", "text": "Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.", "title": "" }, { "docid": "4174c1d49ff8755c6b82c2b453918d29", "text": "Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.", "title": "" }, { "docid": "0e4b6b1839c31cc9c2254f722a171245", "text": "Modern web applications consist of a significant amount of client- side code, written in JavaScript, HTML, and CSS. 
In this paper, we present a study of common challenges and misconceptions among web developers, by mining related questions asked on Stack Overflow. We use unsupervised learning to categorize the mined questions and define a ranking algorithm to rank all the Stack Overflow questions based on their importance. We analyze the top 50 questions qualitatively. The results indicate that (1) the overall share of web development related discussions is increasing among developers, (2) browser related discussions are prevalent; however, this share is decreasing with time, (3) form validation and other DOM related discussions have been discussed consistently over time, (4) web related discussions are becoming more prevalent in mobile development, and (5) developers face implementation issues with new HTML5 features such as Canvas. We examine the implications of the results on the development, research, and standardization communities.", "title": "" }, { "docid": "e05e91be6ca5423d795f17be8a1cec10", "text": "A novel active gate driver (AGD) for silicon carbide (SiC) MOSFET is studied in this paper. The gate driver (GD) increases the gate resistance value during the voltage plateau area of the gate-source voltage, in both turn-on and turn-off transitions. The proposed AGD is validated in both simulation and experimental environments and in hard-switching conditions. The simulation is evaluated in MATLAB/Simulink with 100 kHz of switching frequency and 600 V of dc-bus, whereas, the experimental part was realised at 100 kHz and 100 V of dc-bus. The results show that the gate driver can reduce the over-voltage and ringing, with low switching losses.", "title": "" }, { "docid": "be3c8186c6e818e7cdba74cc4e7148e2", "text": "A network latency emulator allows IT architects to thoroughly investigate how network latencies impact workload performance. Software-based emulation tools have been widely used by researchers and engineers. It is possible to use commodity server computers for emulation and set up an emulation environment quickly without outstanding hardware cost. However, existing software-based tools built in the network stack of an operating system are not capable of supporting the bandwidth of today's standard interconnects (e.g., 10GbE) and emulating sub-milliseconds latencies likely caused by network virtualization in a datacenter. In this paper, we propose a network latency emulator (DEMU) supporting broad bandwidth traffic with sub-milliseconds accuracy, which is based on an emerging packet processing framework, DPDK. It avoids the overhead of the network stack by directly interacting with NIC hardware. Through experiments, we confirmed that DEMU can emulate latencies on the order of 10 µs for short-packet traffic at the line rate of 10GbE. The standard deviation of inserted delays was only 2–3 µs. This is a significant improvement from a network emulator built in the Linux Kernel (i.e., NetEm), which loses more than 50% of its packets for the same 10GbE traffic. For 1 Gbps traffic, the latency deviation of NetEm was approximately 20 µs, while that of our mechanism was 2 orders of magnitude smaller (i.e., only 0.3 µs).", "title": "" }, { "docid": "d9c9b9bdfa8333320097b5a4f97c8663", "text": "This article describes the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture (Anderson et al., 2004; Anderson & Lebiere, 1998) and its detailed application to the learning of algebraic symbol manipulation. 
The theory is applied to modeling the data from a study by Qin, Anderson, Silk, Stenger, & Carter (2004) in which children learn to solve linear equations and perfect their skills over a 6-day period. Functional MRI data show that: (a) a motor region tracks the output of equation solutions, (b) a prefrontal region tracks the retrieval of declarative information, (c) a parietal region tracks the transformation of mental representations of the equation, (d) an anterior cingulate region tracks the setting of goal information to control the information flow, and (e) a caudate region tracks the firing of productions in the ACT-R model. The article concludes with an architectural comparison of the competence children display in this task and the competence that monkeys have shown in tasks that require manipulations of sequences of elements.", "title": "" }, { "docid": "2302cc10902d2389e2eb342aa5ca0584", "text": "The modelling of the parameters that influence the continuous evaporation of an alcoholic extract was considered using Doehlert matrices. The work was performed with a wiped falling film evaporator that allowed us to study the influence of the pressure, temperature, feed flow and dry matter of the feed solution on the dry matter contents of the resulting concentrate, and the productivity of the process. The Doehlert shells were used to model the influential parameters. The pattern obtained from the experimental results was checked allowing for some dysfunction in the unit. The evaporator was modified and a new model applied; the experimental results were then in agreement with the equations. The model was finally determined and successfully checked in order to obtain an 8% dry matter concentrate with the best productivity; the results fit in with the industrial constraints of subsequent processes.", "title": "" }, { "docid": "3da8cb73f3770a803ca43b8e2a694ccc", "text": "We present a novel framework for hallucinating faces of unconstrained poses and with very low resolution (face size as small as 5pxIOD). In contrast to existing studies that mostly ignore or assume pre-aligned face spatial configuration (e.g. facial landmarks localization or dense correspondence field), we alternatingly optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details. Extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations.", "title": "" }, { "docid": "7ac57f2d521a4db22e203c232a126ac4", "text": ".................................................................................................................................. iii ACKNOWLEDGEMENTS ............................................................................................................ v TABLE OF CONTENTS .............................................................................................................. vii LIST OF TABLES ....................................................................................................................... viii LIST OF FIGURES ....................................................................................................................... ix CHAPTER 1: INTRODUCTION ................................................................................................... 
1 CHAPTER 2: REVIEW OF RELATED LITERATURE ............................................................... 4 Flexibility Interventions .............................................................................................................. 4 Athletic Performance Interventions .......................................................................................... 18 Recovery Interventions ............................................................................................................. 29 Methodology & Supporting Arguments ................................................................................... 35 CHAPTER 3: METHODOLOGY ................................................................................................ 37 CHAPTER 4: RESULTS .............................................................................................................. 43 CHAPTER 5: DISCUSSION ........................................................................................................ 48 APPENDIX A: PRE-RESEARCH QUESTIONNAIRE .............................................................. 54 APPENDIX B: NUMERIC PRESSURE SCALE ........................................................................ 55 APPENDIX C: DATA COLLECTION FIGURES ...................................................................... 56 REFERENCES ............................................................................................................................. 58 CURRICULUM VITAE ............................................................................................................... 61", "title": "" }, { "docid": "9d5ba6f0beb2c9f03ea29f8fc35d51bb", "text": "Independent component analysis (ICA) is a promising analysis method that is being increasingly applied to fMRI data. A principal advantage of this approach is its applicability to cognitive paradigms for which detailed models of brain activity are not available. Independent component analysis has been successfully utilized to analyze single-subject fMRI data sets, and an extension of this work would be to provide for group inferences. However, unlike univariate methods (e.g., regression analysis, Kolmogorov-Smirnov statistics), ICA does not naturally generalize to a method suitable for drawing inferences about groups of subjects. We introduce a novel approach for drawing group inferences using ICA of fMRI data, and present its application to a simple visual paradigm that alternately stimulates the left or right visual field. Our group ICA analysis revealed task-related components in left and right visual cortex, a transiently task-related component in bilateral occipital/parietal cortex, and a non-task-related component in bilateral visual association cortex. We address issues involved in the use of ICA as an fMRI analysis method such as: (1) How many components should be calculated? (2) How are these components to be combined across subjects? (3) How should the final results be thresholded and/or presented? We show that the methodology we present provides answers to these questions and lay out a process for making group inferences from fMRI data using independent component analysis.", "title": "" }, { "docid": "6f176e780d94a8fa8c5b1d6d364c4363", "text": "Current uses of smartwatches are focused solely around the wearer's content, viewed by the wearer alone. When worn on a wrist, however, watches are often visible to many other people, making it easy to quickly glance at their displays. 
We explore the possibility of extending smartwatch interactions to turn personal wearables into more public displays. We begin opening up this area by investigating fundamental aspects of this interaction form, such as the social acceptability and noticeability of looking at someone else's watch, as well as the likelihood of a watch face being visible to others. We then sketch out interaction dimensions as a design space, evaluating each aspect via a web-based study and a deployment of three potential designs. We conclude with a discussion of the findings, implications of the approach and ways in which designers in this space can approach public wrist-worn wearables.", "title": "" }, { "docid": "ae9fb1b7ff6821dd29945f768426d7fc", "text": "Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients.", "title": "" }, { "docid": "8172b901dca0ee5cab2a2439ec5f0376", "text": "Manually designed workflows can be error-prone and inefficient. Workflow provenance contains fine-grained data processing information that can be used to detect workflow design problems. In this paper, we propose a provenance-driven workflow analysis framework that exploits both prospective and retrospective provenance. We show how provenance information can help the user gain a deeper understanding of a workflow and provide the user with insights into how to improve workflow design.", "title": "" }, { "docid": "a804d188b4fd2b89efaf072d96ef1023", "text": "Current state-of-the-art sports statistics compare players and teams to league average performance. For example, metrics such as “Wins-above-Replacement” (WAR) in baseball [1], “Expected Point Value” (EPV) in basketball [2] and “Expected Goal Value” (EGV) in soccer [3] and hockey [4] are now commonplace in performance analysis. Such measures allow us to answer the question “how does this player or team compare to the league average?” Even “personalized metrics” which can answer how a “player’s or team’s current performance compares to its expected performance” have been used to better analyze and improve prediction of future outcomes [5].", "title": "" }, { "docid": "444c3a4eb179604e96fb39b68f999143", "text": "Reduced heart rate variability carries an adverse prognosis in patients who have survived an acute myocardial infarction. This article reviews the physiology, technical problems of assessment, and clinical relevance of heart rate variability. The sympathovagal influence and the clinical assessment of heart rate variability are discussed. 
Methods measuring heart rate variability are classified into four groups, and the advantages and disadvantages of each group are described. Concentration is on risk stratification of postmyocardial infarction patients. The evidence suggests that heart rate variability is the single most important predictor of those patients who are at high risk of sudden death or serious ventricular arrhythmias.", "title": "" }, { "docid": "01bb8e6af86aa1545958a411653e014c", "text": "Estimating the tempo of a musical piece is a complex problem, which has received an increasing amount of attention in the past few years. The problem consists of estimating the number of beats per minute (bpm) at which the music is played and identifying exactly when these beats occur. Commercial devices already exist that attempt to extract a musical instrument digital interface (MIDI) clock from an audio signal, indicating both the tempo and the actual location of the beat. Such MIDI clocks can then be used to synchronize other devices (such as drum machines and audio effects) to the audio source, enabling a new range of \" beat-synchronized \" audio processing. Beat detection can also simplify the usually tedious process of manipulating audio material in audio-editing software. Cut and paste operations are made considerably easier if markers are positioned at each beat or at bar boundaries. Looping a drum track over two bars becomes trivial once the location of the beats is known. A third range of applications is the fairly new area of automatic playlist generation, where a computer is given the task to choose a series of audio tracks from a track database in a way similar to what a human deejay would do. The track tempo is a very important selection criterion in this context , as deejays will tend to string tracks with similar tempi back to back. Furthermore, deejays also tend to perform beat-synchronous crossfading between successive tracks manually, slowing down or speeding up one of the tracks so that the beats in the two tracks line up exactly during the crossfade. This can easily be done automatically once the beats are located in the two tracks. The tempo detection systems commercially available appear to be fairly unsophisticated, as they rely mostly on the presence of a strong and regular bass-drum kick at every beat, an assumption that holds mostly with modern musical genres such as techno or drums and bass. For music with a less pronounced tempo such techniques fail miserably and more sophisticated algorithms are needed. This paper describes an off-line tempo detection algorithm , able to estimate a time-varying tempo from an audio track stored, for example, on an audio CD or on a computer hard disk. The technique works in three successive steps: 1) an \" energy flux \" signal is extracted from the track, 2) at each tempo-analysis time, several …", "title": "" }, { "docid": "78b7987361afd8c7814ee416c81a311b", "text": "This paper presents the characterization of various types of SubMiniature version A (SMA) connectors. The characterization is performed by measurements in frequency and time domain. The SMA connectors are mounted on microstrip (MS) and conductor-backed coplanar waveguide (CPW-CB) manufactured on high-frequency (HF) laminates. The designed characteristic impedance of the transmission lines is 50 Ω and deviation from the designed characteristic impedance is measured. 
The measurement results suggest that for a given combination of the transmission line and SMA connector, the discontinuity in terms of characteristic impedance can be significantly improved by choosing the right connector type.", "title": "" }, { "docid": "abeccd593d90415c843385fe6ef7608f", "text": "A1 Functional advantages of cell-type heterogeneity in neural circuits Tatyana O. Sharpee A2 Mesoscopic modeling of propagating waves in visual cortex Alain Destexhe A3 Dynamics and biomarkers of mental disorders Mitsuo Kawato F1 Precise recruitment of spiking output at theta frequencies requires dendritic h-channels in multi-compartment models of oriens-lacunosum/moleculare hippocampal interneurons Vladislav Sekulić, Frances K. Skinner F2 Kernel methods in reconstruction of current sources from extracellular potentials for single cells and the whole brains Daniel K. Wójcik, Chaitanya Chintaluri, Dorottya Cserpán, Zoltán Somogyvári F3 The synchronized periods depend on intracellular transcriptional repression mechanisms in circadian clocks. Jae Kyoung Kim, Zachary P. Kilpatrick, Matthew R. Bennett, Kresimir Josić O1 Assessing irregularity and coordination of spiking-bursting rhythms in central pattern generators Irene Elices, David Arroyo, Rafael Levi, Francisco B. Rodriguez, Pablo Varona O2 Regulation of top-down processing by cortically-projecting parvalbumin positive neurons in basal forebrain Eunjin Hwang, Bowon Kim, Hio-Been Han, Tae Kim, James T. McKenna, Ritchie E. Brown, Robert W. McCarley, Jee Hyun Choi O3 Modeling auditory stream segregation, build-up and bistability James Rankin, Pamela Osborn Popp, John Rinzel O4 Strong competition between tonotopic neural ensembles explains pitch-related dynamics of auditory cortex evoked fields Alejandro Tabas, André Rupp, Emili Balaguer-Ballester O5 A simple model of retinal response to multi-electrode stimulation Matias I. Maturana, David B. Grayden, Shaun L. Cloherty, Tatiana Kameneva, Michael R. Ibbotson, Hamish Meffin O6 Noise correlations in V4 area correlate with behavioral performance in visual discrimination task Veronika Koren, Timm Lochmann, Valentin Dragoi, Klaus Obermayer O7 Input-location dependent gain modulation in cerebellar nucleus neurons Maria Psarrou, Maria Schilstra, Neil Davey, Benjamin Torben-Nielsen, Volker Steuber O8 Analytic solution of cable energy function for cortical axons and dendrites Huiwen Ju, Jiao Yu, Michael L. Hines, Liang Chen, Yuguo Yu O9 C. elegans interactome: interactive visualization of Caenorhabditis elegans worm neuronal network Jimin Kim, Will Leahy, Eli Shlizerman O10 Is the model any good? Objective criteria for computational neuroscience model selection Justas Birgiolas, Richard C. Gerkin, Sharon M. Crook O11 Cooperation and competition of gamma oscillation mechanisms Atthaphon Viriyopase, Raoul-Martin Memmesheimer, Stan Gielen O12 A discrete structure of the brain waves Yuri Dabaghian, Justin DeVito, Luca Perotti O13 Direction-specific silencing of the Drosophila gaze stabilization system Anmo J. Kim, Lisa M. Fenk, Cheng Lyu, Gaby Maimon O14 What does the fruit fly think about values? A model of olfactory associative learning Chang Zhao, Yves Widmer, Simon Sprecher,Walter Senn O15 Effects of ionic diffusion on power spectra of local field potentials (LFP) Geir Halnes, Tuomo Mäki-Marttunen, Daniel Keller, Klas H. Pettersen,Ole A. Andreassen, Gaute T. 
Einevoll O16 Large-scale cortical models towards understanding relationship between brain structure abnormalities and cognitive deficits Yasunori Yamada O17 Spatial coarse-graining the brain: origin of minicolumns Moira L. Steyn-Ross, D. Alistair Steyn-Ross O18 Modeling large-scale cortical networks with laminar structure Jorge F. Mejias, John D. Murray, Henry Kennedy, Xiao-Jing Wang O19 Information filtering by partial synchronous spikes in a neural population Alexandra Kruscha, Jan Grewe, Jan Benda, Benjamin Lindner O20 Decoding context-dependent olfactory valence in Drosophila Laurent Badel, Kazumi Ohta, Yoshiko Tsuchimoto, Hokto Kazama P1 Neural network as a scale-free network: the role of a hub B. Kahng P2 Hemodynamic responses to emotions and decisions using near-infrared spectroscopy optical imaging Nicoladie D. Tam P3 Phase space analysis of hemodynamic responses to intentional movement directions using functional near-infrared spectroscopy (fNIRS) optical imaging technique Nicoladie D.Tam, Luca Pollonini, George Zouridakis P4 Modeling jamming avoidance of weakly electric fish Jaehyun Soh, DaeEun Kim P5 Synergy and redundancy of retinal ganglion cells in prediction Minsu Yoo, S. E. Palmer P6 A neural field model with a third dimension representing cortical depth Viviana Culmone, Ingo Bojak P7 Network analysis of a probabilistic connectivity model of the Xenopus tadpole spinal cord Andrea Ferrario, Robert Merrison-Hort, Roman Borisyuk P8 The recognition dynamics in the brain Chang Sub Kim P9 Multivariate spike train analysis using a positive definite kernel Taro Tezuka P10 Synchronization of burst periods may govern slow brain dynamics during general anesthesia Pangyu Joo P11 The ionic basis of heterogeneity affects stochastic synchrony Young-Ah Rho, Shawn D. Burton, G. Bard Ermentrout, Jaeseung Jeong, Nathaniel N. Urban P12 Circular statistics of noise in spike trains with a periodic component Petr Marsalek P14 Representations of directions in EEG-BCI using Gaussian readouts Hoon-Hee Kim, Seok-hyun Moon, Do-won Lee, Sung-beom Lee, Ji-yong Lee, Jaeseung Jeong P15 Action selection and reinforcement learning in basal ganglia during reaching movements Yaroslav I. Molkov, Khaldoun Hamade, Wondimu Teka, William H. Barnett, Taegyo Kim, Sergey Markin, Ilya A. Rybak P17 Axon guidance: modeling axonal growth in T-Junction assay Csaba Forro, Harald Dermutz, László Demkó, János Vörös P19 Transient cell assembly networks encode persistent spatial memories Yuri Dabaghian, Andrey Babichev P20 Theory of population coupling and applications to describe high order correlations in large populations of interacting neurons Haiping Huang P21 Design of biologically-realistic simulations for motor control Sergio Verduzco-Flores P22 Towards understanding the functional impact of the behavioural variability of neurons Filipa Dos Santos, Peter Andras P23 Different oscillatory dynamics underlying gamma entrainment deficits in schizophrenia Christoph Metzner, Achim Schweikard, Bartosz Zurowski P24 Memory recall and spike frequency adaptation James P. Roach, Leonard M. Sander, Michal R. Zochowski P25 Stability of neural networks and memory consolidation preferentially occur near criticality Quinton M. Skilling, Nicolette Ognjanovski, Sara J. Aton, Michal Zochowski P26 Stochastic Oscillation in Self-Organized Critical States of Small Systems: Sensitive Resting State in Neural Systems Sheng-Jun Wang, Guang Ouyang, Jing Guang, Mingsha Zhang, K. Y. 
Michael Wong, Changsong Zhou P27 Neurofield: a C++ library for fast simulation of 2D neural field models Peter A. Robinson, Paula Sanz-Leon, Peter M. Drysdale, Felix Fung, Romesh G. Abeysuriya, Chris J. Rennie, Xuelong Zhao P28 Action-based grounding: Beyond encoding/decoding in neural code Yoonsuck Choe, Huei-Fang Yang P29 Neural computation in a dynamical system with multiple time scales Yuanyuan Mi, Xiaohan Lin, Si Wu P30 Maximum entropy models for 3D layouts of orientation selectivity Joscha Liedtke, Manuel Schottdorf, Fred Wolf P31 A behavioral assay for probing computations underlying curiosity in rodents Yoriko Yamamura, Jeffery R. Wickens P32 Using statistical sampling to balance error function contributions to optimization of conductance-based models Timothy Rumbell, Julia Ramsey, Amy Reyes, Danel Draguljić, Patrick R. Hof, Jennifer Luebke, Christina M. Weaver P33 Exploration and implementation of a self-growing and self-organizing neuron network building algorithm Hu He, Xu Yang, Hailin Ma, Zhiheng Xu, Yuzhe Wang P34 Disrupted resting state brain network in obese subjects: a data-driven graph theory analysis Kwangyeol Baek, Laurel S. Morris, Prantik Kundu, Valerie Voon P35 Dynamics of cooperative excitatory and inhibitory plasticity Everton J. Agnes, Tim P. Vogels P36 Frequency-dependent oscillatory signal gating in feed-forward networks of integrate-and-fire neurons William F. Podlaski, Tim P. Vogels P37 Phenomenological neural model for adaptation of neurons in area IT Martin Giese, Pradeep Kuravi, Rufin Vogels P38 ICGenealogy: towards a common topology of neuronal ion channel function and genealogy in model and experiment Alexander Seeholzer, William Podlaski, Rajnish Ranjan, Tim Vogels P39 Temporal input discrimination from the interaction between dynamic synapses and neural subthreshold oscillations Joaquin J. Torres, Fabiano Baroni, Roberto Latorre, Pablo Varona P40 Different roles for transient and sustained activity during active visual processing Bart Gips, Eric Lowet, Mark J. Roberts, Peter de Weerd, Ole Jensen, Jan van der Eerden P41 Scale-free functional networks of 2D Ising model are highly robust against structural defects: neuroscience implications Abdorreza Goodarzinick, Mohammad D. Niry, Alireza Valizadeh P42 High frequency neuron can facilitate propagation of signal in neural networks Aref Pariz, Shervin S. Parsi, Alireza Valizadeh P43 Investigating the effect of Alzheimer’s disease related amyloidopathy on gamma oscillations in the CA1 region of the hippocampus Julia M. Warburton, Lucia Marucci, Francesco Tamagnini, Jon Brown, Krasimira Tsaneva-Atanasova P44 Long-tailed distributions of inhibitory and excitatory weights in a balanced network with eSTDP and iSTDP Florence I. Kleberg, Jochen Triesch P45 Simulation of EMG recording from hand muscle due to TMS of motor cortex Bahar Moezzi, Nicolangelo Iannella, Natalie Schaworonkow, Lukas Plogmacher, Mitchell R. Goldsworthy, Brenton Hordacre, Mark D. McDonnell, Michael C. 
Ridding, Jochen Triesch P46 Structure and dynamics of axon network formed in primary cell culture Martin Zapotocky, Daniel Smit, Coralie Fouquet, Alain Trembleau P47 Efficient signal processing and sampling in random networks that generate variability Sakyasingha Dasgupta, Isao Nishikawa, Kazuyuki Aihara, Taro Toyoizumi P48 Modeling the effect of riluzole on bursting in respiratory neural networks Daniel T", "title": "" }, { "docid": "c90f5a4a34bb7998208c4c134bbab327", "text": "Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and crossdomain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily. github.io/seq2sql/spider.", "title": "" } ]
scidocsrr
5b39828c1b3144c7d50aae3c90509a04
Advertising ecotourism on the internet: commodifying environment and culture
[ { "docid": "654b7a674977969237301cd874bda5d1", "text": "This paper and its successor examine the gap between ecotourism theory as revealed in the literature and ecotourism practice as indicated by its on-site application. A framework is suggested which, if implemented through appropriate management, can help to achieve a balance between conservation and development through the promotion of synergistic relationships between natural areas, local populations and tourism. The framework can also be used to assess the status of ecotourism at particular sites. ( 1999 Published by Elsevier Science Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "4c012653b3e5f1ba1cc057c091283503", "text": "In this paper we present an approach for segmenting objects in videos taken in complex scenes with multiple and different targets. The method does not make any specific assumptions about the videos and relies on how objects are perceived by humans according to Gestalt laws. Initially, we rapidly generate a coarse foreground segmentation, which provides predictions about motion regions by analyzing how superpixel segmentation changes in consecutive frames. We then exploit these location priors to refine the initial segmentation by optimizing an energy function based on appearance and perceptual organization, only on regions where motion is observed. We evaluated our method on complex and challenging video sequences and it showed significant performance improvements over recent state-of-the-art methods, being also fast enough to be used for “on-the-fly” processing.", "title": "" }, { "docid": "26f71c28c1346e80bac0e39d84e99206", "text": "The objective of the article is to highlight various roles of glutamic acid like endogenic anticancer agent, conjugates to anticancer agents, and derivatives of glutamic acid as possible anticancer agents. Besides these emphases are given especially for two endogenous derivatives of glutamic acid such as glutamine and glutamate. Glutamine is a derivative of glutamic acid and is formed in the body from glutamic acid and ammonia in an energy requiring reaction catalyzed by glutamine synthase. It also possesses anticancer activity. So the transportation and metabolism of glutamine are also discussed for better understanding the role of glutamic acid. Glutamates are the carboxylate anions and salts of glutamic acid. Here the roles of various enzymes required for the metabolism of glutamates are also discussed.", "title": "" }, { "docid": "4e11d69f17272fdeaf03be2db4b7e982", "text": "We present a method for spotting words in the wild, i.e., in real images taken in unconstrained environments. Text found in the wild has a surprising range of difficulty. At one end of the spectrum, Optical Character Recognition (OCR) applied to scanned pages of well formatted printed text is one of the most successful applications of computer vision to date. At the other extreme lie visual CAPTCHAs – text that is constructed explicitly to fool computer vision algorithms. Both tasks involve recognizing text, yet one is nearly solved while the other remains extremely challenging. In this work, we argue that the appearance of words in the wild spans this range of difficulties and propose a new word recognition approach based on state-of-the-art methods from generic object recognition, in which we consider object categories to be the words themselves. We compare performance of leading OCR engines – one open source and one proprietary – with our new approach on the ICDAR Robust Reading data set and a new word spotting data set we introduce in this paper: the Street View Text data set. We show improvements of up to 16% on the data sets, demonstrating the feasibility of a new approach to a seemingly old problem.", "title": "" }, { "docid": "65f4e93ac371d72b93c40f4fe9215805", "text": "Trie memory is a way of storing and retrieving information. ~ It is applicable to information that consists of function-argument (or item-term) pairs--information conventionally stored in unordered lists, ordered lists, or pigeonholes. 
The main advantages of trie memory over the other memory plans just mentioned are shorter access time, greater ease of addition or up-dating, greater convenience in handling arguments of diverse lengths, and the ability to take advantage of redundancies in the information stored. The main disadvantage is relative inefficiency in using storage space, but this inefficiency is not great when the store is large. In this paper several paradigms of trie memory are described and compared with other memory paradigms, their advantages and disadvantages are examined in detail, and applications are discussed. Many essential features of trie memory were mentioned by de la Briandais [1] in a paper presented to the Western Joint Computer Conference in 1959. The present development is essentially independent of his, having been described in memorandum form in January 1959 [2], and it is fuller in that it considers additional paradigms (finite-dimensional trie memories) and includes experimental results bearing on the efficiency of utilization of storage space.", "title": "" }, { "docid": "4b5fcc37951db384c83b67b476ae8cdc", "text": "The purpose of this paper is to examine the applicability of Internal Marketing (IM) factors in relation to the Internal Service Quality (ISQ) of the company. In this study, Pos Malaysia Berhad, a Malaysian postal company, was selected to investigate how the Internal Marketing element and the anticipated Internal Service Quality was implemented in the company. The research applies six internal marketing practices namely, employee motivation, effective communication, employee selection, employment development, support system and healthy work environment. A total of 103 respondents were surveyed as well as a series of interviews were conducted. The data collected was analysed quantitatively by using the factor analysis and multiple regression analysis. The results indicated that there was a positive relationship between the elements of the Internal Marketing and the anticipated Internal Service Quality in variable magnitude. Research findings showed that the support system brought about the most changes in Internal Service Quality.", "title": "" }, { "docid": "442504997ef102d664081b390ff09dd3", "text": "An intelligent traffic management system (E-Traffic Warden) is proposed, using image processing techniques along with smart traffic control algorithm. Traffic recognition was achieved using cascade classifier for vehicle recognition utilizing Open CV and Visual Studio C/C++. The classifier was trained on 700 positive samples and 1140 negative samples. The results show that the accuracy of vehicle detection is approximately 93 percent. The count of vehicles at all approaches of intersection is used to estimate traffic. Traffic build up is then avoided or resolved by passing the extracted data to traffic control algorithm. The control algorithm shows approximately 86% improvement over Fixed-Delay controller in worst case scenarios.", "title": "" }, { "docid": "d0f0fb3b2e78b8e8a549b0c0b23978a4", "text": "In flash memory-based storage, a Flash Translation Layer (FTL) manages the mapping between the logical addresses of a file system and the physical addresses of the flash memory. When a journaling file system is set up on the FTL, the consistency of the file system is guaranteed by duplications of the same file system changes in both the journal region of the file system and the home locations of the changes. 
However, these duplications inevitably degrade the performance of the file system. In this article we present an efficient FTL, called JFTL, based on a journal remapping technique. The FTL uses an address mapping method to write all the data to a new region in a process known as an out-of-place update. Because of this process, the existing data in flash memory is not overwritten by such an update. By using this characteristic of the FTL, the JFTL remaps addresses of the logged file system changes to addresses of the home locations of the changes, instead of writing the changes once more to flash memory. Thus, the JFTL efficiently eliminates redundant data in the flash memory as well as preserving the consistency of the journaling file system. Our experiments confirm that, when associated with a writeback or ordered mode of a conventional EXT3 file system, the JFTL enhances the performance of EXT3 by up to 20%. Furthermore, when the JFTL operates with a journaled mode of EXT3, there is almost a twofold performance gain in many cases. Moreover, the recovery performance of the JFTL is much better than that of the FTL.", "title": "" }, { "docid": "d3ae7f70b1d3fb1fbbf5fe9cd1a33bc8", "text": "Due to significant advances in SAT technology in the last years, its use for solving constraint satisfaction problems has been gaining wide acceptance. Solvers for satisfiability modulo theories (SMT) generalize SAT solving by adding the ability to handle arithmetic and other theories. Although there are results pointing out the adequacy of SMT solvers for solving CSPs, there are no available tools to extensively explore such adequacy. For this reason, in this paper we introduce a tool for translating FLATZINC (MINIZINC intermediate code) instances of CSPs to the standard SMT-LIB language. We provide extensive performance comparisons between state-of-the-art SMT solvers and most of the available FLATZINC solvers on standard FLATZINC problems. The obtained results suggest that state-of-the-art SMT solvers can be effectively used to solve CSPs.", "title": "" }, { "docid": "82ff2197019f2fbe6285349b4ed43ac7", "text": "OBJECTIVES\nUsing data from a regional census of high school students, we have documented the prevalence of cyberbullying and school bullying victimization and their associations with psychological distress.\n\n\nMETHODS\nIn the fall of 2008, 20,406 ninth- through twelfth-grade students in MetroWest Massachusetts completed surveys assessing their bullying victimization and psychological distress, including depressive symptoms, self-injury, and suicidality.\n\n\nRESULTS\nA total of 15.8% of students reported cyberbullying and 25.9% reported school bullying in the past 12 months. A majority (59.7%) of cyberbullying victims were also school bullying victims; 36.3% of school bullying victims were also cyberbullying victims. Victimization was higher among nonheterosexually identified youths. Victims report lower school performance and school attachment. Controlled analyses indicated that distress was highest among victims of both cyberbullying and school bullying (adjusted odds ratios [AORs] were from 4.38 for depressive symptoms to 5.35 for suicide attempts requiring medical treatment). 
Victims of either form of bullying alone also reported elevated levels of distress.\n\n\nCONCLUSIONS\nOur findings confirm the need for prevention efforts that address both forms of bullying and their relation to school performance and mental health.", "title": "" }, { "docid": "1872c5cc4638a525517940e606e9db2f", "text": "Cyclic Redundancy Check is playing a vital role in the networking environment to detect the errors. With challenging speed of transmitting data and to synchronize with speed, it’s necessary to increase speed of CRC generation. This paper presents 64 bits parallel CRC architecture based on F-matrix with order of generator polynomial is 32. Implemented design is hardware efficient and requires 50% less cycles to generate CRC with same order of generator polynomial. CRC32 bit is used in Ethernet frame for error detection. The whole design is functionally developed and verified using Xilinx ISE 12.3i Simulator.", "title": "" }, { "docid": "18caf39ce8802f69a463cc1a4b276679", "text": "In this thesis we describe the formal verification of a fully IEEE compliant floating point unit (FPU). The hardware is verified on the gate-level against a formalization of the IEEE standard. The verification is performed using the theorem proving system PVS. The FPU supports both single and double precision floating point numbers, normal and denormal numbers, all four IEEE rounding modes, and exceptions as required by the standard. Beside the verification of the combinatorial correctness of the FPUs we pipeline the FPUs to allow the integration into an out-of-order processor. We formally define the correctness criterion the pipelines must obey in order to work properly within the processor. We then describe a new methodology based on combining model checking and theorem proving for the verification of the pipelines.", "title": "" }, { "docid": "930cc322737ea975cd077dcec2935f4d", "text": "Metaphor is one of the most studied and widespread figures of speech and an essential element of individual style. In this paper we look at metaphor identification in Adjective-Noun pairs. We show that using a single neural network combined with pre-trained vector embeddings can outperform the state of the art in terms of accuracy. In specific, the approach presented in this paper is based on two ideas: a) transfer learning via using pre-trained vectors representing adjective noun pairs, and b) a neural network as a model of composition that predicts a metaphoricity score as output. We present several different architectures for our system and evaluate their performances. Variations on dataset size and on the kinds of embeddings are also investigated. We show considerable improvement over the previous approaches both in terms of accuracy and w.r.t the size of annotated training data.", "title": "" }, { "docid": "bdd56cd8b9ec6dcdc6ff87fa5bed80ac", "text": "The battery is a fundamental component of electric vehicles, which represent a step forward towards sustainable mobility. Lithium chemistry is now acknowledged as the technology of choice for energy storage in electric vehicles. However, several research points are still open. They include the best choice of the cell materials and the development of electronic circuits and algorithms for a more effective battery utilization. This paper initially reviews the most interesting modeling approaches for predicting the battery performance and discusses the demanding requirements and standards that apply to ICs and systems for battery management. 
Then, a general and flexible architecture for battery management implementation and the main techniques for state-of-charge estimation and charge balancing are reported. Finally, we describe the design and implementation of an innovative BMS, which incorporates an almost fully-integrated active charge equalizer.", "title": "" }, { "docid": "29b257283d31750828e4ccd0fbadd1dc", "text": "A multiplicity of autonomous terminals simultaneously transmits data streams to a compact array of antennas. The array uses imperfect channel-state information derived from transmitted pilots to extract the individual data streams. The power radiated by the terminals can be made inversely proportional to the square-root of the number of base station antennas with no reduction in performance. In contrast if perfect channel-state information were available the power could be made inversely proportional to the number of antennas. A maximum-ratio combining receiver normally performs worse than a zero-forcing receiver. However as power levels are reduced, the cross-talk introduced by the inferior maximum-ratio receiver eventually falls below the noise level and this simple receiver becomes a viable option.", "title": "" }, { "docid": "6800b7749dcc39020de70a25c167cab1", "text": "Emotion recognition from speech has emerged as an important research area in the recent past. In this regard, review of existing work on emotional speech processing is useful for carrying out further research. In this paper, the recent literature on speech emotion recognition has been presented considering the issues related to emotional speech corpora, different types of speech features and models used for recognition of emotions from speech. Thirty two representative speech databases are reviewed in this work from point of view of their language, number of speakers, number of emotions, and purpose of collection. The issues related to emotional speech databases used in emotional speech recognition are also briefly discussed. Literature on different features used in the task of emotion recognition from speech is presented. The importance of choosing different classification models has been discussed along with the review. The important issues to be considered for further emotion recognition research in general and in specific to the Indian context have been highlighted where ever necessary.", "title": "" }, { "docid": "4129d2906d3d3d96363ff0812c8be692", "text": "In this paper, we propose a picture recommendation system built on Instagram, which facilitates users to query correlated pictures by keying in hashtags or clicking images. Users can access the value-added information (or pictures) on Instagram through the recommendation platform. In addition to collecting available hashtags using the Instagram API, the system also uses the Free Dictionary to build the relationships between all the hashtags in a knowledge base. Thus, two kinds of correlations can be provided for a query in the system; i.e., user-defined correlation and system-defined correlation. Finally, the experimental results show that users have good satisfaction degrees with both user-defined correlation and system-defined correlation methods.", "title": "" }, { "docid": "9f2db5cf1ee0cfd0250e68bdbc78b434", "text": "A novel transverse equivalent network is developed in this letter to efficiently analyze a recently proposed leaky-wave antenna in substrate integrated waveguide (SIW) technology. 
For this purpose, precise modeling of the SIW posts for any distance between vias is essential to obtain accurate results. A detailed parametric study is performed resulting in leaky-mode dispersion curves as a function of the main geometrical dimensions of the antenna. Finally, design curves that directly provide the requested dimensions to synthesize the desired scanning response and leakage rate are reported and validated with experiments.", "title": "" }, { "docid": "cd8730540d95c482b27b01f725777737", "text": "We introduce Baseline: a library for reproducible deep learning research and fast model development for NLP. The library provides easily extensible abstractions and implementations for data loading, model development, training and export of deep learning architectures. It also provides implementations for simple, high-performance, deep learning models for various NLP tasks, against which newly developed models can be compared. Deep learning experiments are hard to reproduce, Baseline provides functionalities to track them. The goal is to allow a researcher to focus on model development, delegating the repetitive tasks to the library.", "title": "" }, { "docid": "6eace0f6216d17b9041f1bed42459c40", "text": "Predicting possible code-switching points can help develop more accurate methods for automatically processing mixed-language text, such as multilingual language models for speech recognition systems and syntactic analyzers. We present in this paper exploratory results on learning to predict potential codeswitching points in Spanish-English. We trained different learning algorithms using a transcription of code-switched discourse. To evaluate the performance of the classifiers, we used two different criteria: 1) measuring precision, recall, and F-measure of the predictions against the reference in the transcription, and 2) rating the naturalness of artificially generated code-switched sentences. Average scores for the code-switched sentences generated by our machine learning approach were close to the scores of those generated by humans.", "title": "" }, { "docid": "c6360e4f9704d362d37d9da6146bd51e", "text": "There are several tools and models found in machine learning that can be used to forecast a certain time series; however, it is not always clear which model is appropriate for selection, as different models are suited for different types of data, and domain-specific transformations and considerations are usually required. This research aims to examine the issue by modeling four types of machineand deep learning algorithms support vector machine, random forest, feed-forward neural network, and a LSTM neural network on a high-variance, multivariate time series to forecast trend changes one time step in the future, accounting for lag. The models were trained on clinical trial data of patients in an alcohol addiction treatment plan provided by a Uppsala-based company. The results showed moderate performance differences, with a concern that the models were performing a random walk or naive forecast. Further analysis was able to prove that at least one model, the feed-forward neural network, was not undergoing this and was able to make meaningful forecasts one time step into the future. In addition, the research also examined the effect of optimization processes by comparing a grid search, a random search, and a Bayesian optimization process. 
In all cases, the grid search found the lowest minima, though its slow runtimes were consistently beaten by Bayesian optimization, which contained only slightly lower performances than the grid search. Key words— Data science, alcohol abuse, time series, forecasting, machine learning, deep learning, neural networks, regression", "title": "" } ]
scidocsrr
38204d4181479627389dde284177293b
Fully Homomorphic Encryption for Classification in Machine Learning
[ { "docid": "afd3bdd971c272583c1a24b3e1a331b6", "text": "Machine learning classification is used for numerous tasks nowadays, such as medical or genomics predictions, spam detection, face recognition, and financial predictions. Due to privacy concerns, in some of these applications, it is important that the data and the classifier remain confidential. In this work, we construct three major classification protocols that satisfy this privacy constraint: hyperplane decision, Naïve Bayes, and decision trees. We also enable these protocols to be combined with AdaBoost. At the basis of these constructions is a new library of building blocks, which enables constructing a wide range of privacy-preserving classifiers; we demonstrate how this library can be used to construct other classifiers than the three mentioned above, such as a multiplexer and a face detection classifier. We implemented and evaluated our library and our classifiers. Our protocols are efficient, taking milliseconds to a few seconds to perform a classification when running on real medical datasets.", "title": "" } ]
[ { "docid": "f051a9937aa9e48524a75c24ec496526", "text": "A new voltage-programmed pixel circuit using hydrogenated amorphous silicon (a-Si:H) thin-film transistors (TFTs) for active-matrix organic light-emitting diodes (AMOLEDs) is presented. In addition to compensating for the shift in threshold voltage of TFTs, the circuit is capable of compensating for OLED luminance degradation by employing the shift in OLED voltage as a feedback of OLED degradation", "title": "" }, { "docid": "1d78bd02fbf7be1bac964ff934c766de", "text": "Recently, some publications indicated that the generative modeling approaches, i.e., topic models, achieved appreciated performance on multi-label classification, especially for skewed data sets. In this paper, we develop two supervised topic models for multi-label classification problems. The two models, i.e., Frequency-LDA (FLDA) and Dependency-Frequency-LDA (DFLDA), extend Latent Dirichlet Allocation (LDA) via two observations, i.e., the frequencies of the labels and the dependencies among different labels. We train the models by the Gibbs sampler algorithm. The experiment results on well known collections demonstrate that our two models outperform the state-of-the-art approaches. & 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "972fe2e08a7317a674115e361eed898f", "text": "The topic of emotions in the workplace is beginning to garner closer attention by researchers and theorists. The study of emotional labor addresses the stress of managing emotions when the work role demands that certain expressions be shown to customers. However, there has been no overarching framework to guide this work, and the previous studies have often disagreed on the definition and operationalization of emotional labor. The purposes of this article are as follows: to review and compare previous perspectives of emotional labor, to provide a definition of emotional labor that integrates these perspectives, to discuss emotion regulation as a guiding theory for understanding the mechanisms of emotional labor, and to present a model of emotional labor that includes individual differences (such as emotional intelligence) and organizational factors (such as supervisor support).", "title": "" }, { "docid": "4ebe34d74eef1753dffdffc75ca266d2", "text": "Serial robots and parallel robots have their own pros and cons. While hybrid robots consisting of both of them are possible and expected to retain their merits and minimize the disadvantages. The Delta-RST presented here is such a hybrid robot built up by integrating a 3-DoFs traditional Delta parallel structure and a 3-DoFs RST robotic wrist. In this paper, we focus on its kinematics analysis and its applications in industry. Firstly, the robotic system of the Delta-RST will be described briefly. Then the complete and systemic kinematics of this kind of robot will be presented in detail, followed by simulations and applications to demonstrate the correctness of the analysis, as well as the effectiveness of the developed robotic system. The closed-form kinematic analysis results are universal for similar hybrid robots constructing with the Delta parallel mechanism and serial chains. Copyright © 2014 IFSA Publishing, S. 
L.", "title": "" }, { "docid": "258da62ca5b12f01de336c6db3acfd8c", "text": "The explosive growth of the internet and electronic publishing has led to a huge number of scientific documents being available to users, however, they are usually inaccessible to those with visual impairments and often only partially compatible with software and modern hardware such as tablets and e-readers. In this paper we revisit Maxtract, a tool for analysing and converting documents into accessible formats, and combine it with two advanced segmentation techniques, statistical line identification and machine learning formula identification. We show how these advanced techniques improve the quality of both Maxtract's underlying document analysis and its output. We re-run and compare experimental results over a number of datasets, presenting a qualitative review of the improved output and drawing conclusions.", "title": "" }, { "docid": "86e4fa3a9cc7dd6298785f40dae556b6", "text": "Stochastic block model (SBM) and its variants are popular models used in community detection for network data. In this paper, we propose a feature adjusted stochastic block model (FASBM) to capture the impact of node features on the network links as well as to detect the residual community structure beyond that explained by the node features. The proposed model can accommodate multiple node features and estimate the form of feature impacts from the data. Moreover, unlike many existing algorithms that are limited to binary-valued interactions, the proposed FASBM model and inference approaches are easily applied to relational data that generates from any exponential family distribution. We illustrate the methods on simulated networks and on two real world networks: a brain network and an US air-transportation network.", "title": "" }, { "docid": "341b6ae3f5cf08b89fb573522ceeaba1", "text": "Neural parsers have benefited from automatically labeled data via dependencycontext word embeddings. We investigate training character embeddings on a word-based context in a similar way, showing that the simple method significantly improves state-of-the-art neural word segmentation models, beating tritraining baselines for leveraging autosegmented data.", "title": "" }, { "docid": "775e3aa5bd4991f227d239e01faf7fad", "text": "We describe METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machineproduced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. We evaluate METEOR by measuring the correlation between the metric scores and human judgments of translation quality. We compute the Pearson R correlation value between its scores and human quality assessments of the LDC TIDES 2003 Arabic-to-English and Chinese-to-English datasets. We perform segment-bysegment correlation, and show that METEOR gets an R correlation value of 0.347 on the Arabic data and 0.331 on the Chinese data. 
This is shown to be an improvement on using simply unigramprecision, unigram-recall and their harmonic F1 combination. We also perform experiments to show the relative contributions of the various mapping modules.", "title": "" }, { "docid": "b25fb8842e1261235ed772d671c30c28", "text": "In this manuscript various components of research are listed and briefly discussed. The topics considered in this write-up cover a part of the research methodology paper of Master of Philosophy (M.Phil.) course and Doctor of Philosophy (Ph.D.) course. The manuscript is intended for students and research scholars of science subjects such as mathematics, physics, chemistry, statistics, biology and computer science. Various stages of research are discussed in detail. Special care has been taken to motivate the young researchers to take up challenging problems. Ten assignment works are given. For the benefit of young researchers a short interview with three eminent scientists is included at the end of the manuscript. Research is a logical and systematic search for new and useful information on a particular topic. It is an investigation of finding solutions to scientific and social problems through objective and systematic analysis. It is a search for knowledge, that is, a discovery of hidden truths. Here knowledge means information about matters. The information might be collected from different sources like experience, human beings, books, journals, nature, etc. A research can lead to new contributions to the existing knowledge. Only through research is it possible to make progress in a field. Research is done with the help of study, experiment, observation, analysis, comparison and reasoning. Research is in fact ubiquitous. For example, we know that cigarette smoking is injurious to health; heroine is addictive; cow dung is a useful source of biogas; malaria is due to the virus protozoan plasmod-ium; AIDS (Acquired Immuno Deficiency Syndrome) is due to the virus HIV (Human Immuno deficiency Virus). How did we know all these? We became aware of all these information only through research. More precisely, it seeks predictions of events and explanations, relationships and theories for them. The prime objectives of research are (1) to discover new facts (2) to verify and test important facts (3) to analyse an event or process or phenomenon to identify the cause and effect relationship (4) to develop new scientific tools, concepts and theories to solve and understand scientific and nonsci-entific problems (5) to find solutions to scientific, nonscientific and social problems and (6) to overcome or solve the problems occurring in our every day life. This is a fundamentally important question. No person would like to do research unless there are some motivating factors. Some of the motivations are the following: (1) to get a research …", "title": "" }, { "docid": "e8792ced13f1be61d031e2b150cc5cf6", "text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. 
Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.", "title": "" }, { "docid": "f32ff72da2f90ed0e5279815b0fb10e0", "text": "We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with an objective to maximizing the overall cell capacity. An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.", "title": "" }, { "docid": "1da19f806430077f7ad957dbeb0cb8d1", "text": "BACKGROUND\nTo date, periorbital melanosis is an ill-defined entity. The condition has been stated to be darkening of the skin around the eyes, dark circles, infraorbital darkening and so on.\n\n\nAIMS\nThis study was aimed at exploring the nature of pigmentation in periorbital melanosis.\n\n\nMETHODS\nOne hundred consecutive patients of periorbital melanosis were examined and investigated to define periorbital melanosis. Extent of periorbital melanosis was determined by clinical examination. Wood's lamp examination was performed in all the patients to determine the depth of pigmentation. A 2-mm punch biopsy was carried out in 17 of 100 patients.\n\n\nRESULTS\nIn 92 (92%) patients periorbital melanosis was an extension of pigmentary demarcation line over the face (PDL-F).\n\n\nCONCLUSION\nPeriorbital melanosis and pigmentary demarcation line of the face are not two different conditions; rather they are two different manifestations of the same disease.", "title": "" }, { "docid": "dad8a6a867af23ecb41d210dcbb3c529", "text": "Net neutrality represents the idea that Internet users are entitled to service that does not discriminate on the basis of source, destination, or ownership of Internet traffic. The United States Congress is considering legislation on net neutrality, and debate over the issue has generated intense lobbying. Congressional action will substantially affect the evolution of the Internet and of future Internet research. In this article, we argue that neither the pro nor anti net neutrality positions are consistent with the philosophy of Internet architecture. 
We develop a net neutrality policy founded on a segmentation of Internet services into infrastructure services and application services, based on the Internet's layered architecture. Our net neutrality policy restricts an Internet service Provider's ability to engage in anticompetitive behavior while simultaneously ensuring that it can use desirable forms of network management. We illustrate the effect of this policy by discussing acceptable and unacceptable uses of network management.", "title": "" }, { "docid": "bf272aa2413f1bc186149e814604fb03", "text": "Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories.", "title": "" }, { "docid": "00f2bb2dd3840379c2442c018407b1c8", "text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.", "title": "" }, { "docid": "65bc99201599ec17347d3fe0857cd39a", "text": "Many children strive to attain excellence in sport. However, although talent identification and development programmes have gained popularity in recent decades, there remains a lack of consensus in relation to how talent should be defined or identified and there is no uniformly accepted theoretical framework to guide current practice. The success rates of talent identification and development programmes have rarely been assessed and the validity of the models applied remains highly debated. 
This article provides an overview of current knowledge in this area with special focus on problems associated with the identification of gifted adolescents. There is a growing agreement that traditional cross-sectional talent identification models are likely to exclude many, especially late maturing, 'promising' children from development programmes due to the dynamic and multidimensional nature of sport talent. A conceptual framework that acknowledges both genetic and environmental influences and considers the dynamic and multidimensional nature of sport talent is presented. The relevance of this model is highlighted and recommendations for future work provided. It is advocated that talent identification and development programmes should be dynamic and interconnected taking into consideration maturity status and the potential to develop rather than to exclude children at an early age. Finally, more representative real-world tasks should be developed and employed in a multidimensional design to increase the efficacy of talent identification and development programmes.", "title": "" }, { "docid": "a6bacaaf64c69a0f205a402873dc665c", "text": "In the incremental conductance (INC) technique, the terminal voltage of the solar array is always adjustable with a power converter system and MPPT controller actions to harvest maximum energy as cost effective and sizeable solutions of power crisis during the load demand. A hard switching buck converter is chosen to achieve the objective and performance improvements of the systems. The efficiency of solar photo voltaic (SPV) systems is smart to capture the maximum available power from different surrounding situations such as solar irradiance and temperature.", "title": "" }, { "docid": "5626f7c767ae20c3b58d2e8fb2b93ba7", "text": "The presentation starts with a philosophical discussion about computer vision in general. The aim is to put the scope of the book into its wider context, and to emphasize why the notion of scale is crucial when dealing with measured signals, such as image data. An overview of different approaches to multi-scale representation is presented, and a number of special properties of scale-space are pointed out. Then, it is shown how a mathematical theory can be formulated for describing image structures at different scales. By starting from a set of axioms imposed on the first stages of processing, it is possible to derive a set of canonical operators, which turn out to be derivatives of Gaussian kernels at different scales. The problem of applying this theory computationally is extensively treated. A scale-space theory is formulated for discrete signals, and it demonstrated how this representation can be used as a basis for expressing a large number of visual operations. Examples are smoothed derivatives in general, as well as different types of detectors for image features, such as edges, blobs, and junctions. In fact, the resulting scheme for feature detection induced by the presented theory is very simple, both conceptually and in terms of practical implementations. Typically, an object contains structures at many different scales, but locally it is not unusual that some of these \"stand out\" and seem to be more significant than others. A problem that we give special attention to concerns how to find such locally stable scales, or rather how to generate hypotheses about interesting structures for further processing. 
It is shown how the scale-space theory, based on a representation called the scale-space primal sketch, allows us to extract regions of interest from an image without prior information about what the image can be expected to contain. Such regions, combined with knowledge about the scales at which they occur, constitute qualitative information, which can be used for guiding and simplifying other low-level processes. Experiments on different types of real and synthetic images demonstrate how the suggested approach can be used for different visual tasks, such as image segmentation, edge detection, junction detection, and focus-of-attention. This work is complemented by a mathematical treatment showing how the behaviour of different types of image structures in scale-space can be analysed theoretically.", "title": "" }, { "docid": "1468a09c57b2d83181de06236386d323", "text": "This article provides an overview of the pathogenesis of type 2 diabetes mellitus. Discussion begins by describing normal glucose homeostasis and ingestion of a typical meal and then discusses glucose homeostasis in diabetes. Topics covered include insulin secretion in type 2 diabetes mellitus and insulin resistance, the site of insulin resistance, the interaction between insulin sensitivity and secretion, the role of adipocytes in the pathogenesis of type 2 diabetes, cellular mechanisms of insulin resistance including glucose transport and phosphorylation, glycogen synthesis, glucose oxidation, glycolysis, and insulin signaling.", "title": "" }, { "docid": "d34759a882df6bc482b64530999bcda3", "text": "The Static Single Assignment (SSA) form is a program representation used in many optimizing compilers. The key step in converting a program to SSA form is called φ-placement. Many algorithms for φ-placement have been proposed in the literature, but the relationships between these algorithms are not well understood. In this article, we propose a framework within which we systematically derive (i) properties of the SSA form and (ii) φ-placement algorithms. This framework is based on a new relation called merge which captures succinctly the structure of a program's control flow graph that is relevant to its SSA form. The φ-placement algorithms we derive include most of the ones described in the literature, as well as several new ones. We also evaluate experimentally the performance of some of these algorithms on the SPEC92 benchmarks. Some of the algorithms described here are optimal for a single variable. However, their repeated application is not necessarily optimal for multiple variables. We conclude the article by describing such an optimal algorithm, based on the transitive reduction of the merge relation, for multi-variable φ-placement in structured programs. The problem for general programs remains open.", "title": "" } ]
scidocsrr
0eefd65508612e85e96f9a984d4621ae
Adversarial Advantage Actor-Critic Model for Task-Completion Dialogue Policy Learning
[ { "docid": "edd6fb76f672e00b14935094cb0242d0", "text": "Despite widespread interests in reinforcement-learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for taskoriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework.", "title": "" }, { "docid": "a0d6e020f230e872957ae00ed258b2b1", "text": "This paper presents an end-to-end framework for task-oriented dialog systems using a variant of Deep Recurrent QNetworks (DRQN). The model is able to interface with a relational database and jointly learn policies for both language understanding and dialog strategy. Moreover, we propose a hybrid algorithm that combines the strength of reinforcement learning and supervised learning to achieve faster learning speed. We evaluated the proposed model on a 20 Question Game conversational game simulator. Results show that the proposed method outperforms the modular-based baseline and learns a distributed representation of the latent dialog state.", "title": "" }, { "docid": "fec5391c20850ceea7b470c9a9faa09c", "text": "When rewards are sparse and action spaces large, Q-learning with -greedy exploration can be inefficient. This poses problems for otherwise promising applications such as task-oriented dialogue systems, where the primary reward signal, indicating successful completion of a task, requires a complex sequence of appropriate actions. Under these circumstances, a randomly exploring agent might never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialogue systems. First, we introduce an exploration technique based on Thompson sampling, drawing Monte Carlo samples from a Bayes-by-backprop neural network, demonstrating marked improvement over common approaches such as -greedy and Boltzmann exploration. Second, we show that spiking the replay buffer with experiences from a small number of successful episodes, as are easy to harvest for dialogue tasks, can make Q-learning feasible when it might otherwise fail.", "title": "" } ]
[ { "docid": "9a6fba937355e35d5e156d5fd3e1e0c9", "text": "We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e. zero-pair translation). We propose mix and match networks, based on multiple encoders and decoders aligned in such a way that other encoder-decoder pairs can be composed at test time to perform unseen image translation tasks between domains or modalities for which explicit paired samples were not seen during training. We study the impact of autoencoders, side information and losses in improving the alignment and transferability of trained pairwise translation models to unseen translations. We show our approach is scalable and can perform colorization and style transfer between unseen combinations of domains. We evaluate our system in a challenging cross-modal setting where semantic segmentation is estimated from depth images, without explicit access to any depth-semantic segmentation training pairs. Our model outperforms baselines based on pix2pix and CycleGAN models.", "title": "" }, { "docid": "9ece8dd1905fe0cba49d0fa8c1b21c62", "text": "This paper describes the origins and history of multiple resource theory in accounting for di€ erences in dual task interference. One particular application of the theory, the 4-dimensional multiple resources model, is described in detail, positing that there will be greater interference between two tasks to the extent that they share stages (perceptual/cognitive vs response) sensory modalities (auditory vs visual), codes (visual vs spatial) and channels of visual information (focal vs ambient). A computational rendering of this model is then presented. Examples are given of how the model predicts interference di€ erences in operational environments. Finally, three challenges to the model are outlined regarding task demand coding, task allocation and visual resource competition.", "title": "" }, { "docid": "c91578cf52a01e23bd8229d02d2d9a07", "text": "This paper explores the effectiveness of machine learning techniques in detecting firms that issue fraudulent financial statements (FFS) and deals with the identification of factors associated to FFS. To this end, a number of experiments have been conducted using representative learning algorithms, which were trained using a data set of 164 fraud and non-fraud Greek firms in the recent period 2001-2002. The decision of which particular method to choose is a complicated problem. A good alternative to choosing only one method is to create a hybrid forecasting system incorporating a number of possible solution methods as components (an ensemble of classifiers). For this purpose, we have implemented a hybrid decision support system that combines the representative algorithms using a stacking variant methodology and achieves better performance than any examined simple and ensemble method. To sum up, this study indicates that the investigation of financial information can be used in the identification of FFS and underline the importance of financial ratios. Keywords—Machine learning, stacking, classifier.", "title": "" }, { "docid": "4a1de61e9e74aa43a4e0bf195250ef72", "text": "We present in this paper a system for converting PDF legacy documents into structured XML format. This conversion system first extracts the different streams contained in PDF files (text, bitmap and vectorial images) and then applies different components in order to express in XML the logically structured documents. 
Some of these components are traditional in Document Analysis, other more specific to PDF. We also present a graphical user interface in order to check, correct and validate the analysis of the components. We eventually report on two real user cases where this system was applied on.", "title": "" }, { "docid": "db3bb02dde6c818b173cf12c9c7440b7", "text": "PURPOSE\nThe authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training? and (2) What challenges and opportunities specific to social media have educators encountered in implementing these interventions?\n\n\nMETHOD\nThe authors searched the MEDLINE, CINAHL, ERIC, Embase, PsycINFO, ProQuest, Cochrane Library, Web of Science, and Scopus databases (from the start of each through September 12, 2011) using keywords related to social media and medical education. Two authors independently reviewed the search results to select peer-reviewed, English-language articles discussing social media use in educational interventions at any level of physician training. They assessed study quality using the Medical Education Research Study Quality Instrument.\n\n\nRESULTS\nFourteen studies met inclusion criteria. Interventions using social media tools were associated with improved knowledge (e.g., exam scores), attitudes (e.g., empathy), and skills (e.g., reflective writing). The most commonly reported opportunities related to incorporating social media tools were promoting learner engagement (71% of studies), feedback (57%), and collaboration and professional development (both 36%). The most commonly cited challenges were technical issues (43%), variable learner participation (43%), and privacy/security concerns (29%). Studies were generally of low to moderate quality; there was only one randomized controlled trial.\n\n\nCONCLUSIONS\nSocial media use in medical education is an emerging field of scholarship that merits further investigation. Educators face challenges in adapting new technologies, but they also have opportunities for innovation.", "title": "" }, { "docid": "d18fc16268e6853cef5002c147ae9827", "text": "Ant Colony Extended (ACE) is a novel algorithm belonging to the general Ant Colony Optimisation (ACO) framework. Two specific features of ACE are: The division of tasks between two kinds of ants, namely patrollers and foragers, and the implementation of a regulation policy to control the number of each kind of ant during the searching process. This paper explores the performance of ACE in the context of the Travelling Salesman Problem (TSP), a classical combinatorial optimisation problem. The results are compared with the results of two well known ACO algorithms: ACS and MMAS.", "title": "" }, { "docid": "493c45304bd5b7dd1142ace56e94e421", "text": "While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. 
This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.", "title": "" }, { "docid": "87f0a390580c452d77fcfc7040352832", "text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:", "title": "" }, { "docid": "291cd57d99ae4e334af0e7caff06fa6c", "text": "This paper presents a lane detection method to screen data using the vanishing point according to the perspective feature of the camera. The line data is obtained after Canny and Hough transform of the raw image. The filter conditions are created according to the vanishing point and other location features. The algorithm saves the detected lane and vanishing points in near history. The algorithm clusters and integrates to determine the detection output according to the historical data. Finally, according to the output, a new vanishing point is fitted for the next circuit. The experiments indicate that the algorithm detects the lane accurately and is of parameters self-adaptability and robustness, while improving the utilization of the internal data.", "title": "" }, { "docid": "5cd48ee461748d989c40f8e0f0aa9581", "text": "Being able to identify which rhetorical relations (e.g., contrast or explanation) hold between spans of text is important for many natural language processing applications. Using machine learning to obtain a classifier which can distinguish between different relations typically depends on the availability of manually labelled training data, which is very time-consuming to create. However, rhetorical relations are sometimes lexically marked, i.e., signalled by discourse markers (e.g., because, but, consequently etc.), and it has been suggested (Marcu and Echihabi, 2002) that the presence of these cues in some examples can be exploited to label them automatically with the corresponding relation. The discourse markers are then removed and the automatically labelled data are used to train a classifier to determine relations even when no discourse marker is present (based on other linguistic cues such as word co-occurrences). In this paper, we investigate empirically how feasible this approach is. In particular, we test whether automatically labelled, lexically marked examples are really suitable training material for classifiers that are then applied to unmarked examples. 
Our results suggest that training on this type of data may not be such a good strategy, as models trained in this way do not seem to generalise very well to unmarked data. Furthermore, we found some evidence that this behaviour is largely independent of the classifiers used and seems to lie in the data itself (e.g., marked and unmarked examples may be too dissimilar linguistically and removing unambiguous markers in the automatic labelling process may lead to a meaning shift in the examples).", "title": "" }, { "docid": "0d7816bde9b27e9b82797653d3e068b1", "text": "We introduce an ultrasonic sensor system that measures artificial potential fields (APF’s) directly. The APF is derived from the traveling-times of the transmitted pulses. Advantages of the sensor are that it needs only three transducers, that its design is simple, and that it measures a quantity that can be used directly for simple navigation, such as collision avoidance.", "title": "" }, { "docid": "f89b282f58ac28975285a24194c209f2", "text": "Creating pixel art is a laborious process that requires artists to place individual pixels by hand. Although many image editors provide vector-to-raster conversions, the results produced do not meet the standards of pixel art: artifacts such as jaggies or broken lines frequently occur. We describe a novel Pixelation algorithm that rasterizes vector line art while adhering to established conventions used by pixel artists. We compare our results through a user study to those generated by Adobe Illustrator and Photoshop, as well as hand-drawn samples by both amateur and professional pixel artists.", "title": "" }, { "docid": "6d380dc3fe08d117c090120b3398157b", "text": "Conversational interfaces are likely to become more efficient, intuitive and engaging way for human-computer interaction than today’s text or touch-based interfaces. Current research efforts concerning conversational interfaces focus primarily on question answering functionality, thereby neglecting support for search activities beyond targeted information lookup. Users engage in exploratory search when they are unfamiliar with the domain of their goal, unsure about the ways to achieve their goals, or unsure about their goals in the first place. Exploratory search is often supported by approaches from information visualization. However, such approaches cannot be directly translated to the setting of conversational search. In this paper we investigate the affordances of interactive storytelling as a tool to enable exploratory search within the framework of a conversational interface. Interactive storytelling provides a way to navigate a document collection in the pace and order a user prefers. In our vision, interactive storytelling is to be coupled with a dialogue-based system that provides verbal explanations and responsive design. We discuss challenges and sketch the research agenda required to put this vision into life.", "title": "" }, { "docid": "ef9437b03a95fc2de438fe32bd2e32b9", "text": "and Creative Modeling Modeling is not simply a process of response mimicry as commonly believed. Modeled judgments and actions may differ in specific content but embody the same rule. For example, a model may deal with moral dilemmas that differ widely in the nature of the activity but apply the same moral standard to them. Modeled activities thus convey rules for generative and innovative behavior. This higher level learning is achieved through abstract modeling. 
Once observers extract the rules underlying the modeled activities they can generate new behaviors that go beyond what they have seen or heard. Creativeness rarely springs entirely from individual inventiveness. A lot of modeling goes on in creativity. By refining preexisting innovations, synthesizing them into new ways and adding novel elements to them something new is created. When exposed to models of differing styles of thinking and behaving, observers vary in what they adopt from the different sources and thereby create new blends of personal characteristics that differ from the individual models (Bandura, Ross & Ross, 1963). Modeling influences that exemplify new perspectives and innovative styles of thinking also foster creativity by weakening conventional mind sets (Belcher, 1975; Harris & Evans, 1973).", "title": "" }, { "docid": "7ddb76cfc048e0cb65be4055fe782af6", "text": "In this work, we present an extended study of image representations for fine-grained classification with respect to image resolution. Understudied in literature, this parameter yet presents many practical and theoretical interests, e.g. in embedded systems where restricted computational resources prevent treating high-resolution images. It is thus interesting to figure out which representation provides the best results in this particular context. On this purpose, we evaluate Fisher Vectors and deep representations on two significant finegrained oriented datasets: FGVC Aircraft [1] and PPMI [2]. We also introduce LR-CNN, a deep structure designed for classification of low-resolution images with strong semantic content. This net provides rich compact features and outperforms both pre-trained deep features and Fisher Vectors.", "title": "" }, { "docid": "91ce5f56a722975d877aa1acbf65554b", "text": "Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.", "title": "" }, { "docid": "428e87162b4cab947f38019782c4991f", "text": "We present a wideband tightly coupled dipole array (TCDA) with integrated balun and a novel superstrate consisting of printed frequency selective surface (FSS) for wide angle scanning. 
Although previous TCDAs have had decent scanning performance up to ±60°, use of dielectric superstrates are usually required, resulting in additional cost and fabrication complexity. In this paper, we replace the bulky dielectric layer(s) with periodic printed elements and yet achieve wide-angle and wideband impedance matching. The proposed approach provides superior performance of 6.1:1 bandwidth (0.5-3.1 GHz) with VSWR <; 3.2 when scanning ±75° in E plane, ±70° in D plane and ±60° in H plane. The FSS, radiating dipoles and feed lines are designed and fabricated on the same vertically oriented printed circuit board, resulting in a low-cost and lightweight structure as compared to other low profile arrays. Measured scanning patterns of a 12 × 12 prototype are presented, showing good agreement with simulations.", "title": "" }, { "docid": "28f6751a043201fd8313944b4f79101f", "text": "FLLL 2 Preface This is a printed collection of the contents of the lecture \" Genetic Algorithms: Theory and Applications \" which I gave first in the winter semester 1999/2000 at the Johannes Kepler University in Linz. The reader should be aware that this manuscript is subject to further reconsideration and improvement. Corrections, complaints, and suggestions are cordially welcome. The sources were manifold: Chapters 1 and 2 were written originally for these lecture notes. All examples were implemented from scratch. The third chapter is a distillation of the books of Goldberg [13] and Hoffmann [15] and a handwritten manuscript of the preceding lecture on genetic algorithms which was given by Andreas Stöckl in 1993 at the Johannes Kepler University. Chapters 4, 5, and 7 contain recent adaptations of previously published material from my own master thesis and a series of lectures which was given by Francisco Herrera and myself at the Second Summer School on Advanced Control at the Slovak Technical University, Bratislava, in summer 1997 [4]. Chapter 6 was written originally, however, strongly influenced by A. Geyer-Schulz's works and H. Hörner's paper on his C++ GP kernel [18]. I would like to thank all the students attending the first GA lecture in Winter 1999/2000, for remaining loyal throughout the whole term and for contributing much to these lecture notes with their vivid, interesting, and stimulating questions, objections, and discussions. Last but not least, I want to express my sincere gratitude to Sabine Lumpi and Susanne Saminger for support in organizational matters, and Pe-ter Bauer for proofreading .", "title": "" }, { "docid": "bc85c7b65edb61748fe3320a4c8d83d3", "text": "As mobile Internet is now indispensable in our daily lives, WiFi's latency performance has become critical to mobile applications' quality of experience. Unfortunately, WiFi hop latency in the wild remains largely unknown. In this paper, we first propose an effective approach to break down the round trip network latency. Then we provide the first systematic study on WiFi hop latency in the wild based on the latency and WiFi factors collected from 47 APs on T university campus for two months. We observe that WiFi hop can be the weakest link in the round trip network latency: more than 50% (10%) of TCP packets suffer from WiFi hop latency larger than 20ms (100ms), and WiFi hop latency occupies more than 60% in more than half of the round trip network latency. To help understand, troubleshoot, and optimize WiFi hop latency for WiFi APs in general, we train a decision tree model. 
Based on the model's output, we are able to reduce the median latency by 80% from 50ms to 10ms in one real case, and reduce the maximum latency from 250ms to 50ms in another real case.", "title": "" }, { "docid": "3fbea8b5feb0c5a471aa0ec91d2e2d1a", "text": "Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest. However, their use is severely limited by their computational complexity, which renders them unusable on real world datasets. We focus on the Neural Theorem Prover (NTP) model proposed by Rocktäschel and Riedel (2017), a continuous relaxation of the Prolog backward chaining algorithm where unification between terms is replaced by the similarity between their embedding representations. For answering a given query, this model needs to consider all possible proof paths, and then aggregate results – this quickly becomes infeasible even for small Knowledge Bases (KBs). We observe that we can accurately approximate the inference process in this model by considering only proof paths associated with the highest proof scores. This enables inference and learning on previously impracticable KBs.", "title": "" } ]
scidocsrr
55ffa80662273854d6f9aeeea9da2ab8
Enlightening Deep Neural Networks with Knowledge of Confounding Factors
[ { "docid": "fb87648c3bb77b1d9b162a8e9dbc5e86", "text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "title": "" } ]
[ { "docid": "b64c48d4d2820e01490076c1b18cf32b", "text": "The availability of detailed environmental data, together with inexpensive and powerful computers, has fueled a rapid increase in predictive modeling of species environmental requirements and geographic distributions. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, absence data are not available for most species. In this paper, we introduce the use of the maximum entropy method (Maxent) for modeling species geographic distributions with presence-only data. Maxent is a general-purpose machine learning method with a simple and precise mathematical formulation, and it has a number of aspects that make it well-suited for species distribution modeling. In mmals: a diction emaining outline eceiver dicating ts present ues horder to investigate the efficacy of the method, here we perform a continental-scale case study using two Neotropical ma lowland species of sloth, Bradypus variegatus, and a small montane murid rodent, Microryzomys minutus. We compared Maxent predictions with those of a commonly used presence-only modeling method, the Genetic Algorithm for Rule-Set Pre (GARP). We made predictions on 10 random subsets of the occurrence records for both species, and then used the r localities for testing. Both algorithms provided reasonable estimates of the species’ range, far superior to the shaded maps available in field guides. All models were significantly better than random in both binomial tests of omission and r operating characteristic (ROC) analyses. The area under the ROC curve (AUC) was almost always higher for Maxent, in better discrimination of suitable versus unsuitable areas for the species. The Maxent modeling approach can be used in i form for many applications with presence-only datasets, and merits further research and development. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f1744cf87ee2321c5132d6ee30377413", "text": "How do movements in the distribution of income and wealth affect the macroeconomy? We analyze this question using a calibrated version of the stochastic growth model with partially uninsurable idiosyncratic risk and movements in aggregate productivity. Our main finding is that, in the stationary stochastic equilibrium, the behavior of the macroeconomic aggregates can be almost perfectly described using only the mean of the wealth distribution. This result is robust to substantial changes in both parameter values and model specification. Our benchmark model, whose only difference from the representative-agent framework is the existence of uninsurable idiosyncratic risk, displays far less cross-sectional dispersion", "title": "" }, { "docid": "81e6994ef76d537b8905cf6b8271c895", "text": "Programming language design benefits from constructs for extending the syntax and semantics of a host language. While C's string-based macros empower programmers to introduce notational shorthands, the parser-level macros of Lisp encourage experimentation with domain-specific languages. The Scheme programming language improves on Lisp with macros that respect lexical scope.\n The design of Racket---a descendant of Scheme---goes even further with the introduction of a full-fledged interface to the static semantics of the language. 
A Racket extension programmer can thus add constructs that are indistinguishable from \"native\" notation, large and complex embedded domain-specific languages, and even optimizing transformations for the compiler backend. This power to experiment with language design has been used to create a series of sub-languages for programming with first-class classes and modules, numerous languages for implementing the Racket system, and the creation of a complete and fully integrated typed sister language to Racket's untyped base language.\n This paper explains Racket's language extension API via an implementation of a small typed sister language. The new language provides a rich type system that accommodates the idioms of untyped Racket. Furthermore, modules in this typed language can safely exchange values with untyped modules. Last but not least, the implementation includes a type-based optimizer that achieves promising speedups. Although these extensions are complex, their Racket implementation is just a library, like any other library, requiring no changes to the Racket implementation.", "title": "" }, { "docid": "cd5752a9668c03ac3f2300918e829579", "text": "Abstract—The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads with power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation mainly because filters offer high efficiency, simplicity, and are economical. Additionally, possible different frequency response characteristics can work to achieve certain required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine what size single tuned passive filters work in distribution networks best, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltage and harmonic currents in the power system to an acceptable level, and, thus, improve the load power factor. The optimization technique works to minimize voltage total harmonic distortions (VTHD) and current total harmonic distortions (ITHD), where maintaining a given power factor at a specified range is desired. According to the IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.", "title": "" }, { "docid": "17aecef988a609953923e3d19ee15b53", "text": "Deploying and managing multi-component IoT applications in Fog computing scenarios is challenging due to the heterogeneity, scale and dynamicity of Fog infrastructures, as well as to the complexity of modern software systems. 
When deciding on where/how to (re-)allocate application components over the continuum from the IoT to the Cloud, application administrators need to find the best deployment, satisfying all application (hardware, software, QoS, IoT) requirements over the contextually available resources, also trading-off non-functional desiderata (e.g., financial costs, security). This PhD thesis proposal aims at devising models, algorithms and methodologies to support the adaptive deployment and management of Fog applications.", "title": "" }, { "docid": "8618b407f851f0806920f6e28fdefe3f", "text": "The explosive growth of Internet applications and content, during the last decade, has revealed an increasing need for information filtering and recommendation. Most research in the area of recommendation systems has focused on designing and implementing efficient algorithms that provide accurate recommendations. However, the selection of appropriate recommendation content and the presentation of information are equally important in creating successful recommender applications. This paper addresses issues related to the presentation of recommendations in the movies domain. The current work reviews previous research approaches and popular recommender systems, and focuses on user persuasion and satisfaction. In our experiments, we compare different presentation methods in terms of recommendations’ organization in a list (i.e. top N-items list and structured overview) and recommendation modality (i.e. simple text, combination of text and image, and combination of text and video). The most efficient presentation methods, regarding user persuasion and satisfaction, proved to be the “structured overview” and the “text and video” interfaces, while a strong positive correlation was also found between user satisfaction and persuasion in all experimental conditions.", "title": "" }, { "docid": "42d755dbb843d9e5ba4bae4b492c2b8e", "text": "Context: The management of software development productivity is a key issue in software organizations, where the major drivers are lower cost and shorter time-to-market. Agile methods, including Extreme Programming and Scrum, have evolved as “light” approaches that simplify the software development process, potentially leading to increased team productivity. However, little empirical research has examined which factors do have an impact on productivity and in what way, when using agile methods. Objective: Our objective is to provide a better understanding of the factors and mediators that impact agile team productivity. Method: We have conducted a multiple-case study for six months in three large Brazilian companies that have been using agile methods for over two years. We have focused on the main productivity factors perceived by team members through interviews, documentation from retrospectives, and non-participant observation. Results: We developed a novel conceptual framework, using thematic analysis to understand the possible mechanisms behind such productivity factors. Agile team management was found to be the most influential factor in achieving agile team productivity. At the intra-team level, the main productivity factors were team design (structure and work allocation) and member turnover. At the inter-team level, the main productivity factors were how well teams could be effectively coordinated by proper interfaces and other dependencies and avoiding delays in providing promised software to dependent teams. 
Conclusion: Teams should be aware of the influence and magnitude of turnover, which has been shown negative for agile team productivity. Team design choices remain an important factor impacting team productivity, even more pronounced on agile teams that rely on teamwork and people factors. The intra-team coordination processes must be adjusted to enable productive work by considering priorities and pace between teams. Finally, the revised conceptual framework for agile team productivity supports further tests through confirmatory studies.", "title": "" }, { "docid": "00e06f34117dc96ec6f7a5fba47b3f5f", "text": "This paper presents a new algorithm for downloading big files from multiple sources in peer-to-peer networks. The algorithm is compelling with the simplicity of its implementation and the novel properties it offers. It ensures low hand-shaking cost between peers who intend to download a file (or parts of a file) from each other. Furthermore, it achieves maximal file availability, meaning that any two peers with partial knowledge of a given file will almost always be able to fully benefit from each other’s knowledge– i.e., overlapping knowledge will rarely occur. Our algorithm is made possible by the recent introduction of linear-time rateless erasure codes.", "title": "" }, { "docid": "83b2b2937f22fc3b5f607b381bfa6239", "text": "Article history: Received 12 February 2008 Accepted 26 November 2008 Available online xxxx", "title": "" }, { "docid": "f99fe9c7aaf417a3893c264b2602a9f3", "text": "A male infant was brought to hospital aged eight weeks. He was born at full term via normal vaginal home delivery without any complications. The delivery was conducted by a traditional birth attendant and Apgar scores at birth were unrecorded. One week after the birth, the parents noticed an increase in size of the baby’s breasts. In accordance with cultural practice, they massaged the breasts in order to express milk, hoping that by doing so the size of the breasts would return to normal. However, the size of the breasts increased. They also reported that milk was being discharged spontaneously through the nipples. There was no history of drug intake neither by the mother nor the baby. The infant appeared clinically well and showed no signs of irritability. On examination, bilateral breast enlargement was observed of approximate diameter 6 cm. No tenderness, purulent discharge or any sign of inflammation were observed (Figure 1). Systemic and genital examination were unremarkable. Routine blood investigations were normal. Firm advice was given not to massage the breasts of the baby.", "title": "" }, { "docid": "fd32f2117ae01049314a0c1cfb565724", "text": "Smart phones, tablets, and the rise of the Internet of Things are driving an insatiable demand for wireless capacity. This demand requires networking and Internet infrastructures to evolve to meet the needs of current and future multimedia applications. Wireless HetNets will play an important role toward the goal of using a diverse spectrum to provide high quality-of-service, especially in indoor environments where most data are consumed. An additional tier in the wireless HetNets concept is envisioned using indoor gigabit small-cells to offer additional wireless capacity where it is needed the most. The use of light as a new mobile access medium is considered promising. In this article, we describe the general characteristics of WiFi and VLC (or LiFi) and demonstrate a practical framework for both technologies to coexist. 
We explore the existing research activity in this area and articulate current and future research challenges based on our experience in building a proof-of-concept prototype VLC HetNet.", "title": "" }, { "docid": "62773f65b9157ba1f2ceafa6971be884", "text": "Creating or modifying a primary index is a time-consuming process, as the index typically needs to be rebuilt from scratch. In this paper, we explore a more graceful “just-in-time” approach to index reorganization, where small changes are dynamically applied in the background. To enable this type of reorganization, we formalize a composable organizational grammar, expressive enough to capture instances of not only existing index structures, but arbitrary hybrids as well. We introduce an algebra of rewrite rules for such structures, and a framework for defining and optimizing policies for just-in-time rewriting. Our experimental analysis shows that the resulting index structure is flexible enough to adapt to a variety of performance goals, while also remaining competitive with existing structures like the C++ standard template library map.", "title": "" }, { "docid": "0b941153b9ade732ca52058698643a44", "text": "In this paper, we prove the complexity bounds for methods of Convex Optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true both for nonsmooth and smooth problems. For the later class, we present also an accelerated scheme with the expected rate of convergence O(n/k), where k is the iteration counter. For Stochastic Optimization, we propose a zero-order scheme and justify its expected rate of convergence O(n/k). We give also some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, both for smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.", "title": "" }, { "docid": "3c650bd064d3f6d7fb9ae1e4ec0c8cfa", "text": "We address semi-supervised video object segmentation, the task of automatically generating accurate and consistent pixel masks for objects in a video sequence, given the first-frame ground truth annotations. Towards this goal, we present the PReMVOS algorithm (Proposal-generation, Refinement and Merging for Video Object Segmentation). This method involves generating coarse object proposals using a Mask R-CNN like object detector, followed by a refinement network that produces accurate pixel masks for each proposal. We then select and link these proposals over time using a merging algorithm that takes into account an objectness score, the optical flow warping, and a Re-ID feature embedding vector for each proposal. We adapt our networks to the target video domain by fine-tuning on a large set of augmented images generated from the firstframe ground truth. Our approach surpasses all previous state-of-the-art results on the DAVIS 2017 video object segmentation benchmark and achieves first place in the DAVIS 2018 Video Object Segmentation Challenge with a mean of J & F score of 74.7.", "title": "" }, { "docid": "95ec0d130862f7a514fd5d47a95f6585", "text": "With the rising cost of energy and growing environmental concerns, the demand for sustainable building facilities with minimal environmental impact is increasing. 
The most effective decisions regarding sustainability in a building facility are made in the early design and preconstruction stages. In this context, Building Information Modeling (BIM) can aid in performing complex building performance analyses to ensure an optimized sustainable building design. In this exploratory research, three building performance analysis software namely EcotectTM, Green Building StudioTM (GBS) and Virtual EnvironmentTM are evaluated to gage their suitability for BIM-based sustainability analysis. First presented in this paper are the main concepts of sustainability and BIM. Then an evaluation of the three abovementioned software is performed with their pros and cons. An analytical weight-based scoring system is used for this purpose. At the end, a conceptual framework is presented to illustrate how construction companies can use BIM for sustainability analysis and evaluate LEED (Leadership in Energy and Environmental Design) rating of a building facility.", "title": "" }, { "docid": "b58793f6bce670efefe34bd0a1f29898", "text": "Cell signaling networks coordinate specific patterns of protein expression in response to external cues, yet the logic by which signaling pathway activity determines the eventual abundance of target proteins is complex and poorly understood. Here, we describe an approach for simultaneously controlling the Ras/Erk pathway and monitoring a target gene's transcription and protein accumulation in single live cells. We apply our approach to dissect how Erk activity is decoded by immediate early genes (IEGs). We find that IEG transcription decodes Erk dynamics through a shared band-pass filtering circuit; repeated Erk pulses transcribe IEGs more efficiently than sustained Erk inputs. However, despite highly similar transcriptional responses, each IEG exhibits dramatically different protein-level accumulation, demonstrating a high degree of post-transcriptional regulation by combinations of multiple pathways. Our results demonstrate that the Ras/Erk pathway is decoded by both dynamic filters and logic gates to shape target gene responses in a context-specific manner.", "title": "" }, { "docid": "eae0f8a921b301e52c822121de6c6b58", "text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. 
The source code for the complete system are publicly available1.", "title": "" }, { "docid": "3b2791f50ed1939e4eb82405bed4c927", "text": "We start from the state-of-the-art Bag of Words pipeline that in the 2008 benchmarks of TRECvid and PASCAL yielded the best performance scores. We have contributed to that pipeline, which now forms the basis to compare various fast alternatives for all of its components: (i) For descriptor extraction we propose a fast algorithm to densely sample SIFT and SURF, and we compare several variants of these descriptors. (ii) For descriptor projection we compare a k-means visual vocabulary with a Random Forest. As a preprojection step we experiment with PCA on the descriptors to decrease projection time. (iii) For classification we use Support Vector Machines and compare the x2 kernel with the RBF kernel. Our results lead to a 10-fold speed increase without any loss of accuracy and to a 30-fold speed increase with 17% loss of accuracy, where the latter system does real-time classification at 26 images per second.", "title": "" }, { "docid": "3264b3fb1737be4f77aea8803daa2b27", "text": "Long Short-Term Memory (LSTM) is a deep recurrent neural network architecture with high computational complexity. Contrary to the standard practice to train LSTM online with stochastic gradient descent (SGD) methods, we propose a matrix-based batch learning method for LSTM with full Backpropagation Through Time (BPTT). We further solve the state drifting issues as well as improving the overall performance for LSTM using revised activation functions for gates. With these changes, advanced optimization algorithms are applied to LSTM with long time dependency for the first time and show great advantages over SGD methods. We further demonstrate that large-scale LSTM training can be greatly accelerated with parallel computation architectures like CUDA and MapReduce.", "title": "" }, { "docid": "56a6ece7a826cc7e34c70c581355a4b6", "text": "Superpixels are an oversegmentation of an image and popularly used as a preprocessing in many computer vision applications. Many state-of-the-art superpixel segmentation algorithms rely either on minimizing special energy functions or on clustering pixels in the effective distance space. While in this paper, we introduce a novel algorithm to produce superpixels based on the edge map by utilizing a split-andmerge strategy. Firstly, we obtain the initial superpixels with uniform size and shape. Secondly, in the splitting stage, we find all possible splitting contours for each superpixel by overlapping the boundaries of this superpixel with the edge map, and then choose the best one to split it which ensure the superpixels produced by this splitting are dissimilarity in color and similarity in size. Thirdly, in the merging stage, the Bhattacharyya distance between two color histograms in the RGB space for each pair of adjacent superpixels is computed to evaluate their color similarity for merging superpixels. At last, we iterate the split-and-merge steps until no superpixels have changed. Experimental results on the Berkeley Segmentation Dataset (BSD) show that the proposed algorithm can achieve a good performance compared with the state-of-the-art superpixel segmentation algorithms.", "title": "" } ]
scidocsrr
522f8c5b4d154766013be9e0b622f745
Virtual Worlds and Augmented Reality in Cultural Heritage Applications
[ { "docid": "d2f36cc750703f5bbec2ea3ef4542902", "text": "ixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR char-acteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …", "title": "" } ]
[ { "docid": "5b3ca1cc607d2e8f0394371f30d9e83a", "text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.", "title": "" }, { "docid": "051711535ae78e4c24278553843cbc91", "text": "In the current \"Syntactic Web\", uninterpreted syntactic constructs are given meaning only by private off-line agreements that are inaccessible to computers. In the Semantic Web vision, this is replaced by a web where both data and its semantic definition are accessible and manipulable by computer software. DAML+OIL is an ontology language specifically designed for this use in the Web; it exploits existing Web standards (XML and RDF), adding the familiar ontological primitives of object oriented and frame based systems, and the formal rigor of a very expressive description logic. The definition of DAML+OIL is now over a year old, and the language has been in fairly widespread use. In this paper, we review DAML+OIL's relation with its key ingredients (XML, RDF, OIL, DAML-ONT, Description Logics), we discuss the design decisions and trade-offs that were the basis for the language definition, and identify a number of implementation challenges posed by the current language. These issues are important for designers of other representation languages for the Semantic Web, be they competitors or successors of DAML+OIL, such as the language currently under definition by W3C.", "title": "" }, { "docid": "dd45abc886edb854707acde3e675c5f7", "text": "The connecting of physical units, such as thermostats, medical devices and self-driving vehicles, to the Internet is happening very quickly and will most likely continue to increase exponentially for some time to come. Valid concerns about security, safety and privacy do not appear to be hampering this rapid growth of the so-called Internet of Things (IoT). There have been many popular and technical publications by those in software engineering, cyber security and systems safety describing issues and proposing various “fixes.” In simple terms, they address the “why” and the “what” of IoT security, safety and privacy, but not the “how.” There are many cultural and economic reasons why security and privacy concerns are relegated to lower priorities. Also, when many systems are interconnected, the overall security, safety and privacy of the resulting systems of systems generally have not been fully considered and addressed. In order to arrive at an effective enforcement regime, we will examine the costs of implementing suitable security, safety and privacy and the economic consequences of failing to do so. We evaluated current business, professional and government structures and practices for achieving better IoT security, safety and privacy, and found them lacking. 
Consequently, we proposed a structure for ensuring that appropriate security, safety and privacy are built into systems from the outset. Within such a structure, enforcement can be achieved by incentives on one hand and penalties on the other. Determining the structures and rules necessary to optimize the mix of penalties and incentives is a major goal of this paper.", "title": "" }, { "docid": "56ff9b231738b24fda47ab152bf78ba1", "text": "We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS.", "title": "" }, { "docid": "a8ddaed8209d09998159014307233874", "text": "Traditional image-based 3D reconstruction methods use multiple images to extract 3D geometry. However, it is not always possible to obtain such images, for example when reconstructing destroyed structures using existing photographs or paintings with proper perspective (figure 1), and reconstructing objects without actually visiting the site using images from the web or postcards (figure 2). Even when multiple images are possible, parts of the scene appear in only one image due to occlusions and/or lack of features to match between images. Methods for 3D reconstruction from a single image do exist (e.g. [1] and [2]). We present a new method that is more accurate and more flexible so that it can model a wider variety of sites and structures than existing methods. Using this approach, we reconstructed in 3D many destroyed structures using old photographs and paintings. Sites all over the world have been reconstructed from tourist pictures, web pages, and postcards.", "title": "" }, { "docid": "ef17636e16fdd1eaab5e21ba69981c38", "text": "OBJECTIVES\nIt has been considered a fact that informal social activities promote well-being in old age, irrespective of whether they are performed with friends or family members. Fundamental differences in the relationship quality between family members (obligatory) and friends (voluntary), however, suggest differential effects on well-being. Further, age-related changes in networks suggest age-differential effects of social activities on well-being, as older adults cease emotionally detrimental relationships.\n\n\nMETHOD\nLongitudinal representative national survey study with middle-aged (n = 2,830) and older adults (n = 2,032). 
Age-differential effects of activities with family members and friends on changes in life satisfaction, positive affect (PA), and negative affect (NA) were examined in latent change score models.\n\n\nRESULTS\nIn the middle-aged group, activities with friends and families increased PA and life satisfaction and were unrelated to NA. In the older age group, family activities increased both PA and NA and were unrelated to changes in life satisfaction, but activities with friends increased PA and life satisfaction and decreased NA.\n\n\nDISCUSSION\nSocial activities differentially affect different facets of well-being. These associations change with age. In older adults, the effects of social activities with friends may become more important and may act as a buffer against negative effects of aging.", "title": "" }, { "docid": "e415deac22afd9221995385e681b7f63", "text": "AIM & OBJECTIVES\nThe purpose of this in vitro study was to evaluate and compare the microleakage of pit and fissure sealants after using six different preparation techniques: (a) brush, (b) pumice slurry application, (c) bur, (d) air polishing, (e) air abrasion, and (f) longer etching time.\n\n\nMATERIAL & METHOD\nThe study was conducted on 60 caries-free first premolars extracted for orthodontic purpose. These teeth were randomly assigned to six groups of 10 teeth each. Teeth were prepared using one of six occlusal surface treatments prior to placement of Clinpro\" 3M ESPE light-cured sealant. The teeth were thermocycled for 500 cycles and stored in 0.9% normal saline. Teeth were sealed apically and coated with nail varnish 1 mm from the margin and stained in 1% methylene blue for 24 hours. Each tooth was divided buccolingually parallel to the long axis of the tooth, yielding two sections per tooth for analysis. The surfaces were scored from 0 to 2 for the extent of microleakage.\n\n\nSTATISTICAL ANALYSIS\nResults obtained for microleakage were analyzed by using t-tests at sectional level and chi-square test and analysis of variance (ANOVA) at the group level.\n\n\nRESULTS\nThe results of round bur group were significantly superior when compared to all other groups. The application of air polishing and air abrasion showed better results than pumice slurry, bristle brush, and longer etching time. Round bur group was the most successful cleaning and preparing technique. Air polishing and air abrasion produced significantly less microleakage than traditional pumice slurry, bristle brush, and longer etching time.", "title": "" }, { "docid": "7753a4ce6b62ee437acb49eb40eb4bea", "text": "Music, speech, and acoustic scene sound are often handled separately in the audio domain because of their different signal characteristics. However, as the image domain grows rapidly by versatile image classification models, it is necessary to study extensible classification models in the audio domain as well. In this study, we approach this problem using two types of sample-level deep convolutional neural networks that take raw waveforms as input and uses filters with small granularity. One is a basic model that consists of convolution and pooling layers. The other is an improved model that additionally has residual connections, squeeze-and-excitation modules and multi-level concatenation. We show that the sample-level models reach state-of-the-art performance levels for the three different categories of sound. 
Also, we visualize the filters along layers and compare the characteristics of learned filters.", "title": "" }, { "docid": "2adbf124a4a034b7d880c9835addafb4", "text": "The Internet of Things (IoT) provides transparent and seamless incorporation of heterogeneous and different end systems. It has been widely used in many applications including smart cities such as public water system, power grid, water management, and vehicle traffic control system. In these smart city applications, a large number of IoT devices are deployed that can sense, communicate, compute, and potentially actuate. The uninterrupted and accurate functioning of these devices are critical to smart city applications as crucial decisions will be made based on the data received. One of the challenging tasks is to assure the authenticity of the devices so that we can rely on the decision making process with a very high confidence. One of the characteristics of IoT devices deployed in such applications is that they have limited battery power. A challenge is to design a secure mutual authentication protocol which is affordable to resource constrained devices. In this paper, we propose a lightweight mutual authentication protocol based on a novel public key encryption scheme for smart city applications. The proposed protocol takes a balance between the efficiency and communication cost without sacrificing the security. We evaluate the performance of our protocol in software and hardware environments. On the same security level, our protocol performance is significantly better than existing RSA and ECC based protocols. We also provide security analysis of the proposed encryption scheme and the mutual authentication protocol.", "title": "" }, { "docid": "277edaaf026e541bc9abc83eaabbecbe", "text": "In most situations, simple techniques for handling missing data (such as complete case analysis, overall mean imputation, and the missing-indicator method) produce biased results, whereas imputation techniques yield valid results without complicating the analysis once the imputations are carried out. Imputation techniques are based on the idea that any subject in a study sample can be replaced by a new randomly chosen subject from the same source population. Imputation of missing data on a variable is replacing that missing by a value that is drawn from an estimate of the distribution of this variable. In single imputation, only one estimate is used. In multiple imputation, various estimates are used, reflecting the uncertainty in the estimation of this distribution. Under the general conditions of so-called missing at random and missing completely at random, both single and multiple imputations result in unbiased estimates of study associations. But single imputation results in too small estimated standard errors, whereas multiple imputation results in correctly estimated standard errors and confidence intervals. In this article we explain why all this is the case, and use a simple simulation study to demonstrate our explanations. We also explain and illustrate why two frequently used methods to handle missing data, i.e., overall mean imputation and the missing-indicator method, almost always result in biased estimates.", "title": "" }, { "docid": "15e034d722778575b43394b968be19ad", "text": "Elections are contests for the highest stakes in national politics and the electoral system is a set of predetermined rules for conducting elections and determining their outcome. 
Thus defined, the electoral system is distinguishable from the actual conduct of elections as well as from the wider conditions surrounding the electoral contest, such as the state of civil liberties, restraints on the opposition and access to the mass media. While all these aspects are of obvious importance to free and fair elections, the main interest of this study is the electoral system.", "title": "" }, { "docid": "dc91774abd58e19066a110bbff9fa306", "text": "Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts. Safety is on the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work mainly based on using Big Data mining and analysis of real-life accidents data and real-time connected vehicles' data. The decision of selecting this trajectory is done automatically without any human intervention. The human touches in this scenario could be only at defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip.", "title": "" }, { "docid": "6c3cd29b316d68d555bb85d9f4d48e04", "text": "Power API-the result of collaboration among national laboratories, universities, and major vendors-provides a range of standardized power management functions, from application-level control and measurement to facility-level accounting, including real-time and historical statistics gathering. Support is already available for Intel and AMD CPUs and standalone measurement devices.", "title": "" }, { "docid": "ac5e7e88d965aa695b8ae169edce2426", "text": "Randomness test suites constitute an essential component within the process of assessing random number generators in view of determining their suitability for a specific application. Evaluating the randomness quality of random numbers sequences produced by a given generator is not an easy task considering that no finite set of statistical tests can assure perfect randomness, instead each test attempts to rule out sequences that show deviation from perfect randomness by means of certain statistical properties. This is the reason why several batteries of statistical tests are applied to increase the confidence in the selected generator. Therefore, in the present context of constantly increasing volumes of random data that need to be tested, special importance has to be given to the performance of the statistical test suites. Our work enrolls in this direction and this paper presents the results on improving the well known NIST Statistical Test Suite (STS) by introducing parallelism and a paradigm shift towards byte processing delivering a design that is more suitable for today's multicore architectures. 
Experimental results show a very significant speedup of up to 103 times compared to the original version.", "title": "" }, { "docid": "9c8fefeb34cc1adc053b5918ea0c004d", "text": "Mezzo is a computer program designed that procedurally writes Romantic-Era style music in real-time to accompany computer games. Leitmotivs are associated with game characters and elements, and mapped into various musical forms. These forms are distinguished by different amounts of harmonic tension and formal regularity, which lets them musically convey various states of markedness which correspond to states in the game story. Because the program is not currently attached to any game or game engine, “virtual” gameplays were been used to explore the capabilities of the program; that is, videos of various game traces were used as proxy examples. For each game trace, Leitmotivs were input to be associated with characters and game elements, and a set of ‘cues’ was written, consisting of a set of time points at which a new set of game data would be passed to Mezzo to reflect the action of the game trace. Examples of music composed for one such game trace, a scene from Red Dead Redemption, are given to illustrate the various ways the program maps Leitmotivs into different levels of musical markedness that correspond with the game state. Introduction Mezzo is a computer program designed by the author that procedurally writes Romantic-Era-style music in real time to accompany computer games. It was motivated by the desire for game music to be as rich and expressive as that written for traditional media such as opera, ballet, or film, while still being procedurally generated, and thus able to adapt to a variety of dramatic situations. To do this, it models deep theories of musical form and semiotics in Classical and Romantic music. Characters and other important game elements like props and environmental features are given Leitmotivs, which are constantly rearranged and developed throughout gameplay in ways Copyright © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. that evoke the conditions and relationships of these elements. Story states that occur in a game are musically conveyed by employing or withholding normative musical features. This creates various states of markedness, a concept which is defined in semiotic terms as a valuation given to difference (Hatten 1994). An unmarked state or event is one that conveys normativity, while an unmarked one conveys deviation from or lack of normativity. A succession of musical sections that passes through varying states of markedness and unmarkedness, producing various trajectories of expectation and fulfillment, tension and release, correlates with the sequence of episodes that makes up a game story’s structure. Mezzo uses harmonic tension and formal regularity as its primary vehicles for musically conveying markedness; it is constantly adjusting the values of these features in order to express states of the game narrative. Motives are associated with characters, and markedness with game conditions. These two independent associations allow each coupling of a motive with a level of markedness to be interpreted as a pair of coordinates in a state space (a “semiotic square”), where various regions of the space correspond to different expressive musical qualities (Grabócz 2009). 
Certain patterns of melodic repetition combined with harmonic function became conventionalized in the Classical Era as normative forms, labeled the sentence, period, and sequence (Caplin 1998, Schoenberg 1969). These forms exist in the middleground of a musical work, each comprising one or several phrase repetitions and one or a small number of harmonic cadences. Each musical form has a normative structure, and various ways in which it can be deformed by introducing irregular amounts of phrase repetition to make the form asymmetrical. Mezzo’s expressive capability comes from the idea that there are different perceptible levels of formal irregularity that can be quantitatively measured, and that these different levels convey different levels of markedness. Musical Metacreation: Papers from the 2012 AIIDE Workshop AAAI Technical Report WS-12-16", "title": "" }, { "docid": "019375c14bc0377acbf259ef423fa46f", "text": "Original approval signatures are on file with the University of Oregon Graduate School.", "title": "" }, { "docid": "4408d5fa31a64d54fbe4b4d70b18182b", "text": "Using microarray analysis, this study showed up-regulation of toll-like receptors 1, 2, 4, 7, 8, NF-κB, TNF, p38-MAPK, and MHC molecules in human peripheral blood mononuclear cells following infection with Plasmodium falciparum. This analysis reports herein further studies based on time-course microarray analysis with focus on malaria-induced host immune response. The results show that in early malaria, selected immune response-related genes were up-regulated including α β and γ interferon-related genes, as well as genes of IL-15, CD36, chemokines (CXCL10, CCL2, S100A8/9, CXCL9, and CXCL11), TRAIL and IgG Fc receptors. During acute febrile malaria, up-regulated genes included α β and γ interferon-related genes, IL-8, IL-1b IL-10 downstream genes, TGFB1, oncostatin-M, chemokines, IgG Fc receptors, ADCC signalling, complement-related genes, granzymes, NK cell killer/inhibitory receptors and Fas antigen. During recovery, genes for NK receptorsand granzymes/perforin were up-regulated. When viewed in terms of immune response type, malaria infection appeared to induce a mixed TH1 response, in which α and β interferon-driven responses appear to predominate over the more classic IL-12 driven pathway. In addition, TH17 pathway also appears to play a significant role in the immune response to P. falciparum. Gene markers of TH17 (neutrophil-related genes, TGFB1 and IL-6 family (oncostatin-M)) and THαβ (IFN-γ and NK cytotoxicity and ADCC gene) immune response were up-regulated. Initiation of THαβ immune response was associated with an IFN-αβ response, which ultimately resulted in moderate-mild IFN-γ achieved via a pathway different from the more classic IL-12 TH1 pattern. Based on these observations, this study speculates that in P. falciparum infection, THαβ/TH17 immune response may predominate over ideal TH1 response.", "title": "" }, { "docid": "c24e523997eac6d1be9e2a2f38150fc0", "text": "We address the assessment and improvement of the software maintenance function by proposing improvements to the software maintenance standards and introducing a proposed maturity model for daily software maintenance activities: Software Maintenance Maturity Model (SM). The software maintenance function suffers from a scarcity of management models to facilitate its evaluation, management, and continuous improvement. 
The SM addresses the unique activities of software maintenance while preserving a structure similar to that of the CMMi4 maturity model. It is designed to be used as a complement to this model. The SM is based on practitioners’ experience, international standards, and the seminal literature on software maintenance. We present the model’s purpose, scope, foundation, and architecture, followed by its initial validation.", "title": "" }, { "docid": "e110425b3d464ac63b3d6db6417c0c82", "text": "Artificial intelligence has seen a number of breakthroughs in recent years, with games often serving as significant milestones. A common feature of games with these successes is that they involve information symmetry among the players, where all players have identical information. This property of perfect information, though, is far more common in games than in real-world problems. Poker is the quintessential game of imperfect information, and it has been a longstanding challenge problem in artificial intelligence. In this paper we introduce DeepStack, a new algorithm for imperfect information settings such as poker. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition about arbitrary poker situations that is automatically learned from selfplay games using deep learning. In a study involving dozens of participants and 44,000 hands of poker, DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold’em. Furthermore, we show this approach dramatically reduces worst-case exploitability compared to the abstraction paradigm that has been favored for over a decade.", "title": "" }, { "docid": "6e8466bd7b87c69c451e9312f1f05d15", "text": "Novel physical phenomena can emerge in low-dimensional nanomaterials. Bulk MoS(2), a prototypical metal dichalcogenide, is an indirect bandgap semiconductor with negligible photoluminescence. When the MoS(2) crystal is thinned to monolayer, however, a strong photoluminescence emerges, indicating an indirect to direct bandgap transition in this d-electron system. This observation shows that quantum confinement in layered d-electron materials like MoS(2) provides new opportunities for engineering the electronic structure of matter at the nanoscale.", "title": "" } ]
scidocsrr
5f58a2e789e40ab71eaa07be8b19aacc
CARLsim 3: A user-friendly and highly optimized library for the creation of neurobiologically detailed spiking neural networks
[ { "docid": "035341c7862f31eb6a4de0126ae569b5", "text": "Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain.", "title": "" } ]
[ { "docid": "f6472cbb2beb8f36a3473759951a1cfa", "text": "Hair highlighting procedures are very common throughout the world. While rarely reported, potential adverse events to such procedures include allergic and irritant contact dermatitis, thermal burns, and chemical burns. Herein, we report two cases of female adolescents who underwent a hair highlighting procedure at local salons and sustained a chemical burn to the scalp. The burn etiology, clinical and histologic features, the expected sequelae, and a review of the literature are described.", "title": "" }, { "docid": "236896835b48994d7737b9152c0e435f", "text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.", "title": "" }, { "docid": "1abeeaa8c100e1231f3e06cad3f0ea70", "text": "Collaborative online shopping refers to an activity in which a consumer shops at an eCommerce website with remotely located shopping partners such as friends or family. Although collaborative online shopping has increased with the pervasiveness of social networking, few studies have examined how to enhance this type of shopping experience. This study examines two potential design components, embodiment and media richness, that could enhance shoppers’ experiences. Based on theories of copresence and flow, we examined whether the implementation of these two features could increase copresence, flow, and the intention to use a collaborative online shopping website. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f22f764721b6fef72dadb8a0ba7c7128", "text": "Multispectral images of color-thermal pairs have shown more effective than a single color channel for pedestrian detection, especially under challenging illumination conditions. However, there is still a lack of studies on how to fuse the two modalities effectively. In this paper, we deeply compare six different convolutional network fusion architectures and analyse their adaptations, enabling a vanilla architecture to obtain detection performances comparable to the state-of-the-art results. Further, we discover that pedestrian detection confidences from color or thermal images are correlated with illumination conditions. With this in mind, we propose an Illumination-aware Faster R-CNN (IAF RCNN). Specifically, an Illumination-aware Network is introduced to give an illumination measure of the input image. Then we adaptively merge color and thermal sub-networks via a gate function defined over the illumination value. The experimental results on KAIST Multispectral Pedestrian Benchmark validate the effectiveness of the proposed IAF R-CNN.", "title": "" }, { "docid": "401bad1d0373acb71a855a28d2aeea38", "text": "mechanobullous epidermolysis bullosa acquisita to combined treatment with immunoadsorption and rituximab (anti-CD20 monoclonal antibodies). Arch Dermatol 2007; 143: 192–198. 6 Sadler E, Schafleitner B, Lanschuetzer C et al. Treatment-resistant classical epidermolysis bullosa acquisita responding to rituximab. Br J Dermatol 2007; 157: 417–419. 
7 Crichlow SM, Mortimer NJ, Harman KE. A successful therapeutic trial of rituximab in the treatment of a patient with recalcitrant, high-titre epidermolysis bullosa acquisita. Br J Dermatol 2007; 156: 194–196. 8 Saha M, Cutler T, Bhogal B, Black MM, Groves RW. Refractory epidermolysis bullosa acquisita: successful treatment with rituximab. Clin Exp Dermatol 2009; 34: e979–e980. 9 Kubisch I, Diessenbacher P, Schmidt E, Gollnick H, Leverkus M. Premonitory epidermolysis bullosa acquisita mimicking eyelid dermatitis: successful treatment with rituximab and protein A immunoapheresis. Am J Clin Dermatol 2010; 11: 289–293. 10 Meissner C, Hoefeld-Fegeler M, Vetter R et al. Severe acral contractures and nail loss in a patient with mechano-bullous epidermolysis bullosa acquisita. Eur J Dermatol 2010; 20: 543–544.", "title": "" }, { "docid": "a4af85ec575fc5979bad8b834a51695f", "text": "Adolescents and adults with an autism spectrum disorder (ASD) who do not have an intellectual impairment or disability (ID), described here as individuals with high-functioning autism spectrum disorder (HFASD), represent a complex and underserved psychiatric population. While there is an emerging literature on the mental health needs of children with ASD with normal intelligence, we know less about these issues in adults. Of the few studies of adolescents and adults with HFASD completed to date, findings suggest that they face a multitude of cooccurring psychiatric (e.g., anxiety, depression), psychosocial, and functional issues, all of which occur in addition to their ASD symptomatology. Despite this, traditional mental health services and supports are falling short of meeting the needs of these adults. This review highlights the service needs and the corresponding gaps in care for this population. It also provides an overview of the literature on psychiatric risk factors, identifies areas requiring further study, and makes recommendations for how existing mental health services could include adults with HFASD.", "title": "" }, { "docid": "eea7af98f2e79253c63e68bc7ef6eb8e", "text": "This study was carried out to estimate phenotypic and genetic parameters for body weights and primary antibody response (Ab) against Newcastle diseases virus (NDV) vaccine for Kuchi chicken ecotype of Tanzania managed extensively. Body weight was evaluated at 8 (Bwt8), 12 (Bwt12), 16 (Bwt16) and 20 (Bwt20) weeks of age, while Ab against NDV vaccine was evaluated at 6 weeks of age (2 weeks post-vaccination). Number of birds per trait varied from 373 to 430. Mean value ± standard error (SE) over both sexes for Bwt8, Bwt12, Bwt12 and Bwt20 were 348 ± 2.8, 685 ± 5.3 g, 974 ± 6.4 g and 1,188 ± 7.3 g, respectively. Mean Ab value (HI titre in log2) against NDV vaccine was 4.69 ± 0.06. Heritability values ± SE for Bwt8, Bwt12, Bwt16, Bwt20 and Ab against NDV vaccine were 0.30 ± 0.13, 0.34 ± 0.12, 0.37 ± 0.11, 0.39 ± 0.12 and 0.22 ± 0.08, respectively. Genetic (r g) and phenotypic (r p) correlations were positive and high among body weights (i.e. r g = 0.53 to 0.74 and r p = 0.44 to 0.64), and were negative and low (i.e. around 0.10 and below) among Ab against NDV vaccine and body weights. Based on these estimates, it was concluded that growth performance for Kuchi chicken under extensive management is still poor. 
Adequate additive genetic variations exist for body weights and Ab against NDV vaccine under extensive management, thus they can be improved through selection under such environment, and further that both traits (body weight(s) and Ab) can be improved/selected simultaneously without significant reduction in genetic gain (response) for each trait.", "title": "" }, { "docid": "90b6b0ff4b60e109fc111b26aab4a25c", "text": "Due to its damage to Internet security, malware and its detection has caught the attention of both anti-malware industry and researchers for decades. Many research efforts have been conducted on developing intelligent malware detection systems. In these systems, resting on the analysis of file contents extracted from the file samples, like Application Programming Interface (API) calls, instruction sequences, and binary strings, data mining methods such as Naive Bayes and Support Vector Machines have been used for malware detection. However, driven by the economic benefits, both diversity and sophistication of malware have significantly increased in recent years. Therefore, anti-malware industry calls for much more novel methods which are capable to protect the users against new threats, and more difficult to evade. In this paper, other than based on file contents extracted from the file samples, we study how file relation graphs can be used for malware detection and propose a novel Belief Propagation algorithm based on the constructed graphs to detect newly unknown malware. A comprehensive experimental study on a real and large data collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other alternate data mining based detection techniques.", "title": "" }, { "docid": "cfc0caeb9c00b375d930cde8f5eed66e", "text": "Usability is an important and determinant factor in human-computer systems acceptance. Usability issues are still identified late in the software development process, during testing and deployment. One of the reasons these issues arise late in the process is that current requirements engineering practice does not incorporate usability perspectives effectively into software requirements specifications. The main strength of usability-focused software requirements is the clear visibility of usability aspects for both developers and testers. The explicit expression of these aspects of human-computer systems can be built for optimal usability and also evaluated effectively to uncover usability issues. This paper presents a design science-oriented research design to test the proposition that incorporating user modelling and usability modelling in software requirements specifications improves design. The proposal and the research design are expected to make a contribution to knowledge by theory testing and to practice with effective techniques to produce usable human computer systems.", "title": "" }, { "docid": "fc6f02a4eb006efe54b34b1705559a55", "text": "Company movements and market changes often are headlines of the news, providing managers with important business intelligence (BI). While existing corporate analyses are often based on numerical financial figures, relatively little work has been done to reveal from textual news articles factors that represent BI. In this research, we developed BizPro, an intelligent system for extracting and categorizing BI factors from news articles. 
BizPro consists of novel text mining procedures and BI factor modeling and categorization. Expert guidance and human knowledge (with high inter-rater reliability) were used to inform system development and profiling of BI factors. We conducted a case study of using the system to profile BI factors of four major IT companies based on 6859 sentences extracted from 231 news articles published in major news sources. The results show that the chosen techniques used in BizPro – Naïve Bayes (NB) and Logistic Regression (LR) – significantly outperformed a benchmark technique. NB was found to outperform LR in terms of precision, recall, F-measure, and area under ROC curve. This research contributes to developing a new system for profiling company BI factors from news articles, to providing new empirical findings to enhance understanding in BI factor extraction and categorization, and to addressing an important yet under-explored concern of BI analysis. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "21e47bd70185299e94f8553ca7e60a6e", "text": "Processes causing greenhouse gas (GHG) emissions benefit humans by providing consumer goods and services. This benefit, and hence the responsibility for emissions, varies by purpose or consumption category and is unevenly distributed across and within countries. We quantify greenhouse gas emissions associated with the final consumption of goods and services for 73 nations and 14 aggregate world regions. We analyze the contribution of 8 categories: construction, shelter, food, clothing, mobility, manufactured products, services, and trade. National average per capita footprints vary from 1 tCO2e/y in African countries to approximately 30/y in Luxembourg and the United States. The expenditure elasticity is 0.57. The cross-national expenditure elasticity for just CO2, 0.81, corresponds remarkably well to the cross-sectional elasticities found within nations, suggesting a global relationship between expenditure and emissions that holds across several orders of magnitude difference. On the global level, 72% of greenhouse gas emissions are related to household consumption, 10% to government consumption, and 18% to investments. Food accounts for 20% of GHG emissions, operation and maintenance of residences is 19%, and mobility is 17%. Food and services are more important in developing countries, while mobility and manufactured goods rise fast with income and dominate in rich countries. The importance of public services and manufactured goods has not yet been sufficiently appreciated in policy. Policy priorities hence depend on development status and country-level characteristics.", "title": "" }, { "docid": "3111f3f28391c2b2f46a2a0f726bce2f", "text": "This paper introduces an efficient motion planning method for on-road driving of the autonomous vehicles, which is based on the rapidly exploring random tree (RRT) algorithm. RRT is an incremental sampling-based algorithm and is widely used to solve the planning problem of mobile robots. However, due to the meandering path, the inaccurate terminal state, and the slow exploration, it is often inefficient in many applications such as autonomous vehicles. To address these issues and considering the realistic context of on-road autonomous driving, we propose a fast RRT algorithm that introduces a rule-template set based on the traffic scenes and an aggressive extension strategy of search tree. 
Both improvements lead to a faster and more accurate RRT toward the goal state compared with the basic RRT algorithm. Meanwhile, a model-based prediction postprocess approach is adopted, by which the generated trajectory can be further smoothed and a feasible control sequence for the vehicle would be obtained. Furthermore, in the environments with dynamic obstacles, an integrated approach of the fast RRT algorithm and the configuration-time space can be used to improve the quality of the planned trajectory and the replanning. A large number of experimental results illustrate that our method is fast and efficient in solving planning queries of on-road autonomous driving and demonstrate its superior performances over previous approaches.", "title": "" }, { "docid": "36347412c7d30ae6fde3742bbc4f21b9", "text": "iii", "title": "" }, { "docid": "f355ce69c36dc68fb6509528d92bf07c", "text": "The problem of position estimation in sensor networks using a combination of distance and angle information as well as pure angle information is discussed. For this purpose, a semidefinite programming relaxation based method that has been demonstrated on pure distance information is extended to solve the problem. Practical considerations such as the effect of noise and computational effort are also addressed. In particular, a random constraint selection method to minimize the number of constraints in the problem formulation is described. The performance evaluation of the technique with regard to estimation accuracy and computation time is also presented by the means of extensive simulations.", "title": "" }, { "docid": "30cd626772ad8c8ced85e8312d579252", "text": "An off-state leakage current unique for short-channel SOI MOSFETs is reported. This off-state leakage is the amplification of gate-induced-drain-leakage current by the lateral bipolar transistor in an SOI device due to the floating body. The leakage current can be enhanced by as much as 100 times for 1/4 mu m SOI devices. This can pose severe constraints in future 0.1 mu m SOI device design. A novel technique was developed based on this mechanism to measure the lateral bipolar transistor current gain beta of SOI devices without using a body contact.<<ETX>>", "title": "" }, { "docid": "3b72f2d158aad8b21746f59212698c4f", "text": "22 23 24 25 26", "title": "" }, { "docid": "fe3ccdc73ef42cebdc602544e4279825", "text": "Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straight-forward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs) which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAVs heading. 
We demonstrate the performance of our system in a more controlled environment indoors, and in real natural forest environments outdoors.", "title": "" }, { "docid": "8cecac2a619701d7a7a16d706beadc0a", "text": "Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior -- often referred to as gaming -- the performance of the classifier may deteriorate sharply. Indeed, gaming is a well-known obstacle for using machine learning methods in practice; in financial policy-making, the problem is widely known as Goodhart's law. In this paper, we formalize the problem, and pursue algorithms for learning classifiers that are robust to gaming.\n We model classification as a sequential game between a player named \"Jury\" and a player named \"Contestant.\" Jury designs a classifier, and Contestant receives an input to the classifier drawn from a distribution. Before being classified, Contestant may change his input based on Jury's classifier. However, Contestant incurs a cost for these changes according to a cost function. Jury's goal is to achieve high classification accuracy with respect to Contestant's original input and some underlying target classification function, assuming Contestant plays best response. Contestant's goal is to achieve a favorable classification outcome while taking into account the cost of achieving it.\n For a natural class of \"separable\" cost functions, and certain generalizations, we obtain computationally efficient learning algorithms which are near optimal, achieving a classification error that is arbitrarily close to the theoretical minimum. Surprisingly, our algorithms are efficient even on concept classes that are computationally hard to learn. For general cost functions, designing an approximately optimal strategy-proof classifier, for inverse-polynomial approximation, is NP-hard.", "title": "" }, { "docid": "217742ed285e8de40d68188566475126", "text": "It has been proposed that D-amino acid oxidase (DAO) plays an essential role in degrading D-serine, an endogenous coagonist of N-methyl-D-aspartate (NMDA) glutamate receptors. DAO shows genetic association with amyotrophic lateral sclerosis (ALS) and schizophrenia, in whose pathophysiology aberrant metabolism of D-serine is implicated. Although the pathology of both essentially involves the forebrain, in rodents, enzymatic activity of DAO is hindbrain-shifted and absent in the region. Here, we show activity-based distribution of DAO in the central nervous system (CNS) of humans compared with that of mice. DAO activity in humans was generally higher than that in mice. In the human forebrain, DAO activity was distributed in the subcortical white matter and the posterior limb of internal capsule, while it was almost undetectable in those areas in mice. In the lower brain centers, DAO activity was detected in the gray and white matters in a coordinated fashion in both humans and mice. In humans, DAO activity was prominent along the corticospinal tract, rubrospinal tract, nigrostriatal system, ponto-/olivo-cerebellar fibers, and in the anterolateral system. 
In contrast, in mice, the reticulospinal tract and ponto-/olivo-cerebellar fibers were the major pathways showing strong DAO activity. In the human corticospinal tract, activity-based staining of DAO did not merge with a motoneuronal marker, but colocalized mostly with excitatory amino acid transporter 2 and in part with GFAP, suggesting that DAO activity-positive cells are astrocytes seen mainly in the motor pathway. These findings establish the distribution of DAO activity in cerebral white matter and the motor system in humans, providing evidence to support the involvement of DAO in schizophrenia and ALS. Our results raise further questions about the regulation of D-serine in DAO-rich regions as well as the physiological/pathological roles of DAO in white matter astrocytes.", "title": "" }, { "docid": "0f9a4d22cc7f63ea185f3f17759e185a", "text": "Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.", "title": "" } ]
scidocsrr
f351f9000e61dad83dc3ea9f7090019f
Bag-of-Vector Embeddings of Dependency Graphs for Semantic Induction
[ { "docid": "0a170051e72b58081ad27e71a3545bcf", "text": "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "title": "" }, { "docid": "5664ca8d7f0f2f069d5483d4a334c670", "text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.", "title": "" } ]
[ { "docid": "8a257223c6d9b5c6c6b17e023f010c66", "text": "Emojis are an extremely common occurrence in mobile communications, but their meaning is open to interpretation. We investigate motivations for their usage in mobile messaging in the US. This study asked 228 participants for the last time that they used one or more emojis in a conversational message, and collected that message, along with a description of the emojis' intended meaning and function. We discuss functional distinctions between: adding additional emotional or situational meaning, adjusting tone, making a message more engaging to the recipient, conversation management, and relationship maintenance. We discuss lexical placement within messages, as well as social practices. We show that the social and linguistic function of emojis are complex and varied, and that supporting emojis can facilitate important conversational functions.", "title": "" }, { "docid": "49910c444cef98bdea4fca1beb8381c3", "text": "This paper introduces the concept of gait transitions, acyclic feedforward motion patterns that allow a robot to switch from one gait to another. Legged robots often utilize collections of gait patterns to locomote over a variety of surfaces. Each feedforward gait is generally tuned for a specific surface and set of operating conditions. To enable locomotion across a changing surface, a robot must be able to stably change between gaits while continuing to locomote. By understanding the fundamentals of gaits, we present methods to correctly transition between differing gaits. On two separate robotic platforms, we show how the application of gait transitions enhances each robot's behavioral suite. Using the RHex robotic hexapod, gait transitions are used to smoothly switch from a tripod walking gait to a metachronal wave gait used to climb stairs. We also introduce the RiSE platform, a hexapod robot capable of vertical climbing, and discuss how gait transitions play an important role in achieving vertical mobility", "title": "" }, { "docid": "07239163734357138011bbcc7b9fd38f", "text": "Open cross-section, thin-walled, cold-formed steel columns have at least three competing buckling modes: local, dis and Euler~i.e., flexural or flexural-torsional ! buckling. Closed-form prediction of the buckling stress in the local mode, includ interaction of the connected elements, and the distortional mode, including consideration of the elastic and geometric stiffne web/flange juncture, are provided and shown to agree well with numerical methods. Numerical analyses and experiments postbuckling capacity in the distortional mode is lower than in the local mode. Current North American design specificati cold-formed steel columns ignore local buckling interaction and do not provide an explicit check for distortional buckling. E experiments on cold-formed channel, zed, and rack columns indicate inconsistency and systematic error in current design me provide validation for alternative methods. A new method is proposed for design that explicitly incorporates local, distortional an buckling, does not require calculations of effective width and/or effective properties, gives reliable predictions devoid of systema and provides a means to introduce rational analysis for elastic buckling prediction into the design of thin-walled columns. DOI: 10.1061/ ~ASCE!0733-9445~2002!128:3~289! 
CE Database keywords: Thin-wall structures; Columns; Buckling; Cold-formed steel.", "title": "" }, { "docid": "2d981243bfb30196474d5855043fa7b7", "text": "Gamification, an application of game design elements to non-gaming contexts, is proposed as a way to add engagement in technology-mediated training programs. But there is hardly any information on how to adapt game design elements to improve learning outcomes and promote learner engagement. To address the issue, we focus on a popular game design element, competition, and specifically examine the effects of different competitive structures – whether a person faces a higher-skilled, lower-skilled, or equally-skilled competitor – on learning and engagement. We study a gamified training design for databases, where trainees play a trivia-based mini-game with a competitor after each e-training module. Trainees who faced a lower-skilled competitor reported higher self-efficacy beliefs and better learning outcomes, supporting the effect of peer appraisal, a less examined aspect of social cognitive theory. But trainees who faced equallyskilled competitors reported higher levels of engagement, supporting the balance principle of flow theory. Our study findings indicate that no one competitive structure can address learning and engagement outcomes simultaneously. The choice of competitive structures depends on the priority of the outcomes in training. Our findings provide one explanation for the mixed findings on the effect of competitive gamification designs in technology mediated training.", "title": "" }, { "docid": "35c18e570a6ab44090c1997e7fe9f1b4", "text": "Online information maintenance through cloud applications allows users to store, manage, control and share their information with other users as well as Cloud service providers. There have been serious privacy concerns about outsourcing user information to cloud servers. But also due to an increasing number of cloud data security incidents happened in recent years. Proposed system is a privacy-preserving system using Attribute based Multifactor Authentication. Proposed system provides privacy to users data with efficient authentication and store them on cloud servers such that servers do not have access to sensitive user information. Meanwhile users can maintain full control over access to their uploaded ?les and data, by assigning ?ne-grained, attribute-based access privileges to selected files and data, while di?erent users can have access to di?erent parts of the System. This application allows clients to set privileges to different users to access their data.", "title": "" }, { "docid": "01c5231566670caa9a0ca94f8f5ef558", "text": "In recent years, many volumetric illumination models have been proposed, which have the potential to simulate advanced lighting effects and thus support improved image comprehension. Although volume ray-casting is widely accepted as the volume rendering technique which achieves the highest image quality, so far no volumetric illumination algorithm has been designed to be directly incorporated into the ray-casting process. In this paper we propose image plane sweep volume illumination (IPSVI), which allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. Thus, we are able to reduce the problem complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. 
Since all illumination computations are performed directly within a single rendering pass, IPSVI does not require any preprocessing nor does it need to store intermediate results within an illumination volume. It therefore has a significantly lower memory footprint than other techniques. This makes IPSVI directly applicable to large data sets. Furthermore, the integration into a GPU-based ray-caster allows for high image quality as well as improved rendering performance by exploiting early ray termination. This paper discusses the theory behind IPSVI, describes its implementation, demonstrates its visual results and provides performance measurements.", "title": "" }, { "docid": "2820f1623ab5c17e18c8a237156c2d36", "text": "In a two-tier heterogeneous network (HetNet) where small base stations (SBSs) coexist with macro base stations (MBSs), the SBSs may suffer significant performance degradation due to the inter- and intra-tier interferences. Introducing cognition into the SBSs through the spectrum sensing (e.g., carrier sensing) capability helps them detecting the interference sources and avoiding them via opportunistic access to orthogonal channels. In this paper, we use stochastic geometry to model and analyze the performance of two cases of cognitive SBSs in a multichannel environment, namely, the semi-cognitive case and the full-cognitive case. In the semi-cognitive case, the SBSs are only aware of the interference from the MBSs, hence, only inter-tier interference is minimized. On the other hand, in the full-cognitive case, the SBSs access the spectrum via a contention resolution process, hence, both the intra- and intertier interferences are minimized, but at the expense of reduced spectrum access opportunities. We quantify the performance gain in outage probability obtained by introducing cognition into the small cell tier for both the cases. We will focus on a special type of SBSs called the femto access points (FAPs) and also capture the effect of different admission control policies, namely, the open-access and closed-access policies. We show that a semi-cognitive SBS always outperforms a full-cognitive SBS and that there exists an optimal spectrum sensing threshold for the cognitive SBSs which can be obtained via the analytical framework presented in this paper.", "title": "" }, { "docid": "f578c9ea0ac7f28faa3d9864c0e43711", "text": "Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph convolutional networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. 
In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.", "title": "" }, { "docid": "0dcfd748b2ea70de8b84b9056eb79fc4", "text": "The number of resource-limited wireless devices utilized in many areas of Internet of Things is growing rapidly; there is a concern about privacy and security. Various lightweight block ciphers are proposed; this work presents a modified lightweight block cipher algorithm. A Linear Feedback Shift Register is used to replace the key generation function in the XTEA1 Algorithm. Using the same evaluation conditions, we analyzed the software implementation of the modified XTEA using FELICS (Fair Evaluation of Lightweight Cryptographic Systems) a benchmarking framework which calculates RAM footprint, ROM occupation and execution time on three largely used embedded devices: 8-bit AVR microcontroller, 16-bit MSP microcontroller and 32-bit ARM microcontroller. Implementation results show that it provides less software requirements compared to original XTEA. We enhanced the security level and the software performance.", "title": "" }, { "docid": "b9c54211575909291cbd4428781a3b05", "text": "The purpose is to arrive at recognition of multicolored objects invariant to a substantial change in viewpoint, object geometry and illumination. Assuming dichromatic reflectance and white illumination, it is shown that normalized color rgb, saturation S and hue H, and the newly proposed color models c 1 c 2 c 3 and l 1 l 2 l 3 are all invariant to a change in viewing direction, object geometry and illumination. Further, it is shown that hue H and l 1 l 2 l 3 are also invariant to highlights. Finally, a change in spectral power distribution of the illumination is considered to propose a new color constant color model m 1 m 2 m 3 . To evaluate the recognition accuracy differentiated for the various color models, experiments have been carried out on a database consisting of 500 images taken from 3-D multicolored man-made objects. The experimental results show that highest object recognition accuracy is achieved by l 1 l 2 l 3 and hue H followed by c 1 c 2 c 3 , normalized color rgb and m 1 m 2 m 3 under the constraint of white illumination. Also, it is demonstrated that recognition accuracy degrades substantially for all color features other than m 1 m 2 m 3 with a change in illumination color. The recognition scheme and images are available within the PicToSeek and Pic2Seek systems on-line at: http: //www.wins.uva.nl/research/isis/zomax/. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "0b28e0e8637a666d616a8c360d411193", "text": "As a novel dynamic network service infrastructure, Internet of Things (IoT) has gained remarkable popularity with obvious superiorities in the interoperability and real-time communication. Despite of the convenience in collecting information to provide the decision basis for the users, the vulnerability of embedded sensor nodes in multimedia devices makes the malware propagation a growing serious problem, which would harm the security of devices and their users financially and physically in wireless multimedia system (WMS). Therefore, many researches related to the malware propagation and suppression have been proposed to protect the topology and system security of wireless multimedia network. 
In these studies, the epidemic model is of great significance to the analysis of malware propagation. Considering the cloud and state transition of sensor nodes, a cloud-assisted model for malware detection and the dynamic differential game against malware propagation are proposed in this paper. Firstly, a SVM based malware detection model is constructed with the data sharing at the security platform in the cloud. Then the number of malware-infected nodes with physical infectivity to susceptible nodes is calculated precisely based on the attributes of WMS transmission. Then the state transition among WMS devices is defined by the modified epidemic model. Furthermore, a dynamic differential game and target cost function are successively derived for the Nash equilibrium between malware and WMS system. On this basis, a saddle-point malware detection and suppression algorithm is presented depending on the modified epidemic model and the computation of optimal strategies. Numerical results and comparisons show that the proposed algorithm can increase the utility of WMS efficiently and effectively.", "title": "" }, { "docid": "25deed9855199ef583524a2eef0456f0", "text": "We introduce a method for creating very dense reconstructions of datasets, particularly turn-table varieties. The method takes in initial reconstructions (of any origin) and makes them denser by interpolating depth values in two-dimensional image space within a superpixel region and then optimizing the interpolated value via image consistency analysis across neighboring images in the dataset. One of the core assumptions in this method is that depth values per pixel will vary gradually along a gradient for a given object. As such, turntable datasets, such as the dinosaur dataset, are particularly easy for our method. Our method modernizes some existing techniques and parallelizes them on a GPU, which produces results faster than other densification methods.", "title": "" }, { "docid": "2b8305c10f1105905f2a2f9651cb7c9f", "text": "Many distributed collective decision-making processes must balance diverse individual preferences with a desire for collective unity. We report here on an extensive session of behavioral experiments on biased voting in networks of individuals. In each of 81 experiments, 36 human subjects arranged in a virtual network were financially motivated to reach global consensus to one of two opposing choices. No payments were made unless the entire population reached a unanimous decision within 1 min, but different subjects were paid more for consensus to one choice or the other, and subjects could view only the current choices of their network neighbors, thus creating tensions between private incentives and preferences, global unity, and network structure. 
Along with analyses of how collective and individual performance vary with network structure and incentives generally, we find that there are well-studied network topologies in which the minority preference consistently wins globally; that the presence of \"extremist\" individuals, or the awareness of opposing incentives, reliably improve collective performance; and that certain behavioral characteristics of individual subjects, such as \"stubbornness,\" are strongly correlated with earnings.", "title": "" }, { "docid": "b347cea48fea5341737e315535ea57e5", "text": "1 EXTENDED ABSTRACT Real world interactions are full of coordination problems [2, 3, 8, 14, 15] and thus constructing agents that can solve them is an important problem for artificial intelligence research. One of the simplest, most heavily studied coordination problems is the matrixform, two-player Stag Hunt. In the Stag Hunt, each player makes a choice between a risky action (hunt the stag) and a safe action (forage for mushrooms). Foraging for mushrooms always yields a safe payoff while hunting yields a high payoff if the other player also hunts but a very low payoff if one shows up to hunt alone. This game has two important Nash equilibria: either both players show up to hunt (this is called the payoff dominant equilibrium) or both players stay home and forage (this is called the risk-dominant equilibrium [7]). In the Stag Hunt, when the payoff to hunting alone is sufficiently low, dyads of learners as well as evolving populations converge to the risk-dominant (safe) equilibrium [6, 8, 10, 11]. The intuition here is that even a slight amount of doubt about whether one’s partner will show up causes an agent to choose the safe action. This in turn causes partners to be less likely to hunt in the future and the system trends to the inefficient equilibrium. We are interested in the problem of agent design: our task is to construct an agent that will go into an initially poorly understood environment and make decisions. Our agent must learn from its experiences to update its policy and maximize some scalar reward. However, there will also be other agents which we do not control. These agents will also learn from their experiences. We ask: if the environment has Stag Hunt-like properties, can we make changes to our agent’s learning to improve its outcomes? We focus on reinforcement learning (RL), however, many of our results should generalize to other learning algorithms.", "title": "" }, { "docid": "9f5f79a19d3a181f5041a7b5911db03a", "text": "BACKGROUND\nNucleoside analogues against herpes simplex virus (HSV) have been shown to suppress shedding of HSV type 2 (HSV-2) on genital mucosal surfaces and may prevent sexual transmission of HSV.\n\n\nMETHODS\nWe followed 1484 immunocompetent, heterosexual, monogamous couples: one with clinically symptomatic genital HSV-2 and one susceptible to HSV-2. The partners with HSV-2 infection were randomly assigned to receive either 500 mg of valacyclovir once daily or placebo for eight months. The susceptible partner was evaluated monthly for clinical signs and symptoms of genital herpes. Source partners were followed for recurrences of genital herpes; 89 were enrolled in a substudy of HSV-2 mucosal shedding. Both partners were counseled on safer sex and were offered condoms at each visit. 
The predefined primary end point was the reduction in transmission of symptomatic genital herpes.\n\n\nRESULTS\nClinically symptomatic HSV-2 infection developed in 4 of 743 susceptible partners who were given valacyclovir, as compared with 16 of 741 who were given placebo (hazard ratio, 0.25; 95 percent confidence interval, 0.08 to 0.75; P=0.008). Overall, acquisition of HSV-2 was observed in 14 of the susceptible partners who received valacyclovir (1.9 percent), as compared with 27 (3.6 percent) who received placebo (hazard ratio, 0.52; 95 percent confidence interval, 0.27 to 0.99; P=0.04). HSV DNA was detected in samples of genital secretions on 2.9 percent of the days among the HSV-2-infected (source) partners who received valacyclovir, as compared with 10.8 percent of the days among those who received placebo (P<0.001). The mean rates of recurrence were 0.11 per month and 0.40 per month, respectively (P<0.001).\n\n\nCONCLUSIONS\nOnce-daily suppressive therapy with valacyclovir significantly reduces the risk of transmission of genital herpes among heterosexual, HSV-2-discordant couples.", "title": "" }, { "docid": "332a30e8d03d4f8cc03e7ab9b809ec9f", "text": "The study of electromyographic (EMG) signals has gained increased attention in the last decades since the proper analysis and processing of these signals can be instrumental for the diagnosis of neuromuscular diseases and the adaptive control of prosthetic devices. As a consequence, various pattern recognition approaches, consisting of different modules for feature extraction and classification of EMG signals, have been proposed. In this paper, we conduct a systematic empirical study on the use of Fractal Dimension (FD) estimation methods as feature extractors from EMG signals. The usage of FD as feature extraction mechanism is justified by the fact that EMG signals usually show traces of selfsimilarity and by the ability of FD to characterize and measure the complexity inherent to different types of muscle contraction. In total, eight different methods for calculating the FD of an EMG waveform are considered here, and their performance as feature extractors is comparatively assessed taking into account nine well-known classifiers of different types and complexities. Results of experiments conducted on a dataset involving seven distinct types of limb motions are reported whereby we could observe that the normalized version of the Katz's estimation method and the Hurst exponent significantly outperform the others according to a class separability measure and five well-known accuracy measures calculated over the induced classifiers. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f5c4bdf959e455193221a1fa76e1895a", "text": "This book contains a wide variety of hot topics on advanced computational intelligence methods which incorporate the concept of complex and hypercomplex number systems into the framework of artificial neural networks. In most chapters, the theoretical descriptions of the methodology and its applications to engineering problems are excellently balanced. This book suggests that a better information processing method could be brought about by selecting a more appropriate information representation scheme for specific problems, not only in artificial neural networks but also in other computational intelligence frameworks. The advantages of CVNNs and hypercomplex-valued neural networks over real-valued neural networks are confirmed in some case studies but still unclear in general. 
Hence, there is a need to further explore the difference between them from the viewpoint of nonlinear dynamical systems. Nevertheless, it seems that the applications of CVNNs and hypercomplex-valued neural networks are very promising.", "title": "" }, { "docid": "14835b93b580081b0398e5e370b72c2c", "text": "In order for autonomous vehicles to achieve life-long operation in outdoor environments, navigation systems must be able to cope with visual change—whether it’s short term, such as variable lighting or weather conditions, or long term, such as different seasons. As a Global Positioning System (GPS) is not always reliable, autonomous vehicles must be self sufficient with onboard sensors. This thesis examines the problem of localisation against a known map across extreme lighting and weather conditions using only a stereo camera as the primary sensor. The method presented departs from traditional techniques that blindly apply out-of-the-box interest-point detectors to all images of all places. This naive approach fails to take into account any prior knowledge that exists about the environment in which the robot is operating. Furthermore, the point-feature approach often fails when there are dramatic appearance changes, as associating low-level features such as corners or edges is extremely difficult and sometimes not possible. By leveraging knowledge of prior appearance, this thesis presents an unsupervised method for learning a set of distinctive and stable (i.e., stable under appearance changes) feature detectors that are unique to a specific place in the environment. In other words, we learn place-dependent feature detectors that enable vastly superior performance in terms of robustness in exchange for a reduced, but tolerable metric precision. By folding in a method for masking distracting objects in dynamic environments and examining a simple model for external illuminates, such as the sun, this thesis presents a robust localisation system that is able to achieve metric estimates from night-today or summer-to-winter conditions. Results are presented from various locations in the UK, including the Begbroke Science Park, Woodstock, Oxford, and central London. Statement of Authorship This thesis is submitted to the Department of Engineering Science, University of Oxford, in fulfilment of the requirements for the degree of Doctor of Philosophy. This thesis is entirely my own work, and except where otherwise stated, describes my own research. Colin McManus, Lady Margaret Hall Funding The work described in this thesis was funded by Nissan Motors.", "title": "" }, { "docid": "4973ce25e2a638c3923eda62f92d98b2", "text": "About 20 ethnic groups reside in Mongolia. On the basis of genetic and anthropological studies, it is believed that Mongolians have played a pivotal role in the peopling of Central and East Asia. However, the genetic relationships among these ethnic groups have remained obscure, as have their detailed relationships with adjacent populations. We analyzed 16 binary and 17 STR polymorphisms of human Y chromosome in 669 individuals from nine populations, including four indigenous ethnic groups in Mongolia (Khalkh, Uriankhai, Zakhchin, and Khoton). Among these four Mongolian populations, the Khalkh, Uriankhai, and Zakhchin populations showed relatively close genetic affinities to each other and to Siberian populations, while the Khoton population showed a closer relationship to Central Asian populations than to even the other Mongolian populations. 
These findings suggest that the major Mongolian ethnic groups have a close genetic affinity to populations in northern East Asia, although the genetic link between Mongolia and Central Asia is not negligible.", "title": "" }, { "docid": "f79472b17396fd180821b0c02fe92939", "text": "Bull breeds are commonly kept as companion animals, but the pit bull terrier is restricted by breed-specific legislation (BSL) in parts of the United States and throughout the United Kingdom. Shelter workers must decide which breed(s) a dog is. This decision may influence the dog's fate, particularly in places with BSL. In this study, shelter workers in the United States and United Kingdom were shown pictures of 20 dogs and were asked what breed each dog was, how they determined each dog's breed, whether each dog was a pit bull, and what they expected the fate of each dog to be. There was much variation in responses both between and within the United States and United Kingdom. UK participants frequently labeled dogs commonly considered by U.S. participants to be pit bulls as Staffordshire bull terriers. UK participants were more likely to say their shelters would euthanize dogs deemed to be pit bulls. Most participants noted using dogs' physical features to determine breed, and 41% affected by BSL indicated they would knowingly mislabel a dog of a restricted breed, presumably to increase the dog's adoption chances.", "title": "" } ]
scidocsrr
cf97dbb648e0dae77fe6beda8c26924f
Predicting Ego-Vehicle Paths from Environmental Observations with a Deep Neural Network
[ { "docid": "2ecd0bf132b3b77dc1625ef8d09c925b", "text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.", "title": "" }, { "docid": "9bae1002ee5ebf0231fe687fd66b8bb5", "text": "We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.", "title": "" } ]
[ { "docid": "c120406dd4e60a9bb33dd4a87cbd3616", "text": "Intersubjectivity is an important concept in psychology and sociology. It refers to sharing conceptualizations through social interactions in a community and using such shared conceptualization as a resource to interpret things that happen in everyday life. In this work, we make use of intersubjectivity as the basis to model shared stance and subjectivity for sentiment analysis. We construct an intersubjectivity network which links review writers, terms they used, as well as the polarities of the terms. Based on this network model, we propose a method to learn writer embeddings which are subsequently incorporated into a convolutional neural network for sentiment analysis. Evaluations on the IMDB, Yelp 2013 and Yelp 2014 datasets show that the proposed approach has achieved the state-of-the-art performance.", "title": "" }, { "docid": "ce0b0543238a81c3f02c43e63a285605", "text": "Hatebusters is a web application for actively reporting YouTube hate speech, aiming to establish an online community of volunteer citizens. Hatebusters searches YouTube for videos with potentially hateful comments, scores their comments with a classifier trained on human-annotated data and presents users those comments with the highest probability of being hate speech. It also employs gamification elements, such as achievements and leaderboards, to drive user engagement.", "title": "" }, { "docid": "7f8ca7d8d2978bfc08ab259fba60148e", "text": "Over the last few years, much online volunteered geographic information (VGI) has emerged and has been increasingly analyzed to understand places and cities, as well as human mobility and activity. However, there are concerns about the quality and usability of such VGI. In this study, we demonstrate a complete process that comprises the collection, unification, classification and validation of a type of VGI—online point-of-interest (POI) data—and develop methods to utilize such POI data to estimate disaggregated land use (i.e., employment size by category) at a very high spatial resolution (census block level) using part of the Boston metropolitan area as an example. With recent advances in activity-based land use, transportation, and environment (LUTE) models, such disaggregated land use data become important to allow LUTE models to analyze and simulate a person’s choices of work location and activity destinations and to understand policy impacts on future cities. These data can also be used as alternatives to explore economic activities at the local level, especially as government-published census-based disaggregated employment data have become less available in the recent decade. Our new approach provides opportunities for cities to estimate land use at high resolution with low cost by utilizing VGI while ensuring its quality with a certain accuracy threshold. The automatic classification of POI can also be utilized for other types of analyses on cities. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0cfda368edafe21e538f2c1d7ed75056", "text": "This paper presents high performance speaker identification and verification systems based on Gaussian mixture speaker models: robust, statistically based representations of speaker identity. The identification system is a maximum likelihood classifier and the verification system is a likelihood ratio hypothesis tester using background speaker normalization. 
The systems are evaluated on four publically available speech databases: TIMIT, NTIMIT, Switchboard and YOHO. The different levels of degradations and variabilities found in these databases allow the examination of system performance for different task domains. Constraints on the speech range from vocabulary-dependent to extemporaneous and speech quality varies from near-ideal, clean speech to noisy, telephone speech. Closed set identification accuracies on the 630 speaker TIMIT and NTIMIT databases were 99.5% and 60.7%, respectively. On a 113 speaker population from the Switchboard database the identification accuracy was 82.8%. Global threshold equal error rates of 0.24%, 7.19%, 5.15% and 0.51% were obtained in verification experiments on the TIMIT, NTIMIT, Switchboard and YOHO databases, respectively.", "title": "" }, { "docid": "19350a76398e0054be44c73618cdfb33", "text": "An emerging class of data-intensive applications involve the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.", "title": "" }, { "docid": "945ead15b96ed06a15b12372b4787fcf", "text": "We describe the development and testing of ab initio derived, AMBER ff03 compatible charge parameters for a large library of 147 noncanonical amino acids including β- and N-methylated amino acids for use in applications such as protein structure prediction and de novo protein design. The charge parameter derivation was performed using the RESP fitting approach. Studies were performed assessing the suitability of the derived charge parameters in discriminating the activity/inactivity between 63 analogs of the complement inhibitor Compstatin on the basis of previously published experimental IC50 data and a screening procedure involving short simulations and binding free energy calculations. We found that both the approximate binding affinity (K*) and the binding free energy calculated through MM-GBSA are capable of discriminating between active and inactive Compstatin analogs, with MM-GBSA performing significantly better. Key interactions between the most potent Compstatin analog that contains a noncanonical amino acid are presented and compared to the most potent analog containing only natural amino acids and native Compstatin. 
We make the derived parameters and an associated web interface that is capable of performing modifications on proteins using Forcefield_NCAA and outputting AMBER-ready topology and parameter files freely available for academic use at http://selene.princeton.edu/FFNCAA . The forcefield allows one to incorporate these customized amino acids into design applications with control over size, van der Waals, and electrostatic interactions.", "title": "" }, { "docid": "1fbf45145e6ce4b37e3b840a80733ce7", "text": "Ionic liquids (ILs) comprise an extremely broad class of molten salts that are attractive for many practical applications because of their useful combinations of properties [1-3]. The ability to mix and match the cationic and anionic constituents of ILs and functionalize their side chains. These allow amazing tenability of IL properties, including conductivity, viscosi‐ ty, solubility of diverse solutes and miscibility/ immiscibility with a wide range of solvents. [4] Over the past several years, room temperature ILs (RTILs) has generated considerable excitement, as they consist entirely of ions, yet in liquid state and possess minimal vapour pressure. Consequently, ILs can be recycled, thus making synthetic processes less expensive and potentially more efficient and environmentally friendly. Considerable progress has been made using ILs as solvents in the areas of monophasic and biphasic catalysis (homoge‐ neus and heterogeneous).[5-6] The ILs investigated herein provides real practical advantag‐ es over earlier molten salt (high temperature) systems because of their relative insensitivity to air and water. [6-7] A great deal of progress has been made during last five years towards identifying the factors that cause these salts to have low melting points and other useful properties.[8] ILs are subject of intense current interest within the physical chemistry com‐ munity as well. There have been quite a lot of photophysical studies in ionic liquids. [8] The most important properties of ionic liquids are: thermal stability, low vapour pressure, elec‐ tric conductivity, liquid crystal structures, high electro-elasticity, high heat capacity and in‐ flammability properties enable the use of ionic liquids in a wide range of applications, as shown in Figure 1. It is also a suitable solvent for synthesis, [5, 8, 9-12] catalysis [6, 8, 13] and purification. [14-18] It is also used in electrochemical devices and processes, such as re‐ chargeable lithium batteries and electrochemical capacitors, etc.[19] Rechargeable Lithium", "title": "" }, { "docid": "d49d405fc765b647b39dc9ef1b4d6ba9", "text": "The World Wide Web plays an important role while searching for information in the data network. Users are constantly exposed to an ever-growing flood of information. Our approach will help in searching for the exact user relevant content from multiple search engines thus, making the search more efficient and reliable. Our framework will extract the relevant result records based on two approaches i.e. Stored URL list and Run time Generated URL list. Finally, the unique set of records is displayed in a common framework's search result page. The extraction is performed using the concepts of Document Object Model (DOM) tree. The paper comprises of a concept of threshold and data filters to detect and remove irrelevant & redundant data from the web page. The data filters will also be used to further improve the similarity check of data records. 
Our system will be able to extract 75%-80% user relevant content by eliminating noisy content from the different structured web pages like blogs, forums, articles etc. in the dynamic environment. Our approach shows significant advantages in both precision and recall.", "title": "" }, { "docid": "f159ee79d20f00194402553758bcd031", "text": "Recently, narrowband Internet of Things (NB-IoT), one of the most promising low power wide area (LPWA) technologies, has attracted much attention from both academia and industry. It has great potential to meet the huge demand for machine-type communications in the era of IoT. To facilitate research on and application of NB-IoT, in this paper, we design a system that includes NB devices, an IoT cloud platform, an application server, and a user app. The core component of the system is to build a development board that integrates an NB-IoT communication module and a subscriber identification module, a micro-controller unit and power management modules. We also provide a firmware design for NB device wake-up, data sensing, computing and communication, and the IoT cloud configuration for data storage and analysis. We further introduce a framework on how to apply the proposed system to specific applications. The proposed system provides an easy approach to academic research as well as commercial applications.", "title": "" }, { "docid": "3ed3b4f507c32f6423ca3918fa3eb843", "text": "In recent years, it has been clearly evidenced that most cells in a human being are not human: they are microbial, represented by more than 1000 microbial species. The vast majority of microbial species give rise to symbiotic host-bacterial interactions that are fundamental for human health. The complex of these microbial communities has been defined as microbiota or microbiome. These bacterial communities, forged over millennia of co-evolution with humans, are at the basis of a partnership with the developing human newborn, which is based on reciprocal molecular exchanges and cross-talking. Recent data on the role of the human microbiota in newborns and children clearly indicate that microbes have a potential importance to pediatrics, contributing to host nutrition, developmental regulation of intestinal angiogenesis, protection from pathogens, and development of the immune system. This review is aimed at reporting the most recent data on the knowledge of microbiota origin and development in the human newborn, and on the multiple factors influencing development and maturation of our microbiota, including the use and abuse of antibiotic therapies.", "title": "" }, { "docid": "4d0b04f546ab5c0d79bb066b1431ff51", "text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. 
Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a17052726cbf3239c3f516b51af66c75", "text": "Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part, or in whole, for a variety of reasons. Programmers may simply be reusing a piece of code via copy and paste or they may be “reinventing the wheel”. Previous research on the detection of clones is mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach is to examine the source code text (comments and identifiers) and identify implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique (i.e., latent semantic indexing) to statically analyze the software system and determine semantic similarities between source code documents (i.e., functions, files, or code segments). These similarity measures are used to drive the clone detection process. The intention of our approach is to enhance and augment existing clone detection methods that are based on structural analysis. This synergistic use of methods will improve the quality of clone detection. A set of experiments is presented that demonstrate the usage of semantic similarity measure to identify clones within a version of NCSA Mosaic.", "title": "" }, { "docid": "366f31829bb1ac55d195acef880c488e", "text": "Intense competition among a vast number of group-buying websites leads to higher product homogeneity, which allows customers to switch to alternative websites easily and reduce their website stickiness and loyalty. This study explores the antecedents of user stickiness and loyalty and their effects on consumers’ group-buying repurchase intention. Results indicate that systems quality, information quality, service quality, and alternative system quality each has a positive relationship with user loyalty through user stickiness. Meanwhile, information quality directly impacts user loyalty. Thereafter, user stickiness and loyalty each has a positive relationship with consumers’ repurchase intention. Theoretical and managerial implications are also discussed.", "title": "" }, { "docid": "445d57e24150087a866fc34ddb422184", "text": "A survey of the major techniques used in the design of microwave filters is presented in this paper. It is shown that the basis for much fundamental microwave filter theory lies in the realm of lumped-element filters, which indeed are actually used directly for many applications at microwave frequencies as high as 18 GHz. Many types of microwave filters are discussed with the object of pointing out the most useful references, especially for a newcomer to the field.", "title": "" }, { "docid": "8f2a36d188e9efb614d4b324188c83d5", "text": "Neurobiologically inspired algorithms have been developed to continuously learn behavioral patterns at a variety of conceptual, spatial, and temporal levels. In this paper, we outline our use of these algorithms for situation awareness in the maritime domain. Our algorithms take real-time tracking information and learn motion pattern models on-the-fly, enabling the models to adapt well to evolving situations while maintaining high levels of performance. 
The constantly refined models, resulting from concurrent incremental learning, are used to evaluate the behavior patterns of vessels based on their present motion states. At the event level, learning provides the capability to detect (and alert) upon anomalous behavior. At a higher (inter-event) level, learning enables predictions, over pre-defined time horizons, to be made about future vessel location. Predictions can also be used to alert on anomalous behavior. Learning is context-specific and occurs at multiple levels: for example, for individual vessels as well as classes of vessels. Features and performance of our learning system using recorded data are described", "title": "" }, { "docid": "b1823c456360037d824614a6cf4eceeb", "text": "This paper provides an overview of the Industrial Internet with the emphasis on the architecture, enabling technologies, applications, and existing challenges. The Industrial Internet is enabled by recent rising sensing, communication, cloud computing, and big data analytic technologies, and has been receiving much attention in the industrial section due to its potential for smarter and more efficient industrial productions. With the merge of intelligent devices, intelligent systems, and intelligent decisioning with the latest information technologies, the Industrial Internet will enhance the productivity, reduce cost and wastes through the entire industrial economy. This paper starts by investigating the brief history of the Industrial Internet. We then present the 5C architecture that is widely adopted to characterize the Industrial Internet systems. Then, we investigate the enabling technologies of each layer that cover from industrial networking, industrial intelligent sensing, cloud computing, big data, smart control, and security management. This provides the foundations for those who are interested in understanding the essence and key enablers of the Industrial Internet. Moreover, we discuss the application domains that are gradually transformed by the Industrial Internet technologies, including energy, health care, manufacturing, public section, and transportation. Finally, we present the current technological challenges in developing Industrial Internet systems to illustrate open research questions that need to be addressed to fully realize the potential of future Industrial Internet systems.", "title": "" }, { "docid": "a8699e1ed8391e5a55fbd79ae3ac0972", "text": "The benefits of an e-learning system will not be maximized unless learners use the system. This study proposed and tested alternative models that seek to explain student intention to use an e-learning system when the system is used as a supplementary learning tool within a traditional class or a stand-alone distance education method. The models integrated determinants from the well-established technology acceptance model as well as system and participant characteristics cited in the research literature. Following a demonstration and use phase of the e-learning system, data were collected from 259 college students. Structural equation modeling provided better support for a model that hypothesized stronger effects of system characteristics on e-learning system use. Implications for both researchers and practitioners are discussed. 2004 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "d6ebe4bacd4a9cea920cfb18aebd5f28", "text": "Contents: Abstract; Introduction; Key MOSFET Electrical Parameters in Class D Audio Amplifiers (Drain Source Breakdown Voltage BVDSS, Static Drain-to-Source On Resistance RDS(on), Gate Charge Qg, Body Diode Reverse Recovery Charge Qrr, Internal Gate Resistance RG(int), MOSFET Package, Maximum Junction Temperature); International Rectifier Digital Audio MOSFET; Conclusions; References.", "title": "" }, { "docid": "0869a75f158b04513c848bc7bfb10e37", "text": "Tracking of multiple objects is an important application in AI City geared towards solving salient problems related to safety and congestion in an urban environment. Frequent occlusion in traffic surveillance has been a major problem in this research field. In this challenge, we propose a model-based vehicle localization method, which builds a kernel at each patch of the 3D deformable vehicle model and associates them with constraints in 3D space. The proposed method utilizes shape fitness evaluation besides color information to track vehicle objects robustly and efficiently. To build 3D car models in a fully unsupervised manner, we also implement evolutionary camera self-calibration from tracking of walking humans to automatically compute camera parameters. Additionally, the segmented foreground masks which are crucial to 3D modeling and camera self-calibration are adaptively refined by multiple-kernel feedback from tracking. For object detection/classification, the state-of-the-art single shot multibox detector (SSD) is adopted to train and test on the NVIDIA AI City Dataset. To improve the accuracy on categories with only a few objects, like bus, bicycle and motorcycle, we also employ the pretrained model from YOLO9000 with multiscale testing. We combine the results from SSD and YOLO9000 based on ensemble learning. Experiments show that our proposed tracking system outperforms the state of the art in both tracking by segmentation and tracking by detection. Keywords—multiple object tracking, constrained multiple kernels, 3D deformable model, camera self-calibration, adaptive segmentation, object detection, object classification", "title": "" }, { "docid": "8f4c629147db41356763de733aea618b", "text": "The application of simulation software in the planning process is state-of-the-art at many railway infrastructure managers. On the one hand software tools are used to point out the demand for new infrastructure, and on the other hand they are used to optimize traffic flow in railway networks by supporting timetable-related processes. 
This paper deals with the first application of the software tool called OPENTRACK for simulation of railway operation on an existing line in Croatia from Zagreb to Karlovac. The aim of the work was to find out whether the current version of OPENTRACK is able to take the Croatian signalling system into account, which would also open the possibility of using it for other investigations of railway operation.", "title": "" } ]
scidocsrr
21737b8c6c37aef3918d38a66552e5d2
A unified Bayesian framework for MEG/EEG source imaging
[ { "docid": "62d39d41523bca97939fa6a2cf736b55", "text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.", "title": "" } ]
[ { "docid": "6824f227a05b30b9e09ea9a4d16429b0", "text": "This study presents a Long Short-Term Memory (LSTM) neural network approach to Japanese word segmentation (JWS). Previous studies on Chinese word segmentation (CWS) succeeded in using recurrent neural networks such as LSTM and gated recurrent units (GRU). However, in contrast to Chinese, Japanese includes several character types, such as hiragana, katakana, and kanji, that produce orthographic variations and increase the difficulty of word segmentation. Additionally, it is important for JWS tasks to consider a global context, and yet traditional JWS approaches rely on local features. In order to address this problem, this study proposes employing an LSTMbased approach to JWS. The experimental results indicate that the proposed model achieves state-of-the-art accuracy with respect to various Japanese corpora.", "title": "" }, { "docid": "338efe667e608779f4f41d1cdb1839bb", "text": "In ASP.NET, Programmers maybe use POST or GET to pass parameter's value. Two methods are easy to come true. But In ASP.NET, It is not easy to pass parameter's value. In ASP.NET, Programmers maybe use many methods to pass parameter's value, such as using Application, Session, Querying, Cookies, and Forms variables. In this paper, by way of pass value from WebForm1.aspx to WebForm2.aspx and show out the value on WebForm2. We can give and explain actually examples in ASP.NET language to introduce these methods.", "title": "" }, { "docid": "7e4e5472e5ee0b25511975f3422d2173", "text": "Most people with Parkinson's disease (PD) fall and many experience recurrent falls. The aim of this review was to examine the scope of recurrent falls and to identify factors associated with recurrent fallers. A database search for journal articles which reported prospectively collected information concerning recurrent falls in people with PD identified 22 studies. In these studies, 60.5% (range 35 to 90%) of participants reported at least one fall, with 39% (range 18 to 65%) reporting recurrent falls. Recurrent fallers reported an average of 4.7 to 67.6 falls per person per year (overall average 20.8 falls). Factors associated with recurrent falls include: a positive fall history, increased disease severity and duration, increased motor impairment, treatment with dopamine agonists, increased levodopa dosage, cognitive impairment, fear of falling, freezing of gait, impaired mobility and reduced physical activity. The wide range in the frequency of recurrent falls experienced by people with PD suggests that it would be beneficial to classify recurrent fallers into sub-groups based on fall frequency. Given that there are several factors particularly associated with recurrent falls, fall management and prevention strategies specifically targeting recurrent fallers require urgent evaluation in order to inform clinical practice.", "title": "" }, { "docid": "9869bc5dfc8f20b50608f0d68f7e49ba", "text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. 
By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.", "title": "" }, { "docid": "0177729f2d7fc610bd8e55a93a93b03b", "text": "Preference-based recommendation systems have transformed how we consume media. By analyzing usage data, these methods uncover our latent preferences for items (such as articles or movies) and form recommendations based on the behavior of others with similar tastes. But traditional preference-based recommendations do not account for the social aspect of consumption, where a trusted friend might point us to an interesting item that does not match our typical preferences. In this work, we aim to bridge the gap between preference- and social-based recommendations. We develop social Poisson factorization (SPF), a probabilistic model that incorporates social network information into a traditional factorization method; SPF introduces the social aspect to algorithmic recommendation. We develop a scalable algorithm for analyzing data with SPF, and demonstrate that it outperforms competing methods on six real-world datasets; data sources include a social reader and Etsy.", "title": "" }, { "docid": "e0633afb6f4dcb1561dbb23b6e3aa713", "text": "Software security vulnerabilities are one of the critical issues in the realm of computer security. Due to their potential high severity impacts, many different approaches have been proposed in the past decades to mitigate the damages of software vulnerabilities. Machine-learning and data-mining techniques are also among the many approaches to address this issue. In this article, we provide an extensive review of the many different works in the field of software vulnerability analysis and discovery that utilize machine-learning and data-mining techniques. We review different categories of works in this domain, discuss both advantages and shortcomings, and point out challenges and some uncharted territories in the field.", "title": "" }, { "docid": "371d71e1f8cb0881e23f2fc1423baca3", "text": "Positional asphyxia refers to a situation where there is compromise of respiration because of splinting of the chest and/or diaphragm preventing normal respiratory excursion, or occlusion of the upper airway due to abnormal positioning of the body. Examination of autopsy files at Forensic Science SA revealed instances where positional asphyxia resulted from inadvertent positioning that compromised respiration due to intoxication, multiple sclerosis, epilepsy, Parkinson disease, Steele-Richardson-Olszewski syndrome, Lafora disease and quadriplegia. While the manner of death was accidental in most cases, in one instance suicide could not be ruled out. We would not exclude the possibility of individuals with significant cardiac disease succumbing to positional asphyxia, as cardiac disease may be either unrelated to the terminal episode or, alternatively, may result in collapse predisposing to positional asphyxia. 
Victims of positional asphyxia do not extricate themselves from dangerous situations due to impairment of cognitive responses and coordination resulting from intoxication, sedation, neurological diseases, loss of consciousness, physical impairment or physical restraints.", "title": "" }, { "docid": "1466bdb9a7f5662c8a15de9009bc7687", "text": "Mining opinions and analyzing sentiments from social network data help in various fields such as even prediction, analyzing overall mood of public on a particular social issue and so on. This paper involves analyzing the mood of the society on a particular news from Twitter posts. The key idea of the paper is to increase the accuracy of classification by including Natural Language Processing Techniques (NLP) especially semantics and Word Sense Disambiguation. The mined text information is subjected to Ensemble classification to analyze the sentiment. Ensemble classification involves combining the effect of various independent classifiers on a particular classification problem. Experiments conducted demonstrate that ensemble classifier outperforms traditional machine learning classifiers by 3-5%.", "title": "" }, { "docid": "587ee07095b4bd1189e3bb0af215fa95", "text": "This paper discusses dynamic factor analysis, a technique for estimating common trends in multivariate time series. Unlike more common time series techniques such as spectral analysis and ARIMA models, dynamic factor analysis can analyse short, non-stationary time series containing missing values. Typically, the parameters in dynamic factor analysis are estimated by direct optimisation, which means that only small data sets can be analysed if computing time is not to become prohibitively long and the chances of obtaining sub-optimal estimates are to be avoided. This paper shows how the parameters of dynamic factor analysis can be estimated using the EM algorithm, allowing larger data sets to be analysed. The technique is illustrated on a marine environmental data set.", "title": "" }, { "docid": "66fa9b79b1034e1fa3bf19857b5367c2", "text": "We propose a boundedly-rational model of opinion formation in which individuals are subject to persuasion bias; that is, they fail to account for possible repetition in the information they receive. We show that persuasion bias implies the phenomenon of social influence, whereby one’s influence on group opinions depends not only on accuracy, but also on how well-connected one is in the social network that determines communication. Persuasion bias also implies the phenomenon of unidimensional opinions; that is, individuals’ opinions over a multidimensional set of issues converge to a single “left-right” spectrum. We explore the implications of our model in several natural settings, including political science and marketing, and we obtain a number of novel empirical implications. DeMarzo and Zwiebel: Graduate School of Business, Stanford University, Stanford CA 94305, Vayanos: MIT Sloan School of Management, 50 Memorial Drive E52-437, Cambridge MA 02142. This paper is an extensive revision of our paper, “A Model of Persuasion – With Implication for Financial Markets,” (first draft, May 1997). 
", "title": "" }, { "docid": "fc431a3c46bdd4fa4ad83b9af10c0922", "text": "The importance of the kidney's role in glucose homeostasis has gained wider understanding in recent years. Consequently, the development of a new pharmacological class of anti-diabetes agents targeting the kidney has provided new treatment options for the management of type 2 diabetes mellitus (T2DM). Sodium glucose co-transporter type 2 (SGLT2) inhibitors, such as dapagliflozin, canagliflozin, and empagliflozin, decrease renal glucose reabsorption, which results in enhanced urinary glucose excretion and subsequent reductions in plasma glucose and glycosylated hemoglobin concentrations. Modest reductions in body weight and blood pressure have also been observed following treatment with SGLT2 inhibitors. SGLT2 inhibitors appear to be generally well tolerated, and have been used safely when given as monotherapy or in combination with other oral anti-diabetes agents and insulin. The risk of hypoglycemia is low with SGLT2 inhibitors. Typical adverse events appear to be related to the presence of glucose in the urine, namely genital mycotic infection and lower urinary tract infection, and are more often observed in women than in men. Data from long-term safety studies with SGLT2 inhibitors and from head-to-head SGLT2 inhibitor comparator studies are needed to fully determine their benefit-risk profile, and to identify any differences between individual agents. However, given current safety and efficacy data, SGLT2 inhibitors may present an attractive option for T2DM patients who are failing with metformin monotherapy, especially if weight is part of the underlying treatment consideration.", "title": "" }, { "docid": "d041a5fc5f788b1abd8abf35a26cb5d2", "text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. 
We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.", "title": "" }, { "docid": "a1cd5424dea527e365f038fce60fd821", "text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.", "title": "" }, { "docid": "ead196a54f4ea7b5a1fe4b5b85f0b2c6", "text": "Supervised machine learning and opinion lexicon are the most frequent approaches for opinion mining, but they require considerable effort to prepare the training data and to build the opinion lexicon, respectively. In this paper, a novel unsupervised clustering approach is proposed for opinion mining. 
Three swarm algorithms based on Particle Swarm Optimization are evaluated using three corpora with different levels of complexity with respect to size, number of opinions, domains, languages, and class balancing. K-means and Agglomerative clustering algorithms, as well as, the Artificial Bee Colony and Cuckoo Search swarm-based algorithms were selected for comparison. The proposed swarm-based algorithms achieved better accuracy using the word bigram feature model as the pre-processing technique, the Global Silhouette as optimization function, and on datasets with two classes: positive and negative. Although the swarm-based algorithms obtained lower result for datasets with three classes, they are still competitive considering that neither labeled data, nor opinion lexicons are required for the opinion clustering approach.", "title": "" }, { "docid": "50f5bb2f0c71bf0d529a0e65cd6066b3", "text": "It would be a significant understatement to say that sales promotion is enjoying a dominant role in the promotional mixes of most consumer goods companies. The 1998 Cox Direct 20th Annual Survey of Promotional Practices suggests that many companies spend as much as 75% of their total promotional budgets on sales promotion and only 25% on advertising. This is up from 57% spent on sales promotions in 1981 (Landler and DeGeorge). The reasons for this unprecedented growth have been welldocumented. Paramount among these is the desire on the part of many organizations for a quick bolstering of sales. The obvious corollary to this is the desire among consumer groups for increased value in the products they buy. Value can be defined as the ratio of perceived benefits to price, and is linked to performance and meeting consumers' expectations (Zeithaml 1988). In today's value-conscious environment, marketers must stress the overall value of their products (Blackwell, Miniard and Engel 2001). Consumers have reported that coupons, price promotions and good value influence 75 80% of their brand choice decisions (Cox 1998). Today, \"many Americans, brought up on a steady diet of commercials, view advertising with cynicism or indifference. With less money to shop, they're far more apt to buy on price\" (Landler and DeGeorge 1991, 68).", "title": "" }, { "docid": "8b64d5f3c59737369e2e6d8a12fc4c20", "text": "A microcontroller based advanced technique of generating sine wave with lowest harmonics is designed and implemented in this paper. The main objective of our proposed technique is to design a low cost, low harmonics voltage source inverter. In our project we used PIC16F73 microcontroller to generate 4 KHz pwm switching signal. The design is essentially focused upon low power electronic appliances such as light, fan, chargers, television etc. In our project we used STP55NF06 NMOSFET, which is a depletion type N channel MOSFET. For driving the MOSFET we used TLP250 and totem pole configuration as a MOSFET driver. The inverter input is 12VDC and its output is 220VAC across a transformer. The complete design is modeled in proteus software and its output is verified practically.", "title": "" }, { "docid": "2fbd1b2e25473affb40990195b26a88b", "text": "In this paper we considerably improve on a state-of-the-art alpha matting approach by incorporating a new prior which is based on the image formation process. In particular, we model the prior probability of an alpha matte as the convolution of a high-resolution binary segmentation with the spatially varying point spread function (PSF) of the camera. 
Our main contribution is a new and efficient de-convolution approach that recovers the prior model, given an approximate alpha matte. By assuming that the PSF is a kernel with a single peak, we are able to recover the binary segmentation with an MRF-based approach, which exploits flux and a new way of enforcing connectivity. The spatially varying PSF is obtained via a partitioning of the image into regions of similar defocus. Incorporating our new prior model into a state-of-the-art matting technique produces results that outperform all competitors, which we confirm using a publicly available benchmark.", "title": "" }, { "docid": "3bd6674bec87cd46d8e43d4e4ec09574", "text": "We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.", "title": "" }, { "docid": "5f22c60d28394ff73f7b2b73d68de5a0", "text": "Educational programming environments such as Microsoft Research's Kodu Game Lab are often used to introduce novices to computer science concepts and programming. Unlike many other educational languages that rely on scripting and Java-like syntax, the Kodu language is entirely event-driven and programming takes the form of \"when\" do' clauses. Despite this simplistic programing model, many computer science concepts can be expressed using Kodu. We identify and measure the frequency of these concepts in 346 Kodu programs created by users, and find that most programs exhibit sophistication through the use of complex control flow and boolean logic. Through Kodu's non-traditional language, we show that users express and explore fundamental computer science concepts.", "title": "" }, { "docid": "09dc061dfb788aa8ef2d1e88188157d6", "text": "A wideband dual-polarized slot-coupled stacked patch antenna operating in the UMTS (1920-2170 MHz), WLAN (2.4-2.484 GHz), and UMTS II (2500-2690 MHz) frequency bands is described. Measurements on a prototype of the proposed patch antenna confirm good performance in terms of both impedance matching and isolation", "title": "" } ]
scidocsrr
e1352b45ac4e6f358d9b0456554428ee
Intuitionistic fuzzy based DEMATEL method for developing green practices and performances in a green supply chain
[ { "docid": "becfa1ab7a936ea022f846fcf1466822", "text": "Supply chain management (SCM) has been considered as the most popular operations strategy for improving organizational competitiveness in the twenty-first century. In the early 1990s, agile manufacturing (AM) gained momentum and received due attention from both researchers and practitioners. In the mid-1990s, SCM began to attract interest. Both AM and SCM appear to differ in philosophical emphasis, but each complements the other in objectives for improving organizational competitiveness. For example, AM relies more on strategic alliances/partnerships (virtual enterprise environment) to achieve speed and flexibility. But the issues of cost and the integration of suppliers and customers have not been given due consideration in AM. By contrast, cost is given a great deal of attention in SCM, which focuses on the integration of suppliers and customers to achieve an integrated value chain with the help of information technologies and systems. Considering the significance of both AM and SCM for firms to improve their performance, an attempt has been made in this paper to analyze both AM and SCM with the objective of developing a framework for responsive supply chain (RSC). We compare their characteristics and objectives, review the selected literature, and analyze some case experiences on AM and SCM, and develop an integrated framework for a RSC. The proposed framework can be employed as a competitive strategy in a networked economy in which customized products/services are produced with virtual organizations and exchanged using e-commerce. 2007 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "e6c7d1db1e1cfaab5fdba7dd1146bcd2", "text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet", "title": "" }, { "docid": "0bfba7797a0e7dcd4817c10d4df350db", "text": "Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, the classification method and fine counting based on Support Vector Machines (SVM) using global features are designed. Finally, the insect counting and recognition system is implemented on Raspberry PI. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with the conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and average classifying accuracy is 90.18% on Raspberry PI. The proposed system is easy-to-use and provides efficient and accurate recognition data, therefore, it can be used for intelligent agriculture applications.", "title": "" }, { "docid": "d16c25f4bc079650d12300b8872d589d", "text": "An important application of machine vision and image processing could be driver drowsiness detection system due to its high importance. In recent years there have been many research projects reported in the literature in this field. In this paper, unlike conventional drowsiness detection methods, which are based on the eye states alone, we used facial expressions to detect drowsiness. There are many challenges involving drowsiness detection systems. Among the important aspects are: change of intensity due to lighting conditions, the presence of glasses and beard on the face of the person. In this project, we propose and implement a hardware system which is based on infrared light and can be used in resolving these problems. In the proposed method, following the face detection step, the facial components that are more important and considered as the most effective for drowsiness, are extracted and tracked in video sequence frames. 
The system has been tested and implemented in a real environment.", "title": "" }, { "docid": "78952b9185a7fb1d8e7bd7723bb1021b", "text": "We develop and apply two new methods for analyzing file system behavior and evaluating file system changes. First, semantic block-level analysis (SBA) combines knowledge of on-disk data structures with a trace of disk traffic to infer file system behavior; in contrast to standard benchmarking approaches, SBA enables users to understand why the file system behaves as it does. Second, semantic trace playback (STP) enables traces of disk traffic to be easily modified to represent changes in the file system implementation; in contrast to directly modifying the file system, STP enables users to rapidly gauge the benefits of new policies. We use SBA to analyze Linux ext3, ReiserFS, JFS, and Windows NTFS; in the process, we uncover many strengths and weaknesses of these journaling file systems. We also apply STP to evaluate several modifications to ext3, demonstrating the benefits of various optimizations without incurring the costs of a real implementation.", "title": "" }, { "docid": "4381ee2e578a640dda05e609ed7f6d53", "text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.", "title": "" }, { "docid": "6bfcd3a40e8be718225d252dad8bf80a", "text": "Twitter data offers an unprecedented opportunity to study demographic differences in public opinion across a virtually unlimited range of subjects. Whilst demographic attributes are often implied within user data, they are not always easily identified using computational methods. In this paper, we present a semi-automatic solution that combines automatic classification methods with a user interface designed to enable rapid resolution of ambiguous cases. TweetClass employs a two-step, interactive process to support the determination of gender and age attributes. At each step, the user is presented with feedback on the confidence levels of the automated analysis and can choose to refine ambiguous cases by examining key profile and content data. We describe how a user-centered design approach was used to optimise the interface and present the results of an evaluation which suggests that TweetClass can be used to rapidly boost demographic sample sizes in situations where high accuracy is required.", "title": "" }, { "docid": "b8f1c6553cd97fab63eae159ae01797e", "text": "
Using computers with friends either in person or online has become ubiquitous in the life of most adolescents; however, little is known about the complex relation between this activity and friendship quality. This study examined direct support for the social compensation and rich-get-richer hypotheses among adolescent girls and boys by including social anxiety as a moderating factor. A sample of 1050 adolescents completed a survey in grade 9 and then again in grades 11 and 12. For girls, there was a main effect of using computers with friends on friendship quality, providing support for both hypotheses. For adolescent boys, however, social anxiety moderated this relation, supporting the social compensation hypothesis. These findings were identical for online communication and were stable throughout adolescence. Furthermore, participating in organized sports did not compensate for social anxiety for either adolescent girls or boys. Therefore, characteristics associated with using computers with friends may create a comfortable environment for socially anxious adolescents to interact with their peers, which may be distinct from other more traditional adolescent activities.", "title": "" }, { "docid": "7633171605190f4d5b643b3155ecf288", "text": "Effective robot manipulation requires a vision system which can extract features of the environment which determine what manipulation actions are possible. There is existing work in this direction under the broad banner of recognising “affordances”. We are particularly interested in possibilities for actions afforded by relationships among pairs of objects. For example if an object is “inside” another or “on top” of another. For this there is a need for a vision system which can recognise such relationships in a scene. We use an approach in which a vision system first segments an image, and then considers a pair of objects to determine their physical relationship. The system extracts surface patches for each object in the segmented image, and then compiles various histograms from looking at relationships between the surface patches of one object and those of the other object. From these histograms a classifier is trained to recognise the relationship between a pair of objects. Our results identify the most promising ways to construct histograms in order to permit classification of physical relationships with high accuracy. This work is important for manipulator robots who may be presented with novel scenes and must identify the salient physical relationships in order to plan manipulation activities.", "title": "" }, { "docid": "a06269431a16347154cf18d87b5c2ee8", "text": "An earlier version of this paper, on the control system developed for the VisLab Intercontinental Autonomous Challenge, appeared at Intelligent Transportation Systems (ITSC), Madeira, Portugal. The test, carried out by VisLab in summer 2010, had vehicles drive themselves from Parma, Italy, to Shanghai, mostly across regions for which digital map information was not available.", "title": "" }, { "docid": "726d0b31638e945b2620eca6824b84dd", "text": "Profanity detection is often thought to be an easy task.
However, past work has shown that current, list-based systems are performing poorly. They fail to adapt to evolving profane slang and to identify profane terms that have been disguised or only partially censored (e.g., @ss, f$#%) or intentionally or unintentionally misspelled (e.g., biatch, shiiiit). For these reasons, they are easy to circumvent and have very poor recall. Secondly, they are a one-size-fits-all solution – making assumptions that the definition, use and perceptions of what is profane or inappropriate hold across all contexts. In this article, we present work that attempts to move beyond list-based profanity detection systems by identifying the context in which profanity occurs. The proposed system uses a set of comments from a social news site labeled by Amazon Mechanical Turk workers for the presence of profanity. This system far surpasses the performance of list-based profanity detection techniques. The use of crowdsourcing in this task suggests an opportunity to build profanity detection systems tailored to sites and communities.", "title": "" }, { "docid": "f85c23e552fadbd64048eb71f947e294", "text": "This paper describes a voltage regulator system for ultra-low-power RFID tags (also called passive tags) in a 0.15 μm analog CMOS technology. These tags derive their power supply from the incoming RF energy through rectification instead of from a battery. The regulator is functional with just 110 nA current. Owing to the huge variation of the rectified voltage (by as much as tens of volts), voltage limiters and clamps are employed at various points along the regulation path. A limiter at the rectifier output clamps the rectifier voltage to a narrower range of 1.4 V. A fine-regulator, then, regulates the supply voltage close to a bandgap reference value of 1.25 V. The key aspect of this regulator is the dynamic bandwidth boosting that takes place in the regulator by sensing the excess current that is bypassed in the limiter (during periods of excess energy) and increasing its bias current and hence bandwidth, accordingly. A higher bandwidth is necessary for quick recovery from line transients due to the burst nature of RF transmission, with a larger energy burst requiring a higher bandwidth to settle quickly without large line transients. The challenge of compensating such a regulator across various load currents and RF energy levels is described in this paper.", "title": "" }, { "docid": "3679e362bb807e8b9122d3283ef7f1dc", "text": "Alternating optimization algorithms for canonical polyadic decomposition (with/without nonnegative constraints) often accompany update rules with low computational cost, but could face problems of swamps, bottlenecks, and slow convergence. All-at-once algorithms can deal with such problems, but always demand significant temporary extra-storage, and high computational cost. In this paper, we propose an all-at-once algorithm with low complexity for sparse and nonnegative tensor factorization based on the damped Gauss-Newton iteration. Especially, for low-rank approximations, the proposed algorithm avoids building up Hessians and gradients, and reduces the computational cost dramatically. Moreover, we propose selection strategies for regularization parameters.
The proposed algorithm has been verified to overwhelmingly outperform “state-of-the-art” NTF algorithms for difficult benchmarks, and for real-world applications such as clustering of the ORL face database.", "title": "" }, { "docid": "ac8a0b4ad3f2905bc4e37fa4b0fcbe0a", "text": "In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS’s operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.", "title": "" }, { "docid": "d15072fd8776d17e8a3b8b89af5fed08", "text": "Pityriasis amiantacea is a rare clinical condition characterized by masses of waxy and sticky scales that adhere to the scalp and tenaciously attach to hair bundles. Pityriasis amiantacea can be associated with psoriasis vulgaris (PsV). We examined a patient with pityriasis amiantacea caused by PsV who also had keratotic horns on the scalp, histopathologically fibrokeratomas. To the best of our knowledge, this is the first case of scalp fibrokeratoma stimulated by pityriasis amiantacea and PsV.", "title": "" }, { "docid": "2560535c3ad41b46e08b8b39f89f555b", "text": "Crises are unpredictable events that can impact on an organisation’s viability, credibility, and reputation, and few topics have generated greater interest in communication over the past 15 years. This paper builds on early theory such as Fink (1986), and extends the crisis life-cycle theoretical model to enable a better understanding and prediction of the changes and trends of mass media coverage during crises. This expanded model provides a framework to identify and understand the dynamic and multi-dimensional set of relationships that occurs during the crisis life cycle in a rapidly changing and challenging operational environment. Using the 2001 Ansett Airlines’ Easter groundings as a case study, this paper monitors mass media coverage during this organisational crisis. The analysis reinforces the view that, by using proactive strategies, public relations practitioners can better manage mass media crisis coverage. Further, the understanding gained by extending the crisis life cycle to track when and how mass media content changes may help public relations practitioners craft messages and supply information at the outset of each stage of the crisis, thereby maintaining control of the message.", "title": "" }, { "docid": "ba5904d3c5361208f75351eac49da0a3", "text": "This paper presents the results of an annotation study focused on the fine-grained analysis of argumentation structures in scientific publications. Our new annotation scheme specifies four types of binary argumentative relations between sentences, resulting in the representation of arguments as small graph structures. We developed an annotation tool that supports the annotation of such graphs and carried out an annotation study with four annotators on 24 scientific articles from the domain of educational research.
For calculating the inter-annotator agreement, we adapted existing measures and developed a novel graphbased agreement measure which reflects the semantic similarity of different annotation graphs.", "title": "" }, { "docid": "3f629998235c1cfadf67cf711b07f8b9", "text": "The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we draw some of the research challenges still unsolved and identify some possible directions for future work.", "title": "" }, { "docid": "d89a5b253d188c28aa64facd3fef8b95", "text": "This paper presents a method for decomposing long, complex consumer health questions. Our approach largely decomposes questions using their syntactic structure, recognizing independent questions embedded in clauses, as well as coordinations and exemplifying phrases. Additionally, we identify elements specific to disease-related consumer health questions, such as the focus disease and background information. To achieve this, our approach combines rank-and-filter machine learning methods with rule-based methods. Our results demonstrate significant improvements over the heuristic methods typically employed for question decomposition that rely only on the syntactic parse tree.", "title": "" }, { "docid": "0858f3c76ea9570eeae23c33307f2eaf", "text": "Geometrical validation around the Calpha is described, with a new Cbeta measure and updated Ramachandran plot. Deviation of the observed Cbeta atom from ideal position provides a single measure encapsulating the major structure-validation information contained in bond angle distortions. Cbeta deviation is sensitive to incompatibilities between sidechain and backbone caused by misfit conformations or inappropriate refinement restraints. A new phi,psi plot using density-dependent smoothing for 81,234 non-Gly, non-Pro, and non-prePro residues with B < 30 from 500 high-resolution proteins shows sharp boundaries at critical edges and clear delineation between large empty areas and regions that are allowed but disfavored. 
One such region is the gamma-turn conformation near +75 degrees,-60 degrees, counted as forbidden by common structure-validation programs; however, it occurs in well-ordered parts of good structures, it is overrepresented near functional sites, and strain is partly compensated by the gamma-turn H-bond. Favored and allowed phi,psi regions are also defined for Pro, pre-Pro, and Gly (important because Gly phi,psi angles are more permissive but less accurately determined). Details of these accurate empirical distributions are poorly predicted by previous theoretical calculations, including a region left of alpha-helix, which rates as favorable in energy yet rarely occurs. A proposed factor explaining this discrepancy is that crowding of the two-peptide NHs permits donating only a single H-bond. New calculations by Hu et al. [Proteins 2002 (this issue)] for Ala and Gly dipeptides, using mixed quantum mechanics and molecular mechanics, fit our nonrepetitive data in excellent detail. To run our geometrical evaluations on a user-uploaded file, see MOLPROBITY (http://kinemage.biochem.duke.edu) or RAMPAGE (http://www-cryst.bioc.cam.ac.uk/rampage).", "title": "" }, { "docid": "b01fb8d54ca7b2ac4a9c895c01d54047", "text": "With the growth of Cloud Computing, more and more companies are offering different cloud services. From the customer's point of view, it is always difficult to decide whose services they should use, based on users' requirements. Currently there is no software framework which can automatically index cloud providers based on their needs. In this work, we propose a framework and a mechanism, which measure the quality and prioritize Cloud services. Such framework can make significant impact and will create healthy competition among Cloud providers to satisfy their Service Level Agreement (SLA) and improve their Quality of Services (QoS).", "title": "" } ]
scidocsrr
f5f7088eac6be6d025b37382fc62269e
Characterizing Driving Styles with Deep Learning
[ { "docid": "6a4cd21704bfbdf6fb3707db10f221a8", "text": "Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that use recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to a standard implementation of LSTMs on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.", "title": "" }, { "docid": "75e46bf5c1bcf73a9918026b0a4ad4f0", "text": "Recently, the hybrid deep neural network (DNN)- hidden Markov model (HMM) has been shown to significantly improve speech recognition performance over the conventional Gaussian mixture model (GMM)-HMM. The performance improvement is partially attributed to the ability of the DNN to model complex correlations in speech features. In this paper, we show that further error rate reduction can be obtained by using convolutional neural networks (CNNs). We first present a concise description of the basic CNN and explain how it can be used for speech recognition. We further propose a limited-weight-sharing scheme that can better model speech features. The special structure such as local connectivity, weight sharing, and pooling in CNNs exhibits some degree of invariance to small shifts of speech features along the frequency axis, which is important to deal with speaker and environment variations. Experimental results show that CNNs reduce the error rate by 6%-10% compared with DNNs on the TIMIT phone recognition and the voice search large vocabulary speech recognition tasks.", "title": "" } ]
[ { "docid": "f94764347d07af17cd034e40be54bc4a", "text": "Device level Self-Heating (SH) is becoming a limiting factor during traditional DC Hot Carrier stresses in bulk and SOI technologies. Consideration is given to device layout and design for Self-Heating minimization during HCI stress in SOI technologies, the effect of SH on activation energy (Ea) and the SH induced enhancement to degradation. Applying a methodology for SH temperature correction of extracted device lifetime, correlation is established between DC device level stress and AC device stress using a specially designed ring oscillator.", "title": "" }, { "docid": "6ccad3fd0fea9102d15bd37306f5f562", "text": "This paper reviews deposition, integration, and device fabrication of ferroelectric PbZrxTi1−xO3 (PZT) films for applications in microelectromechanical systems. As examples, a piezoelectric ultrasonic micromotor and pyroelectric infrared detector array are presented. A summary of the published data on the piezoelectric properties of PZT thin films is given. The figures of merit for various applications are discussed. Some considerations and results on operation, reliability, and depolarization of PZT thin films are presented.", "title": "" }, { "docid": "5551c139bf9bdb144fabce6a20fda331", "text": "A common prerequisite for a number of debugging and performanceanalysis techniques is the injection of auxiliary program code into the application under investigation, a process called instrumentation. To accomplish this task, source-code preprocessors are often used. Unfortunately, existing preprocessing tools either focus only on a very specific aspect or use hard-coded commands for instrumentation. In this paper, we examine which basic constructs are required to specify a user-defined routine entry/exit instrumentation. This analysis serves as a basis for a generic instrumentation component working on the source-code level where the instructions to be inserted can be flexibly configured. We evaluate the identified constructs with our prototypical implementation and show that these are sufficient to fulfill the needs of a number of todays’ performance-analysis tools.", "title": "" }, { "docid": "9164bd704cdb8ca76d0b5f7acda9d4ef", "text": "In this paper we present a deep neural network topology that incorporates a simple to implement transformationinvariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods usually make use of dataset augmentation to address this issue, but this requires larger number of model parameters and more training data, and results in significantly increased training time and larger chance of under-or overfitting. The main reason for these drawbacks is that that the learned model needs to capture adequate features for all the possible transformations of the input. On the other hand, we formulate features in convolutional neural networks to be transformation-invariant. We achieve that using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the most optimal \"canonical\" instance of the input image for training and therefore limits the redundancy in learned features. 
This more efficient use of training data results in better performance on popular benchmark datasets with smaller number of parameters when comparing to standard convolutional neural networks with dataset augmentation and to other baselines.", "title": "" }, { "docid": "0552c786fe0030df69b2095d78c20485", "text": "In recent years, real-time processing and analytics systems for big data--in the context of Business Intelligence (BI)--have received a growing attention. The traditional BI platforms that perform regular updates on daily, weekly or monthly basis are no longer adequate to satisfy the fast-changing business environments. However, due to the nature of big data, it has become a challenge to achieve the real-time capability using the traditional technologies. The recent distributed computing technology, MapReduce, provides off-the-shelf high scalability that can significantly shorten the processing time for big data; Its open-source implementation such as Hadoop has become the de-facto standard for processing big data, however, Hadoop has the limitation of supporting real-time updates. The improvements in Hadoop for the real-time capability, and the other alternative real-time frameworks have been emerging in recent years. This paper presents a survey of the open source technologies that support big data processing in a real-time/near real-time fashion, including their system architectures and platforms.", "title": "" }, { "docid": "19359356fe18c5ca4028696c145001dd", "text": "Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many quantization parameters, fine-tuning training or other prerequisites. Little effort is made to improve floating point relative to this baseline; it remains energy inefficient, and word size reduction yields drastic loss in needed dynamic range. We improve floating point to be more energy efficient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply/linear add, Kulisch accumulation and tapered encodings from Gustafson’s posit format. With no network retraining, and drop-in replacement of all math and float32 parameters via round-to-nearest-even only, this open-sourced 8-bit log float is within 0.9% top-1 and 0.2% top-5 accuracy of the original float32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose floating point arithmetic, interpretable out-of-the-box. Our 8/38-bit log float multiply-add is synthesized and power profiled at 28 nm at 0.96× the power and 1.12× the area of 8/32-bit integer multiply-add. In 16 bits, our log float multiply-add is 0.59× the power and 0.68× the area of IEEE 754 float16 fused multiply-add, maintaining the same signficand precision and dynamic range, proving useful for training ASICs as well.", "title": "" }, { "docid": "32dd24b2c3bcc15dd285b2ffacc1ba43", "text": "In this paper, we present for the first time the realization of a 77 GHz chip-to-rectangular waveguide transition realized in an embedded Wafer Level Ball Grid Array (eWLB) package. The chip is contacted with a coplanar waveguide (CPW). For the transformation of the transverse electromagnetic (TEM) mode of the CPW line to the transverse electric (TE) mode of the rectangular waveguide an insert is used in the eWLB package. 
This insert is based on radio-frequency (RF) printed circuit board (PCB) technology. Micro vias formed in the insert are used to realize the sidewalls of the rectangular waveguide structure in the fan-out area of the eWLB package. The redistribution layers (RDLs) on the top and bottom surface of the package form the top and bottom wall, respectively. We present two possible variants of transforming the TEM mode to the TE mode. The first variant uses a via realized in the rectangular waveguide structure. The second variant uses only the RDLs of the eWLB package for mode conversion. We present simulation and measurement results of both variants. We obtain an insertion loss of 1.5 dB and return loss better than 10 dB. The presented results show that this approach is an attractive candidate for future low loss and highly integrated RF systems.", "title": "" }, { "docid": "05eb344fb8b671542f6f0228774a5524", "text": "This paper presents an improved hardware structure for the computation of the Whirlpool hash function. By merging the round key computation with the data compression and by using embedded memories to perform part of the Galois Field (2^8) multiplication, a core can be implemented in just 43% of the area of the best current related art while achieving a 12% higher throughput. The proposed core improves the Throughput per Slice compared to the state of the art by 160%, achieving a throughput of 5.47 Gbit/s with 2110 slices and 32 BRAMs on a VIRTEX II Pro FPGA. Results for a real application are also presented by considering a polymorphic computational approach.", "title": "" }, { "docid": "cb8845ab2bcc7e9bba120fe3c66815ab", "text": "This study presents an initial set of findings from an empirical study of social processes, technical system configurations, organizational contexts, and interrelationships that give rise to open software. The focus is directed at understanding the requirements for open software development efforts, and how the development of these requirements differs from those traditional to software engineering and requirements engineering. Four open software development communities are described, examined, and compared to help discover what these differences may be. Informal software descriptions and the online social discourse that surrounds these narrative descriptions are found to play a critical role in the elicitation, analysis, specification, validation, and management of requirements for developing open software systems.", "title": "" }, { "docid": "034713fa057b206703d9bcffd9efccd4", "text": "Researchers may describe different aspects of past scientific publications in their publications and the descriptions may keep changing in the evolution of science. The diverse and changing descriptions (i.e., citation context) on a publication characterize the impact and contributions of the past publication. In this article, we aim to provide an approach to understanding the changing and complex roles of a publication characterized by its citation context. We describe a method to represent the publications’ dynamic roles in the science community in different periods as a sequence of vectors by training temporal embedding models. The temporal representations can be used to quantify how much the roles of publications changed and interpret how they changed.
Our study in the biomedical domain shows that our metric on the changes of publications’ roles is stable over time at the population level but significantly distinguishes individuals. We also show the interpretability of our methods by a concrete example.", "title": "" }, { "docid": "8c5a76124b7d37929cef1a7a67eae3ba", "text": "This paper describes the ongoing development of a highly configurable word processing environment developed using a pragmatic, obstacle-by-obstacle approach to alleviating some of the visual problems encountered by dyslexic computer users. The paper describes the current version of the software and the development methodology as well as the results of a pilot study which indicated that a visual environment individually configured using the SeeWord software improved reading accuracy as well as subjectively rated reading comfort.", "title": "" }, { "docid": "10f1e89998a7e463f2996270099bebdc", "text": "This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by a RGB-D sensor. The proposed method is based on a combination of different recognition pipelines, each exploiting the data in a diverse manner and generating object hypotheses that are ultimately fused together in a Hypothesis Verification stage that globally enforces geometrical consistency between model hypotheses and the scene. Such a scheme boosts the overall recognition performance as it enhances the strength of the different recognition pipelines while diminishing the impact of their specific weaknesses. The proposed method outperforms the state-of-the-art on two challenging benchmark datasets for object recognition comprising 35 object models and, respectively, 176 and 353 scenes.", "title": "" }, { "docid": "470a363ba2e5b480e638f372c06bc140", "text": "In this paper, we describe a miniature climbing robot, 96 x 46 x 64 [mm], able to climb ferromagnetic surfaces and to make inner plane to plane transition using only two degrees of freedom. Our robot, named TRIPILLAR, combines magnetic caterpillars and magnets to climb planar ferromagnetic surfaces. Two triangular tracks are mounted in a differential drive mode, which allows skid steering and on-spot turning. Exploiting the particular geometry and magnetic properties of this arrangement, TRIPILLAR is able to transit between intersecting surfaces. The intersection angle ranges from -10° to 90° on the pitch angle of the coordinate system of the robot regardless of the orientation of gravity. A possible path is to move from ground to ceiling and back. This achievement opens new avenues for mobile robotics inspection of ferromagnetic industrial structures with stringent size restrictions, like the ones encountered in power plants.", "title": "" }, { "docid": "6d89321d33ba5d923a7f31589888f430", "text": "OBJECTIVE\nThe pain experienced by burn patients during physical therapy range of motion exercises can be extreme and can discourage patients from complying with their physical therapy.
We explored the novel use of immersive virtual reality (VR) to distract patients from pain during physical therapy.\n\n\nSETTING\nThis study was conducted at the burn care unit of a regional trauma center.\n\n\nPATIENTS\nTwelve patients aged 19 to 47 years (average of 21% total body surface area burned) performed range of motion exercises of their injured extremity under an occupational therapist's direction.\n\n\nINTERVENTION\nEach patient spent 3 minutes of physical therapy with no distraction and 3 minutes of physical therapy in VR (condition order randomized and counter-balanced).\n\n\nOUTCOME MEASURES\nFive visual analogue scale pain scores for each treatment condition served as the dependent variables.\n\n\nRESULTS\nAll patients reported less pain when distracted with VR, and the magnitude of pain reduction by VR was statistically significant (e.g., time spent thinking about pain during physical therapy dropped from 60 to 14 mm on a 100-mm scale). The results of this study may be examined in more detail at www.hitL.washington.edu/projects/burn/.\n\n\nCONCLUSIONS\nResults provided preliminary evidence that VR can function as a strong nonpharmacologic pain reduction technique for adult burn patients during physical therapy and potentially for other painful procedures or pain populations.", "title": "" }, { "docid": "ab0994331a2074fe9b635342fed7331c", "text": "This paper investigates to identify the requirement and the development of machine learning-based mobile big data analysis through discussing the insights of challenges in the mobile big data (MBD). Furthermore, it reviews the state-of-the-art applications of data analysis in the area of MBD. Firstly, we introduce the development of MBD. Secondly, the frequently adopted methods of data analysis are reviewed. Three typical applications of MBD analysis, namely wireless channel modeling, human online and offline behavior analysis, and speech recognition in the internet of vehicles, are introduced respectively. Finally, we summarize the main challenges and future development directions of mobile big data analysis.", "title": "" }, { "docid": "66fd3e27e89554e4c6ea5eef294a345b", "text": "Large-scale distributed training of deep neural networks suffer from the generalization gap caused by the increase in the effective mini-batch size. Previous approaches try to solve this problem by varying the learning rate and batch size over epochs and layers, or some ad hoc modification of the batch normalization. We propose an alternative approach using a second-order optimization method that shows similar generalization capability to first-order methods, but converges faster and can handle larger minibatches. To test our method on a benchmark where highly optimized first-order methods are available as references, we train ResNet-50 on ImageNet. We converged to 75% Top-1 validation accuracy in 35 epochs for mini-batch sizes under 16,384, and achieved 75% even with a mini-batch size of 131,072, which took 100 epochs.", "title": "" }, { "docid": "93f8ba979ea679d6b9be6f949f8ee6ed", "text": "This paper presents a method for Simultaneous Localization and Mapping (SLAM), relying on a monocular camera as the only sensor, which is able to build outdoor, closed-loop maps much larger than previously achieved with such input. Our system, based on the Hierarchical Map approach [1], builds independent local maps in real-time using the EKF-SLAM technique and the inverse depth representation proposed in [2]. 
The main novelty in the local mapping process is the use of a data association technique that greatly improves its robustness in dynamic and complex environments. A new visual map matching algorithm stitches these maps together and is able to detect large loops automatically, taking into account the unobservability of scale intrinsic to pure monocular SLAM. The loop closing constraint is applied at the upper level of the Hierarchical Map in near real-time. We present experimental results demonstrating monocular SLAM as a human carries a camera over long walked trajectories in outdoor areas with people and other clutter, even in the more difficult case of forward-looking camera, and show the closing of loops of several hundred meters.", "title": "" }, { "docid": "4df52d891c63975a1b9d4cd6c74571db", "text": "DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy low traffic from large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining the network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experimentations.", "title": "" }, { "docid": "ee7193740e341a10d839bc9d3180c509", "text": "Large-scale databases of human activity in social media have captured scientific and policy attention, producing a flood of research and discussion. This paper considers methodological and conceptual challenges for this emergent field, with special attention to the validity and representativeness of social media big data analyses. Persistent issues include the over-emphasis of a single platform, Twitter, sampling biases arising from selection by hashtags, and vague and unrepresentative sampling frames. The sociocultural complexity of user behavior aimed at algorithmic invisibility (such as subtweeting, mock-retweeting, use of “screen captures” for text, etc.) further complicate interpretation of big data social media. Other challenges include accounting for field effects, i.e. broadly consequential events that do not diffuse only through the network under study but affect the whole society. The application of network methods from other fields to the study of human social activity may not always be appropriate. 
The paper concludes with a call to action on practical steps to improve our analytic capacity in this promising, rapidly-growing field.", "title": "" }, { "docid": "2ad79b7f6d2c3e6c3aa46fed256ee1cc", "text": "Emotions like regret and envy share a common origin: they are motivated by the counterfactual thinking of what would have happened had we made a different choice. When we contemplate the outcome of a choice we made, we may use the information on the outcome of a choice we did not make. Regret is the purely private comparison between two choices that we could have taken, envy adds to this the information on outcome of choices of others. However, envy has a distinct social component, in that it adds the change in the social ranking that follows a difference in the outcomes. We study the theoretical foundation and the experimental test of this view.", "title": "" } ]
scidocsrr
f3b6372e49cf4eeacaddafb1758e6dcd
Best Practices for Automated Traceability
[ { "docid": "6b9d8ff2c31b672832e2a81fbbcde583", "text": "ion in Rationale Models. The design goal of KBSA-ADM was to offer a coherent series of rationale models based on results of the REMAP project (Ramesh and Dhar 1992) for maintaining rationale at different levels of detail. Figure 19: Simple Rationale Model The model sketched in Figure 19 is used for capturing rationale at a simple level of detail. It links an OBJECT with its RATIONALE. The model in Figure 19 also provides for the explicit representation of ASSUMPTIONS and DEPENDENCIES among them. Thus, using this model, the assumptions providing justifications to the creation of objects can be explicitly identified and reasoned with. As changes in such assumptions are a primary factor in the", "title": "" }, { "docid": "8bcc51e311ab55fab6a4f60e6271716b", "text": "An approach for the semi-automated recovery of traceability links between software documentation and source code is presented. The methodology is based on the application of information retrieval techniques to extract and analyze the semantic information from the source code and associated documentation. A semi-automatic process is defined based on the proposed methodology. The paper advocates the use of latent semantic indexing (LSI) as the supporting information retrieval technique. Two case studies using existing software are presented comparing this approach with others. The case studies show positive results for the proposed approach, especially considering the flexibility of the methods used.", "title": "" } ]
[ { "docid": "4aad195a8dd20cd2531f0429ed6b0966", "text": "To solve problems associated with conventional 2D fingerprint acquisition processes including skin deformations and print smearing, we developed a noncontact 3D fingerprint scanner employing structured light illumination that, in order to be backwards compatible with existing 2D fingerprint recognition systems, requires a method of unwrapping the 3D scans into 2D equivalent prints. For the latter purpose of virtually flattening a 3D print, this paper introduces a fit-sphere unwrapping algorithm. Taking advantage of detailed 3D information, the proposed method defuses the unwrapping distortion by controlling the distances between neighboring points. Experimental results will demonstrate the high quality and recognition performance of the 3D unwrapped prints versus traditionally collected 2D prints. Furthermore, by classifying the 3D database into high- and low-quality data sets, we demonstrate that the relationship between quality and recognition performance holding for conventional 2D prints is achieved for 3D unwrapped fingerprints.", "title": "" }, { "docid": "fea31b71829803d78dabf784dfdb0093", "text": "Tag recommendation is helpful for the categorization and searching of online content. Existing tag recommendation methods can be divided into collaborative filtering methods and content based methods. In this paper, we put our focus on the content based tag recommendation due to its wider applicability. Our key observation is the tag-content co-occurrence, i.e., many tags have appeared multiple times in the corresponding content. Based on this observation, we propose a generative model (Tag2Word), where we generate the words based on the tag-word distribution as well as the tag itself. Experimental evaluations on real data sets demonstrate that the proposed method outperforms several existing methods in terms of recommendation accuracy, while enjoying linear scalability.", "title": "" }, { "docid": "089808010a2925a7eaca71736fbabcaf", "text": "In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of images, the global motion can be described by independent motion models. On the other hand, in a sequence there exist as many as \u000e pairwise relative motion constraints that can be solve for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fitting a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (ie. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.", "title": "" }, { "docid": "8ec7edd2d963501b714be80cb2ea8535", "text": "The problem of recognizing text in images taken in the wild ha s g ined significant attention from the computer vision community in recent years. 
The scene text recognition task is more challenging compared to the traditional problem of recognizing text in printed documents. We focus on this problem, and recognize text extracted from natural scene images and the web. Significant attempts have been made to address this problem in the recent past, for example [1, 2]. However, many of these works benefit from the availability of strong context, which naturally limits their applicability. In this work, we present a framework to overcome these restrictions. Our model introduces a higher order prior computed from an English dictionary to recognize a word, which may or may not be a part of the dictionary. We present experimental analysis on standard as well as new benchmark datasets. The main contributions of this work are: (1) We present a framework, which incorporates higher order statistical language models to recognize words in an unconstrained manner, i.e. we overcome the need for restricted word lists. (2) We achieve significant improvement (more than 20%) in word recognition accuracies in a general setting. (3) We introduce a large word recognition dataset (at least 5 times larger than other public datasets) with character level annotation and benchmark it.", "title": "" }, { "docid": "9b646ef8c6054f9a4d85cf25e83d415c", "text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed", "title": "" }, { "docid": "b1599614c7d91462d05d35808d7e2983", "text": "Hyponatremia and hypernatremia are complex clinical problems that occur frequently in full term newborns and in preterm infants admitted to the Neonatal Intensive Care Unit (NICU) although their real frequency and etiology are incompletely known. Pathogenetic mechanisms and clinical timing of hypo-hypernatremia are well known in adult people whereas in the newborn is less clear how and when hypo-hypernatremia could alter cerebral osmotic equilibrium and after how long time brain cells adapt themselves to the new hypo-hypertonic environment. Aim of this review is to present a practical approach and management of hypo-hypernatremia in newborns, especially in preterms.", "title": "" }, { "docid": "1c659ce90c89f3d9de8dcf901372b0da", "text": "Many online social networks thrive on automatic sharing of friends' activities to a user through activity feeds, which may influence the user's next actions. However, identifying such social influence is tricky because these activities are simultaneously impacted by influence and homophily. We propose a statistical procedure that uses commonly available network and observational data about people's actions to estimate the extent of copy-influence---mimicking others' actions that appear in a feed.
We assume that non-friends don't influence users; thus, comparing how a user's activity correlates with friends versus non-friends who have similar preferences can help tease out the effect of copy-influence.\n Experiments on datasets from multiple social networks show that estimates that don't account for homophily overestimate copy-influence by varying, often large amounts. Further, copy-influence estimates fall below 1% of total actions in all networks: most people, and almost all actions, are not affected by the feed. Our results question common perceptions around the extent of copy-influence in online social networks and suggest improvements to diffusion and recommendation models.", "title": "" }, { "docid": "cefcd3d79c2d7d16b9fdce8cbd5c3a4e", "text": "The purpose of this paper is to review the knowledge available from aggregated research (primarily through 2000) on the characteristics of social interactions and social relationships among young children with autism, with special attention to strategies and tactics that promote competence or improved performance in this area. In its commissioning letter for the initial version of this paper, the Committee on Educational Interventions for Children with Autism of the National Research Council requested \"a critical, scholarly review of the empirical research on interventions to facilitate the social interactions of children with autism, considering adult-child interactions (where information is available) as well as child-child interactions, and including treatment of [one specific question]: What is the empirical evidence that social irregularities of children with autism are amenable to remediation?\" To do this, the paper (a) reviews the extent and quality of empirical literature on social interaction for young children with autism; (b) reviews existing descriptive and experimental research that may inform us of relations between autism and characteristics that support social development, and efforts to promote improved social outcomes (including claims for effectiveness for several specific types of intervention); (c) highlights some possible directions for future research; and (d) summarizes recommendations for educational practices that can be drawn from this research.", "title": "" }, { "docid": "38b93f50d4fc5a1029ebedb5a544987a", "text": "We present a novel graph-based framework for timeline summarization, the task of creating different summaries for different timestamps but for the same topic. Our work extends timeline summarization to a multimodal setting and creates timelines that are both textual and visual. Our approach exploits the fact that news documents are often accompanied by pictures and the two share some common content. Our model optimizes local summary creation and global timeline generation jointly following an iterative approach based on mutual reinforcement and co-ranking. In our algorithm, individual summaries are generated by taking into account the mutual dependencies between sentences and images, and are iteratively refined by considering how they contribute to the global timeline and its coherence. Experiments on real-world datasets show that the timelines produced by our model outperform several competitive baselines both in terms of ROUGE and when assessed by human evaluators.", "title": "" }, { "docid": "48ccac834d6591d22e57148f4ad58322", "text": "This paper models household fertility decisions by using a generalized Poisson regression model. 
Since the fertility data used in the paper exhibit under-dispersion, the generalized Poisson regression model has statistical advantages over both standard Poisson regression and negative binomial regression models, and is suitable for analysis of count data that exhibit either over-dispersion or under-dispersion. The model is estimated by maximum likelihood procedure. Approximate tests for the dispersion and goodness-of-fit measures for comparing alternative models are discussed. Based on observations from the Bangladesh Demographic Health Survey 2011, the empirical results support the generalized Poisson regression to model the under dispersed data.", "title": "" }, { "docid": "3ef6a2d1c125d5c7edf60e3ceed23317", "text": "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent’s belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, MonteCarlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10 and 10 states respectively. Our MonteCarlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.", "title": "" }, { "docid": "0713b8668b5faf037b4553517151f9ab", "text": "Deep learning is currently an extremely active research area in machine learning and pattern recognition society. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends.", "title": "" }, { "docid": "85a20d907bdab04c17d778a72c8646cc", "text": "Online social networks have gained significant popularity recently. The problem of influence maximization in online social networks has been extensively studied. However, in prior works, influence propagation in the physical world, which is also an indispensable factor, is not considered. The Location-Based Social Networks (LBSNs) are a special kind of online social networks in which people can share location-embedded information. In this paper, we make use of mobile crowdsourced data obtained from location-based social network services to study influence maximization in LBSNs. 
A novel network model and an influence propagation model taking influence propagation in both online social networks and the physical world into consideration are proposed. An event activation position selection problem is formalized and a corresponding solution is provided. The experimental results indicate that the proposed influence propagation model is meaningful and the activation position selection algorithm has high performance.", "title": "" }, { "docid": "aae97dd982300accb15c05f9aa9202cd", "text": "Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, with an active participation to joint works and community life with humans, as partner and as friends for us. The authors think that the emotion expression of a robot is effective in joint activities of human and robot. In addition, we also think that bipedal walking is necessary to robots which are active in human living environment. But, there was no robot which has those functions. And, it is not clear what kinds of functions are effective actually. Therefore we developed a new bipedal walking robot which is capable to express emotions. In this paper, we present the design and the preliminary evaluation of the new head of the robot with only a small number of degrees of freedom for facial expression.", "title": "" }, { "docid": "1d56775bf9e993e0577c4d03131e7dc4", "text": "Canonical correlation analysis (CCA) is a classical method for seeking correlations between two multivariate data sets. During the last ten years, it has received more and more attention in the machine learning community in the form of novel computational formulations and a plethora of applications. We review recent developments in Bayesian models and inference methods for CCA which are attractive for their potential in hierarchical extensions and for coping with the combination of large dimensionalities and small sample sizes. The existing methods have not been particularly successful in fulfilling the promise yet; we introduce a novel efficient solution that imposes group-wise sparsity to estimate the posterior of an extended model which not only extracts the statistical dependencies (correlations) between data sets but also decomposes the data into shared and data set-specific components. In statistics literature the model is known as inter-battery factor analysis (IBFA), for which we now provide a Bayesian treatment.", "title": "" }, { "docid": "b8cfa6dd369f088384c9ef5dff4be5e4", "text": "The term “Advanced Persistent Threat” refers to a well-organized, malicious group of people who launch stealthy attacks against computer systems of specific targets, such as governments, companies or military. The attacks themselves are long-lasting, difficult to expose and often use very advanced hacking techniques. Since they are advanced in nature, prolonged and persistent, the organizations behind them have to possess a high level of knowledge, advanced tools and competent personnel to execute them. The attacks are usually preformed in several phases - reconnaissance, preparation, execution, gaining access, information gathering and connection maintenance. In each of the phases attacks can be detected with different probabilities. There are several ways to increase the level of security of an organization in order to counter these incidents. 
First and foremost, it is necessary to educate users and system administrators on different attack vectors and provide them with knowledge and protection so that the attacks are unsuccessful. Second, implement strict security policies. That includes access control and restrictions (to information or network), protecting information by encrypting it and installing latest security upgrades. Finally, it is possible to use software IDS tools to detect such anomalies (e.g. Snort, OSSEC, Sguil).", "title": "" }, { "docid": "b4c3b17b43767c0edffbdb32132a6ad5", "text": "We study the security and privacy of private browsing modes recently added to all major browsers. We first propose a clean definition of the goals of private browsing and survey its implementation in different browsers. We conduct a measurement study to determine how often it is used and on what categories of sites. Our results suggest that private browsing is used differently from how it is marketed. We then describe an automated technique for testing the security of private browsing modes and report on a few weaknesses found in the Firefox browser. Finally, we show that many popular browser extensions and plugins undermine the security of private browsing. We propose and experiment with a workable policy that lets users safely run extensions in private browsing mode.", "title": "" }, { "docid": "61b907a27871d6a2980c57bc445b89a3", "text": "This paper investigates whether and how individual managers affect corporate behavior and performance. We construct a manager-firm matched panel data set which enables us to track the top managers across different firms over time. We find that manager fixed effects matter for a wide range of corporate decisions. A significant extent of the heterogeneity in investment, financial and organizational practices of firms can be explained by the presence of manager fixed effects. We identify specific patterns in managerial decision making that appear to indicate general differences in “style” across managers. Moreover, we show that management style is significantly related to manager fixed effects in performance and that managers with higher performance fixed effects receive higher compensation and are more likely to be found in better governed firms. In a final step, we tie back these findings to observable managerial characteristics. We find that executives from earlier birth cohorts appear on average to be more conservative; on the other hand, managers who hold an MBA degree seem to follow on average more aggressive strategies. ∗We thank the editors (Lawrence Katz and Edward Glaeser), three anonymous referees, Kent Daniel, Rebecca Henderson, Steven Kaplan, Kevin J. Murphy, Sendhil Mullainathan, Canice Prendergast, David Scharfstein, Jerry Warner, Michael Weisbach, seminar participants at Harvard University, the Kellogg Graduate School of Management, the Massachusetts Institute of Technology, the University of Chicago Graduate School of Business, the University of Illinois at Urbana-Champaign, Rochester University and the Stockholm School of Economics for many helpful comments. We thank Kevin J. Murphy and Robert Parrino for generously providing us with their data. Jennifer Fiumara and Mike McDonald provided excellent research assistance. 
E-mail: marianne.bertrand@gsb.uchicago.edu; aschoar@mit.edu.", "title": "" }, { "docid": "f923f1a0b2e6748cc5aef14a17036461", "text": "In the early days of email, widely-used conventions for indicating quoted reply content and email signatures made it easy to segment email messages into their functional parts. Today, the explosion of different email formats and styles, coupled with the ad hoc ways in which people vary the structure and layout of their messages, means that simple techniques for identifying quoted replies that used to yield 95% accuracy now find less than 10% of such content. In this paper, we describe Zebra, an SVM-based system for segmenting the body text of email messages into nine zone types based on graphic, orthographic and lexical cues. Zebra performs this task with an accuracy of 87.01%; when the number of zones is abstracted to two or three zone classes, this increases to 93.60% and 91.53% respectively.", "title": "" } ]
scidocsrr
a3bb27e4a0edf019419da69e6b527e84
A new compact wide band 8-way SIW power divider at X-band
[ { "docid": "7e17c1842a70e416f0a90bdcade31a8e", "text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.", "title": "" }, { "docid": "40252c2047c227fbbeee4d492bee9bc6", "text": "A planar integrated multi-way broadband SIW power divider is proposed. It can be combined by the fundamental modules of T-type or Y-type two-way power dividers and an SIW bend directly. A sixteen way SIW power divider prototype was designed, fabricated and measured. The whole structure is made by various metallic-vias on the same substrate. Hence, it can be easily fabricated and conveniently integrated into microwave and millimeter-wave integrated circuits for mass production with low cost and small size.", "title": "" }, { "docid": "2cebd2fd12160d2a3a541989293f10be", "text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.", "title": "" } ]
[ { "docid": "13d9b338b83a5fcf75f74607bf7428a7", "text": "We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous and discrete read and write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. We provide extensive analysis of our model and compare different variations of neural Turing machines on this task. We show that our model outperforms long short-term memory and NTM variants. We provide further experimental results on the sequential MNIST, Stanford Natural Language Inference, associative recall, and copy tasks.", "title": "" }, { "docid": "e051c1dafe2a2f45c48a79c320894795", "text": "In this paper we present a graph-based model that, utilizing relations between groups of System-calls, detects whether an unknown software sample is malicious or benign, and classifies a malicious software to one of a set of known malware families. More precisely, we utilize the System-call Dependency Graphs (or, for short, ScD-graphs), obtained by traces captured through dynamic taint analysis. We design our model to be resistant against strong mutations applying our detection and classification techniques on a weighted directed graph, namely Group Relation Graph, or Gr-graph for short, resulting from ScD-graph after grouping disjoint subsets of its vertices. For the detection process, we propose the $$\\Delta $$ Δ -similarity metric, and for the process of classification, we propose the SaMe-similarity and NP-similarity metrics consisting the SaMe-NP similarity. Finally, we evaluate our model for malware detection and classification showing its potentials against malicious software measuring its detection rates and classification accuracy.", "title": "" }, { "docid": "9faec965b145160ee7f74b80a6c2d291", "text": "Several skin substitutes are available that can be used in the management of hand burns; some are intended as temporary covers to expedite healing of shallow burns and others are intended to be used in the surgical management of deep burns. An understanding of skin biology and the relative benefits of each product are needed to determine the optimal role of these products in hand burn management.", "title": "" }, { "docid": "c12e906e6841753657ffe7630145708b", "text": "We present here a complete dynamic model of a lithium ion battery that is suitable for virtual-prototyping of portable battery-powered systems. The model accounts for nonlinear equilibrium potentials, rateand temperature-dependencies, thermal effects and response to transient power demand. The model is based on publicly available data such as the manufacturers’ data sheets. The Sony US18650 is used as an example. The model output agrees both with manufacturer’s data and with experimental results. 
The model can be easily modified to fit data from different batteries and can be extended for wide dynamic ranges of different temperatures and current rates.", "title": "" }, { "docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb", "text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.", "title": "" }, { "docid": "a6d26826ee93b3b5dec8282d0c632f8e", "text": "Superficial Acral Fibromyxoma is a rare tumor of soft tissues. It is a relatively new entity described in 2001 by Fetsch et al. It probably represents a fibrohistiocytic tumor with less than 170 described cases. We bring a new case of SAF on the 5th toe of the right foot, in a 43-year-old woman. After surgical excision with safety margins which included the nail apparatus, it has not recurred (22 months of follow up). We carried out a review of the location of all SAF published up to the present day.", "title": "" }, { "docid": "1ea55074ab304cbf308968fc8611c0d6", "text": "•Movies alter societal thinking patterns in previously unexplored social phenomena, by exposing the individual to what is shown on screen as the “norm” •Typical studies focus on audio/video modalities to estimate differences along factors such as gender •Linguistic analysis provides complementary information to the audio/video based analytics •We examine differences across gender, race and age", "title": "" }, { "docid": "c9d0e46417146f31d8d79280146e3ca1", "text": "Generating images from a text description is as challenging as it is interesting. The Adversarial network performs in a competitive fashion where the networks are the rivalry of each other. With the introduction of Generative Adversarial Network, lots of development is happening in the field of Computer Vision. With generative adversarial networks as the baseline model, studied Stack GAN consisting of two-stage GANS step-by-step in this paper that could be easily understood. This paper presents visual comparative study of other models attempting to generate image conditioned on the text description. One sentence can be related to many images. And to achieve this multi-modal characteristic, conditioning augmentation is also performed. The performance of Stack-GAN is better in generating images from captions due to its unique architecture. As it consists of two GANS instead of one, it first draws a rough sketch and then corrects the defects yielding a high-resolution image.", "title": "" }, { "docid": "6f0b8c09aa460752ab95296cb5816043", "text": "Aircraft type recognition plays an important role in remote sensing image interpretation. Traditional methods suffer from bad generalization performance, while deep learning methods require large amounts of data with type labels, which are quite expensive and time-consuming to obtain. To overcome the aforementioned problems, in this paper, we propose an aircraft type recognition framework based on conditional generative adversarial networks (GANs). First, we design a new method to precisely detect aircrafts’ keypoints, which are used to generate aircraft masks and locate the positions of the aircrafts. Second, a conditional GAN with a region of interest (ROI)-weighted loss function is trained on unlabeled aircraft images and their corresponding masks. 
Third, an ROI feature extraction method is carefully designed to extract multi-scale features from the GAN in the regions of aircrafts. After that, a linear support vector machine (SVM) classifier is adopted to classify each sample using their features. Benefiting from the GAN, we can learn features which are strong enough to represent aircrafts based on a large unlabeled dataset. Additionally, the ROI-weighted loss function and the ROI feature extraction method make the features more related to the aircrafts rather than the background, which improves the quality of features and increases the recognition accuracy significantly. Thorough experiments were conducted on a challenging dataset, and the results prove the effectiveness of the proposed aircraft type recognition framework.", "title": "" }, { "docid": "69f2773d7901ac9d477604a85fb6a591", "text": "We propose an expert-augmented actor-critic algorithm, which we evaluate on two environments with sparse rewards: Montezuma’s Revenge and a demanding maze from the ViZDoom suite. In the case of Montezuma’s Revenge, an agent trained with our method achieves very good results consistently scoring above 27,000 points (in many experiments beating the first world). With an appropriate choice of hyperparameters, our algorithm surpasses the performance of the expert data. In a number of experiments, we have observed an unreported bug in Montezuma’s Revenge which allowed the agent to score more than 800, 000 points.", "title": "" }, { "docid": "a341bcf8efb975c078cc452e0eecc183", "text": "We show that, during inference with Convolutional Neural Networks (CNNs), more than 2× to 8× ineffectual work can be exposed if instead of targeting those weights and activations that are zero, we target different combinations of value stream properties. We demonstrate a practical application with Bit-Tactical (TCL), a hardware accelerator which exploits weight sparsity, per layer precision variability and dynamic fine-grain precision reduction for activations, and optionally the naturally occurring sparse effectual bit content of activations to improve performance and energy efficiency. TCL benefits both sparse and dense CNNs, natively supports both convolutional and fully-connected layers, and exploits properties of all activations to reduce storage, communication, and computation demands. While TCL does not require changes to the CNN to deliver benefits, it does reward any technique that would amplify any of the aforementioned weight and activation value properties. Compared to an equivalent data-parallel accelerator for dense CNNs, TCLp, a variant of TCL improves performance by 5.05× and is 2.98× more energy efficient while requiring 22% more area.", "title": "" }, { "docid": "d2efdbf8df3e0cd50adad299ab2d3018", "text": "Fuzzy controllers are efficient and interpretable system controllers for continuous state and action spaces. To date, such controllers have been constructed manually or trained automatically either using expert-generated problem-specific cost functions or incorporating detailed knowledge about the optimal control strategy. Both requirements for automatic training processes are not found in most real-world reinforcement learning (RL) problems. In such applications, online learning is often prohibited for safety reasons because it requires exploration of the problem’s dynamics during policy training. 
We introduce a fuzzy particle swarm reinforcement learning (FPSRL) approach that can construct fuzzy RL policies solely by training parameters on world models that simulate real system dynamics. These world models are created by employing an autonomous machine learning technique that uses previously generated transition samples of a real system. To the best of our knowledge, this approach is the first to relate self-organizing fuzzy controllers to model-based batch RL. FPSRL is intended to solve problems in domains where online learning is prohibited, system dynamics are relatively easy to model from previously generated default policy transition samples, and it is expected that a relatively easily interpretable control policy exists. The efficiency of the proposed approach with problems from such domains is demonstrated using three standard RL benchmarks, i.e., mountain car, cart-pole balancing, and cart-pole swing-up. Our experimental results demonstrate high-performing, interpretable fuzzy policies.", "title": "" }, { "docid": "811c430ff9efd0f8a61ff40753f083d4", "text": "The Waikato Environment for Knowledge Analysis (Weka) is a comprehensive suite of Java class libraries that implement many state-of-the-art machine learning and data mining algorithms. Weka is freely available on the World-Wide Web and accompanies a new text on data mining [1] which documents and fully explains all the algorithms it contains. Applications written using the Weka class libraries can be run on any computer with a Web browsing capability; this allows users to apply machine learning techniques to their own data regardless of computer platform.", "title": "" }, { "docid": "6168c4c547dca25544eedf336e369d95", "text": "Big Data means a very large amount of data and includes a range of methodologies such as big data collection, processing, storage, management, and analysis. Since Big Data Text Mining extracts a lot of features and data, clustering and classification can result in high computational complexity and the low reliability of the analysis results. In particular, a TDM (Term Document Matrix) obtained through text mining represents term-document features but features a sparse matrix. In this paper, the study focuses on selecting a set of optimized features from the corpus. A Genetic Algorithm (GA) is used to extract terms (features) as desired according to term importance calculated by the equation found. The study revolves around feature selection method to lower computational complexity and to increase analytical performance.We designed a new genetic algorithm to extract features in text mining. TF-IDF is used to reflect document-term relationships in feature extraction. Through the repetitive process, features are selected as many as the predetermined number. We have conducted clustering experiments on a set of spammail documents to verify and to improve feature selection performance. And we found that the proposal FSGA algorithm shown better performance of Text Clustering and Classification than using all of features.", "title": "" }, { "docid": "1cc7f97c7195f7f2dc45e07e3a4a8f78", "text": "Translucent materials are ubiquitous, and simulating their appearance requires accurate physical parameters. However, physically-accurate parameters for scattering materials are difficult to acquire. 
We introduce an optimization framework for measuring bulk scattering properties of homogeneous materials (phase function, scattering coefficient, and absorption coefficient) that is more accurate, and more applicable to a broad range of materials. The optimization combines stochastic gradient descent with Monte Carlo rendering and a material dictionary to invert the radiative transfer equation. It offers several advantages: (1) it does not require isolating single-scattering events; (2) it allows measuring solids and liquids that are hard to dilute; (3) it returns parameters in physically-meaningful units; and (4) it does not restrict the shape of the phase function using Henyey-Greenstein or any other low-parameter model. We evaluate our approach by creating an acquisition setup that collects images of a material slab under narrow-beam RGB illumination. We validate results by measuring prescribed nano-dispersions and showing that recovered parameters match those predicted by Lorenz-Mie theory. We also provide a table of RGB scattering parameters for some common liquids and solids, which are validated by simulating color images in novel geometric configurations that match the corresponding photographs with less than 5% error.", "title": "" }, { "docid": "7c10bb58d4698293944728978934a1e4", "text": "Molybdenum disulfide (MoS(2)) of single- and few-layer thickness was exfoliated on SiO(2)/Si substrate and characterized by Raman spectroscopy. The number of S-Mo-S layers of the samples was independently determined by contact-mode atomic force microscopy. Two Raman modes, E(1)(2g) and A(1g), exhibited sensitive thickness dependence, with the frequency of the former decreasing and that of the latter increasing with thickness. The results provide a convenient and reliable means for determining layer thickness with atomic-level precision. The opposite direction of the frequency shifts, which cannot be explained solely by van der Waals interlayer coupling, is attributed to Coulombic interactions and possible stacking-induced changes of the intralayer bonding. This work exemplifies the evolution of structural parameters in layered materials in changing from the three-dimensional to the two-dimensional regime.", "title": "" }, { "docid": "88627adba2bbd994f25cfc486a443103", "text": "BACKGROUND\nSeveral new targeted genes and clinical subtypes have been identified since publication in 2008 of the report of the last international consensus meeting on diagnosis and classification of epidermolysis bullosa (EB). As a correlate, new clinical manifestations have been seen in several subtypes previously described.\n\n\nOBJECTIVE\nWe sought to arrive at an updated consensus on the classification of EB subtypes, based on newer data, both clinical and molecular.\n\n\nRESULTS\nIn this latest consensus report, we introduce a new approach to classification (\"onion skinning\") that takes into account sequentially the major EB type present (based on identification of the level of skin cleavage), phenotypic characteristics (distribution and severity of disease activity; specific extracutaneous features; other), mode of inheritance, targeted protein and its relative expression in skin, gene involved and type(s) of mutation present, and--when possible--specific mutation(s) and their location(s).\n\n\nLIMITATIONS\nThis classification scheme critically takes into account all published data through June 2013. 
Further modifications are likely in the future, as more is learned about this group of diseases.\n\n\nCONCLUSION\nThe proposed classification scheme should be of value both to clinicians and researchers, emphasizing both clinical and molecular features of each EB subtype, and has sufficient flexibility incorporated in its structure to permit further modifications in the future.", "title": "" }, { "docid": "65f2651ec987ece0de560d9ac65e06a8", "text": "This paper describes neural network models that we prepared for the author profiling task of PAN@CLEF 2017. In previous PAN series, statistical models using a machine learning method with a variety of features have shown superior performances in author profiling tasks. We decided to tackle the author profiling task using neural networks. Neural networks have recently shown promising results in NLP tasks. Our models integrate word information and character information with multiple neural network layers. The proposed models have marked joint accuracies of 64–86% in the gender identification and the language variety identification of four languages.", "title": "" }, { "docid": "4b57b59f475a643b281a1ee5e49c87bd", "text": "In this paper we present a Model Predictive Control (MPC) approach for combined braking and steering systems in autonomous vehicles. We start from the result presented in (Borrelli et al. (2005)) and (Falcone et al. (2007a)), where a Model Predictive Controller (MPC) for autonomous steering systems has been presented. As in (Borrelli et al. (2005)) and (Falcone et al. (2007a)) we formulate an MPC control problem in order to stabilize a vehicle along a desired path. In the present paper, the control objective is to best follow a given path by controlling the front steering angle and the brakes at the four wheels independently, while fulfilling various physical and design constraints.", "title": "" }, { "docid": "fa7916c0afe0b18956f19b4fc8006971", "text": "INTRODUCTION\nPrevious studies demonstrated that multiple treatments using focused ultrasound can be effective as an non-invasive method for reducing unwanted localized fat deposits. The objective of the study is to investigate the safety and efficacy of this focused ultrasound device in body contouring in Asians.\n\n\nMETHOD\nFifty-three (51 females and 2 males) patients were enrolled into the study. Subjects had up to three treatment sessions with approximately 1-month interval in between treatment. Efficacy was assessed by changes in abdominal circumference, ultrasound fat thickness, and caliper fat thickness. Weight change was monitored to distinguish weight loss induced changes in these measurements. Patient questionnaire was completed after each treatment. The level of pain or discomfort, improvement in body contour and overall satisfaction were graded with a score of 1-5 (1 being the least). Any adverse effects such as erythema, pain during treatment or blistering were recorded.\n\n\nRESULT\nThe overall satisfaction amongst subjects was poor. Objective measurements by ultrasound, abdominal circumference, and caliper did not show significant difference after treatment. There is a negative correlation between the abdominal fat thickness and number of shots per treatment session.\n\n\nCONCLUSION\nFocused ultrasound is not effective for non-invasive body contouring among Southern Asians as compared with Caucasian. Such observation is likely due to smaller body figures. Design modifications can overcome this problem and in doing so, improve clinical outcome.", "title": "" } ]
scidocsrr
37621a0a54f5cb0d7ba28aba9ce9ac4b
Flat2Sphere: Learning Spherical Convolution for Fast Features from 360° Imagery
[ { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.", "title": "" }, { "docid": "becf84b237cc500ab8c8d02ad2b217fc", "text": "Numerous scale-invariant feature matching algorithms using scale-space analysis have been proposed for use with perspective cameras, where scale-space is defined as convolution with a Gaussian. The contribution of this work is a method suitable for use with wide angle cameras. Given an input image, we map it to the unit sphere and obtain scale-space images by convolution with the solution of the spherical diffusion equation on the sphere which we implement in the spherical Fourier domain. Using such an approach, the scale-space response of a point in space is independent of its position on the image plane for a camera subject to pure rotation. Scale-invariant features are then found as local extrema in scale-space. Given this set of scale-invariant features, we then generate feature descriptors by considering a circular support region defined on the sphere whose size is selected relative to the feature scale. We compare our method to a naive implementation of SIFT where the image is treated as perspective, where our results show an improvement in matching performance.", "title": "" }, { "docid": "db433a01dd2a2fd80580ffac05601f70", "text": "While depth tends to improve network performances, it also m akes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed a t obtaining small and fast-to-execute models, and it has shown that a student netw ork could imitate the soft output of a larger teacher network or ensemble of networ ks. In this paper, we extend this idea to allow the training of a student that is d eeper and thinner than the teacher, using not only the outputs but also the inte rmediate representations learned by the teacher as hints to improve the traini ng process and final performance of the student. Because the student intermedia te hidden layer will generally be smaller than the teacher’s intermediate hidde n layer, additional parameters are introduced to map the student hidden layer to th e prediction of the teacher hidden layer. This allows one to train deeper studen s that can generalize better or run faster, a trade-off that is controlled by the ch osen student capacity. 
For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "title": "" } ]
[ { "docid": "5f1360377ba7ee167eb196ba60624b90", "text": "In the present paper, a pitch determination algorithm (PDA) based on Subharmonic-to-Harmonic Ratio (SHR) is proposed. The algorithm is motivated by the results of a recent study on the perceived pitch of alternate pulse cycles in speech [1]. The algorithm employs a logarithmic frequency scale and a spectrum shifting technique to obtain the amplitude summation of harmonics and subharmonics, respectively. Through comparing the amplitude ratio of subharmonics and harmonics with the pitch perception results, the pitch of normal speech as well as speech with alternate pulse cycles (APC) can be determined. . Evaluation of the algorithm is performed on CSTR’s database and on synthesized speech with APC. The results show that this algorithm is one of the most reliable PDAs. Furthermore, superior to most other algorithms, it handles subharmonics reasonably well.", "title": "" }, { "docid": "84d2e697b2f2107d34516909f22768c6", "text": "PURPOSE\nSchema therapy was first applied to individuals with borderline personality disorder (BPD) over 20 years ago, and more recent work has suggested efficacy across a range of disorders. The present review aimed to systematically synthesize evidence for the efficacy and effectiveness of schema therapy in reducing early maladaptive schema (EMS) and improving symptoms as applied to a range of mental health disorders in adults including BPD, other personality disorders, eating disorders, anxiety disorders, and post-traumatic stress disorder.\n\n\nMETHODS\nStudies were identified through electronic searches (EMBASE, PsycINFO, MEDLINE from 1990 to January 2016).\n\n\nRESULTS\nThe search produced 835 titles, of which 12 studies were found to meet inclusion criteria. A significant number of studies of schema therapy treatment were excluded as they failed to include a measure of schema change. The Clinical Trial Assessment Measure was used to rate the methodological quality of studies. Schema change and disorder-specific symptom change was found in 11 of the 12 studies.\n\n\nCONCLUSIONS\nSchema therapy has demonstrated initial significant results in terms of reducing EMS and improving symptoms for personality disorders, but formal mediation analytical studies are lacking and rigorous evidence for other mental health disorders is currently sparse.\n\n\nPRACTITIONER POINTS\nFirst review to investigate whether schema therapy leads to reduced maladaptive schemas and symptoms across mental health disorders. Limited evidence for schema change with schema therapy in borderline personality disorder (BPD), with only three studies conducting correlational analyses. Evidence for schema and symptom change in other mental health disorders is sparse, and so use of schema therapy for disorders other than BPD should be based on service user/patient preference and clinical expertise and/or that the theoretical underpinnings of schema therapy justify the use of it therapeutically. Further work is needed to develop the evidence base for schema therapy for other disorders.", "title": "" }, { "docid": "bb6ec993e0d573f4307a37588d6732ae", "text": "Beaudry and Pinsonneault (2005) IT related coping behaviors System users choose different adaptation strategies based on a combination of primary appraisal (i.e., a user’s assessment of the expected consequences of an IT event) and secondary appraisal (i.e., a user’s assessment of his/her control over the situation). 
Users will perform different actions in response to a combination of cognitive and behavioral efforts, both of which have been categorized as either problemor emotion-focused. Whole system", "title": "" }, { "docid": "af8fdea69016ec8e61e935c84f1c72be", "text": "Many developing countries are suffering from air pollution recently. Governments have built a few air quality monitoring stations in cities to inform people the concentration of air pollutants. Unfortunately, urban air quality is highly skewed in a city, depending on multiple complex factors, such as the meteorology, traffic volume, and land uses. Building more monitoring stations is very costly in terms of money, land uses, and human resources. As a result, people do not really know the fine-grained air quality of a location without a monitoring station. In this paper, we introduce a cloud-based knowledge discovery system that infers the real-time and fine-grained air quality information throughout a city based on the (historical and realtime) air quality data reported by existing monitor stations and a variety of data sources observed in the city, such as meteorology, traffic flow, human mobility, structure of road networks, and point of interests (POIs). The system also provides a mobile client, with which a user can monitor the air quality of multiple locations in a city (e.g. the current location, home and work places), and a web service that allows other applications to call the air quality of any location. The system has been evaluated based on the real data from 9 cities in China, including Beijing, Shanghai, Guanzhou, and Shenzhen, etc. The system is running on Microsoft Azure and the mobile client is publicly available in Window Phone App Store, entitled Urban Air. Our system gives a cost-efficient example for enabling a knowledge discovery prototype involving big data on the cloud.", "title": "" }, { "docid": "58984ddb8d4c28dc63caa29bc245e259", "text": "OpenCL is an open standard to write parallel applications for heterogeneous computing systems. Since its usage is restricted to a single operating system instance, programmers need to use a mix of OpenCL and MPI to program a heterogeneous cluster. In this paper, we introduce an MPI-OpenCL implementation of the LINPACK benchmark for a cluster with multi-GPU nodes. The LINPACK benchmark is one of the most widely used benchmark applications for evaluating high performance computing systems. Our implementation is based on High Performance LINPACK (HPL) and uses the blocked LU decomposition algorithm. We address that optimizations aimed at reducing the overhead of CPUs are necessary to overcome the performance gap between the CPUs and the multiple GPUs. Our LINPACK implementation achieves 93.69 Tflops (46 percent of the theoretical peak) on the target cluster with 49 nodes, each node containing two eight-core CPUs and four GPUs.", "title": "" }, { "docid": "214d1911eb4c439402d3b5a81eebf647", "text": "Crop type mapping and studying the dynamics of agricultural fields in arid and semi-arid environments are of high importance since these ecosystems have witnessed an unprecedented rate of area decline during the last decades. Crop type mapping using medium spatial resolution imagery data has been considered as one of the most important management tools. Remotely sensed data provide reliable, cost and time effective information for monitoring, analyzing and mapping of agricultural land areas. 
This research was conducted to explore the utility of Landsat 8 imagery data for crop type mapping in a highly fragmented and heterogeneous agricultural landscape in Najaf-Abad Hydrological Unit, Iran. Based on the phenological information from long-term field surveys, five Landsat 8 image scenes (from March to October) were processed to classify the main crop types. In this regard, wheat, barley, alfalfa, and fruit trees have been classified applying inventive decision tree algorithms and Support Vector Machine was used to categorize rice, potato, vegetables, and greenhouse vegetable crops. Accuracy assessment was then undertaken based on spring and summer crop maps (two confusion matrices) that resulted in Kappa coefficients of 0.89. The employed images and classification methods could form a basis for better crop type mapping in central Iran that is undergoing severe drought condition. 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "45ea01d82897401058492bc2f88369b3", "text": "Reduction in greenhouse gas emissions from transportation is essential in combating global warming and climate change. Eco-routing enables drivers to use the most eco-friendly routes and is effective in reducing vehicle emissions. The EcoTour system assigns eco-weights to a road network based on GPS and fuel consumption data collected from vehicles to enable ecorouting. Given an arbitrary source-destination pair in Denmark, EcoTour returns the shortest route, the fastest route, and the eco-route, along with statistics for the three routes. EcoTour also serves as a testbed for exploring advanced solutions to a range of challenges related to eco-routing.", "title": "" }, { "docid": "6a2a7b5831f6b3608eb88f5ccda6d520", "text": "In this paper we examine currently used programming contest systems. We discuss possible reasons why we do not expect any of the currently existing contest systems to be adopted by a major group of different programming contests. We suggest to approach the design of a contest system as a design of a secure IT system, using known methods from the area of computer", "title": "" }, { "docid": "ed33b5fae6bc0af64668b137a3a64202", "text": "In this study the effect of the Edmodo social learning environment on mobile assisted language learning (MALL) was examined by seeking the opinions of students. Using a quantitative experimental approach, this study was conducted by conducting a questionnaire before and after using the social learning network Edmodo. Students attended lessons with their mobile devices. The course materials were shared in the network via Edmodo group sharing tools. The students exchanged idea and developed projects, and felt as though they were in a real classroom setting. The students were also able to access various multimedia content. The results of the study indicate that Edmodo improves students’ foreign language learning, increases their success, strengthens communication among students, and serves as an entertaining learning environment for them. The educationally suitable sharing structure and the positive user opinions described in this study indicate that Edmodo is also usable in other lessons. Edmodo can be used on various mobile devices, including smartphones, in addition to the web. 
This advantageous feature contributes to the usefulness of Edmodo as a scaffold for education.", "title": "" }, { "docid": "e1cd6cce0c895691df5112a9dbcbb46b", "text": "This paper presents a framework for component-based face alignment and representation that demonstrates improvements in matching performance over the more common holistic approach to face alignment and representation. This work is motivated by recent evidence from the cognitive science community demonstrating the efficacy of component-based facial representations. The component-based framework presented in this paper consists of the following major steps: 1) landmark extraction using Active Shape Models (ASM), 2) alignment and cropping of components using Procrustes Analysis, 3) representation of components with Multiscale Local Binary Patterns (MLBP), 4) per-component measurement of facial similarity, and 5) fusion of per-component similarities. We demonstrate on three public datasets and an operational dataset consisting of face images of 8000 subjects, that the proposed component-based representation provides higher recognition accuracies over holistic-based representations. Additionally, we show that the proposed component-based representations: 1) are more robust to changes in facial pose, and 2) improve recognition accuracy on occluded face images in forensic scenarios.", "title": "" }, { "docid": "65b2a4e2532b838a74b44a5d2a665e1d", "text": "We present a robust shape model for localizing a set of feature points on a 2D image. Previous shape alignment models assume Gaussian observation noise and attempt to fit a regularized shape using all the observed data. However, such an assumption is vulnerable to gross feature detection errors resulted from partial occlusions or spurious background features. We address this problem by using a hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate object shape and pose hypotheses from randomly sampled partial shapes - subsets of feature points. The hypotheses are then evaluated to find the one that minimizes the shape prediction error. The proposed model can effectively handle outliers and recover the object shape. We evaluate our approach on a challenging dataset which contains over 2,000 multi-view car images and spans a wide variety of types, lightings, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.", "title": "" }, { "docid": "22d878a735d649f5932be6cd0b3979c9", "text": "This study investigates the potential to introduce basic programming concepts to middle school children within the context of a classroom writing-workshop. In this paper we describe how students drafted, revised, and published their own digital stories using the introductory programming language Scratch and in the process learned fundamental CS concepts as well as the wider connection between programming and writing as interrelated processes of composition.", "title": "" }, { "docid": "e352ef343e8413b27be696344fe10259", "text": "The futuristic trend is toward the merging of cyber world with physical world leading to the development of Internet of Things (IoT) framework. Current research is focused on the scalar data-based IoT applications thus leaving the gap between services and benefits of IoT objects and multimedia objects. Multimedia IoT (IoMT) applications require new protocols to be developed to cope up with heterogeneity among the various communicating objects. 
In this paper, we have presented a cross-layer protocol for IoMT. In proposed methodology, we have considered the cross communication of physical, data link, and routing layers for multimedia applications. Response time should be less, and communication among the devices must be energy efficient in multimedia applications. IoMT has considered both the issues and the comparative simulations in MATLAB have shown that it outperforms over the traditional protocols and presents the optimized solution for IoMT.", "title": "" }, { "docid": "4c811ed0f6c69ca5485f6be7d950df89", "text": "Fairness has emerged as an important category of analysis for machine learning systems in some application areas. In extending the concept of fairness to recommender systems, there is an essential tension between the goals of fairness and those of personalization. However, there are contexts in which equity across recommendation outcomes is a desirable goal. It is also the case that in some applications fairness may be a multisided concept, in which the impacts on multiple groups of individuals must be considered. In this paper, we examine two different cases of fairness-aware recommender systems: consumer-centered and provider-centered. We explore the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified version of the Sparse Linear Method (SLIM) can be used to improve the balance of user and item neighborhoods, with the result of achieving greater outcome fairness in real-world datasets with minimal loss in ranking performance.", "title": "" }, { "docid": "e8a51d5b917d300154d5c3524c61c702", "text": "There has been several research on car license plate recognition (LPR). However, the number of research on Thai LPR is limited, besides, most of which are published in Thai. The existing work on Thai LPR have faced problems caused by low- quality license plates and a great number of similar characters that exist in Thai alphabets. Generally, car license plates in Thailand come in different conditions, ranging from new license plates of excellent quality to low-quality ones with screws on or even some paint already peeled off. Thai characters that appear on Thai license plates are also generally shape-complicated. Area-based methods, such as conventional template matching, are ineffective to low-quality or resembling characters. To cope with these problems, this paper presents a new method, which recognizes the character patterns relying only on essential elements of characters. This method lies on the concept that Thai characters can be distinguished by essential elements that form up different shapes. Similar characters are handled by directly focusing on their obvious differences among them. The overall success rate of the proposed method, tested on 300 actual license plates of various qualities, is 85.33%.", "title": "" }, { "docid": "3965c4ea31759759d2ea6669df6c18f3", "text": "Named Entity Disambiguation is the task of disambiguating named entity mentions in unstructured text and linking them to their corresponding entries in a large knowledge base such as Freebase. Practically, each text match in a given document should be mapped to the correct entity out of the corresponding entities in the knowledge base or none of them if no correct entity is found (Empty Entry). 
The case of an empty entry makes the problem at hand more complex, but by solving it, one can successfully cope with missing and erroneous data as well as unknown entities. In this work we present AOL's Named Entity Resolver which was designed to handle real life scenarios including empty entries. As part of the automated news analysis platform, it processes over 500K news articles a day, entities from each article are extracted and disambiguated. According to our experiments, AOL's resolver shows much better results in disambiguating entities mapped to Wikipedia or Freebase compared to industry leading products.", "title": "" }, { "docid": "7cebca46f584b2f31fd9d2c8ef004f17", "text": "Wirelessly networked systems of intra-body sensors and actuators could enable revolutionary applications at the intersection between biomedical science, networking, and control with a strong potential to advance medical treatment of major diseases of our times. Yet, most research to date has focused on communications along the body surface among devices interconnected through traditional electromagnetic radio-frequency (RF) carrier waves; while the underlying root challenge of enabling networked intra-body miniaturized sensors and actuators that communicate through body tissues is substantially unaddressed. The main obstacle to enabling this vision of networked implantable devices is posed by the physical nature of propagation in the human body. The human body is composed primarily (65 percent) of water, a medium through which RF electromagnetic waves do not easily propagate, even at relatively low frequencies. Therefore, in this article we take a different perspective and propose to investigate and study the use of ultrasonic waves to wirelessly internetwork intra-body devices. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, and transducer size. Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access and network layers of the protocol stack.", "title": "" }, { "docid": "9a89587e8f3a2ead5dbcd32b0e4d07e0", "text": "This paper proposes a high-efficiency wireless power transfer system with an asymmetric four-coil resonator. It presents a theoretical analysis, an optimal design method, and experimental results. Multicoil systems which have more than three coils between the primary and secondary side provide the benefits of a good coupling coefficient, a long transfer distance, and a wide operating frequency range. The conventional four-coil system has a symmetric coil configuration. In the primary side, there are source and transmitter coils, and the secondary side contains receiver and load coils. On the other hand, in the proposed asymmetric four-coil system, the primary side consists of a source coil and two transmitter coils which are called intermediate coils, and in the secondary side, a load coil serves as a receiver coil. In the primary side, two intermediate coils boost the apparent coupling coefficient at around the operating frequency. Because of this double boosting effect, the system with an asymmetric four-coil resonator has a higher efficiency than that of the conventional symmetric four-coil system. A prototype of the proposed system with the asymmetric four-coil resonator is implemented and experimented on to verify the validity of the proposed system. 
The prototype operates at 90 kHz of switching frequency and has 200 mm of the power transmission distance between the primary side and the secondary side. An ac-dc overall system efficiency of 96.56% has been achieved at 3.3 kW of output power.", "title": "" }, { "docid": "8439f9d3e33fdbc43c70f1d46e2e143e", "text": "Redacting text documents has traditionally been a mostly manual activity, making it expensive and prone to disclosure risks. This paper describes a semi-automated system to ensure a specified level of privacy in text data sets. Recent work has attempted to quantify the likelihood of privacy breaches for text data. We build on these notions to provide a means of obstructing such breaches by framing it as a multi-class classification problem. Our system gives users fine-grained control over the level of privacy needed to obstruct sensitive concepts present in that data. Additionally, our system is designed to respect a user-defined utility metric on the data (such as disclosure of a particular concept), which our methods try to maximize while anonymizing. We describe our redaction framework, algorithms, as well as a prototype tool built in to Microsoft Word that allows enterprise users to redact documents before sharing them internally and obscure client specific information. In addition we show experimental evaluation using publicly available data sets that show the effectiveness of our approach against both automated attackers and human subjects.The results show that we are able to preserve the utility of a text corpus while reducing disclosure risk of the sensitive concept.", "title": "" } ]
scidocsrr
3f01b064a90e75e43416d2d8031ffec4
The WDC Gold Standards for Product Feature Extraction and Product Matching
[ { "docid": "2e088ce4f7e5b3633fa904eab7563875", "text": "Large numbers of websites have started to markup their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in last years as major search engines such as Google, Bing and Yahoo! use the markup to improve their search results. Embedding semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. We evaluate our processing pipeline using 1.9 million product offers from 9240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus.", "title": "" }, { "docid": "7aded3885476c7d37228855916255d79", "text": "The web is a rich resource of structured data. There has been an increasing interest in using web structured data for many applications such as data integration, web search and question answering. In this paper, we present DEXTER, a system to find product sites on the web, and detect and extract product specifications from them. Since product specifications exist in multiple product sites, our focused crawler relies on search queries and backlinks to discover product sites. To perform the detection, and handle the high diversity of specifications in terms of content, size and format, our system uses supervised learning to classify HTML fragments (e.g., tables and lists) present in web pages as specifications or not. To perform large-scale extraction of the attribute-value pairs from the HTML fragments identified by the specification detector, DEXTER adopts two lightweight strategies: a domain-independent and unsupervised wrapper method, which relies on the observation that these HTML fragments have very similar structure; and a combination of this strategy with a previous approach, which infers extraction patterns by annotations generated by automatic but noisy annotators. The results show that our crawler strategy to locate product specification pages is effective: (1) it discovered 1.46M product specification pages from 3, 005 sites and 9 different categories; (2) the specification detector obtains high values of F-measure (close to 0.9) over a heterogeneous set of product specifications; and (3) our efficient wrapper methods for attribute-value extraction get very high values of precision (0.92) and recall (0.95) and obtain better results than a state-of-the-art, supervised rule-based wrapper.", "title": "" }, { "docid": "71da7722f6ce892261134bd60ca93ab7", "text": "Semantically annotated data, using markup languages like RDFa and Microdata, has become more and more publicly available in the Web, especially in the area of e-commerce. Thus, a large amount of structured product descriptions are freely available and can be used for various applications, such as product search or recommendation. However, little efforts have been made to analyze the categories of the available product descriptions. 
Although some products have an explicit category assigned, the categorization schemes vary a lot, as the products originate from thousands of different sites. This heterogeneity makes the use of supervised methods, which have been proposed by most previous works, hard to apply. Therefore, in this paper, we explain how distantly supervised approaches can be used to exploit the heterogeneous category information in order to map the products to a set of target categories from an existing product catalogue. Our results show that, even though this task is far from trivial, we can reach almost 56% accuracy for classifying products into 37 categories.", "title": "" } ]
[ { "docid": "e1064861857e32be6184d3e9852f2c48", "text": "Alzheimer's disease (AD) represents the most frequent neurodegenerative disease of the human brain worldwide. Currently practiced treatment strategies for AD only include some less effective symptomatic therapeutic interventions, which unable to counteract the disease course of AD. New therapeutic attempts aimed to prevent, reduce, or remove the extracellular depositions of the amyloid-β protein did not elicit beneficial effects on cognitive deficits or functional decline of AD. In view of the failure of these amyloid-β-based therapeutic trials and the close correlation between the brain pathology of the cytoskeletal tau protein and clinical AD symptoms, therapeutic attention has since shifted to the tau cytoskeletal protein as a novel drug target. The abnormal hyperphosphorylation and intraneuronal aggregation of this protein are early events in the evolution of the AD-related neurofibrillary pathology, and the brain spread of the AD-related tau aggregation pathology may possibly follow a corruptive protein templating and seeding-like mechanism according to the prion hypothesis. Accordingly, immunotherapeutic targeting of the tau aggregation pathology during the very early pre-tangle phase is currently considered to represent an effective and promising therapeutic approach for AD. Recent studies have shown that the initial immunoreactive tau aggregation pathology already prevails in several subcortical regions in the absence of any cytoskeletal changes in the cerebral cortex. Thus, it may be hypothesized that the subcortical brain regions represent the \"port of entry\" for the pathogenetic agent from which the disease ascends anterogradely as an \"interconnectivity pathology\".", "title": "" }, { "docid": "0c4ca5a63c7001e6275b05da7771a7a6", "text": "We present a new data structure for the c-approximate near neighbor problem (ANN) in the Euclidean space. For n points in R, our algorithm achieves Oc(n + d log n) query time and Oc(n + d log n) space, where ρ ≤ 0.73/c + O(1/c) + oc(1). This is the first improvement over the result by Andoni and Indyk (FOCS 2006) and the first data structure that bypasses a locality-sensitive hashing lower bound proved by O’Donnell, Wu and Zhou (ICS 2011). By known reductions we obtain a data structure for the Hamming space and l1 norm with ρ ≤ 0.73/c+O(1/c) + oc(1), which is the first improvement over the result of Indyk and Motwani (STOC 1998). Thesis Supervisor: Piotr Indyk Title: Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "bf079d5c13d37a57e835856df572a306", "text": "Paraphrase Detection is the task of examining if two sentences convey the same meaning or not. Here, in this paper, we have chosen a sentence embedding by unsupervised RAE vectors for capturing syntactic as well as semantic information. The RAEs learn features from the nodes of the parse tree and chunk information along with unsupervised word embedding. These learnt features are used for measuring phrase wise similarity between two sentences. Since sentences are of varying length, we use dynamic pooling for getting a fixed sized representation for sentences. This fixed sized sentence representation is the input to the classifier. The DPIL (Detecting Paraphrases in Indian Languages) dataset is used for paraphrase identification here. Initially, paraphrase identification is defined as a 2-class problem and then later, it is extended to a 3-class problem. 
Word2vec and Glove embedding techniques producing 100, 200 and 300 dimensional vectors are used to check variation in accuracies. The baseline system accuracy obtained using word2vec for the 2-class problem is 77.67% and the same for the 3-class problem is 66.07%. Glove gave an accuracy of 77.33% for the 2-class and 65.42% for the 3-class problem. The results are also compared with the existing open source word embedding and our system using Word2vec embedding is found to perform better. This is a first attempt using a chunking-based approach for identification of Malayalam paraphrases.", "title": "" }, { "docid": "496864f6ccafbc23e52d8cead505eac7", "text": "Hotel guests’ expectations and actual experiences on hotel service quality often fail to coincide due to guests’ unusually high anticipations, hotels’ complete breakdowns in delivering their standard, or the combination of both. Moreover, this disconfirmation could be augmented contingent upon the level of hotel segment (hotel star-classification) and the overall rating manifested by previous guests. By incorporating a 2 × 2 matrix design in which a hotel star-classification configures one dimension (2 versus 4 stars) and a customers’ overall rating (lower versus higher overall ratings) configures the other, this explorative multiple case study uses conjoint analyses to examine the differences in the comparative importance of the six hotel attributes (value, location, sleep quality, rooms, cleanliness, and service) among four prominent hotel chain brands located in the United States. Four major and eight minor propositions are suggested for future empirical research based on the results of the four combined studies. Through the analysis of online data, this study may enlighten hotel managers with various ways to accommodate hotel guests’ needs. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2359295e109766126c5427b71031f4a0", "text": "Recently, aggressive voltage scaling was shown as an important technique in achieving highly energy-efficient circuits. Specifically, scaling Vdd to near or sub-threshold regions was proposed for energy-constrained sensor systems to enable long lifetime and small system volume [1][2][4]. However, energy efficiency degrades below a certain voltage, Vmin, due to rapidly increasing leakage energy consumption, setting a fundamental limit on the achievable energy efficiency. In addition, voltage scaling degrades performance and heightens delay variability due to large Id sensitivity to PVT variations in the ultra-low voltage (ULV) regime. This paper uses circuit and architectural methods to further reduce the minimum energy point, or Emin, and establish a new lower limit on energy efficiency, while simultaneously improving performance and robustness. The approaches are demonstrated on an FFT core in 65nm CMOS.", "title": "" }, { "docid": "b0cc7d5313acaa47eb9cba9e830fa9af", "text": "Data-driven intelligent transportation systems utilize data resources generated within intelligent systems to improve the performance of transportation systems and provide convenient and reliable services. Traffic data refer to datasets generated and collected on moving vehicles and objects. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. 
This paper introduces the basic concept and pipeline of traffic data visualization, provides an overview of related data processing techniques, and summarizes existing methods for depicting the temporal, spatial, numerical, and categorical properties of traffic data.", "title": "" }, { "docid": "7e623e6510f6b029cf3f49da98cd777e", "text": "Robust dialogue belief tracking is a key component in maintaining good quality dialogue systems. The tasks that dialogue systems are trying to solve are becoming increasingly complex, requiring scalability to multi-domain, semantically rich dialogues. However, most current approaches have difficulty scaling up with domains because of the dependency of the model parameters on the dialogue ontology. In this paper, a novel approach is introduced that fully utilizes semantic similarity between dialogue utterances and the ontology terms, allowing the information to be shared across domains. The evaluation is performed on a recently collected multi-domain dialogues dataset, one order of magnitude larger than currently available corpora. Our model demonstrates great capability in handling multi-domain dialogues, simultaneously outperforming existing state-of-the-art models in singledomain dialogue tracking tasks.", "title": "" }, { "docid": "b3f423e513c543ecc9fe7003ff9880ea", "text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. In this system, portable sensors collect the air quality information timely, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.", "title": "" }, { "docid": "d1c84b1131f8cb2abbbb0383c83bc0d2", "text": "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. 
The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms.", "title": "" }, { "docid": "7d1470edd8d8c6bd589ea64a73189705", "text": "Background modeling plays an important role for video surveillance, object tracking, and object counting. In this paper, we propose a novel deep background modeling approach utilizing fully convolutional network. In the network block constructing the deep background model, three atrous convolution branches with different dilate are used to extract spatial information from different neighborhoods of pixels, which breaks the limitation that extracting spatial information of the pixel from fixed pixel neighborhood. Furthermore, we sample multiple frames from original sequential images with increasing interval, in order to capture more temporal information and reduce the computation. Compared with classical background modeling approaches, our approach outperforms the state-of-art approaches both in indoor and outdoor scenes.", "title": "" }, { "docid": "4bbc30fe3bdba1a749d4eca3016bb371", "text": "Autonomous vehicle (AV) technology is rapidly becoming a reality on U.S. roads, offering the promise of improvements in traffic management, safety, and the comfort and efficiency of vehicular travel. The California Department of Motor Vehicles (DMV) reports that between 2014 and 2017, manufacturers tested 144 AVs, driving a cumulative 1,116,605 autonomous miles, and reported 5,328 disengagements and 42 accidents involving AVs on public roads. This paper investigates the causes, dynamics, and impacts of such AV failures by analyzing disengagement and accident reports obtained from public DMV databases. We draw several conclusions. For example, we find that autonomous vehicles are 15 - 4000× worse than human drivers for accidents per cumulative mile driven; that drivers of AVs need to be as alert as drivers of non-AVs; and that the AVs' machine-learning-based systems for perception and decision-and-control are the primary cause of 64% of all disengagements.", "title": "" }, { "docid": "14d3712efca71981103ba3ab44c39dd2", "text": "This paper is survey of computational approaches for paraphrasing. Paraphrasing methods such as generation, identification and acquisition of phrases or sentences is a process that conveys same information. Paraphrasing is a process of expressing semantic content of source using different words to achieve the greater clarity. The task of generating or identifying the semantic equivalence for different elements of language such as words sentences; is an essential part of the natural language processing. Paraphrasing is being used for various natural language applications. This paper discuses paraphrase impact on few applications and also various paraphrasing methods.", "title": "" }, { "docid": "8ef4e3c1b3b0008d263e0a55e92750cc", "text": "We review recent breakthroughs in the silicon photonic technology and components, and describe progress in silicon photonic integrated circuits. Heterogeneous silicon photonics has recently demonstrated performance that significantly outperforms native III/V components. 
The impact active silicon photonic integrated circuits could have on interconnects, telecommunications, sensors, and silicon electronics is reviewed.", "title": "" }, { "docid": "54cd27447dffe93350eba701e5c89a10", "text": "From recalling long forgotten experiences based on a familiar scent or on a piece of music, to lip reading aided conversation in noisy environments or travel sickness caused by mismatch of the signals from vision and the vestibular system, the human perception manifests countless examples of subtle and effortless joint adoption of the multiple senses provided to us by evolution. Emulating such multisensory (or multimodal, i.e., comprising multiple types of input modes or modalities) processing computationally offers tools for more effective, efficient, or robust accomplishment of many multimedia tasks using evidence from the multiple input modalities. Information from the modalities can also be analyzed for patterns and connections across them, opening up interesting applications not feasible with a single modality, such as prediction of some aspects of one modality based on another. In this dissertation, multimodal analysis techniques are applied to selected video tasks with accompanying modalities. More specifically, all the tasks involve some type of analysis of videos recorded by non-professional videographers using mobile devices. Fusion of information from multiple modalities is applied to recording environment classification from video and audio as well as to sport type classification from a set of multi-device videos, corresponding audio, and recording device motion sensor data. The environment classification combines support vector machine (SVM) classifiers trained on various global visual low-level features with audio event histogram based environment classification using k nearest neighbors (k-NN). Rule-based fusion schemes with genetic algorithm (GA)-optimized modality weights are compared to training a SVM classifier to perform the multimodal fusion. A comprehensive selection of fusion strategies is compared for the task of classifying the sport type of a set of recordings from a common event. These include fusion prior to, simultaneously with, and after classification; various approaches for using modality quality estimates; and fusing soft confidence scores as well as crisp single-class predictions. Additionally, different strategies are examined for aggregating the decisions of single videos to a collective prediction from the set of videos recorded concurrently with multiple devices. In both tasks multimodal analysis shows clear advantage over separate classification of the modalities. Another part of the work investigates cross-modal pattern analysis and audio-based video editing. This study examines the feasibility of automatically timing shot cuts of multi-camera concert recordings according to music-related cutting patterns learnt from professional concert videos. Cut timing is a crucial part of automated creation of multicamera mashups, where shots from multiple recording devices from a common event are alternated with the aim at mimicing a professionally produced video. In the framework, separate statistical models are formed for typical patterns of beat-quantized cuts in short segments, differences in beats between consecutive cuts, and relative deviation of cuts from exact beat times. 
Based on music meter and audio change point analysis of a new", "title": "" }, { "docid": "1b24b5d1936377c3659273a68aafeb35", "text": "In this paper, hand dorsal images acquired under infrared light are used to design an accurate personal authentication system. Each of the image is segmented into palm dorsal and fingers which are subsequently used to extract palm dorsal veins and infrared hand geometry features respectively. A new quality estimation algorithm is proposed to estimate the quality of palm dorsal which assigns low values to the pixels containing hair or skin texture. Palm dorsal is enhanced using filtering. For vein extraction, information provided by the enhanced image and the vein quality is consolidated using a variational approach. The proposed vein extraction can handle the issues of hair, skin texture and variable width veins so as to extract the genuine veins accurately. Several post processing techniques are introduced in this paper for accurate feature extraction of infrared hand geometry features. Matching scores are obtained by matching palm dorsal veins and infrared hand geometry features. These are eventually fused for authentication. For performance evaluation, a database of 1500 hand images acquired from 300 different hands is created. Experimental results demonstrate the superiority of the proposed system over existing", "title": "" }, { "docid": "b1bb8eda4f7223a4c6dd8201ff5abfae", "text": "Recommender systems are constructed to search the content of interest from overloaded information by acquiring useful knowledge from massive and complex data. Since the amount of information and the complexity of the data structure grow, it has become a more interesting and challenging topic to find an efficient way to process, model, and analyze the information. Due to the Global Positioning System (GPS) data recording the taxi's driving time and location, the GPS-equipped taxi can be regarded as the detector of an urban transport system. This paper proposes a Taxi-hunting Recommendation System (Taxi-RS) processing the large-scale taxi trajectory data, in order to provide passengers with a waiting time to get a taxi ride in a particular location. We formulated the data offline processing system based on HotSpotScan and Preference Trajectory Scan algorithms. We also proposed a new data structure for frequent trajectory graph. Finally, we provided an optimized online querying subsystem to calculate the probability and the waiting time of getting a taxi. Taxi-RS is built based on the real-world trajectory data set generated by 12 000 taxis in one month. Under the condition of guaranteeing the accuracy, the experimental results show that our system can provide more accurate waiting time in a given location compared with a naïve algorithm.", "title": "" }, { "docid": "6b7594aa4ace0f56884d970a9e254dc5", "text": "Recent work has explored the use of hidden Markov models for unsupervised discourse and conversation modeling, where each segment or block of text such as a message in a conversation is associated with a hidden state in a sequence. We extend this approach to allow each block of text to be a mixture of multiple classes. Under our model, the probability of a class in a text block is a log-linear function of the classes in the previous block. We show that this model performs well at predictive tasks on two conversation data sets, improving thread reconstruction accuracy by up to 15 percentage points over a standard HMM. 
Additionally, we show quantitatively that the induced word clusters correspond to speech acts more closely than baseline models.", "title": "" }, { "docid": "892c75c6b719deb961acfe8b67b982bb", "text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.", "title": "" }, { "docid": "f40125e7cc8279a5514deaf1146684de", "text": "Summary Several models explain how a complex integrated system like the rodent mandible can arise from multiple developmental modules. The models propose various integrating mechanisms, including epigenetic effects of muscles on bones. We test five for their ability to predict correlations found in the individual (symmetric) and fluctuating asymmetric (FA) components of shape variation. We also use exploratory methods to discern patterns unanticipated by any model. Two models fit observed correlation matrices from both components: (1) parts originating in same mesenchymal condensation are integrated, (2) parts developmentally dependent on the same muscle form an integrated complex as do those dependent on teeth. Another fits the correlations observed in FA: each muscle insertion site is an integrated unit. However, no model fits well, and none predicts the complex structure found in the exploratory analyses, best described as a reticulated network. Furthermore, no model predicts the correlation between proximal parts of the condyloid and coronoid, which can exceed the correlations between proximal and distal parts of the same process. Additionally, no model predicts the correlation between molar alveolus and ramus and/or angular process, one of the highest correlations found in the FA component. That correlation contradicts the basic premise of all five developmental models, yet it should be anticipated from the epigenetic effects of mastication, possibly the primary morphogenetic process integrating the jaw coupling forces generated by muscle contraction with those experienced at teeth.", "title": "" } ]
scidocsrr
eabe324a2abbd5aa247017c3b62cc6c5
Investigation into Big Data Impact on Digital Marketing
[ { "docid": "a2047969c4924a1e93b805b4f7d2402c", "text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.", "title": "" }, { "docid": "0994065c757a88373a4d97e5facfee85", "text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.", "title": "" } ]
[ { "docid": "1145d2375414afbdd5f1e6e703638028", "text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).", "title": "" }, { "docid": "00277e4562f707d37844e6214d1f8777", "text": "Video super-resolution (SR) aims at estimating a high-resolution video sequence from a low-resolution (LR) one. Given that the deep learning has been successfully applied to the task of single image SR, which demonstrates the strong capability of neural networks for modeling spatial relation within one single image, the key challenge to conduct video SR is how to efficiently and effectively exploit the temporal dependence among consecutive LR frames other than the spatial relation. However, this remains challenging because the complex motion is difficult to model and can bring detrimental effects if not handled properly. We tackle the problem of learning temporal dynamics from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependence. Inspired by the inception module in GoogLeNet [1], filters of various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated, in order to fully exploit the temporal relation among the consecutive LR frames. Second, we decrease the complexity of motion among neighboring frames using a spatial alignment network that can be end-to-end trained with the temporal adaptive network and has the merit of increasing the robustness to complex motion and the efficiency compared with the competing image alignment methods. We provide a comprehensive evaluation of the temporal adaptation and the spatial alignment modules. We show that the temporal adaptive design considerably improves the SR quality over its plain counterparts, and the spatial alignment network is able to attain comparable SR performance with the sophisticated optical flow-based approach, but requires a much less running time. Overall, our proposed model with learned temporal dynamics is shown to achieve the state-of-the-art SR results in terms of not only spatial consistency but also the temporal coherence on public video data sets. More information can be found in http://www.ifp.illinois.edu/~dingliu2/videoSR/.", "title": "" }, { "docid": "c27fb42cf33399c9c84245eeda72dd46", "text": "The proliferation of technology has empowered the web applications. At the same time, the presences of Cross-Site Scripting (XSS) vulnerabilities in web applications have become a major concern for all. Despite the many current detection and prevention approaches, attackers are exploiting XSS vulnerabilities continuously and causing significant harm to the web users. In this paper, we formulate the detection of XSS vulnerabilities as a prediction model based classification problem. A novel approach based on text-mining and pattern-matching techniques is proposed to extract a set of features from source code files. The extracted features are used to build prediction models, which can discriminate the vulnerable code files from the benign ones. 
The efficiency of the developed models is evaluated on a publicly available labeled dataset that contains 9408 labeled (i.e. safe, unsafe) PHP source code files. The experimental results depict the superiority of the proposed approach over existing ones.", "title": "" }, { "docid": "b4796891108f41b1faf054636d3eefd2", "text": "Business process analysis ranges from model verification at design-time to the monitoring of processes at runtime. Much progress has been achieved in process verification. Today we are able to verify the entire reference model of SAP without any problems. Moreover, more and more processes leave their “trail” in the form of event logs. This makes it interesting to apply process mining to these logs. Interestingly, practical applications of process mining reveal that reality is often quite different from the idealized models, also referred to as “PowerPoint reality”. Future process-aware information systems will need to provide full support of the entire life-cycle of business processes. Recent results in business process analysis show that this is indeed possible, e.g., the possibilities offered by process mining tools such as ProM are breathtaking both from a scientific and practical perspective.", "title": "" }, { "docid": "76071bd6bf0874191e2cdd3b491dc6c6", "text": "Steganography is a collection of methods to hide secret information (“payload”) within non-secret information (“container”). Its counterpart, Steganalysis, is the practice of determining if a message contains a hidden payload, and recovering it if possible. Presence of hidden payloads is typically detected by a binary classifier. In the present study, we propose a new model for generating image-like containers based on Deep Convolutional Generative Adversarial Networks (DCGAN). This approach allows generating more steganalysis-secure message embedding using standard steganography algorithms. Experiment results demonstrate that the new model successfully deceives the steganography analyzer, and for this reason, can be used in steganographic applications.", "title": "" }, { "docid": "3132db67005f04591f93e77a2855caab", "text": "Money laundering refers to activities pertaining to hiding the true income, evading taxes, or converting illegally earned money for normal use. These activities are often performed through shell companies that masquerade as real companies but where the actual purpose is to launder money. Shell companies are used in all the three phases of money laundering, namely, placement, layering, and integration, often simultaneously. In this paper, we aim to identify shell companies. We propose to use only bank transactions since they are easily available. In particular, we look at all incoming and outgoing transactions from a particular bank account along with its various attributes, and use anomaly detection techniques to identify the accounts that pertain to shell companies. Our aim is to create an initial list of potential shell company candidates which can be investigated by financial experts later. Due to lack of real data, we propose a banking transactions simulator (BTS) to simulate both honest as well as shell company transactions by studying a host of actual real-world fraud cases. We apply anomaly detection algorithms to detect candidate shell companies. 
Results indicate that we are able to identify the shell companies with a high degree of precision and recall.1", "title": "" }, { "docid": "dfcc6b34f008e4ea9d560b5da4826f4d", "text": "The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. Keywords—Gesture recognition, Kinect, shadow play animation, VRPN.", "title": "" }, { "docid": "94e2bfa218791199a59037f9ea882487", "text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.", "title": "" }, { "docid": "c41038d0e3cf34e8a1dcba07a86cce9a", "text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common cause of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. 
Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.", "title": "" }, { "docid": "1a962bcbd5b670e532d841a74c2fe724", "text": "In SCADA systems, there are many RTUs (Remote Terminal Units) are used for field data collection as well as sending data to master node through the communication system. In such case master node represents the collected data and enables manager to handle the remote controlling activities. The RTU is nothing but the unit of data acquisition in standalone manner. The processor used in RTU is vulnerable to random faults due to harsh environment around RTUs. Faults may lead to the failure of RTU unit and hence it becomes inaccessible for information acquisition. For long running methods, fault tolerance is major concern and research problem since from last two decades. Using the SCADA systems increase the problem of fault tolerance is becoming servered. To handle the faults in oreder to perform the message passing through all the layers of communication system fo the SCADA that time need the efficient fault tolerance. The faults like RTU, message passing layer faults in communication system etc. SCADA is nothing but one of application of MPI. The several techniques for the fault tolerance has been described for MPI which are utilized in different applications such as SCADA. The goal of this paper is to present the study over the different fault tolerance techniques which can be used to optimize the SCADA system availability by mitigating the faults in RTU devices and communication systems.", "title": "" }, { "docid": "f89107f7ae4a250af36630aba072b7a9", "text": "The new HTML5 standard provides much more access to client resources, such as user location and local data storage. Unfortunately, this greater access may create new security risks that potentially can yield new threats to user privacy and web attacks. One of these security risks lies with the HTML5 client-side database. It appears that data stored on the client file system is unencrypted. Therefore, any stored data might be at risk of exposure. This paper explains and performs a security investigation into how the data is stored on client local file systems. The investigation was undertaken using Firefox and Chrome web browsers, and Encase (a computer forensic tool), was used to examine the stored data. This paper describes how the data can be retrieved after an application deletes the client side database. Finally, based on our findings, we propose a solution to correct any potential issues and security risks, and recommend ways to store data securely on local file systems.", "title": "" }, { "docid": "e2762e01ccf8319c726f3702867eeb8e", "text": "Balance maintenance and upright posture recovery under unexpected environmental forces are key requirements for safe and successful co-existence of humanoid robots in normal human environments. In this paper we present a two-phase control strategy for robust balance maintenance under a force disturbance. 
The first phase, called the reflex phase, is designed to withstand the immediate effect of the force. The second phase is the recovery phase where the system is steered back to a statically stable “home” posture. The reflex control law employs angular momentum and is characterized by its counter-intuitive quality of “yielding” to the disturbance. The recovery control employs a general scheme of seeking to maximize the potential energy and is robust to local ground surface feature. Biomechanics literature indicates a similar strategy in play during human balance maintenance.", "title": "" }, { "docid": "bef119e43fcc9f2f0b50fdf521026680", "text": "Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet, the lack of a standardized evaluation platform tailored to the needs of AIA, has hindered effective evaluation of its methods, especially for region-based AIA. Therefore in this paper, we introduce the segmented and annotated IAPR TC-12 benchmark; an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution. 2009 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "dc1cfdda40b23849f11187ce890c8f8b", "text": "Controlled sharing of information is needed and desirable for many applications and is supported in operating systems by access control mechanisms. This paper shows how to extend programming languages to provide controlled sharing. The extension permits expression of access constraints on shared data. Access constraints can apply both to simple objects, and to objects that are components of larger objects, such as bank account records in a bank's data base. The constraints are stated declaratively, and can be enforced by static checking similar to type checking. The approach can be used to extend any strongly-typed language, but is particularly suitable for extending languages that support the notion of abstract data types.", "title": "" }, { "docid": "00e5acdfb1e388b149bc729a7af108ee", "text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages. 
The proposed method reached up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low.", "title": "" }, { "docid": "b1ef75c4a0dc481453fb68e94ec70cdc", "text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, are a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.", "title": "" }, { "docid": "6e4f0a770fe2a34f99957f252110b6bd", "text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.", "title": "" }, { "docid": "cad54b58e3dd47e1e92078519660e71d", "text": "Web images come with valuable contextual information. Although this information has long been mined for various uses such as image annotation, clustering of images, inference of image semantic content, etc., insufficient attention has been given to address issues in mining this contextual information. In this paper, we propose a webpage segmentation algorithm targeting the extraction of web images and their contextual information based on their characteristics as they appear on webpages. We conducted a user study to obtain a human-labeled dataset to validate the effectiveness of our method and experiments demonstrated that our method can achieve better results compared to an existing segmentation algorithm.", "title": "" }, { "docid": "7df97d3a5c393053b22255a0414e574a", "text": "Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex u, a pair of edge-disjoint paths from s to u of minimum total edge cost. Suurballe has given an O(n^2 log n)-time algorithm for this problem. 
We give an implementation of Suurballe’s algorithm that runs in O(m log_(1+m/n) n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.", "title": "" }, { "docid": "5d9112213e6828d5668ac4a33d4582f9", "text": "This paper describes four patients whose chief symptoms were steatorrhoea and loss of weight. Despite the absence of a history of abdominal pain, investigations showed that these patients had chronic pancreatitis, which responded to medical treatment. The pathological findings in two of these cases and in six which came to necropsy are reported.", "title": "" } ]
scidocsrr
282fd5e1ecc75d94544a575d6877d55f
Emotional responses to a romantic partner's imaginary rejection: the roles of attachment anxiety, covert narcissism, and self-evaluation.
[ { "docid": "c5beaa8be086776c769caedc30815aa8", "text": "Three studies were conducted to examine the correlates of adult attachment. In Study 1, an 18-item scale to measure adult attachment style dimensions was developed based on Kazan and Shaver's (1987) categorical measure. Factor analyses revealed three dimensions underlying this measure: the extent to which an individual is comfortable with closeness, feels he or she can depend on others, and is anxious or fearful about such things as being abandoned or unloved. Study 2 explored the relation between these attachment dimensions and working models of self and others. Attachment dimensions were found to be related to self-esteem, expressiveness, instrumentality, trust in others, beliefs about human nature, and styles of loving. Study 3 explored the role of attachment style dimensions in three aspects of ongoing dating relationships: partner matching on attachment dimensions; similarity between the attachment of one's partner and caregiving style of one's parents; and relationship quality, including communication, trust, and satisfaction. Evidence was obtained for partner matching and for similarity between one's partner and one's parents, particularly for one's opposite-sex parent. Dimensions of attachment style were strongly related to how each partner perceived the relationship, although the dimension of attachment that best predicted quality differed for men and women. For women, the extent to which their partner was comfortable with closeness was the best predictor of relationship quality, whereas the best predictor for men was the extent to which their partner was anxious about being abandoned or unloved.", "title": "" }, { "docid": "c700a8a3dc4aa81c475e84fc1bbf9516", "text": "A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect.", "title": "" } ]
[ { "docid": "a4ff1ce29fb5f2be87c1868dcf96bd29", "text": "Web ontologies provide shared concepts for describing domain entities and thus enable semantic interoperability between applications. To facilitate concept sharing and ontology reusing, we developed Falcons Concept Search, a novel keyword-based ontology search engine. In this paper, we illustrate how the proposed mode of interaction helps users quickly find ontologies that satisfy their needs and present several supportive techniques including a new method of constructing virtual documents of concepts for keyword search, a popularity-based scheme to rank concepts and ontologies, and a way to generate query-relevant structured snippets. We also report the results of a usability evaluation as well as user feedback.", "title": "" }, { "docid": "7076f898c65a0e93a94357b757f92fc8", "text": "Understanding how to control how the brain's functioning mediates mental experience and the brain's processing to alter cognition or disease are central projects of cognitive and neural science. The advent of real-time functional magnetic resonance imaging (rtfMRI) now makes it possible to observe the biology of one's own brain while thinking, feeling and acting. Recent evidence suggests that people can learn to control brain activation in localized regions, with corresponding changes in their mental operations, by observing information from their brain while inside an MRI scanner. For example, subjects can learn to deliberately control activation in brain regions involved in pain processing with corresponding changes in experienced pain. This may provide a novel, non-invasive means of observing and controlling brain function, potentially altering cognitive processes or disease.", "title": "" }, { "docid": "65446279fb385c7a1f25f7b5ab3b4c2a", "text": "Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes were differentiated by taste and smell sensitivity and movement-related sensory behavior. Further, sensory processing subtypes predicted communication competence and maladaptive behavior. The findings of this study lay the foundation for the generation of more specific hypotheses regarding the mechanisms of sensory processing dysfunction in autism, and support the continued use of sensory-based interventions in the remediation of communication and behavioral difficulties in autism.", "title": "" }, { "docid": "9efd74df34775bc4c7a08230e67e990b", "text": "OBJECTIVE\nFirearm violence is a significant public health problem in the United States, and alcohol is frequently involved. This article reviews existing research on the relationships between alcohol misuse; ownership, access to, and use of firearms; and the commission of firearm violence, and discusses the policy implications of these findings.\n\n\nMETHOD\nNarrative review augmented by new tabulations of publicly-available data.\n\n\nRESULTS\nAcute and chronic alcohol misuse is positively associated with firearm ownership, risk behaviors involving firearms, and risk for perpetrating both interpersonal and self-directed firearm violence. In an average month, an estimated 8.9 to 11.7 million firearm owners binge drink. 
For men, deaths from alcohol-related firearm violence equal those from alcohol-related motor vehicle crashes. Enforceable policies restricting access to firearms for persons who misuse alcohol are uncommon. Policies that restrict access on the basis of other risk factors have been shown to reduce risk for subsequent violence.\n\n\nCONCLUSION\nThe evidence suggests that restricting access to firearms for persons with a documented history of alcohol misuse would be an effective violence prevention measure. Restrictions should rely on unambiguous definitions of alcohol misuse to facilitate enforcement and should be rigorously evaluated.", "title": "" }, { "docid": "122e3e4c10e4e5f2779773bde106d068", "text": "In recent years, research on image generation methods has been developing fast. The auto-encoding variational Bayes method (VAEs) was proposed in 2013, which uses variational inference to learn a latent space from the image database and then generates images using the decoder. The generative adversarial networks (GANs) came out as a promising framework, which uses adversarial training to improve the generative ability of the generator. However, the generated pictures by GANs are generally blurry. The deep convolutional generative adversarial networks (DCGANs) were then proposed to leverage the quality of generated images. Since the input noise vectors are randomly sampled from a Gaussian distribution, the generator has to map from a whole normal distribution to the images. This makes DCGANs unable to reflect the inherent structure of the training data. In this paper, we propose a novel deep model, called generative adversarial networks with decoder-encoder output noise (DE-GANs), which takes advantage of both the adversarial training and the variational Bayesain inference to improve the performance of image generation. DE-GANs use a pre-trained decoder-encoder architecture to map the random Gaussian noise vectors to informative ones and pass them to the generator of the adversarial networks. Since the decoder-encoder architecture is trained by the same images as the generators, the output vectors could carry the intrinsic distribution information of the original images. Moreover, the loss function of DE-GANs is different from GANs and DCGANs. A hidden-space loss function is added to the adversarial loss function to enhance the robustness of the model. Extensive empirical results show that DE-GANs can accelerate the convergence of the adversarial training process and improve the quality of the generated images.", "title": "" }, { "docid": "cd5210231c5fa099be6b858a3069414d", "text": "Fat grafting to the aging face has become an integral component of esthetic surgery. However, the amount of fat to inject to each area of the face is not standardized and has been based mainly on the surgeon’s experience. The purpose of this study was to perform a systematic review of injected fat volume to different facial zones. A systematic review of the literature was performed through a MEDLINE search using keywords “facial,” “fat grafting,” “lipofilling,” “Coleman technique,” “autologous fat transfer,” and “structural fat grafting.” Articles were then sorted by facial subunit and analyzed for: author(s), year of publication, study design, sample size, donor site, fat preparation technique, average and range of volume injected, time to follow-up, percentage of volume retention, and complications. Descriptive statistics were performed. Nineteen articles involving a total of 510 patients were included. 
Rhytidectomy was the most common procedure performed concurrently with fat injection. The mean volume of fat injected to the forehead is 6.5 mL (range 4.0–10.0 mL); to the glabellar region 1.4 mL (range 1.0–4.0 mL); to the temple 5.9 mL per side (range 2.0–10.0 mL); to the eyebrow 5.5 mL per side; to the upper eyelid 1.7 mL per side (range 1.5–2.5 mL); to the tear trough 0.65 mL per side (range 0.3–1.0 mL); to the infraorbital area (infraorbital rim to lower lid/cheek junction) 1.4 mL per side (range 0.9–3.0 mL); to the midface 1.4 mL per side (range 1.0–4.0 mL); to the nasolabial fold 2.8 mL per side (range 1.0–7.5 mL); to the mandibular area 11.5 mL per side (range 4.0–27.0 mL); and to the chin 6.7 mL (range 1.0–20.0 mL). Data on exactly how much fat to inject to each area of the face in facial fat grafting are currently limited and vary widely based on different methods and anatomical terms used. This review offers the ranges and the averages for the injected volume in each zone. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "06ba6c64fd0f45f61e4c2ca20c41f9d7", "text": "About ten years ago, the field of range searching, especially simplex range searching, was wide open. At that time, neither efficient algorithms nor nontrivial lower bounds were known for most range-searching problems. A series of papers by Haussler and Welzl [161], Clarkson [88, 89], and Clarkson and Shor [92] not only marked the beginning of a new chapter in geometric searching, but also revitalized computational geometry as a whole. Led by these and a number of subsequent papers, tremendous progress has been made in geometric range searching, both in terms of developing efficient data structures and proving nontrivial lower bounds. From a theoretical point of view, range searching is now almost completely solved. The impact of general techniques developed for geometric range searching (ε-nets, 1/r-cuttings, partition trees, multi-level data structures, to name a few) is evident throughout computational geometry. This volume provides an excellent opportunity to recapitulate the current status of geometric range searching and to summarize the recent progress in this area. Range searching arises in a wide range of applications, including geographic information systems, computer graphics, spatial databases, and time-series databases. Furthermore, a variety of geometric problems can be formulated as a range-searching problem. A typical range-searching problem has the following form. Let S be a set of n points in R^d, and let", "title": "" }, { "docid": "6997284b9a3b8c8e7af639e92399db46", "text": "Research into rehabilitation robotics has grown rapidly and the number of therapeutic rehabilitation robots has expanded dramatically during the last two decades. Robotic rehabilitation therapy can deliver high-dosage and high-intensity training, making it useful for patients with motor disorders caused by stroke or spinal cord disease. Robotic devices used for motor rehabilitation include end-effector and exoskeleton types; herein, we review the clinical use of both types. One application of robot-assisted therapy is improvement of gait function in patients with stroke. 
Both end-effector and exoskeleton devices have proven to be effective complements to conventional physiotherapy in patients with subacute stroke, but there is no clear evidence that robotic gait training is superior to conventional physiotherapy in patients with chronic stroke or when delivered alone. In another application, upper limb motor function training in patients recovering from stroke, robot-assisted therapy was comparable or superior to conventional therapy in patients with subacute stroke. With end-effector devices, the intensity of therapy was the most important determinant of upper limb motor recovery. However, there is insufficient evidence for the use of exoskeleton devices for upper limb motor function in patients with stroke. For rehabilitation of hand motor function, both end-effector and exoskeleton devices showed similar or additive effects relative to conventional therapy in patients with chronic stroke. The present evidence supports the use of robot-assisted therapy for improving motor function in stroke patients as an additional therapeutic intervention in combination with the conventional rehabilitation therapies. Nevertheless, there will be substantial opportunities for technical development in the near future.", "title": "" }, { "docid": "0360bfbb47af9e661114ea8d367a166f", "text": "Critical Discourse Analysis (CDA) is discourse analytical research that primarily studies the way social-power abuse and inequality are enacted, reproduced, legitimated, and resisted by text and talk in the social and political context. With such dissident research, critical discourse analysts take an explicit position and thus want to understand, expose, and ultimately challenge social inequality. This is also why CDA may be characterized as a social movement of politically committed discourse analysts. One widespread misunderstanding of CDA is that it is a special method of doing discourse analysis. There is no such method: in CDA all methods of the cross-discipline of discourse studies, as well as other relevant methods in the humanities and social sciences, may be used (Wodak and Meyer 2008; Titscher et al. 2000). To avoid this misunderstanding and to emphasize that many methods and approaches may be used in the critical study of text and talk, we now prefer the more general term critical discourse studies (CDS) for the field of research (van Dijk 2008b). However, since most studies continue to use the well-known abbreviation CDA, this chapter will also continue to use it. As an analytical practice, CDA is not one direction of research among many others in the study of discourse. Rather, it is a critical perspective that may be found in all areas of discourse studies, such as discourse grammar, Conversation Analysis, discourse pragmatics, rhetoric, stylistics, narrative analysis, argumentation analysis, multimodal discourse analysis and social semiotics, sociolinguistics, and ethnography of communication or the psychology of discourse-processing, among others. In other words, CDA is discourse study with an attitude. Some of the tenets of CDA could already be found in the critical theory of the Frankfurt School before World War II (Agger 1992b; Drake 2009; Rasmussen and Swindal 2004). 
Its current focus on language and discourse was initiated with the", "title": "" }, { "docid": "52e492ff5e057a8268fd67eb515514fe", "text": "We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link; so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggest that a SWTL-based RFID system is capable of read ranges of over 70 m assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.", "title": "" }, { "docid": "e56bd360fe21949d0617c6e1ddafefff", "text": "This study addresses the problem of identifying the meaning of unknown words or entities in a discourse with respect to the word embedding approaches used in neural language models. We proposed a method for on-the-fly construction and exploitation of word embeddings in both the input and output layers of a neural model by tracking contexts. This extends the dynamic entity representation used in Kobayashi et al. (2016) and incorporates a copy mechanism proposed independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we construct a new task and dataset called Anonymized Language Modeling for evaluating the ability to capture word meanings while reading. Experiments conducted using our novel dataset show that the proposed variant of RNN language model outperformed the baseline model. Furthermore, the experiments also demonstrate that dynamic updates of an output layer help a model predict reappearing entities, whereas those of an input layer are effective to predict words following reappearing entities.", "title": "" }, { "docid": "51fe6376956593cb8a2e4de3b37cb8fe", "text": "The human musculoskeletal system is supposed to play an important role in doing various static and dynamic tasks. From this standpoint, some musculoskeletal humanoid robots have been developed in recent years. However, existing musculoskeletal robots did not have upper body with several DOFs to balance their bodies statically or did not have enough power to perform dynamic tasks. We think the musculoskeletal structure has two significant properties: whole-body flexibility and whole-body coordination. Using these two properties can enable us to make robots' performance better than before. In this study, we developed a humanoid robot with a musculoskeletal system that is driven by pneumatic artificial muscles. To demonstrate the robot's capability in static and dynamic tasks, we conducted two experiments. As a static task, we conducted a standing experiment using a simple feedback control and evaluated the stability by applying an impulse to the robot. 
As a dynamic task, we conducted a walking experiment using a feedforward controller with human muscle activation patterns and confirmed that the robot was able to perform the dynamic task.", "title": "" }, { "docid": "6e22c766fe7caaeb53251bdd9c6401e9", "text": "Task-space control of redundant robot systems based on analytical models is known to be susceptive to modeling errors. Data-driven model learning methods may present an interesting alternative approach. However, learning models for task-space tracking control from sampled data is an ill-posed problem. In particular, the same input data point can yield many different output values, which can form a nonconvex solution space. Because the problem is ill-posed, models cannot be learned from such data using common regression methods. While learning of task-space control mappings is globally ill-posed, it has been shown in recent work that it is locally a well-defined problem. In this paper, we use this insight to formulate a local kernel-based learning approach for online model learning for task-space tracking control. We propose a parametrization for the local model, which makes an application in task-space tracking control of redundant robots possible. The model parametrization further allows us to apply the kernel-trick and, therefore, enables a formulation within the kernel learning framework. In our evaluations, we show the ability of the method for online model learning for task-space tracking control of redundant robots.", "title": "" }, { "docid": "2eb5b8c0626ccce0121d8d3f9e01d274", "text": "Like full-text translation, cross-language information retrieval (CLIR) is a task that requires some form of knowledge transfer across languages. Although robust translation resources are critical for constructing high quality translation tools, manually constructed resources are limited both in their coverage and in their adaptability to a wide range of applications. Automatic mining of translingual knowledge makes it possible to complement hand-curated resources. This chapter describes a growing body of work that seeks to mine translingual knowledge from text data, in particular, data found on the Web. We review a number of mining and filtering strategies, and consider them in the context of statistical machine translation, showing that these techniques can be effective in collecting large quantities of translingual knowledge necessary", "title": "" }, { "docid": "1c9c93d1eff3904941516516a6390cdf", "text": "BACKGROUND\nSyndesmosis sprains can contribute to chronic pain and instability, which are often indications for surgical intervention. The literature lacks sufficient objective data detailing the complex anatomy and localized osseous landmarks essential for current surgical techniques.\n\n\nPURPOSE\nTo qualitatively and quantitatively analyze the anatomy of the 3 syndesmotic ligaments with respect to surgically identifiable bony landmarks.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nSixteen ankle specimens were dissected to identify the anterior inferior tibiofibular ligament (AITFL), posterior inferior tibiofibular ligament (PITFL), interosseous tibiofibular ligament (ITFL), and bony anatomy. Ligament lengths, footprints, and orientations were measured in reference to bony landmarks by use of an anatomically based coordinate system and a 3-dimensional coordinate measuring device.\n\n\nRESULTS\nThe syndesmotic ligaments were identified in all specimens. 
The pyramidal-shaped ITFL was the broadest, originating from the distal interosseous membrane expansion, extending distally, and terminating 9.3 mm (95% CI, 8.3-10.2 mm) proximal to the central plafond. The tibial cartilage extended 3.6 mm (95% CI, 2.8-4.4 mm) above the plafond, a subset of which articulated directly with the fibular cartilage located 5.2 mm (95% CI, 4.6-5.8 mm) posterior to the anterolateral corner of the tibial plafond. The primary AITFL band(s) originated from the tibia 9.3 mm (95% CI, 8.6-10.0 mm) superior and medial to the anterolateral corner of the tibial plafond and inserted on the fibula 30.5 mm (95% CI, 28.5-32.4 mm) proximal and anterior to the inferior tip of the lateral malleolus. Superficial fibers of the PITFL originated along the distolateral border of the posterolateral tubercle of the tibia 8.0 mm (95% CI, 7.5-8.4 mm) proximal and medial to the posterolateral corner of the plafond and inserted along the medial border of the peroneal groove 26.3 mm (95% CI, 24.5-28.1 mm) superior and posterior to the inferior tip of the lateral malleolus.\n\n\nCONCLUSION\nThe qualitative and quantitative anatomy of the syndesmotic ligaments was reproducibly described and defined with respect to surgically identifiable bony prominences.\n\n\nCLINICAL RELEVANCE\nData regarding anatomic attachment sites and distances to bony prominences can optimize current surgical fixation techniques, improve anatomic restoration, and reduce the risk of iatrogenic injury from malreduction or misplaced implants. Quantitative data also provide the consistency required for the development of anatomic reconstructions.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8c47d9a93e3b9d9f31b77b724bf45578", "text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.", "title": "" }, { "docid": "87f3c12df54f395b9a24ccfc4dd10aa8", "text": "The ever increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. 
In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state of the art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to overcome several state of the art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.", "title": "" }, { "docid": "879282128be8b423114401f6ec8baf8a", "text": "Yelp is one of the largest online searching and reviewing systems for kinds of businesses, including restaurants, shopping, home services et al. Analyzing the real world data from Yelp is valuable in acquiring the interests of users, which helps to improve the design of the next generation system. This paper targets the evaluation of Yelp dataset, which is provided in the Yelp data challenge. A bunch of interesting results are found. For instance, to reach any one in the Yelp social network, one only needs 4.5 hops on average, which verifies the classical six degree separation theory; Elite user mechanism is especially effective in maintaining the healthy of the whole network; Users who write less than 100 business reviews dominate. Those insights are expected to be considered by Yelp to make intelligent business decisions in the future.", "title": "" }, { "docid": "70fafdedd05a40db5af1eabdf07d431c", "text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.", "title": "" } ]
scidocsrr
c77759cf3b0e60ce2a7bb6f14f1bba31
ADDING TIME-OFFSETS TO SCHEDULABILITY ANALYSIS
[ { "docid": "f560dbe8f3ff47731061d67b596ec7b0", "text": "This paper considers the problem of fixed priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst case bounds are given which generalize the Liu and Layland bound. The results are shown to provide a basis for developing predictable distributed real-time systems.", "title": "" } ]
[ { "docid": "969b49b20271f2714ad96d739bf79f08", "text": "Control of a robot manipulator in contact with the environment is usually conducted by the direct feedback control system using a force-torque sensor or the indirect impedance control scheme. Although these methods have been successfully applied to many applications, simultaneous control of force and position cannot be achieved. Furthermore, collision safety has been of primary concern in recent years with emergence of service robots in direct contact with humans. To cope with such problems, redundant actuation has been used to enhance the performance of a position/force controller. In this paper, the novel design of a double actuator unit (DAU) composed of double actuators and a planetary gear train is proposed to provide the capability of simultaneous control of position and force as well as the improved collision safety. Since one actuator controls position and the other actuator modulates stiffness, DAU can control the position and stiffness simultaneously at the same joint. The torque exerted on the joint can be estimated without an expensive torque/force sensor. DAU is capable of detecting dynamic collision by monitoring the speed of the stiffness modulator. Upon detection of dynamic collision, DAU immediately reduces its joint stiffness according to the collision magnitude, thus providing the optimum collision safety. It is shown from various experiments that DAU can provide good performance of position tracking, force estimation and collision safety.", "title": "" }, { "docid": "7c0b7d55abdd6cce85730dbf1cd02109", "text": "Suppose fx, h , ■ • ■ , fk are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h\\ , h2, •• -, A* respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(fx ,f2, • • • ,fk ; N) denote the number of positive integers n between 1 and N inclusive such that /i(n), f2(n), • ■ ■ , fk(n) are all primes. (We ignore the finitely many values of n for which some /,(n) is negative.) Then heuristically we would expect to have for N large", "title": "" }, { "docid": "79ca455db7e7348000c6590a442f9a4c", "text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis upon flap systems. It discusses existing electro-hydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance and life cycle costs. The paper then progresses to describe a full scale actuation demonstrator of the flap system, including the high speed electrical drive, step down gearbox and flaps. Detailed descriptions are given of the fault tolerant motor, power electronics, control architecture and position sensor systems, along with a range of test results, demonstrating the system in operation", "title": "" }, { "docid": "39bd9645fbe5bb4f7dcd486274710347", "text": "This paper presents the design of a first-order continuous-time sigma-delta modulator. It can accept input signal bandwidth of 10 kHz with oversampling ratio of 250. The modulator operates at 1.8 V supply voltage and uses 0.18 mum CMOS technology. It achieves a level of 60 dB SNR", "title": "" }, { "docid": "49575576bc5a0b949c81b0275cbc5f41", "text": "From email to online banking, passwords are an essential component of modern internet use. 
Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.", "title": "" }, { "docid": "3f5f3a31cbf45065ea82cf60140a8bf5", "text": "This paper presents a nonholonomic path planning method, aiming at taking into considerations of curvature constraint, length minimization, and computational demand, for car-like mobile robot based on cubic spirals. The generated path is made up of at most five segments: at most two maximal-curvature cubic spiral segments with zero curvature at both ends in connection with up to three straight line segments. A numerically efficient process is presented to generate a Cartesian shortest path among the family of paths considered for a given pair of start and destination configurations. Our approach is resorted to minimization via linear programming over the sum of length of each path segment of paths synthesized based on minimal locomotion cubic spirals linking start and destination orientations through a selected intermediate orientation. The potential intermediate configurations are not necessarily selected from the symmetric mean circle for non-parallel start and destination orientations. The novelty of the presented path generation method based on cubic spirals is: (i) Practical: the implementation is straightforward so that the generation of feasible paths in an environment free of obstacles is efficient in a few milliseconds; (ii) Flexible: it lends itself to various generalizations: readily applicable to mobile robots capable of forward and backward motion and Dubins’ car (i.e. car with only forward driving capability); well adapted to the incorporation of other constraints like wall-collision avoidance encountered in robot soccer games; straightforward extension to planning a path connecting an ordered sequence of target configurations in simple obstructed environment. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4db8a0d39ef31b49f2b6d542a14b03a2", "text": "Climate-smart agriculture is one of the techniques that maximizes agricultural outputs through proper management of inputs based on climatological conditions. Real-time weather monitoring system is an important tool to monitor the climatic conditions of a farm because many of the farms related problems can be solved by better understanding of the surrounding weather conditions. There are various designs of weather monitoring stations based on different technological modules. However, different monitoring technologies provide different data sets, thus creating vagueness in accuracy of the weather parameters measured. 
In this paper, a weather station was designed and deployed in an Edamame farm, and its meteorological data are compared with the commercial Davis Vantage Pro2 installed at the same farm. The results show that the lab-made weather monitoring system is equivalently efficient to measure various weather parameters. Therefore, the designed system welcomes low-income farmers to integrate it into their climate-smart farming practice.", "title": "" }, { "docid": "3ad25dabe3b740a91b939a344143ea9e", "text": "Recently, much attention in research and practice has been devoted to the topic of IT consumerization, referring to the adoption of private consumer IT in the workplace. However, research lacks an analysis of possible antecedents of the trend on an individual level. To close this gap, we derive a theoretical model for IT consumerization behavior based on the theory of planned behavior and perform a quantitative analysis. Our investigation shows that it is foremost determined by normative pressures, specifically the behavior of friends, co-workers and direct supervisors. In addition, behavioral beliefs and control beliefs were found to affect the intention to use non-corporate IT. With respect to the former, we found expected performance improvements and an increase in ease of use to be two of the key determinants. As for the latter, especially monetary costs and installation knowledge were correlated with IT consumerization intention.", "title": "" }, { "docid": "e5c6ed3e71cb971b5766a18facbc76f3", "text": "The main objective of the present paper is to develop a smart wireless sensor network (WSN) for an agricultural environment. Monitoring agricultural environment for various factors such as temperature and humidity along with other factors can be of significance. The advanced development in wireless sensor networks can be used in monitoring various parameters in agriculture. Due to uneven natural distribution of rain water it is very difficult for farmers to monitor and control the distribution of water to agriculture field in the whole farm or as per the requirement of the crop. There is no ideal irrigation method for all weather conditions, soil structure and variety of crops cultures. Farmers suffer large financial losses because of wrong prediction of weather and incorrect irrigation methods. Sensors are the essential device for precision agricultural applications. In this paper we have detailed about how to utilize the sensors in crop field area and explained about Wireless Sensor Network (WSN), Zigbee network, Protocol stack, zigbee Applications and the results are given, when implemented the zigbee network experimentally in real time environment.", "title": "" }, { "docid": "f90bc248d18b2b37f37f762d758a4cb3", "text": "postMessage is popular in HTML5 based web apps to allow the communication between different origins. With the increasing popularity of the embedded browser (i.e., WebView) in mobile apps (i.e., hybrid apps), postMessage has found utility in these apps. However, different from web apps, hybrid apps have a unique requirement that their native code (e.g., Java for Android) also needs to exchange messages with web code loaded in WebView. To bridge the gap, developers typically extend postMessage by treating the native context as a new frame, and allowing the communication between the new frame and the web frames. We term such extended postMessage \"hybrid postMessage\" in this paper. 
We find that hybrid postMessage introduces new critical security flaws: all origin information of a message is not respected or even lost during the message delivery in hybrid postMessage. If adversaries inject malicious code into WebView, the malicious code may leverage the flaws to passively monitor messages that may contain sensitive information, or actively send messages to arbitrary message receivers and access their internal functionalities and data. We term the novel security issue caused by hybrid postMessage \"Origin Stripping Vulnerability\" (OSV). In this paper, our contributions are fourfold. First, we conduct the first systematic study on OSV. Second, we propose a lightweight detection tool against OSV, called OSV-Hunter. Third, we evaluate OSV-Hunter using a set of popular apps. We found that 74 apps implemented hybrid postMessage, and all these apps suffered from OSV, which might be exploited by adversaries to perform remote real-time microphone monitoring, data race, internal data manipulation, denial of service (DoS) attacks and so on. Several popular development frameworks, libraries (such as the Facebook React Native framework, and the Google cloud print library) and apps (such as Adobe Reader and WPS office) are impacted. Lastly, to mitigate OSV from the root, we design and implement three new postMessage APIs, called OSV-Free. Our evaluation shows that OSV-Free is secure and fast, and it is generic and resilient to the notorious Android fragmentation problem. We also demonstrate that OSV-Free is easy to use, by applying OSV-Free to harden the complex \"Facebook React Native\" framework. OSV-Free is open source, and its source code and more implementation and evaluation details are available online.", "title": "" }, { "docid": "ccc70871f57f25da6141a7083bdf5174", "text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. 
Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms' future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders' interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. 
Specifically, firms in common law countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. In the United States, U.K., Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. 
(1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap", "title": "" }, { "docid": "0efd38b774996a931ebb8505d677bf8a", "text": "The hotel industry in Singapore is an important part of the hospitality and tourism infrastructure and a strategic part of Singapore's growth story. Hotels are primarily viewed as a service industry with intangible areas of guest experience and service levels. The research objective of this paper is to better understand the hotel guest satisfaction and the areas that hotel management can change, in order to get better results. For this purpose, an analysis of hotel guest satisfaction ratings based on attributes such as Location, Sleep quality, Rooms, Service quality, Value for money and Cleanliness was performed. Further, text analysis of customer reviews was also performed to better understand the positive and negative sentiments of hotel guests. We focused on identifying the attributes that differentiate one hotel from another, and then using these attribute insights to make recommendation to hotel management, on how they can improve their operations, guest satisfaction and generally differentiate themselves from their competition. Data from an online website, Trip Advisor, was used to analyse and compare customer ratings and reviews on five hotels. Statistical data analysis techniques were used to identify the key attributes that are most important in choosing hotels and are critical to focus on in order to ensure guest satisfaction expectations are met. Based on text analytics, the key results from this study indicated that hotel guests look for a good room and a hotel with a pool and good service. Based on the ratings analysis, the most important attributes for guest satisfaction turned out to be Rooms, Value for money and Location.", "title": "" }, { "docid": "437ad5ac30619459627b8f76034da29d", "text": "In 1986, this author presented a paper at a conference, giving a sampling of computer and network security issues, and the tools of the day to address them. The purpose of this current paper is to revisit the topic of computer and network security, and see what changes, especially in types of attacks, have been brought about in 30 years. 
This paper starts by presenting a review of the state of computer and network security in 1986, along with how certain facets of it have changed. Next, it talks about today's security environment, and finally discusses some of today's many computer and network attack methods that are new or greatly updated since 1986. Many references for further study are provided. The classes of attacks that are known today are the same as the ones known in 1986, but many new methods of implementing the attacks have been enabled by new technologies and the increased pervasiveness of computers and networks in today's society. The threats and specific types of attacks faced by the computer community 30 years ago have not gone away. New threat methods and attack vectors have opened due to advancing technology, supplementing and enhancing, rather than replacing the long-standing threat methods.", "title": "" }, { "docid": "1f86ed06a01e7a37c5ce96d776b95511", "text": "This paper presents a technique for incorporating terrain traversability data into a global path planning method for field mobile robots operating on rough natural terrain. The focus of this approach is on assessing the traversability characteristics of the global terrain using a multi-valued map representation of traversal difficulty, and using this information to compute a traversal cost function to ensure robot survivability. The traversal cost is then utilized by a global path planner to find an optimally safe path through the terrain. A graphical simulator for the terrain-based path planning is presented. The path planner is applied to a commercial Pioneer 2-AT robot and field test results are provided.", "title": "" }, { "docid": "aaf110cdf2a8ce96756c2ef0090d6e54", "text": "The heterogeneous Web exacerbates IR problems and short user queries make them worse. The contents of web documents are not enough to find good answer documents. Link information and URL information compensates for the insufficiencies of content information. However, static combination of multiple evidences may lower the retrieval performance. We need different strategies to find target documents according to a query type. We can classify user queries as three categories, the topic relevance task, the homepage finding task, and the service finding task. In this paper, a user query classification scheme is proposed. This scheme uses the difference of distribution, mutual information, the usage rate as anchor texts, and the POS information for the classification. After we classified a user query, we apply different algorithms and information for the better results. For the topic relevance task, we emphasize the content information, on the other hand, for the homepage finding task, we emphasize the Link information and the URL information. We could get the best performance when our proposed classification method with the OKAPI scoring algorithm was used.", "title": "" }, { "docid": "acebe0f450533ec1dc6fec61b1b1330e", "text": "Recently, CPG-based controllers have been widely explored to achieve robust biped locomotion. However, this approach has difficulties in tuning open parameters in the controller. In this paper, we present a learning framework for CPG-based biped locomotion with a policy gradient method. We demonstrate that appropriate sensory feedback in the CPG-based control architecture can be acquired using the proposed method within a thousand trials by numerical simulations. 
We analyze linear stability of a periodic orbit of the acquired biped walking considering a return map. Furthermore, we apply the learned controllers in numerical simulations to our physical 5-link robot in order to empirically evaluate the effectiveness of the proposed framework. Experimental results suggest the robustness of the acquired controllers against environmental changes and variations in the mass properties of the robot", "title": "" }, { "docid": "acbebfc6792bf5df888cf5a30498d4e8", "text": "In most manipulations, we use our fingertips to apply time-varying forces to the target object in controlled directions. Here we used microneurography to assess how single tactile afferents encode the direction of fingertip forces at magnitudes, rates, and directions comparable to those arising in everyday manipulations. Using a flat stimulus surface, we applied forces to a standard site on the fingertip while recording impulse activity in 196 tactile afferents with receptive fields distributed over the entire terminal phalanx. Forces were applied in one of five directions: normal force and forces at a 20 degrees angle from the normal in the radial, distal, ulnar, or proximal directions. Nearly all afferents responded, and the responses in most slowly adapting (SA)-I, SA-II, and fast adapting (FA)-I afferents were broadly tuned to a preferred direction of force. Among afferents of each type, the preferred directions were distributed in all angular directions with reference to the stimulation site, but not uniformly. The SA-I population was biased for tangential force components in the distal direction, the SA-II population was biased in the proximal direction, and the FA-I population was biased in the proximal and radial directions. Anisotropic mechanical properties of the fingertip and the spatial relationship between the receptive field center of the afferent and the stimulus site appeared to influence the preferred direction in a manner dependent on afferent type. We conclude that tactile afferents from the whole terminal phalanx potentially contribute to the encoding of direction of fingertip forces similar to those that occur when subjects manipulate objects under natural conditions.", "title": "" }, { "docid": "1137cdf90ff6229865ae20980739afc5", "text": "This paper addresses the role of policy and evidence in health promotion. The concept of von Wright’s “logic of events” is introduced and applied to health policy impact analysis. According to von Wright (1976), human action can be explained by a restricted number of determinants: wants, abilities, duties, and opportunities. The dynamics of action result from changes in opportunities (logic of events). Applied to the policymaking process, the present model explains personal wants as subordinated to political goals. Abilities of individual policy makers are part of organisational resources. Also, personal duties are subordinated to institutional obligations. Opportunities are mainly related to political context and public support. The present analysis suggests that policy determinants such as concrete goals, sufficient resources and public support may be crucial for achieving an intended behaviour change on the population level, while other policy determinants, e.g., personal commitment and organisational capacities, may especially relate to the policy implementation process. 
The paper concludes by indicating ways in which future research using this theoretical framework might contribute to health promotion practice for improved health outcomes across populations.", "title": "" }, { "docid": "6fc870c703611e07519ce5fe956c15d1", "text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.", "title": "" }, { "docid": "52c9d8a1bf6fabbe0771eef75a64c1d8", "text": "This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.", "title": "" } ]
scidocsrr
83ee6c8ad96d1660a0b80edfe337647a
Detection of self intersection in synthetic hand pose generators
[ { "docid": "2b2398bf61847843e18d1f9150a1bccc", "text": "We present a robust method for capturing articulated hand motions in realtime using a single depth camera. Our system is based on a realtime registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. A core technical contribution is a new method for computing tracking correspondences that directly models occlusions typical of single-camera setups. To ensure reproducibility of our results and facilitate future research, we fully disclose the source code of our implementation.", "title": "" } ]
[ { "docid": "3bf3546e686763259b953b31674e3cdc", "text": "In this paper, we concentrate on the automatic recognition of Egyptian Arabic speech using syllables. Arabic spoken digits were described by showing their constructing phonemes, triphones, syllables and words. Speaker-independent hidden markov models (HMMs)-based speech recognition system was designed using Hidden markov model toolkit (HTK). The database used for both training and testing consists from forty-four Egyptian speakers. Experiments show that the recognition rate using syllables outperformed the rate obtained using monophones, triphones and words by 2.68%, 1.19% and 1.79% respectively. A syllable unit spans a longer time frame, typically three phones, thereby offering a more parsimonious framework for modeling pronunciation variation in spontaneous speech. Moreover, syllable-based recognition has relatively smaller number of used units and runs faster than word-based recognition. Key-Words: Speech recognition, syllables, Arabic language, HMMs.", "title": "" }, { "docid": "02dab9e102d1b8f5e4f6ab66e04b3aad", "text": "CHILD CARE PRACTICES ANTECEDING THREE PATTERNS OF PRESCHOOL BEHAVIOR. STUDIED SYSTEMATICALLY CHILD-REARING PRACTICES ASSOCIATED WITH COMPETENCE IN THE PRESCHOOL CHILD. 2015 American Psychological Association PDF documents require Adobe Acrobat Reader.Effects of Authoritative Parental Control on Child Behavior, Child. Child care practices anteceding three patterns of preschool behavior. Genetic.She is best known for her work on describing parental styles of child care and. Anteceding Three Patterns of Preschool Behavior, Genetic Psychology.Child care practices anteceding three patterns of preschool behavior.", "title": "" }, { "docid": "304393092575799920363fdcea0daca4", "text": "We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention.\n ClearView (1) observes normal executions to learn invariants thatcharacterize the application's normal behavior, (2) uses error detectors to distinguish normal executions from erroneous executions, (3) identifies violations of learned invariants that occur during erroneous executions, (4) generates candidate repair patches that enforce selected invariants by changing the state or flow of control to make the invariant true, and (5) observes the continued execution of patched applications to select the most successful patch.\n ClearView is designed to correct errors in software with high availability requirements. Aspects of ClearView that make it particularly appropriate for this context include its ability to generate patches without human intervention, apply and remove patchesto and from running applications without requiring restarts or otherwise perturbing the execution, and identify and discard ineffective or damaging patches by evaluating the continued behavior of patched applications.\n ClearView was evaluated in a Red Team exercise designed to test its ability to successfully survive attacks that exploit security vulnerabilities. A hostile external Red Team developed ten code injection exploits and used these exploits to repeatedly attack an application protected by ClearView. ClearView detected and blocked all of the attacks. 
For seven of the ten exploits, ClearView automatically generated patches that corrected the error, enabling the application to survive the attacks and continue on to successfully process subsequent inputs. Finally, the Red Team attempted to make Clear-View apply an undesirable patch, but ClearView's patch evaluation mechanism enabled ClearView to identify and discard both ineffective patches and damaging patches.", "title": "" }, { "docid": "0ad3b23d1965e00bdc12038f0652b096", "text": "The DC microgrid is connected to the AC utility by parallel bidirectional power converters (BPCs) to import/export large power, whose control directly affects the performance of the grid-connected DC microgrid. Much work has focused on the hierarchical control of the DC, AC, and hybrid microgrids, but little has considered the hierarchical control of multiple parallel BPCs that directly connect the DC microgrid to the AC utility. In this paper, we propose a hierarchical control for parallel BPCs of a grid-connected DC microgrid. To suppress the potential zero-sequence circulating current in the AC side among the parallel BPCs and realize feedback linearization of the voltage control, a d-q-0 control scheme instead of a conventional d-q control scheme is proposed in the inner current loop, and the square of the DC voltage is adopted in the inner voltage loop. DC side droop control is applied to realize DC current sharing among multiple BPCs at the primary control level, and this induces DC bus voltage deviation. The quantified relationship between the current sharing error and DC voltage deviation is derived, indicating that there is a trade-off between the DC voltage deviation and current sharing error. To eliminate the current sharing error and DC voltage deviation simultaneously, slope-adjusting and voltage-shifting approaches are adopted at the secondary control level. The proposed tertiary control realizes precise active and reactive power exchange through parallel BPCs for economical operation. The proposed hierarchical control is applied for parallel BPCs of a grid-connected DC microgrid and can operate coordinately with the control for controllable/uncontrollable distributional generation. The effectiveness of the proposed control method is verified by corresponding simulation tests based on Matlab/Simulink, and the performance of the hierarchical control is evaluated for practical applications.", "title": "" }, { "docid": "c59bbc28449aeb396bee12492633cd06", "text": "BACKGROUND\nThe distribution of the attachment of the maxillary labial frenum in the children of different ethnic backgrounds has not been studied extensively.\n\n\nAIM\nThe purpose of this cross-sectional study was to examine the prevalence of the various types of maxillary labial frenum attachment in the children of different ethnic backgrounds.\n\n\nDESIGN\nChildren (aged 1-18) attending a public health clinic in Lavrion, Greece, were clinically examined for maxillary frenum attachment location. Demographic information was recorded. Parents provided written informed consent.\n\n\nRESULTS\nThe examined children were 226, with mean (± standard deviation) age of 8.5 ± 3.0 years. They were of Greek (51%), Albanian (20%), Turkish (12%), and Afghan (11%) descent. The prevalence of the maxillary labial frenum attachment was mucosal (10.2%), gingival (41.6%), papillary (22.1%), and papillary penetrating (26.1%). Frenum attachment differed significantly by age (P = 0.001). 
The age of children with mucosal- or gingival-type frenum was significantly greater than the age of children with papillary penetrating-type frenum. Frenum attachment did not differ by gender or ethnic background (P ≥ 0.20).\n\n\nCONCLUSIONS\nThe results of this study suggest that, in children, ethnic background and gender are not associated with maxillary labial frenum attachment type, whereas age is strongly associated.", "title": "" }, { "docid": "353bbc5e68ec1d53b3cd0f7c352ee699", "text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" }, { "docid": "d8d068254761619ccbcd0bbab896d3b2", "text": "In this article we illustrate a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management.", "title": "" }, { "docid": "3ee772cb68d01c6080459820ee451657", "text": "We present a non-photorealistic rendering technique to transform color images and videos into painterly abstractions. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features, derived from the smoothed structure tensor. Contrary to conventional edge-preserving filters, our filter generates a painting-like flattening effect along the local feature directions while preserving shape boundaries. As opposed to conventional painting algorithms, it produces temporally coherent video abstraction without extra processing. The GPU implementation of our method processes video in real-time. The results have the clearness of cartoon illustrations but also exhibit directional information as found in oil paintings.", "title": "" }, { "docid": "1da4596bcfbf46684981595dea7f6e80", "text": "Consciousness results from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism's current state. We contrast the semantic pointer competition (SPC) theory of consciousness with the hypothesis that consciousness is the capacity of a system to integrate information (IIT). We describe computer simulations to show that SPC surpasses IIT in providing better explanations of key aspects of consciousness: qualitative features, onset and cessation, shifts in experiences, differences in kinds across different organisms, unity and diversity, and storage and retrieval.", "title": "" }, { "docid": "26b2b51deb40e41291fc116288a5d481", "text": "In the previous years, Skype has gained more and more popularity, since it is seen as the best VoIP software with good quality of sound, ease of use and one that works everywhere and with every OS. 
Because of its great diffusion, both the operators and the users are, for different reasons, interested in detecting Skype traffic. In this paper we propose a real-time algorithm (named Skype-Hunter) to detect and classify Skype traffic. In more detail, this novel method, by means of both signature-based and statistical procedures, is able to correctly reveal and classify the signaling traffic as well as the data traffic (calls and file transfers). To assess the effectiveness of the algorithm, experimental tests have been performed with several traffic data sets, collected in different network scenarios. Our system outperforms the ‘classical’ statistical traffic classifiers as well as the state-of-the-art ad hoc Skype classifier. Copyright 2011 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "70c82bb98d0e558280973d67429cea8a", "text": "We present an algorithm for separating the local gradient information and Lambertian color by using 4-source color photometric stereo in the presence of highlights and shadows. We assume that the surface reflectance can be approximated by the sum of a Lambertian and a specular component. The conventional photometric method is generalized for color images. Shadows and highlights in the input images are detected using either spectral or directional cues and excluded from the recovery process, thus giving more reliable estimates of local surface parameters.", "title": "" }, { "docid": "3f9247e281093d94b7c9846d29ecbda0", "text": "How do we determine the mutational effects in exome sequencing data with little or no statistical evidence? Can protein structural information fill in the gap of not having enough statistical evidence? In this work, we answer the two questions with the goal towards determining pathogenic effects of rare variants in rare disease. We take the approach of determining the importance of point mutation loci focusing on protein structure features. The proposed structure-based features contain information about geometric, physicochemical, and functional information of mutation loci and those of structural neighbors of the loci. The performance of the structure-based features trained on 80% of HumDiv and tested on 20% of HumDiv and on ClinVar datasets showed high levels of discernibility in the mutation’s pathogenic or benign effects: F score of 0.71 and 0.68 respectively using multi-layer perceptron. Combining structureand sequence-based feature further improve the accuracy: F score of 0.86 (HumDiv) and 0.75 (ClinVar). Also, careful examination of the rare variants in rare diseases cases showed that structure-based features are important in discerning importance of variant loci.", "title": "" }, { "docid": "b311ce7a34d3bdb21678ed765bcd0f0b", "text": "This paper focuses on the micro-blogging service Twitter, looking at source credibility for information shared in relation to the Fukushima Daiichi nuclear power plant disaster in Japan. We look at the sources, credibility, and between-language differences in information shared in the month following the disaster. Messages were categorized by user, location, language, type, and credibility of information source. Tweets with reference to third-party information made up the bulk of messages sent, and it was also found that a majority of those sources were highly credible, including established institutions, traditional media outlets, and highly credible individuals. In general, profile anonymity proved to be correlated with a higher propensity to share information from low credibility sources. 
However, Japanese-language tweeters, while more likely to have anonymous profiles, referenced lowcredibility sources less often than non-Japanese tweeters, suggesting proximity to the disaster mediating the degree of credibility of shared content.", "title": "" }, { "docid": "4b4cea4f58f33b9ace117fddd936d006", "text": "The paper presents a complete solution for recognition of textual and graphic structures in various types of documents acquired from the Internet. In the proposed approach, the document structure recognition problem is divided into sub-problems. The first one is localizing logical structure elements within the document. The second one is recognizing segmented logical structure elements. The input to the method is an image of document page, the output is the XML file containing all graphic and textual elements included in the document, preserving the reading order of document blocks. This file contains information about the identity and position of all logical elements in the document image. The paper describes all details of the proposed method and shows the results of the experiments validating its effectiveness. The results of the proposed method for paragraph structure recognition are comparable to the referenced methods which offer segmentation only.", "title": "" }, { "docid": "2564e804c862e3e40a5f8d0d6dada0c0", "text": "microRNAs (miRNAs) are short non-coding RNA species, which act as potent gene expression regulators. Accurate identification of miRNA targets is crucial to understanding their function. Currently, hundreds of thousands of miRNA:gene interactions have been experimentally identified. However, this wealth of information is fragmented and hidden in thousands of manuscripts and raw next-generation sequencing data sets. DIANA-TarBase was initially released in 2006 and it was the first database aiming to catalog published experimentally validated miRNA:gene interactions. DIANA-TarBase v7.0 (http://www.microrna.gr/tarbase) aims to provide for the first time hundreds of thousands of high-quality manually curated experimentally validated miRNA:gene interactions, enhanced with detailed meta-data. DIANA-TarBase v7.0 enables users to easily identify positive or negative experimental results, the utilized experimental methodology, experimental conditions including cell/tissue type and treatment. The new interface provides also advanced information ranging from the binding site location, as identified experimentally as well as in silico, to the primer sequences used for cloning experiments. More than half a million miRNA:gene interactions have been curated from published experiments on 356 different cell types from 24 species, corresponding to 9- to 250-fold more entries than any other relevant database. DIANA-TarBase v7.0 is freely available.", "title": "" }, { "docid": "16eff9f2b7626f53baa95463f18d518a", "text": "The need for fine-grained power management in digital ICs has led to the design and implementation of compact, scalable low-drop out regulators (LDOs) embedded deep within logic blocks. While analog LDOs have traditionally been used in digital ICs, the need for digitally implementable LDOs embedded in digital functional units for ultrafine grained power management is paramount. This paper presents a fully-digital, phase locked LDO implemented in 32 nm CMOS. The control model of the proposed design has been provided and limits of stability have been shown. 
Measurement results with a resistive load as well as a digital load exhibit peak current efficiency of 98%.", "title": "" }, { "docid": "931b8f97d86902f984338285e62c8ef8", "text": "One of the goals of Artificial intelligence (AI) is the realization of natural dialogue between humans and machines. in recent years, the dialogue systems, also known as interactive conversational systems are the fastest growing area in AI. Many companies have used the dialogue systems technology to establish various kinds of Virtual Personal Assistants(VPAs) based on their applications and areas, such as Microsoft's Cortana, Apple's Siri, Amazon Alexa, Google Assistant, and Facebook's M. However, in this proposal, we have used the multi-modal dialogue systems which process two or more combined user input modes, such as speech, image, video, touch, manual gestures, gaze, and head and body movement in order to design the Next-Generation of VPAs model. The new model of VPAs will be used to increase the interaction between humans and the machines by using different technologies, such as gesture recognition, image/video recognition, speech recognition, the vast dialogue and conversational knowledge base, and the general knowledge base. Moreover, the new VPAs system can be used in other different areas of applications, including education assistance, medical assistance, robotics and vehicles, disabilities systems, home automation, and security access control.", "title": "" }, { "docid": "60a655d6b6d79f55151e871d2f0d4d34", "text": "The clinical characteristics of drug hypersensitivity reactions are very heterogeneous as drugs can actually elicit all types of immune reactions. The majority of allergic reactions involve either drug-specific IgE or T cells. Their stimulation leads to quite distinct immune responses, which are classified according to Gell and Coombs. Here, an extension of this subclassification, which considers the distinct T-cell functions and immunopathologies, is presented. These subclassifications are clinically useful, as they require different treatment and diagnostic steps. Copyright © 2007 S. Karger AG, Basel", "title": "" }, { "docid": "5a0e6e8f6f9e11efa1450ec655a52012", "text": "This paper reviews and critiques the “opportunity discovery” approach to entrepreneurship and argues that entrepreneurship can be more thoroughly grounded, and more closely linked to more general problems of economic organization, by adopting the Cantillon-KnightMises understanding of entrepreneurship as judgment. I begin by distinguishing among occupational, structural, and functional approaches to entrepreneurship and distinguishing among two influential interpretations of the entrepreneurial function, discovery and judgment. I turn next to the contemporary literature on opportunity identification and argue that this literature misinterprets Kirzner’s instrumental use of the discovery metaphor and mistakenly makes “opportunities” the unit of analysis. I then describe an alternative approach in which investment is the unit of analysis and link this approach to Austrian capital theory. I close with some applications to organizational form and entrepreneurial teams. 
I thank Jay Barney, Nicolai Foss, Jeffrey Herbener, Joseph Mahoney, Mario Mondelli, Joseph Salerno, Sharon Alvarez (guest editor), two anonymous referees, and participants at the 2007 Washington University Conference on Opportunity Discovery and 2007 Academy of Management meetings for comments on previous versions.", "title": "" }, { "docid": "e8215231e8eb26241d5ac8ac5be4b782", "text": "This research is on the use of a decision tree approach for predicting students‟ academic performance. Education is the platform on which a society improves the quality of its citizens. To improve on the quality of education, there is a need to be able to predict academic performance of the students. The IBM Statistical Package for Social Studies (SPSS) is used to apply the Chi-Square Automatic Interaction Detection (CHAID) in producing the decision tree structure. Factors such as the financial status of the students, motivation to learn, gender were discovered to affect the performance of the students. 66.8% of the students were predicted to have passed while 33.2% were predicted to fail. It is observed that much larger percentage of the students were likely to pass and there is also a higher likely of male students passing than female students.", "title": "" } ]
scidocsrr
2cdf51107ab0af158b22f072186f0138
Online python tutor: embeddable web-based program visualization for cs education
[ { "docid": "c57d9c4f62606e8fccef34ddd22edaec", "text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.", "title": "" } ]
[ { "docid": "bc262b5366f1bf14e5120f68df8f5254", "text": "BACKGROUND\nThe aim of this study was to compare the results of laparoscopy-assisted total gastrectomy with those of open total gastrectomy for early gastric cancer.\n\n\nMETHODS\nPatients with gastric cancer who underwent total gastrectomy with curative intent in three Korean tertiary hospitals between January 2003 and December 2010 were included in this multicentre, retrospective, propensity score-matched cohort study. Cox proportional hazards regression models were used to evaluate the association between operation method and survival.\n\n\nRESULTS\nA total of 753 patients with early gastric cancer were included in the study. There were no significant differences in the matched cohort for overall survival (hazard ratio (HR) for laparoscopy-assisted versus open total gastrectomy 0.96, 95 per cent c.i. 0.57 to 1.65) or recurrence-free survival (HR 2.20, 0.51 to 9.52). The patterns of recurrence were no different between the two groups. The severity of complications, according to the Clavien-Dindo classification, was similar in both groups. The most common complications were anastomosis-related in the laparoscopy-assisted group (8.0 per cent versus 4.2 per cent in the open group; P = 0.015) and wound-related in the open group (1.6 versus 5.6 per cent respectively; P = 0.003). Postoperative death was more common in the laparoscopy-assisted group (1.6 versus 0.2 per cent; P = 0.045).\n\n\nCONCLUSION\nLaparoscopy-assisted total gastrectomy for early gastric cancer is feasible in terms of long-term results, including survival and recurrence. However, a higher postoperative mortality rate and an increased risk of anastomotic leakage after laparoscopic-assisted total gastrectomy are of concern.", "title": "" }, { "docid": "1c0e441afd88f00b690900c42b40841a", "text": "Convergence problems occur abundantly in all branches of mathematics or in the mathematical treatment of the sciences. Sequence transformations are principal tools to overcome convergence problems of the kind. They accomplish this by converting a slowly converging or diverging input sequence {sn} ∞ n=0 into another sequence {s ′ n }∞ n=0 with hopefully better numerical properties. Padé approximants, which convert the partial sums of a power series to a doubly indexed sequence of rational functions, are the best known sequence transformations, but the emphasis of the review will be on alternative sequence transformations which for some problems provide better results than Padé approximants.", "title": "" }, { "docid": "ec4d4e6d6f1c95ba3e5f0369562e25c4", "text": "In this paper we merge individual census data, individual patenting data, and individual IQ data from Finnish Defence Force to look at the probability of becoming an innovator and at the returns to invention. On the former, we find that: (i) it is strongly correlated with parental income; (ii) this correlation is greatly decreased when we control for parental education and child IQ. Turning to the returns to invention, we find that: (i) inventing increases the annual wage rate of the inventor by a significant amounts over a prolonged period after the invention; (ii) coworkers in the same firm also benefit from an innovation, the highest returns being earned by senior managers and entrepreneurs in the firm, especially in the long term. 
Finally, we find that becoming an inventor enhances both, intragenerational and intergenerational income mobility, and that inventors are very likely to make it to top income brackets.", "title": "" }, { "docid": "6d405b0f6b1381cec5e1d001e1102404", "text": "Consensus is an important building block for building replicated systems, and many consensus protocols have been proposed. In this paper, we investigate the building blocks of consensus protocols and use these building blocks to assemble a skeleton that can be configured to produce, among others, three well-known consensus protocols: Paxos, Chandra-Toueg, and Ben-Or. Although each of these protocols specifies only one quorum system explicitly, all also employ a second quorum system. We use the skeleton to implement a replicated service, allowing us to compare the performance of these consensus protocols under various workloads and failure scenarios.", "title": "" }, { "docid": "cac8aa7cfd50da05a6f973b019e8c4f5", "text": "Deep learning has led to remarkable advances when applied to problems where the data distribution does not change over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, and solve a diversity of tasks simultaneously. Furthermore, synapses in biological neurons are not simply real-valued scalars, but possess complex molecular machinery enabling non-trivial learning dynamics. In this study, we take a first step toward bringing this biological complexity into artificial neural networks. We introduce a model of intelligent synapses that accumulate task relevant information over time, and exploit this information to efficiently consolidate memories of old tasks to protect them from being overwritten as new tasks are learned. We apply our framework to learning sequences of related classification problems, and show that it dramatically reduces catastrophic forgetting while maintaining computational efficiency.", "title": "" }, { "docid": "14dec918e2b6b4678c38f533e0f1c9c1", "text": "A method is presented to assess stability changes in waves in early-stage ship design. The method is practical: the calculations can be completed quickly and can be applied as soon as lines are available. The intended use of the described method is for preliminary analysis. If stability changes that result in large roll motion are indicated early in the design process, this permits planning and budgeting for direct assessments using numerical simulations and/or model experiments. The main use of the proposed method is for the justification for hull form shape modification or for necessary additional analysis to better quantify potentially increased stability risk. The method is based on the evaluation of changing stability in irregular seas and can be applied to any type of ship. To demonstrate the robustness of the method, results for ten naval ship types are presented and discussed. The proposed method is shown to identify ships with known risk for large stability changes in waves.", "title": "" }, { "docid": "244b0b0029b4b440e1c5b953bda84aed", "text": "Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from an image to a 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. 
In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images or 2D joint location heatmaps that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and accounts for joint dependencies. We further propose an efficient Long Short-Term Memory network to enforce temporal consistency on 3D pose predictions. We demonstrate that our approach achieves state-of-the-art performance both in terms of structure preservation and prediction accuracy on standard 3D human pose estimation benchmarks.", "title": "" }, { "docid": "4e1414ce6a8fde64b0e7a89a2ced1a7e", "text": "Several innovative healthcare executives have recently introduced a new business strategy implementation tool: the Balanced Scorecard. The scorecard's measurement and management system provides the following potential benefits to healthcare organizations: It aligns the organization around a more market-oriented, customer-focused strategy It facilitates, monitors, and assesses the implementation of the strategy It provides a communication and collaboration mechanism It assigns accountability for performance at all levels of the organization It provides continual feedback on the strategy and promotes adjustments to marketplace and regulatory changes. We surveyed executives in nine provider organizations that were implementing the Balanced Scorecard. We asked about the following issues relating to its implementation and effect: 1. The role of the Balanced Scorecard in relation to a well-defined vision, mission, and strategy 2. The motivation for adopting the Balanced Scorecard 3. The difference between the Balanced Scorecard and other measurement systems 4. The process followed to develop and implement the Balanced Scorecard 5. The challenges and barriers during the development and implementation process 6. The benefits gained by the organization from adoption and use. The executives reported that the Balanced Scorecard strategy implementation and performance management tool could be successfully applied in the healthcare sector, enabling organizations to improve their competitive market positioning, financial results, and customer satisfaction. This article concludes with guidelines for other healthcare provider organizations to capture the benefits of the Balanced Scorecard performance management system.", "title": "" }, { "docid": "bc384d12513dc76bf76f11acd04d39f4", "text": "Traffic sign detection is an important task in traffic sign recognition systems. Chinese traffic signs have their unique features compared with traffic signs of other countries. Convolutional neural networks (CNNs) have achieved a breakthrough in computer vision tasks and made great success in traffic sign classification. In this paper, we present a Chinese traffic sign detection algorithm based on a deep convolutional network. To achieve real-time Chinese traffic sign detection, we propose an end-to-end convolutional network inspired by YOLOv2. In view of the characteristics of traffic signs, we take the multiple 1 × 1 convolutional layers in intermediate layers of the network and decrease the convolutional layers in top layers to reduce the computational complexity. For effectively detecting small traffic signs, we divide the input images into dense grids to obtain finer feature maps. Moreover, we expand the Chinese traffic sign dataset (CTSD) and improve the marker information, which is available online. 
All experimental results evaluated according to our expanded CTSD and German Traffic Sign Detection Benchmark (GTSDB) indicate that the proposed method is the faster and more robust. The fastest detection speed achieved was 0.017 s per image.", "title": "" }, { "docid": "fe536ac94342c96f6710afb4a476278b", "text": "The human arm has 7 degrees of freedom (DOF) while only 6 DOF are required to position the wrist and orient the palm. Thus, the inverse kinematics of an human arm has a nonunique solution. Resolving this redundancy becomes critical as the human interacts with a wearable robot and the inverse kinematics solution of these two coupled systems must be identical to guarantee an seamless integration. The redundancy of the arm can be formulated by defining the swivel angle, the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Analyzing reaching tasks recorded with a motion capture system indicates that the swivel angle is selected such that when the elbow joint is flexed, the palm points to the head. Based on these experimental results, a new criterion is formed to resolve the human arm redundancy. This criterion was implemented into the control algorithm of an upper limb 7-DOF wearable robot. Experimental results indicate that by using the proposed redundancy resolution criterion, the error between the predicted and the actual swivel angle adopted by the motor control system is less then 5°.", "title": "" }, { "docid": "4eda25ffa01bb177a41a1d6d82db6a0c", "text": "For ontologiesto becost-efectively deployed,we requirea clearunderstandingof thevariouswaysthatontologiesarebeingusedtoday. To achieve this end,we presenta framework for understandingandclassifyingontology applications.We identify four main categoriesof ontologyapplications:1) neutralauthoring,2) ontologyasspecification, 3) commonaccessto information, and4) ontology-basedsearch. In eachcategory, we identify specific ontologyapplicationscenarios.For each,we indicatetheir intendedpurpose,therole of theontology, thesupporting technologies, who theprincipalactorsareandwhat they do. We illuminatethesimilaritiesanddifferencesbetween scenarios. We draw on work from othercommunities,suchassoftwaredevelopersandstandardsorganizations.We usea relatively broaddefinition of ‘ontology’, to show that muchof the work beingdoneby thosecommunitiesmay be viewedaspracticalapplicationsof ontologies.Thecommonthreadis theneedfor sharingthemeaningof termsin a givendomain,which is a centralrole of ontologies.An additionalaim of this paperis to draw attentionto common goalsandsupportingtechnologiesof theserelatively distinctcommunitiesto facilitateclosercooperationandfaster progress .", "title": "" }, { "docid": "c746704be981521aa38f7760a37d4b83", "text": "Myoelectric or electromyogram (EMG) signals can be useful in intelligently recognizing intended limb motion of a person. This paper presents an attempt to develop a four-channel EMG signal acquisition system as part of an ongoing research in the development of an active prosthetic device. The acquired signals are used for identification and classification of six unique movements of hand and wrist, viz. hand open, hand close, wrist flexion, wrist extension, ulnar deviation and radial deviation. This information is used for actuation of prosthetic drive. The time domain features are extracted, and their dimension is reduced using principal component analysis. 
The reduced features are classified using two different techniques: k nearest neighbor and artificial neural networks, and the results are compared.", "title": "" }, { "docid": "ad903f1d8998200d89234f0244452ad4", "text": "Within the last two decades, social media has emerged as almost an alternate world where people communicate with each other and express opinions about almost anything. This makes platforms like Facebook, Reddit, Twitter, Myspace etc. a rich bank of heterogeneous data, primarily expressed via text but reflecting all textual and non-textual data that human interaction can produce. We propose a novel attention based hierarchical LSTM model to classify discourse act sequences in social media conversations, aimed at mining data from online discussion using textual meanings beyond sentence level. The very uniqueness of the task is the complete categorization of possible pragmatic roles in informal textual discussions, contrary to extraction of question-answers, stance detection or sarcasm identification which are very much role specific tasks. An early attempt was made on a Reddit discussion dataset. We train our model on the same data, and present test results on two different datasets, one from Reddit and one from Facebook. Our proposed model outperformed the previous one in terms of domain independence; without using platform-dependent structural features, our hierarchical LSTM with word relevance attention mechanism achieved F1-scores of 71% and 66% respectively to predict discourse roles of comments in Reddit and Facebook discussions. Efficiency of recurrent and convolutional architectures in order to learn discursive representation on the same task has been presented and analyzed, with different word and comment embedding schemes. Our attention mechanism enables us to inquire into relevance ordering of text segments according to their roles in discourse. We present a human annotator experiment to unveil important observations about modeling and data annotation. Equipped with our text-based discourse identification model, we inquire into how heterogeneous non-textual features like location, time, leaning of information etc. play their roles in characterizing online discussions on Facebook.", "title": "" }, { "docid": "3ee39231fc2fbf3b6295b1b105a33c05", "text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publicly traded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply well-known regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. 
Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.", "title": "" }, { "docid": "dbb2a53d4dfbf0840d96670a25f88113", "text": "In real-world recognition/classification tasks, limited by various objective factors, it is usually difficult to collect training samples to exhaust all classes when training a recognizer or classifier. A more realistic scenario is open set recognition (OSR), where incomplete knowledge of the world exists at training time, and unknown classes can be submitted to an algorithm during testing, requiring the classifiers not only to accurately classify the seen classes, but also to effectively deal with the unseen ones. This paper provides a comprehensive survey of existing open set recognition techniques covering various aspects ranging from related definitions, representations of models, datasets, experiment setup and evaluation metrics. Furthermore, we briefly analyze the relationships between OSR and its related tasks including zero-shot, one-shot (few-shot) recognition/learning techniques, classification with reject option, and so forth. Additionally, we also overview the open world recognition which can be seen as a natural extension of OSR. Importantly, we highlight the limitations of existing approaches and point out some promising subsequent research directions in this field.", "title": "" }, { "docid": "637a1bc6dd1e3445f5ef92df562a57bd", "text": "This paper deals with the 3D reconstruction problem for dynamic non-rigid objects with a single RGB-D sensor. It is a challenging task as we consider the almost inevitable accumulation error issue in some previous sequential fusion methods and also the possible failure of surface tracking in a long sequence. Therefore, we propose a global non-rigid registration framework and tackle the drifting problem via an explicit loop closure. Our novel scheme starts with a fusion step to get multiple partial scans from the input sequence, followed by a pairwise non-rigid registration and loop detection step to obtain correspondences between neighboring partial pieces and those pieces that form a loop. Then, we perform a global registration procedure to align all those pieces together into a consistent canonical space as guided by those matches that we have established. Finally, our proposed model-update step helps fixing potential misalignments that still exist after the global registration. Both geometric and appearance constraints are enforced during our alignment; therefore, we are able to get the recovered model with accurate geometry as well as high fidelity color maps for the mesh. Experiments on both synthetic and various real datasets have demonstrated the capability of our approach to reconstruct complete and watertight deformable objects.", "title": "" }, { "docid": "8800dba6bb4cea195c8871eb5be5b0a8", "text": "Text summarization and sentiment classification, in NLP, are two main tasks implemented on text analysis, focusing on extracting the major idea of a text at different levels. Based on the characteristics of both, sentiment classification can be regarded as a more abstractive summarization task. According to the scheme, a Self-Attentive Hierarchical model for jointly improving text Summarization and Sentiment Classification (SAHSSC) is proposed in this paper. 
This model jointly performs abstractive text summarization and sentiment classification within a hierarchical end-to-end neural framework, in which the sentiment classification layer on top of the summarization layer predicts the sentiment label in the light of the text and the generated summary. Furthermore, a self-attention layer is also proposed in the hierarchical framework, which is the bridge that connects the summarization layer and the sentiment classification layer and aims at capturing emotional information at text-level as well as summary-level. The proposed model can generate a more relevant summary and lead to a more accurate summary-aware sentiment prediction. Experimental results evaluated on SNAP amazon online review datasets show that our model outperforms the state-of-the-art baselines on both abstractive text summarization and sentiment classification by a considerable margin.", "title": "" }, { "docid": "1256f0799ed585092e60b50fb41055be", "text": "So far, plant identification has challenges for several researchers. Various methods and features have been proposed. However, there are still many approaches that could be investigated to develop robust plant identification systems. This paper reports several experiments in using Zernike moments to build foliage plant identification systems. In this case, Zernike moments were combined with other features: geometric features, color moments and gray-level co-occurrence matrix (GLCM). To implement the identification systems, two approaches have been investigated. The first approach used a distance measure and the second used Probabilistic Neural Networks (PNN). The results show that Zernike Moments have a prospect as features in leaf identification systems when they are combined with other features.", "title": "" }, { "docid": "9bfcaa86b342147a6dd88da683c9dec7", "text": "Applying popular machine learning algorithms to large amounts of data raised new challenges for the ML practitioners. Traditional ML libraries do not support the processing of huge datasets well, so that new approaches were needed. Parallelization using modern parallel computing frameworks, such as MapReduce, CUDA, or Dryad gained in popularity and acceptance, resulting in new ML libraries developed on top of these frameworks. We will briefly introduce the most prominent industrial and academic outcomes, such as Apache Mahout, GraphLab or Jubatus. We will investigate how the cloud computing paradigm impacted the field of ML. The first direction is that of popular statistics tools and libraries (R system, Python) deployed in the cloud. A second line of products is augmenting existing tools with plugins that allow users to create a Hadoop cluster in the cloud and run jobs on it. Next on the list are libraries of distributed implementations for ML algorithms, and on-premise deployments of complex systems for data analytics and data mining. The last approach on the radar of this survey is ML as Software-as-a-Service, with several BigData start-ups (and large companies as well) already opening their solutions to the market.", "title": "" }, { "docid": "8cbe0ff905a58e575f2d84e4e663a857", "text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there are only a few working on the privacy and security implications of this technology. 
This survey paper aims to put into light these risks, and to look into the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-Things (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.", "title": "" } ]
scidocsrr
88bad70c141c23c821ea876c9072567b
D 4 . 1-Security definitions and attacker models for e-voting protocols
[ { "docid": "02bc71435bd53d8331e3ad2b30588c6d", "text": "Voting with cryptographic auditing, sometimes called open-audit voting, has remained, for the most part, a theoretical endeavor. In spite of dozens of fascinating protocols and recent ground-breaking advances in the field, there exist only a handful of specialized implementations that few people have experienced directly. As a result, the benefits of cryptographically audited elections have remained elusive. We present Helios, the first web-based, open-audit voting system. Helios is publicly accessible today: anyone can create and run an election, and any willing observer can audit the entire process. Helios is ideal for online software communities, local clubs, student government, and other environments where trustworthy, secretballot elections are required but coercion is not a serious concern. With Helios, we hope to expose many to the power of open-audit elections.", "title": "" } ]
[ { "docid": "49215cb8cb669aef5ea42dfb1e7d2e19", "text": "Many people rely on Web-based tutorials to learn how to use complex software. Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author", "title": "" }, { "docid": "a9d0b367d4507bbcee55f4f25071f12e", "text": "The goal of sentence and document modeling is to accurately represent the meaning of sentences and documents for various Natural Language Processing tasks. In this work, we present Dependency Sensitive Convolutional Neural Networks (DSCNN) as a generalpurpose classification system for both sentences and documents. DSCNN hierarchically builds textual representations by processing pretrained word embeddings via Long ShortTerm Memory networks and subsequently extracting features with convolution operators. Compared with existing recursive neural models with tree structures, DSCNN does not rely on parsers and expensive phrase labeling, and thus is not restricted to sentencelevel tasks. Moreover, unlike other CNNbased models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document. Experiment results demonstrate that our approach is achieving state-ofthe-art performance on several tasks, including sentiment analysis, question type classification, and subjectivity classification.", "title": "" }, { "docid": "bc4b545faba28a81202e3660c32c7ec2", "text": "This paper describes a novel two-stage fully-differential CMOS amplifier comprising two self-biased inverter stages, with optimum compensation and high efficiency. Although it relies on a class A topology, it is shown through simulations, that it achieves the highest efficiency of its class and comparable to the best class AB amplifiers. Due to the self-biasing, a low variability in the DC gain over process, temperature, and supply is achieved. 
A detailed circuit analysis, a design methodology for optimization and the most relevant simulation results are presented, together with a final comparison among state-of-the-art amplifiers.", "title": "" }, { "docid": "d2b7e61ecedf80f613d25c4f509ddaf6", "text": "We present a new image editing method, particularly effective for sharpening major edges by increasing the steepness of transition while eliminating a manageable degree of low-amplitude structures. The seemingly contradictive effect is achieved in an optimization framework making use of L0 gradient minimization, which can globally control how many non-zero gradients are resulted in to approximate prominent structure in a sparsity-control manner. Unlike other edge-preserving smoothing approaches, our method does not depend on local features, but instead globally locates important edges. It, as a fundamental tool, finds many applications and is particularly beneficial to edge extraction, clip-art JPEG artifact removal, and non-photorealistic effect generation.", "title": "" }, { "docid": "1db450f3e28907d6940c87d828fc1566", "text": "The task of colorizing black and white images has previously been explored for natural images. In this paper we look at the task of colorization on a different domain: webtoons. To our knowledge this type of dataset hasn't been used before. Webtoons are usually produced in color thus they make a good dataset for analyzing different colorization models. Comics like webtoons also present some additional challenges over natural images, such as occlusion by speech bubbles and text. First we look at some of the previously introduced models' performance on this task and suggest modifications to address their problems. We propose a new model composed of two networks; one network generates sparse color information and a second network uses this generated color information as input to apply color to the whole image. These two networks are trained end-to-end. Our proposed model solves some of the problems observed with other architectures, resulting in better colorizations.", "title": "" }, { "docid": "be03c10fb6c05de7d7b4a25d67fd6527", "text": "In this paper, a unified current controller is introduced for a bidirectional dc-dc converter which employs complementary switching between upper and lower switches. The unified current controller is to use one controller for both buck and boost modes. Such a controller may be designed with analog implementation that adopts current injection control method, which is difficult to be implemented in high power applications due to parasitic noises. The averaged current mode is thus proposed in this paper to avoid the current sensing related issues. Additional advantage with the unified digital controller is also found in smooth mode transition between battery charging and discharging modes where conventional analog controller tends to saturate and take a long delay to get out of saturation. The unified controller has been designed based on a proposed novel third- order bidirectional charging/discharging model and implemented with a TMS320F2808 based digital controller. The complete system has been simulated and verified with a high-power hardware prototype testing.", "title": "" }, { "docid": "a862ccdb188c7b559a4f27793c7873d8", "text": "Several behavioral assays are currently used for high-throughput neurophenotyping and screening of genetic mutations and psychotropic drugs in zebrafish (Danio rerio). 
In this protocol, we describe a battery of two assays to characterize anxiety-related behavioral and endocrine phenotypes in adult zebrafish. Here, we detail how to use the 'novel tank' test to assess behavioral indices of anxiety (including reduced exploration, increased freezing behavior and erratic movement), which are quantifiable using manual registration and computer-aided video-tracking analyses. In addition, we describe how to analyze whole-body zebrafish cortisol concentrations that correspond to their behavior in the novel tank test. This protocol is an easy, inexpensive and effective alternative to other methods of measuring stress responses in zebrafish, thus enabling the rapid acquisition and analysis of large amounts of data. As will be shown here, fish anxiety-like behavior can be either attenuated or exaggerated depending on stress or drug exposure, with cortisol levels generally expected to parallel anxiety behaviors. This protocol can be completed over the course of 2 d, with a variable testing duration depending on the number of fish used.", "title": "" }, { "docid": "3a651ab1f8c05cfae51da6a14f6afef8", "text": "The taxonomical relationship of Cylindrospermopsis raciborskii and Raphidiopsis mediterranea was studied by morphological and 16S rRNA gene diversity analyses of natural populations from Lake Kastoria, Greece. Samples were obtained during a bloom (23,830 trichomes mL ) in August 2003. A high diversity of apical cell, trichome, heterocyte and akinete morphology, trichome fragmentation and reproduction was observed. Trichomes were grouped into three dominant morphotypes: the typical and the non-heterocytous morphotype of C. raciborskii and the typical morphotype of R. mediterranea. A morphometric comparison of the dominant morphotypes showed significant differences in mean values of cell and trichome sizes despite the high overlap in the range of the respective size values. Additionally, two new morphotypes representing developmental stages of the species are described while a new mode of reproduction involving a structurally distinct reproductive cell is described for the first time in planktic Nostocales. A putative life-cycle, common for C. raciborskii and R. mediterranea is proposed revealing that trichome reproduction of R. mediterranea gives rise both to R. mediterranea and C. raciborskii non-heterocytous morphotypes. The phylogenetic analysis of partial 16S rRNA gene (ca. 920 bp) of the co-existing Cylindrospermopsis and Raphidiopsis morphotypes revealed only one phylotype which showed 99.54% similarity to R. mediterranea HB2 (China) and 99.19% similarity to C. raciborskii form 1 (Australia). We propose that all morphotypes comprised stages of the life cycle of C. raciborkii whereas R. mediterranea from Lake Kastoria (its type locality) represents non-heterocytous stages of Cylindrospermopsis complex life cycle. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1cdd599b49d9122077a480a75391aae8", "text": "Two aspects of children's early gender development-the spontaneous production of gender labels and gender-typed play-were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children's gender labeling as based on mothers' biweekly telephone interviews regarding their children's language from 9 through 21 months. 
Videotapes of children's play both alone and with mother during home visits at 17 and 21 months were independently analyzed for play with gender-stereotyped and gender-neutral toys. Finally, the relation between gender labeling and gender-typed play was examined. Children transitioned to using gender labels at approximately 19 months, on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in gender-typed play, suggesting that knowledge of gender categories might influence gender typing before the age of 2.", "title": "" }, { "docid": "c21c58dbdf413a54036ac5e6849f81e1", "text": "We discuss the problem of extending data mining approaches to cases in which data points arise in the form of individual graphs. Being able to find the intrinsic low-dimensionality in ensembles of graphs can be useful in a variety of modeling contexts, especially when coarse-graining the detailed graph information is of interest. One of the main challenges in mining graph data is the definition of a suitable pairwise similarity metric in the space of graphs. We explore two practical solutions to solving this problem: one based on finding subgraph densities, and one using spectral information. The approach is illustrated on three test data sets (ensembles of graphs); two of these are obtained from standard graph generating algorithms, while the graphs in the third example are sampled as dynamic snapshots from an evolving network simulation.", "title": "" }, { "docid": "2e7b628b1a3bcc51455e5822ca82bd56", "text": "The installed capacity of distributed generation (DG) based on renewable energy sources has increased continuously in power systems, and its market-oriented transaction is imperative. However, traditional transaction management based on centralized organizations has many disadvantages, such as high operation cost, low transparency, and potential risk of transaction data modification. Therefore, a decentralized electricity transaction mode for microgrids is proposed in this study based on blockchain and continuous double auction (CDA) mechanism. A buyer and seller initially complete the transaction matching in the CDA market. In view of the frequent price fluctuation in the CDA market, an adaptive aggressiveness strategy is used to adjust the quotation timely according to market changes. DG and consumer exchange digital certificate of power and expenditure on the blockchain system and the interests of consumers are then guaranteed by multi-signature when DG cannot generate power due to failure or other reasons. The digital certification of electricity assets is replaced by the sequence number with specific tags in the transaction script, and the size of digital certification can be adjusted according to transaction energy quantity. Finally, the feasibility of market mechanism through specific microgrid case and settlement process is also provided.", "title": "" }, { "docid": "85103fe857abc9961b69b9d996c51398", "text": "Insider threat is a great challenge for most organizations in today’s digital world. It has received substantial research attention as a significant source of information security threat that could cause more financial losses and damages than any other threats. However, designing an effective monitoring and detection framework is a very challenging task. 
In this paper, we examine the use of human bio-signals to detect the malicious activities and show that its applicability for insider threats detection. We employ a combination of the electroencephalography (EEG) and the electrocardiogram (ECG) signals to provide a framework for insider threat monitoring and detection. We empirically tested the framework with ten subjects and used several activities scenarios. We found that our framework able to achieve up to 90% detection accuracy of the malicious activities when using the electroencephalography (EEG) signals alone. We then examined the effectiveness of adding the electrocardiogram (ECG) signals to our framework and results show that by adding the ECG the accuracy of detecting the malicious activity increases by about 5%. Thus, our framework shows that human brain and heart signals can reveal valuable knowledge about the malicious behaviors and could be an effective solution for detecting insider threats.", "title": "" }, { "docid": "5325778a57d0807e9b149108ea9e57d8", "text": "This paper presents a comparison study between 10 automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the \"MICCAI 2007 Grand Challenge\" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured a better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides an insight in performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.", "title": "" }, { "docid": "b6162e44ec33b7bb414aad547a9806d0", "text": "Visual data exploration allows users to analyze datasets based on visualizations of interesting data characteristics, to possibly discover interesting information about the data that users are a priori unaware of. In this context, both recommendations of queries selecting the data to be visualized and recommendations of visualizations that highlight interesting data characteristics support users in visual data exploration. So far, these two types of recommendations have been mostly considered in isolation of one another. We present a recommendation approach for visual data exploration that unifies query recommendation and visualization recommendation. The recommendations rely on two types of provenance, i.e., data provenance (aka lineage) and evolution provenance that tracks users’ interactions with a data exploration system. This paper presents the provenance data model as well as the overall system architecture. 
We then provide details on our provenance-based recommendation algorithms. A preliminary experimental evaluation showcases the applicability of our solution in practice.", "title": "" }, { "docid": "f6f769efffb1e1737ace0ab617ff0c33", "text": "It remains an open question how neural responses in motor cortex relate to movement. We explored the hypothesis that motor cortex reflects dynamics appropriate for generating temporally patterned outgoing commands. To formalize this hypothesis, we trained recurrent neural networks to reproduce the muscle activity of reaching monkeys. Models had to infer dynamics that could transform simple inputs into temporally and spatially complex patterns of muscle activity. Analysis of trained models revealed that the natural dynamical solution was a low-dimensional oscillator that generated the necessary multiphasic commands. This solution closely resembled, at both the single-neuron and population levels, what was observed in neural recordings from the same monkeys. Notably, data and simulations agreed only when models were optimized to find simple solutions. An appealing interpretation is that the empirically observed dynamics of motor cortex may reflect a simple solution to the problem of generating temporally patterned descending commands.", "title": "" }, { "docid": "817b1ec160974f41129e0dadd3cdaa27", "text": "Crude glycerin, the main by-product of biodiesel production, can replace dietary energy sources, such as corn. The objective of this study was to evaluate the inclusion of up to 30% of crude glycerin in dry matter (DM) of the total diets, and its effects on meat quality parameters of feedlot Nellore bulls. Thirty animals (227.7 ± 23.8 kg body weight; 18 months old) were housed in individual pens and fed 5 experimental diets, containing 0, 7.5, 15, 22.5 or 30% crude glycerin (DM basis). After 103 d (21 d adaptation) animals were slaughtered and the Longissimus muscle was collected. The characteristics assessed were chemical composition, fatty acid profile, cholesterol, shear force, pH, color, water-holding capacity, cooking loss and sensory properties. The increasing inclusion of crude glycerin in the diets did not affect the chemical composition of the Longissimus muscle (P > 0.10). A quadratic effect was observed when levels of crude glycerin were increased, on the concentration of pentadecanoic, palmitoleic and eicosenoic fatty acids in meat (P < 0.05), and on the activity of the delta-9 desaturase 16 and delta-9 desaturase 18 enzymes (P < 0.05). The addition of crude glycerin increased the gamma linolenic fatty acid concentration (P < 0.01), and altered the monounsaturated fatty acids in Longissimus muscle of animals (Pquad. < 0.05). Crude glycerin decreased cholesterol content in meat (P < 0.05), and promoted higher flavor score and greasy intensity perception of the meat (P < 0.01). The inclusion of up to 30% crude glycerin in Nellore cattle bulls`diets (DM basis) improves meat cholesterol and sensory attributes, such as flavor, without affecting significantly the physical traits, the main fatty acid concentrations and the chemical composition.", "title": "" }, { "docid": "71efff25f494a8b7a83099e7bdd9d9a8", "text": "Background: Problems with intubation of the ampulla Vateri during diagnostic and therapeutic endoscopic maneuvers are a well-known feature. The ampulla Vateri was analyzed three-dimensionally to determine whether these difficulties have a structural background. 
Methods: Thirty-five human greater duodenal papillae were examined by light and scanning electron microscopy as well as immunohistochemically. Results: Histologically, highly vascularized finger-like mucosal folds project far into the lumen of the ampulla Vateri. The excretory ducts of seromucous glands containing many lysozyme-secreting Paneth cells open close to the base of the mucosal folds. Scanning electron microscopy revealed large mucosal folds inside the ampulla that continued into the pancreatic and bile duct, comparable to valves arranged in a row. Conclusions: Mucosal folds form pocket-like valves in the lumen of the ampulla Vateri. They allow a unidirectional flow of secretions into the duodenum and prevent reflux from the duodenum into the ampulla Vateri. Subepithelial mucous gland secretions functionally clean the valvular crypts and protect the epithelium. The arrangement of pocket-like mucosal folds may explain endoscopic difficulties experienced when attempting to penetrate the papilla of Vater during endoscopic retrograde cholangiopancreaticographic procedures.", "title": "" }, { "docid": "e2b08d0d14a5561f2d6632d7cec87bcc", "text": "In recent years, market forecasting by machine learning methods has been flourishing. Most existing works use a past market data set, because they assume that each trader’s individual decisions do not affect market prices at all. Meanwhile, there have been attempts to analyze economic phenomena by constructing virtual market simulators, in which human and artificial traders really make trades. Since prices in a market are, in fact, determined by every trader’s decisions, a virtual market is more realistic, and the above assumption does not apply. In this work, we design several reinforcement learners on the futures market simulator U-Mart (Unreal Market as an Artificial Research Testbed) and compare our learners with the previous champions of U-Mart competitions empirically.", "title": "" }, { "docid": "c904e36191df6989a5f38a52bc206342", "text": "In present paper we proposed a simple and effective method to compress an image. Here we found success in size reduction of an image without much compromising with it’s quality. Here we used Haar Wavelet Transform to transform our original image and after quantization and thresholding of DWT coefficients Run length coding and Huffman coding schemes have been used to encode the image. DWT is base for quite populate JPEG 2000 technique. Keywords—lossy compression, DWT, quantization, Run length coding, Huffman coding, JPEG2000", "title": "" }, { "docid": "43ca9719740147e88e86452bb42f5644", "text": "Currently in the US, over 97% of food waste is estimated to be buried in landfills. There is nonetheless interest in strategies to divert this waste from landfills as evidenced by a number of programs and policies at the local and state levels, including collection programs for source separated organic wastes (SSO). The objective of this study was to characterize the state-of-the-practice of food waste treatment alternatives in the US and Canada. Site visits were conducted to aerobic composting and two anaerobic digestion facilities, in addition to meetings with officials that are responsible for program implementation and financing. The technology to produce useful products from either aerobic or anaerobic treatment of SSO is in place. However, there are a number of implementation issues that must be addressed, principally project economics and feedstock purity. 
Project economics varied by region based on landfill disposal fees. Feedstock purity can be obtained by enforcement of contaminant standards and/or manual or mechanical sorting of the feedstock prior to and after treatment. Future SSO diversion will be governed by economics and policy incentives, including landfill organics bans and climate change mitigation policies.", "title": "" } ]
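One of the passages above describes an image-compression pipeline built from a Haar wavelet transform followed by quantization/thresholding of the DWT coefficients, run-length coding, and Huffman coding. The sketch below illustrates only the first steps of such a pipeline — a single-level 2D Haar transform, hard thresholding, and a toy run-length encoder. It is an assumption-laden illustration, not the implementation reported in that paper: the quantization and Huffman stages are omitted and all names are placeholders.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2D Haar transform: approximation plus
    (horizontal, vertical, diagonal) detail sub-bands.
    Assumes the image height and width are even."""
    a = img.astype(float)
    # Rows: averages and differences of adjacent pixel pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: repeat on both intermediate results.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, (lh, hl, hh)

def threshold(coeffs, t):
    """Hard-threshold detail coefficients: small values become zero."""
    return np.where(np.abs(coeffs) < t, 0.0, coeffs)

def run_length_encode(flat):
    """Run-length encode a 1-D sequence as (value, run_length) pairs."""
    runs, prev, count = [], flat[0], 1
    for v in flat[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# Toy usage: encode the thresholded detail bands of a small test "image".
img = np.arange(64).reshape(8, 8)
ll, details = haar2d_level(img)
encoded = [run_length_encode(threshold(d, 1.0).ravel().tolist()) for d in details]
```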
scidocsrr
278f3ca9b99d29fcb5dc06055691f034
Ontology-Driven Geographic Information Systems
[ { "docid": "74e15be321ec4e2d207f3331397f0399", "text": "Interoperability has been a basic requirement for the modern information systems environment for over two decades. How have key requirements for interoperability changed over that time? How can we understand the full scope of interoperability issues? What has shaped research on information system interoperability? What key progress has been made? This chapter provides some of the answers to these questions. In particular, it looks at different levels of information system interoperability, while reviewing the changing focus of interoperability research themes, past achievements and new challenges in the emerging global information infrastructure (GII). It divides the research into three generations, and discusses some of achievements of the past. Finally, as we move from managing data to information, and in future knowledge, the need for achieving semantic interoperability is discussed and key components of solutions are introduced. Data and information interoperability has gained increasing attention for several reasons, including: • excellent progress in interconnection afforded by the Internet, Web and distributed computing infrastructures, leading to easy access to a large number of independently created and managed information sources of broad variety;", "title": "" } ]
[ { "docid": "7d8617c12c24e61b7ef003a5055fbf2f", "text": "We present the first approximation algorithms for a large class of budgeted learning problems. One classicexample of the above is the budgeted multi-armed bandit problem. In this problem each arm of the bandithas an unknown reward distribution on which a prior isspecified as input. The knowledge about the underlying distribution can be refined in the exploration phase by playing the arm and observing the rewards. However, there is a budget on the total number of plays allowed during exploration. After this exploration phase,the arm with the highest (posterior) expected reward is hosen for exploitation. The goal is to design the adaptive exploration phase subject to a budget constraint on the number of plays, in order to maximize the expected reward of the arm chosen for exploitation. While this problem is reasonably well understood in the infinite horizon discounted reward setting, the budgeted version of the problem is NP-Hard. For this problem and several generalizations, we provide approximate policies that achieve a reward within constant factor of the reward optimal policy. Our algorithms use a novel linear program rounding technique based on stochastic packing.", "title": "" }, { "docid": "53dabbc33a041872783a109f953afd0f", "text": "We present an analysis of parser performance on speech data, comparing word type and token frequency distributions with written data, and evaluating parse accuracy by length of input string. We find that parser performance tends to deteriorate with increasing length of string, more so for spoken than for written texts. We train an alternative parsing model with added speech data and demonstrate improvements in accuracy on speech-units, with no deterioration in performance on written text.", "title": "" }, { "docid": "a93cd1c2e04e2d33c5174f18909dae9d", "text": "For more than two decades, the key objective for synthesis of linear decompressors has been maximizing encoding efficiency. For combinational decompressors, encoding satisfiability is dynamically checked for each specified care bit. By contrast, for sequential linear decompressors (e.g. PRPGs), encoding is performed for each test cube; the resultant static encoding considers that a test cube is encodable only if all of its care bits are encodable. The paper introduces a new class of sequential linear decompressors that provides a trade-off between the computational complexity and the encoding efficiency of linear encoding. As a result, it becomes feasible to dynamically encode care bits before a test cube has been completed, and derive decompressor-implied scan cell values during test generation. The resultant dynamic encoding enables an identification of encoding conflicts during branch-and-bound search and a reduction of search space for dynamic compaction. Experimental results demonstrate that dynamic encoding consistently outperforms static encoding in a wide range of compression ratios.", "title": "" }, { "docid": "b1b842bed367be06c67952c34921f6f6", "text": "Definitions and uses of the concept of empowerment are wide-ranging: the term has been used to describe the essence of human existence and development, but also aspects of organizational effectiveness and quality. The empowerment ideology is rooted in social action where empowerment was associated with community interests and with attempts to increase the power and influence of oppressed groups (such as workers, women and ethnic minorities). 
Later, there was also growing recognition of the importance of the individual's characteristics and actions. Based on a review of the literature, this paper explores the uses of the empowerment concept as a framework for nurses' professional growth and development. Given the complexity of the concept, it is vital to understand the underlying philosophy before moving on to define its substance. The articles reviewed were classified into three groups on the basis of their theoretical orientation: critical social theory, organization theory and social psychological theory. Empowerment seems likely to provide for an umbrella concept of professional development in nursing.", "title": "" }, { "docid": "8069410a94a5039305b45fbd7c8ec809", "text": "Deep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results in practice, a large number of trainable parameters are often required. Here, we introduce a network architecture based on using dilated convolutions to capture features at different image scales and densely connecting all feature maps with each other. The resulting architecture is able to achieve accurate results with relatively few parameters and consists of a single set of operations, making it easier to implement, train, and apply in practice, and automatically adapts to different problems. We compare results of the proposed network architecture with popular existing architectures for several segmentation problems, showing that the proposed architecture is able to achieve accurate results with fewer parameters, with a reduced risk of overfitting the training data.", "title": "" }, { "docid": "cbe1b2575db111cd3b22b7288c0e345c", "text": "A reversible gate has the equal number of inputs and outputs and one-to-one mappings between input vectors and output vectors; so that, the input vector states can be always uniquely reconstructed from the output vector states. This correspondence introduces a reversible full-adder circuit that requires only three reversible gates and produces least number of \"garbage outputs \", that is two. After that, a theorem has been proposed that proves the optimality of the propounded circuit in terms of number of garbage outputs. An efficient algorithm is also introduced in this paper that leads to construct a reversible circuit.", "title": "" }, { "docid": "8f24898cb21a259d9260b67202141d49", "text": "PROBLEM\nHow can human contributions to accidents be reconstructed? Investigators can easily take the position a of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.\n\n\nMETHOD AND RESULTS\nThis paper examines how investigators can reconstruct the role that people contribute to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to move controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. 
The second half of the paper addresses one way in which investigators can begin to reconstruct people's unfolding mindsets.\n\n\nIMPACT ON INDUSTRY\nIn an era where a large portion of accidents are attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper helps investigators avoid the traps of hindsight by presenting a method with which investigators can begin to see how people's actions and assessments actually made sense at the time.", "title": "" }, { "docid": "55749da1639911c33ba86a2d7ddae0d2", "text": "Artificial intelligence (AI) tools, such as expert system, fuzzy logic, and neural network are expected to usher a new era in power electronics and motion control in the coming decades. Although these technologies have advanced significantly in recent years and have found wide applications, they have hardly touched the power electronics and mackine drives area. The paper describes these Ai tools and their application in the area of power electronics and motion control. The body of the paper is subdivided into three sections which describe, respectively, the principles and applications of expert system, fuzzy logic, and neural network. The theoretical portion of each topic is of direct relevance to the application of power electronics. The example applications in the paper are taken from the published literature. Hopefully, the readers will be able to formulate new applications from these examples.", "title": "" }, { "docid": "eaa449c51d453a8c3f639bc95b321cfe", "text": "The aim of this study was to demonstrate that ultrasonography may allow a precise assessment of the primary stabilizers of pisotriquetral joint (pisohamate, pisometacarpal, and ulnar pisotriquetral ligaments). This study was initially undertaken in eight cadavers. Metal markers were placed in the ligaments using ultrasonographic guidance, followed by the dissection of the wrists. High-resolution ultrasonography was then performed in 15 volunteers (30 wrists) for the analysis of the presence, appearance, and thickness of the ligaments. At dissection, the metal markers were located in the ligaments or immediately adjacent to them, confirming that they were correctly depicted using ultrasonography. The three ligaments could also be identified in each volunteer. The optimal positioning of the probe and the dynamic maneuvers of the wrist allowing the strain of these ligaments could be defined. No significant changes in the appearance and thickness of the ligaments could be observed. The three ligaments stabilizing the pisotriquetral joint can be identified using ultrasonography. Further studies are now required to know whether this knowledge may be useful in the assessment of pain involving the ulnar part of the wrist.", "title": "" }, { "docid": "e8f28a4e17650041350e535c1ac792ff", "text": "A compact multiple-input-multiple-output (MIMO) antenna with a small size of 26×40 mm2 is proposed for portable ultrawideband (UWB) applications. The antenna consists of two planar-monopole (PM) antenna elements with microstrip-fed printed on one side of the substrate and placed perpendicularly to each other to achieve good isolation. To enhance isolation and increase impedance bandwidth, two long protruding ground stubs are added to the ground plane on the other side and a short ground strip is used to connect the ground planes of the two PMs together to form a common ground. 
Simulation and measurement are used to study the antenna performance in terms of reflection coefficients at the two input ports, coupling between the two input ports, radiation pattern, realized peak gain, efficiency and envelope correlation coefficient for pattern diversity. Results show that the MIMO antenna has an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than -15 dB, and a low envelope correlation coefficient of less than 0.2 across the frequency band, making it a good candidate for portable UWB applications.", "title": "" }, { "docid": "9d43fd4ecad4a9d2a40c0c1d87026bf6", "text": "Generalizing recent attention to retrieving entities and not just documents, we introduce two entity retrieval tasks: list completion and entity ranking. For each task, we propose and evaluate several algorithms. One of the core challenges is to overcome the very limited amount of information that serves as input—to address this challenge we explore different representations of list descriptions and/or example entities, where entities are represented not just by a textual description but also by the description of related entities. For evaluation purposes we make use of the lists and categories available in Wikipedia. Experimental results show that cluster-based contexts improve retrieval results for both tasks.", "title": "" }, { "docid": "d1c88428d398caba2dc9a8f79f84a45f", "text": "In this article, a novel compact reconfigurable antenna based on substrate integrated waveguide (SIW) technology is introduced. The geometry of the proposed antennas is symmetric with respect to the horizontal center line. The electrical shape of the antenna is composed of double H-plane SIW based horn antennas and radio frequency micro electro mechanical system (RF-MEMS) actuators. The RF-MEMS actuators are integrated in the planar structure of the antenna for reconfiguring the radiation pattern by adding nulls to the pattern. The proper activation/deactivation of the switches alters the modes distributed in the structure and changes the radiation pattern. When different combinations of switches are on or off, the radiation patterns have 2, 4, 6, 8, . . . nulls with nearly similar operating frequencies. The attained peak gain of the proposed antenna is higher than 5 dB at any point on the far field radiation pattern except at the null positions. The design procedure and closed form formulation are provided for analytical determination of the antenna parameters. Moreover, the designed antenna with an overall dimensions of only 63.6 × 50 mm2 is fabricated and excited through standard SMA connector and compared with the simulated results. The measured results show that the antenna can clearly alters its beams using the switching components. The proposed antenna retains advantages of low cost, low cross-polarized radiation, and easy integration of configuration.", "title": "" }, { "docid": "0a0dc05f3f34822b71c32a786bf5ccd1", "text": "Chronic facial paralysis induces degenerative facial muscle changes on the involved side, thus, making the individual seem as older than their actual age. Furthermore, contralateral facial hypertrophy aggravates facial asymmetry. A thread-lifting procedure has been used widely for correction of a drooping or wrinkled face due to the aging process. In addition, botulinum toxin injection can be used to reduce facial hypertrophy. The aim of study was to evaluate the effectiveness of thread lifting with botulinum toxin injection for chronic facial paralysis. 
A total of 34 patients with chronic facial paralysis were enrolled from March to October 2014. Thread lifting for elevating loose facial muscles on the ipsilateral side and botulinum toxin A for controlling the facial muscle hypertrophy on the contralateral side were conducted. Facial function was evaluated using the Sunnybrook grading system and dynamic facial asymmetry ratios 1 year after treatment. All 34 patients displayed improved facial symmetry and showed improvement in Sunnybrook scores (37.4 vs. 83.3) and dynamic facial asymmetry ratios (0.58 vs 0.92). Of the 34 patients, 28 (82.4%) reported being satisfied with treatment. The application of subdermal suspension with a reabsorbable thread in conjunction with botulinum toxin A to optimize facial rejuvenation of the contralateral side constitutes an effective and safe procedure for face lifting and rejuvenation of a drooping face as a result of long-lasting facial paralysis.", "title": "" }, { "docid": "d54ec6879ea92e5dd7f2a766d2d1839e", "text": "In this work we explore cyberbullying and other toxic behavior in team competition online games. Using a dataset of over 10 million player reports on 1.46 million toxic players along with corresponding crowdsourced decisions, we test several hypotheses drawn from theories explaining toxic behavior. Besides providing a large-scale, empirically based understanding of toxic behavior, our work can be used as a basis for building systems to detect, prevent, and counter-act toxic behavior.", "title": "" }, { "docid": "088308b06392780058dd8fa1686c5c35", "text": "Every company should be able to demonstrate its own efficiency and effectiveness through the metrics, processes, and standards it uses.
Businesses may be missing a direct comparison with competitors in their industry, which is only possible using appropriately chosen instruments, whether financial or non-financial. The main purpose of this study is to describe and compare the approaches of individual authors and to identify the metrics from the reviewed studies that organizations use to measure their marketing activities, separating them into financial and non-financial metrics. The paper presents advances in usable metrics, especially financial and non-financial metrics. Selected studies, focusing on different branches and different metrics, were analyzed by the authors. The results of the study describe relevant metrics for demonstrating efficiency in various types of organizations in connection with marketing effectiveness. The studies also outline potential methods for further research focusing on the application of metrics in a diverse environment. The study contributes to a clearer idea of how to measure performance and effectiveness.", "title": "" }, { "docid": "0eb75b719f523ca4e9be7fca04892249", "text": "In this study 2,684 people evaluated the credibility of two live Web sites on a similar topic (such as health sites). We gathered the comments people wrote about each site's credibility and analyzed the comments to find out what features of a Web site get noticed when people evaluate credibility. We found that the \"design look\" of the site was mentioned most frequently, being present in 46.1% of the comments. Next most common were comments about information structure and information focus. In this paper we share sample participant comments in the top 18 areas that people noticed when evaluating Web site credibility. We discuss reasons for the prominence of design look, point out how future studies can build on what we have learned in this new line of research, and outline six design implications for human-computer interaction professionals.", "title": "" }, { "docid": "f136f8249bf597db706806a795ee8791", "text": "Automotive systems are constantly increasing in complexity and size. Besides the increase in requirements specifications and related test specifications due to new systems and higher system interaction, we observe an increase in redundant specifications. As the predominant specification language (both for requirements and test cases) is still natural text, it is not easy to detect these redundancies. In principle, to detect these redundancies, each statement has to be compared to all others. This proves to be difficult because of the number and informal expression of the statements. In this paper we propose a solution to the problem of detecting redundant specification and test statements described in structured natural language. We propose a formalization process for requirements specification and test statements, allowing us to detect redundant statements and thus reduce the efforts for specification and validation. Specification Pattern Systems and Linear Temporal Logic provide the base for our process. We evaluated the method in the context of Mercedes-Benz Passenger Car Development. The results show that for the investigated sample set of test statements, we could detect about 30% of test steps as redundant. This indicates the savings potential of our approach.", "title": "" }, { "docid": "e7ba504d2d9a80c0a10bfa4830a1fc54", "text": "BACKGROUND\nGlobal and regional prevalence estimates for blindness and vision impairment are important for the development of public health policies.
We aimed to provide global estimates, trends, and projections of global blindness and vision impairment.\n\n\nMETHODS\nWe did a systematic review and meta-analysis of population-based datasets relevant to global vision impairment and blindness that were published between 1980 and 2015. We fitted hierarchical models to estimate the prevalence (by age, country, and sex), in 2015, of mild visual impairment (presenting visual acuity worse than 6/12 to 6/18 inclusive), moderate to severe visual impairment (presenting visual acuity worse than 6/18 to 3/60 inclusive), blindness (presenting visual acuity worse than 3/60), and functional presbyopia (defined as presenting near vision worse than N6 or N8 at 40 cm when best-corrected distance visual acuity was better than 6/12).\n\n\nFINDINGS\nGlobally, of the 7·33 billion people alive in 2015, an estimated 36·0 million (80% uncertainty interval [UI] 12·9-65·4) were blind (crude prevalence 0·48%; 80% UI 0·17-0·87; 56% female), 216·6 million (80% UI 98·5-359·1) people had moderate to severe visual impairment (2·95%, 80% UI 1·34-4·89; 55% female), and 188·5 million (80% UI 64·5-350·2) had mild visual impairment (2·57%, 80% UI 0·88-4·77; 54% female). Functional presbyopia affected an estimated 1094·7 million (80% UI 581·1-1686·5) people aged 35 years and older, with 666·7 million (80% UI 364·9-997·6) being aged 50 years or older. The estimated number of blind people increased by 17·6%, from 30·6 million (80% UI 9·9-57·3) in 1990 to 36·0 million (80% UI 12·9-65·4) in 2015. This change was attributable to three factors, namely an increase because of population growth (38·4%), population ageing after accounting for population growth (34·6%), and reduction in age-specific prevalence (-36·7%). The number of people with moderate and severe visual impairment also increased, from 159·9 million (80% UI 68·3-270·0) in 1990 to 216·6 million (80% UI 98·5-359·1) in 2015.\n\n\nINTERPRETATION\nThere is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world's population is causing a substantial increase in number of people affected. These observations, plus a very large contribution from uncorrected presbyopia, highlight the need to scale up vision impairment alleviation efforts at all levels.\n\n\nFUNDING\nBrien Holden Vision Institute.", "title": "" }, { "docid": "b1b57467dff40b52822ff2406405b217", "text": "Placement of attributes/methods within classes in an object-oriented system is usually guided by conceptual criteria and aided by appropriate metrics. Moving state and behavior between classes can help reduce coupling and increase cohesion, but it is nontrivial to identify where such refactorings should be applied. In this paper, we propose a methodology for the identification of Move Method refactoring opportunities that constitute a way for solving many common feature envy bad smells. An algorithm that employs the notion of distance between system entities (attributes/methods) and classes extracts a list of behavior-preserving refactorings based on the examination of a set of preconditions. In practice, a software system may exhibit such problems in many different places. Therefore, our approach measures the effect of all refactoring suggestions based on a novel entity placement metric that quantifies how well entities have been placed in system classes. 
The proposed methodology can be regarded as a semi-automatic approach since the designer will eventually decide whether a suggested refactoring should be applied or not based on conceptual or other design quality criteria. The evaluation of the proposed approach has been performed considering qualitative, metric, conceptual, and efficiency aspects of the suggested refactorings in a number of open-source projects.", "title": "" }, { "docid": "e0840870bdfa56302dc592bb228113c6", "text": "Background: The thread lift technique has become popular because it is less invasive, requires a shorter operation, less downtime, and results in fewer postoperative complications. The advantage of the technique is that the thread can be inserted under the skin without the need for long incisions. Currently, there are a lot of thread lift techniques with respect to the specific types of thread used on specific areas, such as the mid-face, lower face, or neck area. Objective: To review the thread lift technique for specific areas according to type of thread, patient selection, and how to match the most appropriate to the patient. Materials and Methods: A literature review technique was conducted by searching PubMed and MEDLINE, then compiled and summarized. Result: We have divided our protocols into two sections: Protocols for short suture, and protocols for long suture techniques. We also created 3D pictures for each technique to enhance understanding and application in a clinical setting. Conclusion: There are advantages and disadvantages to short suture and long suture techniques. The best outcome for each patient depends on appropriate patient selection and determining the most suitable technique for the defect and area of patient concern. Keywords—Thread lift, thread lift method, thread lift technique, thread lift procedure, threading.", "title": "" } ]
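The refactoring passage above bases its Move Method suggestions on a distance between entities (attributes/methods) and classes, checked against behavior-preservation preconditions and summarized by an entity placement metric. The fragment below is a minimal sketch of that idea only: it assumes a simple Jaccard distance over accessed-member sets as a stand-in for the paper's distance definition, the precondition checks and entity placement metric are omitted, and all names and data are hypothetical.

```python
def jaccard_distance(a, b):
    """Distance between two sets: 1 - |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def suggest_move_method(method, home_class, classes):
    """Suggest moving `method` to the class whose member set is closest
    to the set of members the method accesses.

    method      -- (name, accessed_members), accessed_members is a set
    home_class  -- name of the class currently owning the method
    classes     -- dict: class name -> set of its attributes/methods
    """
    name, accessed = method
    distances = {cls: jaccard_distance(accessed, members)
                 for cls, members in classes.items()}
    target = min(distances, key=distances.get)
    # Only suggest a move if another class is strictly closer than the owner.
    if target != home_class and distances[target] < distances[home_class]:
        return (name, home_class, target)
    return None

# Toy example: a method in class A that mostly touches members of class B.
classes = {"A": {"a1", "a2", "m"}, "B": {"b1", "b2", "b3"}}
print(suggest_move_method(("m", {"b1", "b2"}), "A", classes))  # ('m', 'A', 'B')
```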
scidocsrr
34fa03cd360074fc2aad44f2c25f4576
Annotation of Entities and Relations in Spanish Radiology Reports
[ { "docid": "9aae377bf3ebb202b13fab2cbd85f1ce", "text": "The paper describes a rule-based information extraction (IE) system developed for Polish medical texts. We present two applications designed to select data from medical documentation in Polish: mammography reports and hospital records of diabetic patients. First, we have designed a special ontology that subsequently had its concepts translated into two separate models, represented as typed feature structure (TFS) hierarchies, complying with the format required by the IE platform we adopted. Then, we used dedicated IE grammars to process documents and fill in templates provided by the models. In particular, in the grammars, we addressed such linguistic issues as: ambiguous keywords, negation, coordination or anaphoric expressions. Resolving some of these problems has been deferred to a post-processing phase where the extracted information is further grouped and structured into more complex templates. To this end, we defined special heuristic algorithms on the basis of sample data. The evaluation of the implemented procedures shows their usability for clinical data extraction tasks. For most of the evaluated templates, precision and recall well above 80% were obtained.", "title": "" }, { "docid": "804920bbd9ee11cc35e93a53b58e7e79", "text": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.", "title": "" } ]
[ { "docid": "c76d53333ae2443720178819bf23a3ea", "text": "Deng and Xu [2003] proposed a system of multiple recursive generators of prime modulus <i>p</i> and order <i>k</i>, where all nonzero coefficients of the recurrence are equal. This type of generator is efficient because only a single multiplication is required. It is common to choose <i>p</i> = 2<sup>31</sup>−1 and some multipliers to further improve the speed of the generator. In this case, some fast implementations are available without using explicit division or multiplication. For such a <i>p</i>, Deng and Xu [2003] provided specific parameters, yielding the maximum period for recurrence of order <i>k</i>, up to 120. One problem of extending it to a larger <i>k</i> is the difficulty of finding a complete factorization of <i>p</i><sup><i>k</i></sup>−1. In this article, we apply an efficient technique to find <i>k</i> such that it is easy to factor <i>p</i><sup><i>k</i></sup>−1, with <i>p</i> = 2<sup>31</sup>−1. The largest one found is <i>k</i> = 1597. To find multiple recursive generators of large order <i>k</i>, we introduce an efficient search algorithm with an early exit strategy in case of a failed search. For <i>k</i> = 1597, we constructed several efficient and portable generators with the period length approximately 10<sup>14903.1</sup>.", "title": "" }, { "docid": "9948ebbd2253021e3af53534619c5094", "text": "This paper presents a novel method to simultaneously estimate the clothed and naked 3D shapes of a person. The method needs only a single photograph of a person wearing clothing. Firstly, we learn a deformable model of human clothed body shapes from a database. Then, given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. And the correspondence between 3D shape and 2D contours is established automatically. Finally, we optimize the parameters of the deformable model in an iterative way, and then obtain the clothed and naked 3D shapes of the person simultaneously. The experimental results on real images demonstrate the effectiveness of our method.", "title": "" }, { "docid": "5387c752db7b4335a125df91372099b3", "text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. 
Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.", "title": "" }, { "docid": "0724e800d88d1d7cd1576729f975b09a", "text": "Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distribution and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco bay region. Overall the recurrent neural network model yields the best prediction accuracies compared with LMBP and RBF networks. While at the present earthquake prediction cannot be made with a high degree of certainty this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.", "title": "" }, { "docid": "f8b56265c69727f55cc5debfc6958e41", "text": "Ground control of unmanned aerial vehicles (UAV) is a key to the advancement of this technology for commercial purposes. The need for reliable ground control arises in scenarios where human intervention is necessary, e.g. handover situations when autonomous systems fail. Manual flights are also needed for collecting diverse datasets to train deep neural network-based control systems. This axiom is even more prominent for the case of unmanned flying robots where there is no simple solution to capture optimal navigation footage. In such scenarios, improving the ground control and developing better autonomous systems are two sides of the same coin. To improve the ground control experience, and thus the quality of the footage, we propose to upgrade onboard teleoperation systems to a fully immersive setup that provides operators with a stereoscopic first person view (FPV) through a virtual reality (VR) head-mounted display. We tested users (n = 7) by asking them to fly our drone on the field. Test flights showed that operators flying our system can take off, fly, and land successfully while wearing VR headsets. In addition, we ran two experiments with prerecorded videos of the flights and walks to a wider set of participants (n = 69 and n = 20) to compare the proposed technology to the experience provided by current drone FPV solutions that only include monoscopic vision. Our immersive stereoscopic setup enables higher accuracy depth perception, which has clear implications for achieving better teleoperation and unmanned navigation. Our studies show comprehensive data on the impact of motion and simulator sickness in case of stereoscopic setup. 
We present the device specifications as well as the measures that improve teleoperation experience and reduce induced simulator sickness. Our approach provides higher perception fidelity during flights, which leads to a more precise better teleoperation and ultimately translates into better flight data for training deep UAV control policies.", "title": "" }, { "docid": "60465268d2ede9a7d8b374ac05df0d46", "text": "Nobody likes performance reviews. Subordinates are terrified they'll hear nothing but criticism. Bosses think their direct reports will respond to even the mildest criticism with anger or tears. The result? Everyone keeps quiet. That's unfortunate, because most people need help figuring out how to improve their performance and advance their careers. This fear of feedback doesn't come into play just during annual reviews. At least half the executives with whom the authors have worked never ask for feedback. Many expect the worst: heated arguments, even threats of dismissal. So rather than seek feedback, people try to guess what their bosses are thinking. Fears and assumptions about feedback often manifest themselves in psychologically maladaptive behaviors such as procrastination, denial, brooding, jealousy, and self-sabotage. But there's hope, say the authors. Those who learn adaptive techniques can free themselves from destructive responses. They'll be able to deal with feedback better if they acknowledge negative emotions, reframe fear and criticism constructively, develop realistic goals, create support systems, and reward themselves for achievements along the way. Once you've begun to alter your maladaptive behaviors, you can begin seeking regular feedback from your boss. The authors take you through four steps for doing just that: self-assessment, external assessment, absorbing the feedback, and taking action toward change. Organizations profit when employees ask for feedback and deal well with criticism. Once people begin to know how they are doing relative to management's priorities, their work becomes better aligned with organizational goals. What's more, they begin to transform a feedback-averse environment into a more honest and open one, in turn improving performance throughout the organization.", "title": "" }, { "docid": "a5776d4da32a93c69b18c696c717e634", "text": "Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed Deep Flow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. Deep Flow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. 
Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.", "title": "" }, { "docid": "92dbb257f6d087ce61f5c560c34bf46f", "text": "This study investigates eCommerce adoption in family run SMEs (small and medium sized enterprises). Specifically, the objectives of the study are twofold: (a) to examine environmental and organisational determinants of eCommerce adoption in the family business context; (b) to explore the moderating effect of business strategic orientation on the relationships between adoption determinants and adoption decision. A quantitative questionnaire survey was executed. The sampling frame was outlined based on the OneSource database and 88 companies were involved. Results of logistic regression analyses proffer support that ‘external pressure’ and ‘perceived benefits’ are predictors of eCommerce adoption. Moreover, the findings indicate that the strategic orientation of family businesses will function as a moderator in the adoption process. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d", "text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. 
The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.", "title": "" }, { "docid": "ede1f31a32e59d29ee08c64c1a6ed5f7", "text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.", "title": "" }, { "docid": "cae269a1eee20846aa2ea83cbf1d0ecc", "text": "Metformin has utility in cancer prevention and treatment, though the mechanisms for these effects remain elusive. Through genetic screening in C. elegans, we uncover two metformin response elements: the nuclear pore complex (NPC) and acyl-CoA dehydrogenase family member-10 (ACAD10). We demonstrate that biguanides inhibit growth by inhibiting mitochondrial respiratory capacity, which restrains transit of the RagA-RagC GTPase heterodimer through the NPC. Nuclear exclusion renders RagC incapable of gaining the GDP-bound state necessary to stimulate mTORC1. Biguanide-induced inactivation of mTORC1 subsequently inhibits growth through transcriptional induction of ACAD10. This ancient metformin response pathway is conserved from worms to humans. Both restricted nuclear pore transit and upregulation of ACAD10 are required for biguanides to reduce viability in melanoma and pancreatic cancer cells, and to extend C. elegans lifespan. This pathway provides a unified mechanism by which metformin kills cancer cells and extends lifespan, and illuminates potential cancer targets. PAPERCLIP.", "title": "" }, { "docid": "fb915584f23482986e672b1a38993ca1", "text": "We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms—including those alleviating the problem above through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.", "title": "" }, { "docid": "9e292d43355dbdbcf6360c88e49ba38b", "text": "This paper proposes stacked dual-patch CP antenna for GPS and SDMB services. The characteristic of CP at dual-frequency bands is achieved with a circular patch truncated corners with ears at diagonal direction. According to the dimensions of the truncated corners as well as spacing between centers of the two via-holes, the axial ratio of the CP antenna can be controlled. The good return loss results were obtained both at GPS and SDMB bands. 
The measured gains of the antenna system are 2.3 dBi and 2.4 dBi in GPS and SDMB bands, respectively. The measured axial ratio is slightly shifted frequencies due to diameter variation of via-holes and the spacing between lower patch and upper patch. The proposed low profile, low-cost fabrication, dual circularly polarization, and separated excitation ports make the proposed stacked antenna an applicable solution as a multi-functional antenna for GPS and SDMB operation on vehicle.", "title": "" }, { "docid": "60c976cb53d5128039e752e5f797f110", "text": "This essay presents and discusses the developing role of virtual and augmented reality technologies in education. Addressing the challenges in adapting such technologies to focus on improving students’ learning outcomes, the author discusses the inclusion of experiential modes as a vehicle for improving students’ knowledge acquisition. Stakeholders in the educational role of technology include students, faculty members, institutions, and manufacturers. While the benefits of such technologies are still under investigation, the technology landscape offers opportunities to enhance face-to-face and online teaching, including contributions in the understanding of abstract concepts and training in real environments and situations. Barriers to technology use involve limited adoption of augmented and virtual reality technologies, and, more directly, necessary training of teachers in using such technologies within meaningful educational contexts. The author proposes a six-step methodology to aid adoption of these technologies as basic elements within the regular education: training teachers; developing conceptual prototypes; teamwork involving the teacher, a technical programmer, and an educational architect; and producing the experience, which then provides results in the subsequent two phases wherein teachers are trained to apply augmentedand virtual-reality solutions within their teaching methodology using an available subject-specific experience and then finally implementing the use of the experience in a regular subject with students. The essay concludes with discussion of the business opportunities facing virtual reality in face-to-face education as well as augmented and virtual reality in online education.", "title": "" }, { "docid": "86ad395a553495de5f297a2b5fde3f0e", "text": "⇒ NOT written, but spoken language. [Intuitions come from written.] ⇒ NOT meaning as thing, but use of linguistic forms for communicative functions o Direct att. in shared conceptual space like gestures (but w/conventions) ⇒ NOT grammatical rules, but patterns of use => schemas o Constructions themselves as complex symbols \"She sneezed him the ball\" o NOT 'a grammar' but a structured inventory of constructions: continuum of regularity => idiomaticity grammaticality = normativity • Many complexities = \"unification\" of constructions w/ incompatibilities o NOT innate UG, but \"teeming modularity\" (1) symbols, pred-arg structure,", "title": "" }, { "docid": "53a55976808757ceb4b5533af578aad9", "text": "Vehicular Ad-Hoc Networks (VANETs) will play an important role in Smart Cities and will support the development of not only safety applications, but also car smart video surveillance services. Recent improvements in multimedia over VANETs allow drivers, passengers, and rescue teams to capture, share, and access on-road multimedia services. 
Vehicles can cooperate with each other to transmit live flows of traffic accidents or disasters and provide drivers, passengers, and rescue teams with rich visual information about a monitored area. Since humans will watch the videos, their distribution must be done by considering the provided Quality of Experience (QoE) even in multi-hop, multi-path, and dynamic environments. This article introduces an application framework to handle this kind of service and a routing protocol, the DBD (Distributed Beaconless Dissemination), that enhances the dissemination of live video flows on multimedia highway VANETs. DBD uses a backbone-based approach to create and maintain persistent and high quality routes during the video delivery in opportunistic Vehicle to Vehicle (V2V) scenarios. It also improves the performance of the IEEE 802.11p MAC layer, by solving the Spurious Forwarding (SF) problem, while increasing the packet delivery ratio and reducing the forwarding delay. Performance evaluation results show the benefits of DBD compared to existing works in forwarding videos over VANETs, where the main objective and subjective QoE results are measured. Safety and video surveillance car applications are key Information and Communication Technologies (ICT) services for smart city scenarios and have been attracting considerable attention from governments, car manufacturers, academia, and society [1]. Nowadays, the distribution of real-time multimedia content over Vehicular Ad-Hoc Networks (VANETs) is becoming a reality and allowing drivers/passengers to have new experiences with on-road videos in a smart city [2,3]. According to Cisco, video traffic will represent over 90% of the global IP data in a few years, where thousands of users will produce, share, and consume multimedia services ubiquitously, including in their vehicles. Multimedia VANETs are well-suited for capturing and sharing environmental monitoring, surveillance, traffic accidents, and disaster-based video smart city applications. Live streaming video flows provide users and authorities (e.g., first responder teams and paramedics) with more precise information than simple text messages and allow them to determine a suitable action, while reducing human reaction times [4]. Vehicles can cooperate with each other to disseminate short videos of dangerous situations to visually inform drivers and rescue teams about them both in the city and on a highway. …", "title": "" }, { "docid": "2c4fed71ee9d658516b017a924ad6589", "text": "As the concept of friction stir welding is relatively new, there are many areas which need thorough investigation to optimize the process and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, like rotational and translational speeds, tool tilt angle, tool geometry, etc., are to be controlled. Aluminum alloys of the 5xxx series and their welded joints show good resistance to corrosion in sea water.
Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.", "title": "" }, { "docid": "ad7a5bccf168ac3b13e13ccf12a94f7d", "text": "As one of the most popular social media platforms today, Twitter provides people with an effective way to communicate and interact with each other. Through these interactions, influence among users gradually emerges and changes people's opinions. Although previous work has studied interpersonal influence as the probability of activating others during information diffusion, they ignore an important fact that information diffusion is the result of influence, while dynamic interactions among users produce influence. In this article, the authors propose a novel temporal influence model to learn users' opinion behaviors regarding a specific topic by exploring how influence emerges during communications. The experiments show that their model performs better than other influence models with different influence assumptions when predicting users' future opinions, especially for the users with high opinion diversity.", "title": "" }, { "docid": "dc6119045a87d7cea34db49554549926", "text": "Multi-tenancy is a relatively new software architecture principle in the realm of the Software as a Service (SaaS) business model. It allows to make full use of the economy of scale, as multiple customers – “tenants” – share the same application and database instance. All the while, the tenants enjoy a highly configurable application, making it appear that the application is deployed on a dedicated server. The major benefits of multi-tenancy are increased utilization of hardware resources and improved ease of maintenance, resulting in lower overall application costs, making the technology attractive for service providers targeting small and medium enterprises (SME). In our paper, we identify some of the core challenges of implementing multi-tenancy. Furthermore, we present a conceptual reengineering approach to support the migration of single-tenant applications into multi-tenant applications.", "title": "" } ]
scidocsrr
82fc35477773805d9206768a27b1c4e2
Virtual Team Training Model Transactive Memory Systems Virtual Team Training Model
[ { "docid": "a261f7df775cbcc1f2b3a5f68fba6029", "text": "As the role of virtual teams in organizations becomes increasingly important, it is crucial that companies identify and leverage team members' knowledge. Yet, little is known of how virtual team members come to recognize one another's knowledge, trust one another's expertise, and coordinate their knowledge effectively. In this study, we develop a model of how three behavioral dimensions associated with transactive memory systems (TMS) in virtual teams—expertise location, task–knowledge coordination, and cognition-based trust—and their impacts on team performance change over time. Drawing on the data from a study that involves 38 virtual teams of MBA students performing a complex web-based business simulation game over an 8-week period, we found that in the early stage of the project, the frequency and volume of task-oriented communications among team members played an important role in forming expertise location and cognition-based trust. Once TMS were established, however, task-oriented communication became less important. Instead, toward the end of the project, task–knowledge coordination emerges as a key construct that influences team performance, mediating the impact of all other constructs. Our study demonstrates that TMS can be formed even in virtual team environments where interactions take place solely through electronic media, although they take a relatively long time to develop. Furthermore, our findings show that, once developed, TMS become essential to performing tasks effectively in virtual teams.", "title": "" }, { "docid": "71fa9602c24916b8c868c24ba50a74e8", "text": "In this paper, we review the research on virtual teams in an effort to assess the state of the literature. We start with an examination of the definitions of virtual teams used and propose an integrative definition that suggests that all teams may be defined in terms of their extent of virtualness. Next, we review findings related to team inputs, processes, and outcomes, and identify areas of agreement and inconsistency in the literature on virtual teams. Based on this review, we suggest avenues for future research, including methodological and theoretical considerations that are important to advancing our understanding of virtual teams.", "title": "" }, { "docid": "dff6c531a57d890aaae44c04ff5d3037", "text": "OBJECTIVE\nWe highlight some of the key discoveries and developments in the area of team performance over the past 50 years, especially as reflected in the pages of Human Factors.\n\n\nBACKGROUND\nTeams increasingly have become a way of life in many organizations, and research has kept up with the pace.\n\n\nMETHOD\nWe have characterized progress in the field in terms of eight discoveries and five challenges.\n\n\nRESULTS\nDiscoveries pertain to the importance of shared cognition, the measurement of shared cognition, advances in team training, the use of synthetic task environments for research, factors influencing team effectiveness, models of team effectiveness, a multidisciplinary perspective, and training and technological interventions designed to improve team effectiveness.
Challenges that are faced in the coming decades include an increased emphasis on team cognition; reconfigurable, adaptive teams; multicultural influences; and the need for naturalistic study and better measurement.\n\n\nCONCLUSION\nWork in human factors has contributed significantly to the science and practice of teams, teamwork, and team performance. Future work must keep pace with the increasing use of teams in organizations.\n\n\nAPPLICATION\nThe science of teams contributes to team effectiveness in the same way that the science of individual performance contributes to individual effectiveness.", "title": "" } ]
[ { "docid": "c88e0cd1ef334fd427a2c33240e8f5fd", "text": "While smartphones and related mobile technologies are recognized as flexible and powerful tools that, when used prudently, can augment human cognition, there is also a growing perception that habitual involvement with these devices may have a negative and lasting impact on users' ability to think, remember, pay attention, and regulate emotion. The present review considers an intensifying, though still limited, area of research exploring the potential cognitive impacts of smartphone-related habits, and seeks to determine in which domains of functioning there is accruing evidence of a significant relationship between smartphone technology and cognitive performance, and in which domains the scientific literature is not yet mature enough to endorse any firm conclusions. We focus our review primarily on three facets of cognition that are clearly implicated in public discourse regarding the impacts of mobile technology - attention, memory, and delay of gratification - and then consider evidence regarding the broader relationships between smartphone habits and everyday cognitive functioning. Along the way, we highlight compelling findings, discuss limitations with respect to empirical methodology and interpretation, and offer suggestions for how the field might progress toward a more coherent and robust area of scientific inquiry.", "title": "" }, { "docid": "16bafec4544454a948d72f26861d0313", "text": "Measuring the similarity between documents is an important operation in the text processing field. In this paper, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: a) The feature appears in both documents, b) the feature appears in only one document, and c) the feature appears in none of the documents. For the first case, the similarity increases as the difference between the two involved feature values decreases. Furthermore, the contribution of the difference is normally scaled. For the second case, a fixed value is contributed to the similarity. For the last case, the feature has no contribution to the similarity. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems. The results show that the performance obtained by the proposed measure is better than that achieved by other measures.", "title": "" }, { "docid": "b33eaecf2aff15ecb2f0d256bde7e1bb", "text": "This paper presents an objective evaluation of various eye movement-based biometric features and their ability to accurately and precisely distinguish unique individuals. Eye movements are uniquely counterfeit resistant due to the complex neurological interactions and the extraocular muscle properties involved in their generation. Considered biometric candidates cover a number of basic eye movements and their aggregated scanpath characteristics, including: fixation count, average fixation duration, average saccade amplitudes, average saccade velocities, average saccade peak velocities, the velocity waveform, scanpath length, scanpath area, regions of interest, scanpath inflections, the amplitude-duration relationship, the main sequence relationship, and the pairwise distance between fixations. 
In addition, an information fusion method for combining these metrics into a single identification algorithm is presented. With limited testing, this method was able to identify subjects with an equal error rate of 27%. These results indicate that scanpath-based biometric identification holds promise as a behavioral biometric technique.", "title": "" }, { "docid": "c9b568ea5553e0364d8f682b6584eb52", "text": "Photoplethysmography (PPG) is a simple and cost-effective method to assess cardiovascular-related parameters such as heart rate, arterial blood oxygen saturation and blood pressure. The PPG signal consists of not only synchronized heart beats, but also the rhythms of respiration. The PPG sensor, which consists of infrared light-emitting diodes (LEDs) and a photodetector, allows a simple, reliable and low-cost means of monitoring the pulse rate. In this project, PPG signals are acquired through a customized data acquisition process using an Arduino board to control the pulse circuit and to obtain the PPG signals from human subjects. Using signal processing techniques, including filters, peak detection, wavelet transform analysis and power spectral density, the heart rate (HR) and breathing rate (BR) can be obtained simultaneously. Estimations of HR and BR are conducted using a MATLAB algorithm developed based on wavelet decomposition techniques to extract the heart and respiration activities from the PPG signals. The values of HR and BR obtained using the algorithm are similar to the values obtained by manual estimation for seven sample subjects, where the range of percentage errors is small: about 0–9.5% for the breathing rate and 2.1–5.7% for the heart rate.", "title": "" }, { "docid": "03cea891c4a9fdc77832979267f9dca9", "text": "Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g. a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs), and process saving may be obtained through the use of the CATCH operator. The use of CATCH, in particular, allows an elegant treatment of process saving.\n We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design.", "title": "" }, { "docid": "7acd253de05b3eb27d0abccbcb45367e", "text": "High school programming competitions often follow the traditional model of collegiate competitions, exemplified by the ACM International Collegiate Programming Contest (ICPC). This tradition has been reinforced by the nature of Advanced Placement Computer Science (AP CS A), for which ICPC-style problems are considered an excellent practice regimen.
As more and more students in high school computer science courses approach the field from broader starting points, such as Exploring Computer Science (ECS) or the new AP CS Principles course, an analogous structure for high school outreach events becomes of greater importance.\n This paper describes our work on developing a Scratch-based alternative competition for high school students that can be run in parallel with a traditional morning of ICPC-style problems.", "title": "" }, { "docid": "c7c5fde8197d87f2551a2897d5fd4487", "text": "The Parallel Meaning Bank is a corpus of translations annotated with shared, formal meaning representations comprising over 11 million words divided over four languages (English, German, Italian, and Dutch). Our approach is based on cross-lingual projection: automatically produced (and manually corrected) semantic annotations for English sentences are mapped onto their word-aligned translations, assuming that the translations are meaning-preserving. The semantic annotation consists of five main steps: (i) segmentation of the text into sentences and lexical items; (ii) syntactic parsing with Combinatory Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and (v) compositional semantic analysis based on Discourse Representation Theory. These steps are performed using statistical models trained in a semisupervised manner. The employed annotation models are all language-neutral. Our first results are promising.", "title": "" }, { "docid": "852ff3b52b4bf8509025cb5cb751899f", "text": "Digital images are ubiquitous in our modern lives, with uses ranging from social media to news, and even scientific papers. For this reason, it is crucial to evaluate how accurate people are when performing the task of identifying doctored images. In this paper, we performed an extensive user study evaluating subjects' capacity to detect fake images. After observing an image, users were asked whether it had been altered or not. If the user answered that the image had been altered, he had to provide evidence in the form of a click on the image. We collected 17,208 individual answers from 383 users, using 177 images selected from public forensic databases. Different from other previous studies, our method proposes different ways to avoid lucky guesses when evaluating users' answers. Our results indicate that people are inaccurate at differentiating between altered and non-altered images, with an accuracy of 58%, and only identify the modified images 46.5% of the time. We also track user features such as age, answering time, and confidence, providing a deeper analysis of how such variables influence the users' performance.", "title": "" }, { "docid": "ce404452a843d18e4673d0dcf6cf01b1", "text": "We propose a formal mathematical model for sparse representations in neocortex based on a neuron model and associated operations. The design of our model neuron is inspired by recent experimental findings on active dendritic processing and NMDA spikes in pyramidal neurons. We derive a number of scaling laws that characterize the accuracy of such neurons in detecting activation patterns in a neuronal population under adverse conditions. We introduce the union property which shows that synapses for multiple patterns can be randomly mixed together within a segment and still lead to highly accurate recognition. We describe simulation results that provide overall insight into sparse representations as well as two primary results.
First we show that pattern recognition by a neuron can be extremely accurate and robust with high dimensional sparse inputs even when using a tiny number of synapses to recognize large patterns. Second, equations representing recognition accuracy of a dendrite predict optimal NMDA spiking thresholds under a generous set of assumptions. The prediction tightly matches NMDA spiking thresholds measured in the literature. Our model neuron matches many of the known properties of pyramidal neurons. As such the theory provides a unified and practical mathematical framework for understanding the benefits and limits of sparse representations in cortical networks.", "title": "" }, { "docid": "cbde4d1109ab08833da661a6ea009bc8", "text": "The performance of ad hoc networks depends on cooperation and trust among distributed nodes. To enhance security in ad hoc networks, it is important to evaluate trustworthiness of other nodes without centralized authorities. In this paper, we present an information theoretic framework to quantitatively measure trust and model trust propagation in ad hoc networks. In the proposed framework, trust is a measure of uncertainty with its value represented by entropy. We develop four Axioms that address the basic understanding of trust and the rules for trust propagation. Based on these axioms, we present two trust models: entropy-based model and probability-based model, which satisfy all the axioms. Techniques of trust establishment and trust update are presented to obtain trust values from observation. The proposed trust evaluation method and trust models are employed in ad hoc networks for secure ad hoc routing and malicious node detection. A distributed scheme is designed to acquire, maintain, and update trust records associated with the behaviors of nodes' forwarding packets and the behaviors of making recommendations about other nodes. Simulations show that the proposed trust evaluation system can significantly improve the network throughput as well as effectively detect malicious behaviors in ad hoc networks.", "title": "" }, { "docid": "40555c2dc50a099ff129f60631f59c0d", "text": "As new technologies and information delivery systems emerge, the way in which individuals search for information to support research, teaching, and creative activities is changing. To understand different aspects of researchers’ information-seeking behavior, this article surveyed 2,063 academic researchers in natural science, engineering, and medical science from five research universities in the United States. A Web-based, in-depth questionnaire was designed to quantify researchers’ information searching, information use, and information storage behaviors. Descriptive statistics are reported.", "title": "" }, { "docid": "bfb0de9970cf1970f98c4fa78c2ec4d7", "text": "The problem of matching between binaries is important for software copyright enforcement as well as for identifying disclosed vulnerabilities in software. We present a search engine prototype called Rendezvous which enables indexing and searching for code in binary form. Rendezvous identifies binary code using a statistical model comprising instruction mnemonics, control flow sub-graphs and data constants which are simple to extract from a disassembly, yet normalising with respect to different compilers and optimisations. Experiments show that Rendezvous achieves F2 measures of 86.7% and 83.0% on the GNU C library compiled with different compiler optimisations and the GNU coreutils suite compiled with gcc and clang respectively. 
These two code bases together comprise more than one million lines of code. Rendezvous will bring significant changes to the way patch management and copyright enforcement are currently performed.", "title": "" }, { "docid": "002fe3efae0fc9f88690369496ce5e7d", "text": "Experimental evidence suggests that emotions can both speed up and slow down the internal clock. Speeding up has been observed for to-be-timed emotional stimuli that have the capacity to sustain attention, whereas slowing down has been observed for to-be-timed neutral stimuli that are presented in the context of emotional distractors. These effects have been explained by mechanisms that involve changes in bodily arousal, attention, or sentience. A review of these mechanisms suggests both merits and difficulties in the explanation of the emotion-timing link. Therefore, a hybrid mechanism involving stimulus-specific sentient representations is proposed as a candidate for mediating emotional influences on time. According to this proposal, emotional events enhance sentient representations, which in turn support temporal estimates. Emotional stimuli with a larger share in one's sentience are then perceived as longer than neutral stimuli with a smaller share.", "title": "" }, { "docid": "c091e5b24dc252949b3df837969e263a", "text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computing, those that involve data management hold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwidth channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.", "title": "" }, { "docid": "848aae58854681e75fae293e2f8d2fc5", "text": "Over the last several decades, computer vision researchers have been devoted to finding good features to solve different tasks, such as object recognition, object detection, object segmentation, activity recognition and so forth. Ideal features transform raw pixel intensity values to a representation in which these computer vision problems are easier to solve. Recently, deep features from convolutional neural networks (CNNs) have attracted many researchers in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function. More recently, features learned from large-scale image datasets have proved to be very effective and generic for many computer vision tasks. Features learned for the recognition task can be used in the object detection task. This work uncovers the principles that lead to these generic feature representations in transfer learning, which does not require training on the target dataset again but instead transfers the rich features of a CNN learned on the ImageNet dataset. We begin by summarizing related prior work, particularly papers on object recognition, object detection and segmentation. We introduce deep features to computer vision tasks in intelligent transportation systems. We apply deep features to the object detection task, especially the vehicle detection task.
To make full use of objectness proposals, we apply a proposal generator to the road marking detection and recognition task. Third, to fully understand the transportation situation, we introduce deep features into scene understanding. We evaluate each task on different public datasets and show that our framework is robust.", "title": "" }, { "docid": "00f7e2b68ddb3fd72c2294f946a4c3b9", "text": "The specific problems encountered in the design of near-field focused planar microstrip arrays for RFID (Radio Frequency IDentification) readers are described. In particular, the paper analyzes the case of a prototype operating at 2.4 GHz, which has been designed and characterized. Improvements with respect to conventional far-field focused arrays (equal phase arrays) are discussed and quantified.", "title": "" }, { "docid": "0c06c0e4fec9a2cc34c38161e142032d", "text": "We introduce a novel high-level security metrics objective taxonomization model for software-intensive systems. The model systematizes and organizes security metrics development activities. It focuses on the security level and security performance of technical systems while taking into account the alignment of metrics objectives with different business and other management goals. The model emphasizes the roles of security-enforcing mechanisms, the overall security quality of the system under investigation, and secure system lifecycle, project and business management. Security correctness, effectiveness and efficiency are seen as the fundamental measurement objectives, determining the directions for more detailed security metrics development. Integration of the proposed model with risk-driven security metrics development approaches is also discussed.", "title": "" }, { "docid": "7b4e173511f0a572f77ab066448eb8b7", "text": "Live wide-area persistent surveillance (WAPS) systems must provide effective multi-target tracking on downlinked video streams in real time. This paper presents the first published aerial tracking system that is documented to process over 100 megapixels per second. The implementation addresses the challenges with the mosaicked, low-resolution, grayscale NITF imagery provided by most currently fielded WAPS platforms and the flexible computation architecture required to provide real-time performance. This paper also provides ground-truth for repeatable evaluation of wide-area persistent surveillance on a 2009 dataset collected by AFRL [1] that is available to the public as well as a quantitative analysis of this real-time implementation. To our knowledge, this is the only publication that (1) provides details of a real-time implementation for detection and tracking in (2) mosaicked, composed imagery from a fielded WAPS sensor, and (3) provides annotation data and quantitative analysis for repeatable WAPS tracking experimentation in the computer vision community.", "title": "" }, { "docid": "889747dbf541583475cbce74c42dc616", "text": "This paper presents an analysis of FastSLAM - a Rao-Blackwellised particle filter formulation of simultaneous localisation and mapping. It shows that the algorithm degenerates with time, regardless of the number of particles used or the density of landmarks within the environment, and would always produce optimistic estimates of uncertainty in the long-term.
In essence, FastSLAM behaves like a non-optimal local search algorithm; in the short-term it may produce consistent uncertainty estimates but, in the long-term, it is unable to adequately explore the state-space to be a reasonable Bayesian estimator. However, the number of particles and landmarks does affect the accuracy of the estimated mean and, given sufficient particles, FastSLAM can produce good non-stochastic estimates in practice. FastSLAM also has several practical advantages, particularly with regard to data association, and would probably work well in combination with other versions of stochastic SLAM, such as EKF-based SLAM", "title": "" }, { "docid": "4f03d46e3d10ea2b9e35eeb3774a2736", "text": "Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.", "title": "" } ]
scidocsrr
3cc92d1548f6451f1ebb8894fda732aa
A New Triple-Band Circular Ring Patch Antenna With Monopole-Like Radiation Pattern Using a Hybrid Technique
[ { "docid": "fcd9a80d35a24c7222392c11d3376c72", "text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic in that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, the resonant frequency and return loss bandwidth of one band of the proposed hybrid antenna allow almost independent optimization without noticeably affecting those of the other band.", "title": "" } ]
[ { "docid": "7554941bfcde72c640419ab591f02bbc", "text": "An ever-growing number of cars are equipped with radars, mainly used for drivers' and passengers' safety. In particular, according to the European Telecommunications Standards Institute (ETSI), one specific frequency band is dedicated to automatic cruise control long-range radar operating around 77 GHz (W-band). After a discussion of the Mie scattering formulation applied to a weather radar working in the W-band, a new Z-R equation to be used for correct rain estimation is proposed. Functional requirements to adapt an automatic cruise control long-range radar to a mini-weather radar are analyzed and the technical specifications are evaluated. Results provide the basis for the use of a 77 GHz automotive anti-collision radar for meteorological purposes.", "title": "" }, { "docid": "fca76468b4d72fd5ef7c85b5d56548b9", "text": "Cloud providers, like Amazon, offer their data centers' computational and storage capacities for lease to paying customers. High electricity consumption, associated with running a data center, not only reflects on its carbon footprint, but also increases the costs of running the data center itself. This paper addresses the problem of maximizing the revenues of Cloud providers by trimming down their electricity costs. As a solution, allocation policies based on dynamically powering servers on and off are introduced and evaluated. The policies aim at satisfying the conflicting goals of maximizing the users' experience while minimizing the amount of consumed electricity. The results of numerical experiments and simulations are described, showing that the proposed scheme performs well under different traffic conditions.", "title": "" }, { "docid": "e9225e726f019f6c72418a2e41f96caf", "text": "We build a chat bot with iterative content exploration that leads a user through a personalized knowledge acquisition session. The chat bot is designed as an automated customer support or product recommendation agent assisting a user in learning product features, product usability, suitability, troubleshooting and other related tasks. To control the user navigation through content, we extend the notion of a linguistic discourse tree (DT) towards a set of documents with multiple sections covering a topic. For a given paragraph, a DT is built by DT parsers. We then combine DTs for the paragraphs of documents to form what we call an extended DT, which is a basis for interactive content exploration facilitated by the chat bot. To provide cohesive answers, we use a measure of rhetoric agreement between a question and an answer by tree kernel learning of their DTs.", "title": "" }, { "docid": "327bbbee0087e15db04780291ded9fe6", "text": "Semantic Reliability is a novel correctness criterion for multicast protocols based on the concept of message obsolescence: A message becomes obsolete when its content or purpose is superseded by a subsequent message. By exploiting obsolescence, a reliable multicast protocol may drop irrelevant messages to find additional buffer space for new messages. This makes the multicast protocol more resilient to transient performance perturbations of group members, thus improving throughput stability. This paper describes our experience in developing a suite of semantically reliable protocols. It summarizes the motivation, definition, and algorithmic issues and presents performance figures obtained with a running implementation.
The data obtained experimentally is compared with analytic and simulation models. This comparison allows us to confirm the validity of these models and the usefulness of the approach. Finally, the paper reports the application of our prototype to distributed multiplayer games.", "title": "" }, { "docid": "d8ce92b054fc425a5db5bf17a62c6308", "text": "The possibility that wind turbine noise (WTN) affects human health remains controversial. The current analysis presents results related to WTN annoyance reported by randomly selected participants (606 males, 632 females), aged 18-79, living between 0.25 and 11.22 km from wind turbines. WTN levels reached 46 dB, and for each 5 dB increase in WTN levels, the odds of reporting to be either very or extremely (i.e., highly) annoyed increased by 2.60 [95% confidence interval: (1.92, 3.58), p < 0.0001]. Multiple regression models had R(2)'s up to 58%, with approximately 9% attributed to WTN level. Variables associated with WTN annoyance included, but were not limited to, other wind turbine-related annoyances, personal benefit, noise sensitivity, physical safety concerns, property ownership, and province. Annoyance was related to several reported measures of health and well-being, although these associations were statistically weak (R(2 )< 9%), independent of WTN levels, and not retained in multiple regression models. The role of community tolerance level as a complement and/or an alternative to multiple regression in predicting the prevalence of WTN annoyance is also provided. The analysis suggests that communities are between 11 and 26 dB less tolerant of WTN than of other transportation noise sources.", "title": "" }, { "docid": "adf69030a68ed3bf6fc4d008c50ac5b5", "text": "Many patients with low back and/or pelvic girdle pain feel relief after application of a pelvic belt. External compression might unload painful ligaments and joints, but the exact mechanical effect on pelvic structures, especially in (active) upright position, is still unknown. In the present study, a static three-dimensional (3-D) pelvic model was used to simulate compression at the level of anterior superior iliac spine and the greater trochanter. The model optimised forces in 100 muscles, 8 ligaments and 8 joints in upright trunk, pelvis and upper legs using a criterion of minimising maximum muscle stress. Initially, abdominal muscles, sacrotuberal ligaments and vertical sacroiliac joints (SIJ) shear forces mainly balanced a trunk weight of 500N in upright position. Application of 50N medial compression force at the anterior superior iliac spine (equivalent to 25N belt tension force) deactivated some dorsal hip muscles and reduced the maximum muscle stress by 37%. Increasing the compression up to 100N reduced the vertical SIJ shear force by 10% and increased SIJ compression force with 52%. Shifting the medial compression force of 100N in steps of 10N to the greater trochanter did not change the muscle activation pattern but further increased SIJ compression force by 40% compared to coxal compression. Moreover, the passive ligament forces were distributed over the sacrotuberal, the sacrospinal and the posterior ligaments. The findings support the cause-related designing of new pelvic belts to unload painful pelvic ligaments or muscles in upright posture.", "title": "" }, { "docid": "d5081c1f13d06b43386e2db276351abd", "text": "We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. 
We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity-normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy but leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity modelling based on segmentation using expectation maximisation (EM) and optimisation via graph cuts. The segmentation pipeline is evaluated with two data cohorts: IBSR data (N=18, six subcortical structures: thalamus, caudate, putamen, pallidum, hippocampus, amygdala) and ADNI data (N=60, hippocampus). The average similarity index between automatically and manually generated volumes was 0.849 (IBSR, six subcortical structures) and 0.880 (ADNI, hippocampus). The correlation coefficient for hippocampal volumes was 0.95 with the ADNI data. The computation time using a standard multicore PC computer was about 3-4 min. Our results compare favourably with other recently published results.", "title": "" }, { "docid": "4967d450275657eaf3b87fc5c0079e2a", "text": "Recommender systems are software tools used to generate and provide suggestions for items and other entities to the users by exploiting various strategies. Hybrid recommender systems combine two or more recommendation strategies in different ways to benefit from their complementary advantages. This systematic literature review presents the state of the art in hybrid recommender systems of the last decade. It is the first quantitative review work completely focused on hybrid recommenders. We address the most relevant problems considered and present the associated data mining and recommendation techniques used to overcome them. We also explore the hybridization classes each hybrid recommender belongs to, the application domains, the evaluation process and proposed future research directions. Based on our findings, most of the studies combine collaborative filtering with another technique, often in a weighted way. Also, cold-start and data sparsity are the two traditional and top problems being addressed in 23 and 22 studies each, while movies and movie datasets are still widely used by most of the authors. As most of the studies are evaluated by comparisons with similar methods using accuracy metrics, providing more credible and user-oriented evaluations remains a typical challenge. Besides this, newer challenges were also identified, such as responding to the variation of user context, evolving user tastes or providing cross-domain recommendations. Being a hot topic, hybrid recommenders represent a good basis with which to respond accordingly by exploring newer opportunities such as contextualizing recommendations, involving parallel hybrid algorithms, processing larger datasets, etc.", "title": "" }, { "docid": "0e4cd983047da489ee3b28511aea573a", "text": "While bottom-up and top-down processes have shown effectiveness in predicting attention and eye fixation maps on images, in this paper, inspired by the perceptual organization mechanism before attention selection, we propose to utilize figure-ground maps for the purpose. So as to take both pixel-wise and region-wise interactions into consideration when predicting label probabilities for each pixel, we develop a context-aware model based on multiple segmentation to obtain final results.
The MIT attention dataset [14] is finally used to evaluate both the new features and the model. Quantitative experiments demonstrate that figure-ground cues are valid in predicting attention selection, and our proposed model produces improvements over the baseline method.", "title": "" }, { "docid": "8e2e941c568328743c3fc56fda06b000", "text": "Neuroscientific research has consistently found that the perception of an affective state in another activates the observer's own neural substrates for the corresponding state, which is likely the neural mechanism for \"true empathy.\" However, to date there has not been a brain-imaging investigation of so-called \"cognitive empathy\", whereby one \"actively projects oneself into the shoes of another person,\" imagining someone's personal, emotional experience as if it were one's own. In order to investigate this process, we conducted a combined psychophysiology and PET study in which participants imagined: (1) a personal experience of fear or anger from their own past; (2) an equivalent experience from another person as if it were happening to them; and (3) a nonemotional experience from their own past. When participants could relate to the scenario of the other, they produced patterns of psychophysiological and neuroimaging activation equivalent to those of personal emotional imagery, but when they could not relate to the other's story, differences emerged on all measures, e.g., decreased psychophysiological responses and recruitment of a region between the inferior temporal and fusiform gyri. The substrates of cognitive empathy overlap with those of personal feeling states to the extent that one can relate to the state and situation of the other.", "title": "" }, { "docid": "de1d8d115d4f80f5976dbb52558b89fe", "text": "With the enormous growth in processor performance over the last decade, it is clear that reliability, rather than performance, is now the greatest challenge for computer systems research. This is particularly true in the context of Internet services that require 24x7 operation and home computers with no professional administration. While operating system products have matured and become more reliable, they are still the source of a significant number of failures. Furthermore, recent studies show that device drivers are frequently responsible for operating system failures. For example, a study at Stanford University found that Linux drivers have 3 to 7 times the bug frequency of the rest of the OS [4]. An analysis of product support calls for Windows 2000 showed that device drivers accounted for 27% of crashes, compared to 2% for the kernel itself [16].", "title": "" }, { "docid": "da629f12846e3b2398624ec6a44d24de", "text": "We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probabilities of parser actions. Unlike a similar previous generative model (Henderson and Titov, 2010), the RNN is trained discriminatively to optimize a fast beam search. This beam search prunes after each shift action, so we add a correctness probability to each shift action and train this score to discriminate between correct and incorrect sequences of parser actions.
We also speed up parsing time by caching computations for frequent feature combinations, including during training, giving us both faster training and a form of backoff smoothing. The resulting parser is over 35 times faster than its generative counterpart with nearly the same accuracy, producing state-of-the-art dependency parsing results while requiring minimal feature engineering. YAZDANI, Majid, HENDERSON, James. Incremental Recurrent Neural Network Dependency Parser with Search-based Discriminative Training. In: Proceedings of the 19th Conference on Computational Language Learning. 2015. p. 142-152", "title": "" }, { "docid": "067fd264747d466b86710366c14a4495", "text": "We present Embodied Construction Grammar, a formalism for linguistic analysis designed specifically for integration into a simulation-based model of language understanding. As in other construction grammars, linguistic constructions serve to map between phonological forms and conceptual representations. In the model we describe, however, conceptual representations are also constrained to be grounded in the body's perceptual and motor systems, and more precisely to parameterize mental simulations using those systems. Understanding an utterance thus involves at least two distinct processes: analysis to determine which constructions the utterance instantiates, and simulation according to the parameters specified by those constructions. In this chapter, we outline a construction formalism that is both representationally adequate for these purposes and specified precisely enough for use in a computational architecture.", "title": "" }, { "docid": "62c49155e92350a0420fb215f0a92f78", "text": "Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is perhaps the key problem of the discipline of Distributed Artificial Intelligence (DAI). In order to make advances it is important that the theories and principles which guide this central activity are uncovered and analysed in a systematic and rigorous manner. To this end, this paper models agent communities using a distributed goal search formalism, and argues that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in all DAI systems. 1. The Coordination Problem Participation in any social situation should be both simultaneously constraining, in that agents must make a contribution to it, and yet enriching, in that participation provides resources and opportunities which would otherwise be unavailable (Gerson, 1976). Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is the key to achieving this objective. Without coordination the benefits of decentralised problem solving vanish and the community may quickly degenerate into a collection of chaotic, incohesive individuals.
In more detail, the objectives of the coordination process are to ensure: that all necessary portions of the overall problem are included in the activities of at least one agent, that agents interact in a manner which permits their activities to be developed and integrated into an overall solution, that team members act in a purposeful and consistent manner, and that all of these objectives are achievable within the available computational and resource limitations (Lesser and Corkill, 1987). Specific examples of coordination activities include supplying timely information to needy agents, ensuring the actions of multiple actors are synchronised and avoiding redundant problem solving. There are three main reasons why the actions of multiple agents need to be coordinated: • because there are dependencies between agents' actions: Interdependence occurs when goals undertaken by individual agents are related either because local decisions made by one agent have an impact on the decisions of other community members (eg when building a house, decisions about the size and location of rooms impact upon the wiring and plumbing) or because of the possibility of harmful interactions amongst agents (eg two mobile robots may attempt to pass through a narrow exit simultaneously, resulting in a collision, damage to the robots and blockage of the exit). • because there is a need to meet global constraints: Global constraints exist when the solution being developed by a group of agents must satisfy certain conditions if it is to be deemed successful. For instance, a house building team may have a budget of £250,000, a distributed monitoring system may have to react to critical events within 30 seconds and a distributed air traffic control system may have to control the planes with a fixed communication bandwidth. If individual agents acted in isolation and merely tried to optimise their local performance, then such overarching constraints are unlikely to be satisfied. Only through coordinated action will acceptable solutions be developed. • because no one individual has sufficient competence, resources or information to solve the entire problem: Many problems cannot be solved by individuals working in isolation because they do not possess the necessary expertise, resources or information. Relevant examples include the tasks of lifting a heavy object, driving in a convoy and playing a symphony. It may be impractical or undesirable to permanently synthesize the necessary components into a single entity because of historical, political, physical or social constraints; therefore, temporary alliances through cooperative problem solving may be the only way to proceed. Differing expertise may need to be combined to produce a result outside of the scope of any of the individual constituents (eg in medical diagnosis, knowledge about heart disease, blood disorders and respiratory problems may need to be combined to diagnose a patient's illness). Different agents may have different resources (eg processing power, memory and communications) which all need to be harnessed to solve a complex problem. Finally, different agents may have different information or viewpoints of a problem (eg in concurrent engineering systems, the same product may be viewed from a design, manufacturing and marketing perspective).
Even when individuals can work independently, meaning coordination is not essential, information discovered by one agent can be of sufficient use to another that the two agents can solve the problem more than twice as fast. For example, when searching for a lost object in a large area it is often better, though not essential, to do so as a team. Analysis of this "combinatorial implosion" phenomenon (Kornfield and Hewitt, 1981) has resulted in the postulation that cooperative search, when sufficiently large, can display universal characteristics which are independent of the nature of either the individual processes or the particular domain being tackled (Clearwater et al., 1991). If all the agents in the system could have complete knowledge of the goals, actions and interactions of their fellow community members and could also have infinite processing power, it would be possible to know exactly what each agent was doing at present and what it is intending to do in the future. In such instances, it would be possible to avoid conflicting and redundant efforts and systems could be perfectly coordinated (Malone, 1987). However such complete knowledge is infeasible, in any community of reasonable complexity, because bandwidth limitations make it impossible for agents to be constantly informed of all developments. Even in modestly sized communities, a complete analysis to determine the detailed activities of each agent is impractical: the computation and communication costs of determining the optimal set and allocation of activities far outweigh the improvement in problem solving performance (Corkill and Lesser, 1986). As all community members cannot have a complete and accurate perspective of the overall system, the next easiest way of ensuring coherent behaviour is to have one agent with a wider picture. This global controller could then direct the activities of the others, assign agents to tasks and focus problem solving to ensure coherent behaviour. However such an approach is often impractical in realistic applications because even keeping one agent informed of all the actions in the community would swamp the available bandwidth. Also the controller would become a severe communication bottleneck and would render the remaining components unusable if it failed. To produce systems without bottlenecks and which exhibit graceful degradation of performance, most DAI research has concentrated on developing communities in which both control and data are distributed. Distributed control means that individuals have a degree of autonomy in generating new actions and in deciding which tasks to do next. When designing such systems it is important to ensure that agents spend the bulk of their time engaged in solving the domain level problems for which they were built, rather than in communication and coordination activities. To this end, the community should be decomposed into the most modular units possible. However the designer should ensure that these units are of sufficient granularity to warrant the overhead inherent in goal distribution: distributing small tasks can prove more expensive than performing them in one place (Durfee et al., 1987). The disadvantage of distributing control and data is that knowledge of the system's overall state is dispersed throughout the community and each individual has only a partial and imprecise perspective.
Thus there is an increased degree of uncertainty about each agent’s actions, meaning that it more difficult to attain coherent global behaviour for example, agents may spread misleading and distracting information, multiple agents may compete for unshareable resources simultaneously, agents may unwittingly undo the results of each others activities and the same actions may be carried out redundantly. Also the dynamics of such systems can become extremely complex, giving rise to nonlinear oscillations and chaos (Huberman and Hogg, 1988). In such cases the coordination process becomes correspondingly more difficult as well as more important1. To develop better and more integrated models of coordination, and hence improve the efficiency and utility of DAI systems, it is necessary to obtain a deeper understanding of the fundamental concepts which underpin agent interactions. The first step in this analysis is to determine the perspective from which coordination should be described. When viewing agents from a purely behaviouristic (external) perspective, it is, in general, impossible to determine whether they have coordinated their actions. Firstly, actions may be incoherent even if the agents tried to coordinate their behaviour. This may occur, for instance, because their models of each other or of the environment are incorrect. For example, robot1 may see robot2 heading for exit2 and, based on this observation and the subsequent deduction that it will use this exit, decide to use exit1. However if robot2 is heading towards exit2 to pick up a particular item and actually intends to use exit1 then there may be incoherent behaviour (both agents attempting to use the same exit) although there was coordination. Secondly, even if there is coherent action, it may not", "title": "" }, { "docid": "8054bf47593fa139cb9e4c14e336818e", "text": "This paper provides a framework for evaluating healthcare software from a usability perspective. The framework is based on a review of both the healthcare software literature and the general literature on software usability and evaluation. The need for such a framework arises from the proliferation of software packages in the healthcare field, and from an historical focus on the technical and functional aspects, rather than on the usability, of these packages. Healthcare managers are generally unfamiliar with usability concepts, even though usability differences among software can play a significant role in the acceptance and effectiveness of systems. Six major areas of usability are described, and specific criteria which can be used in the software evaluation process are also presented.", "title": "" }, { "docid": "6927647b1e1f6bf9bcf65db50e9f8d6e", "text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. This paper describes the current status of food image analysis and overviews problems that still need to be addressed.", "title": "" }, { "docid": "a5c072d196eed09548acba006b1e4ff6", "text": "MapReduce is becoming the state-of-the-art computing paradigm for processing large-scale datasets on a large cluster with tens or thousands of nodes. 
It has been widely used in various fields such as e-commerce, Web search, social networks, and scientific computation. Understanding the characteristics of MapReduce workloads is the key to achieving better configuration decisions and improving the system throughput. However, workload characterization of MapReduce, especially in a large-scale production environment, has not been well studied yet. To gain insight on MapReduce workloads, we collected a two-week workload trace from a 2,000-node Hadoop cluster at Taobao, which is the biggest online e-commerce enterprise in Asia, ranked 14th in the world as reported by Alexa. The workload trace covered 912,157 jobs, logged from Dec. 4 to Dec. 20, 2011. We characterized the workload at the granularity of job and task, respectively and concluded with a set of interesting observations. The results of workload characterization are representative and generally consistent with data platforms for e-commerce websites, which can help other researchers and engineers understand the performance and job characteristics of Hadoop in their production environments. In addition, we use these job analysis statistics to derive several implications for potential performance optimization solutions.", "title": "" }, { "docid": "17fb585ff12cff879febb32c2a16b739", "text": "An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals while essential for effective operation of BCI systems is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature presentations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and the competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.", "title": "" }, { "docid": "54d223a2a00cbda71ddf3f1b29f1ebed", "text": "Much of the data of scientific interest, particularly when independence of data is not assumed, can be represented in the form of information networks where data nodes are joined together to form edges corresponding to some kind of associations or relationships. Such information networks abound, like protein interactions in biology, web page hyperlink connections in information retrieval on the Web, cellphone call graphs in telecommunication, co-authorships in bibliometrics, crime event connections in criminology, etc. 
All these networks, also known as social networks, share a common property, the formation of connected groups of information nodes, called community structures. These groups are densely connected nodes with sparse connections outside the group. Finding these communities is an important task for the discovery of underlying structures in social networks, and has recently attracted much attention in data mining research. In this paper, we present Top Leaders, a new community mining approach that, simply put, regards a community as a set of followers congregating around a potential leader. Our algorithm starts by identifying promising leaders in a given network then iteratively assembles followers to their closest leaders to form communities, and subsequently finds new leaders in each group around which to gather followers again until convergence. Our intuitions are based on proven observations in social networks and the results are very promising. Experimental results on benchmark networks verify the feasibility and effectiveness of our new community mining approach.", "title": "" }, { "docid": "298894941f7615ea12291a815cb0752d", "text": "This paper describes ongoing research and development of machine learning and other complementary automatic learning techniques in a framework adapted to the specific needs of power system security assessment. In the proposed approach, random sampling techniques are considered to screen all relevant power system operating situations, while existing numerical simulation tools are exploited to derive detailed security information. The heart of the framework is provided by machine learning methods used to extract and synthesize security knowledge reformulated in a suitable way for decision making. This consists of transforming the data base of case by case numerical simulations into a power system security knowledge base. The main expected fallouts with respect to existing security assessment methods are computational efficiency, better physical insight into non-linear problems, and management of uncertainties. The paper discusses also the complementary roles of various automatic learning methods in this framework, such as decision tree induction, multilayer perceptrons and nearest neighbor classifiers. Illustrations are taken from two different real large scale power system security problems : transient stability assessment of the Hydro-Québec system and voltage security assessment of the system of Electricité de France.", "title": "" } ]
scidocsrr
d5cbb9266b3655f79e4675c9e5cf0da0
Prism adaptation and aftereffect: specifying the properties of a procedural memory system.
[ { "docid": "ae54996b12f39802f31173b43cda91f9", "text": "The topic of multiple forms of memory is considered from a biological point of view. Fact-and-event (declarative, explicit) memory is contrasted with a collection of non conscious (non-declarative, implicit) memory abilities including skills and habits, priming, and simple conditioning. Recent evidence is reviewed indicating that declarative and non declarative forms of memory have different operating characteristics and depend on separate brain systems. A brain-systems framework for understanding memory phenomena is developed in light of lesion studies involving rats, monkeys, and humans, as well as recent studies with normal humans using the divided visual field technique, event-related potentials, and positron emission tomography (PET).", "title": "" } ]
[ { "docid": "c09e5f5592caab9a076d92b4f40df760", "text": "Producing a comprehensive overview of the chemical content of biologically-derived material is a major challenge. Apart from ensuring adequate metabolome coverage and issues of instrument dynamic range, mass resolution and sensitivity, there are major technical difficulties associated with data pre-processing and signal identification when attempting large scale, high-throughput experimentation. To address these factors direct infusion or flow infusion electrospray mass spectrometry has been finding utility as a high throughput metabolite fingerprinting tool. With little sample pre-treatment, no chromatography and instrument cycle times of less than 5 min it is feasible to analyse more than 1,000 samples per week. Data pre-processing is limited to aligning extracted mass spectra and mass-intensity matrices are generally ready in a working day for a month’s worth of data mining and hypothesis generation. ESI-MS fingerprinting has remained rather qualitative by nature and as such ion suppression does not generally compromise data information content as originally suggested when the methodology was first introduced. This review will describe how the quality of data has improved through use of nano-flow infusion and mass-windowing approaches, particularly when using high resolution instruments. The increasingly wider availability of robust high accurate mass instruments actually promotes ESI-MS from a merely fingerprinting tool to the ranks of metabolite profiling and combined with MS/MS capabilities of hybrid instruments improved structural information is available concurrently. We summarise current applications in a wide range of fields where ESI-MS fingerprinting has proved to be an excellent tool for “first pass” metabolome analysis of complex biological samples. The final part of the review describes a typical workflow with reference to recently published data to emphasise key aspects of overall experimental design.", "title": "" }, { "docid": "9b8d4b855bab5e2fdcadd1fe1632f197", "text": "Men report more permissive sexual attitudes and behavior than do women. This experiment tested whether these differences might result from false accommodation to gender norms (distorted reporting consistent with gender stereotypes). Participants completed questionnaires under three conditions. Sex differences in self-reported sexual behavior were negligible in a bogus pipeline condition in which participants believed lying could be detected, moderate in an anonymous condition, and greatest in an exposure threat condition in which the experimenter could potentially view participants responses. This pattern was clearest for behaviors considered less acceptable for women than men (e.g., masturbation, exposure to hardcore & softcore erotica). Results suggest that some sex differences in self-reported sexual behavior reflect responses influenced by normative expectations for men and women.", "title": "" }, { "docid": "b6f4bd15f7407b56477eb2cfc4c72801", "text": "In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular-, organ-, and whole organism-levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. 
The overall pipeline for reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissues segmentation, vascular network identification and myocyte lipid droplet micro-structure reconstruction.", "title": "" }, { "docid": "b93ee4889d7f7dcfa04ef0132bc36b60", "text": "In the past decade, social and information networks have become prevalent, and research on the network data has attracted much attention. Besides the link structure, network data are often equipped with the content information (i.e, node attributes) that is usually noisy and characterized by high dimensionality. As the curse of dimensionality could hamper the performance of many machine learning tasks on networks (e.g., community detection and link prediction), feature selection can be a useful technique for alleviating such issue. In this paper, we investigate the problem of unsupervised feature selection on networks. Most existing feature selection methods fail to incorporate the linkage information, and the state-of-the-art approaches usually rely on pseudo labels generated from clustering. Such cluster labels may be far from accurate and can mislead the feature selection process. To address these issues, we propose a generative point of view for unsupervised features selection on networks that can seamlessly exploit the linkage and content information in a more effective manner. We assume that the link structures and node content are generated from a succinct set of high-quality features, and we find these features through maximizing the likelihood of the generation process. Experimental results on three real-world datasets show that our approach can select more discriminative features than state-of-the-art methods.", "title": "" }, { "docid": "ef92f3f230a7eedee7555b5fc35f5558", "text": "Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation ( r=0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation (r=0.45) between CAAB-predicted and clinician-provided mobility scores. 
These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis.", "title": "" }, { "docid": "a66b5b6dea68e5460b227af4caa14ef3", "text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.", "title": "" }, { "docid": "f0365424e98ebcc0cb06ce51f65cbe7c", "text": "The most important milestone in the field of magnetic sensors was that AMR sensors started to replace Hall sensors in many application, were larger sensitivity is an advantage. GMR and SDT sensor finally found limited applications. We also review the development in miniaturization of fluxgate sensors and briefly mention SQUIDs, resonant sensors, GMIs and magnetomechanical sensors.", "title": "" }, { "docid": "73c4bded5834e75adb9820a8e0fed13d", "text": "We present a comprehensive evaluation of a large number of semi-supervised anomaly detection techniques for time series data. Some of these are existing techniques and some are adaptations that have never been tried before. For example, we adapt the window based discord detection technique to solve this problem. We also investigate several techniques that detect anomalies in discrete sequences, by discretizing the time series data. We evaluate these techniques on a large variety of data sets obtained from a broad spectrum of application domains. The data sets have different characteristics in terms of the nature of normal time series and the nature of anomalous time series. We evaluate the techniques on different metrics, such as accuracy in detecting the anomalous time series, sensitivity to parameters, and computational complexity, and provide useful insights regarding the effectiveness of different techniques based on the experimental evaluation.", "title": "" }, { "docid": "bbfe7693d45e3343b30fad7f6c9279d8", "text": "Vernier permanent magnet (VPM) machines can be utilized for direct drive applications by virtue of their high torque density and high efficiency. The purpose of this paper is to develop a general design guideline for split-slot low-speed VPM machines, generalize the operation principle, and illustrate the relationship among the numbers of the stator slots, coil poles, permanent magnet (PM) pole pairs, thereby laying a solid foundation for the design of various kinds of VPM machines. Depending on the PM locations, three newly designed VPM machines are reported in this paper and they are referred to as 1) rotor-PM Vernier machine, 2) stator-tooth-PM Vernier machine, and 3) stator-yoke-PM Vernier machine. The back-electromotive force (back-EMF) waveforms, static torque, and air-gap field distribution are predicted using time-stepping finite element method (TS-FEM). 
The performances of the proposed VPM machines are compared and reported.", "title": "" }, { "docid": "16f1b038f51e614da06ba84ebd175e14", "text": "This paper explores how to extract argumentation-relevant information automatically from a corpus of legal decision documents, and how to build new arguments using that information. For decision texts, we use the Vaccine/Injury Project (V/IP) Corpus, which contains default-logic annotations of argument structure. We supplement this with presuppositional annotations about entities, events, and relations that play important roles in argumentation, and about the level of confidence that arguments would be successful. We then propose how to integrate these semantic-pragmatic annotations with syntactic and domain-general semantic annotations, such as those generated in the DeepQA architecture, and outline how to apply machine learning and scoring techniques similar to those used in the IBM Watson system for playing the Jeopardy! question-answer game. We replace this game-playing goal, however, with the goal of learning to construct legal arguments.", "title": "" }, { "docid": "8eac34d73a2bcb4fa98793499d193067", "text": "We review here the recent success in quantum annealing, i.e., optimization of the cost or energy functions of complex systems utilizing quantum fluctuations. The concept is introduced in successive steps through the studies of mapping of such computationally hard problems to the classical spin glass problems. The quantum spin glass problems arise with the introduction of quantum fluctuations, and the annealing behavior of the systems as these fluctuations are reduced slowly to zero. This provides a general framework for realizing analog quantum computation.", "title": "" }, { "docid": "fa8c3873cf03af8d4950a0e53f877b08", "text": "The problem of formal likelihood-based (either classical or Bayesian) inference for discretely observed multi-dimensional diffusions is particularly challenging. In principle this involves data-augmentation of the observation data to give representations of the entire diffusion trajectory. Most currently proposed methodology splits broadly into two classes: either through the discretisation of idealised approaches for the continuous-time diffusion setup; or through the use of standard finite-dimensional methodologies discretisation of the diffusion model. The connections between these approaches have not been well-studied. This paper will provide a unified framework bringing together these approaches, demonstrating connections, and in some cases surprising differences. As a result, we provide, for the first time, theoretical justification for the various methods of imputing missing data. The inference problems are particularly challenging for reducible diffusions, and our framework is correspondingly more complex in that case. Therefore we treat the reducible and irreducible cases differently within the paper. Supplementary materials for the article are avilable on line. 1 Overview of likelihood-based inference for diffusions Diffusion processes have gained much popularity as statistical models for observed and latent processes. Among others, their appeal lies in their flexibility to deal with nonlinearity, time-inhomogeneity and heteroscedasticity by specifying two interpretable functionals, their amenability to efficient computations due to their Markov property, and the rich existing mathematical theory about their properties. 
As a result, they are used as models throughout Science; some book references related with this approach to modeling include Section 5.3 of [1] for physical systems, Section 8.3.3 (in conjunction with Section 6.3) of [12] for systems biology and mass action stochastic kinetics, and Chapter 10 of [27] for interest rates. A mathematically precise specification of a d-dimensional diffusion process V is as the solution of a stochastic differential equation (SDE) of the type: dV_s = b(s, V_s; θ1) ds + σ(s, V_s; θ2) dB_s, s ∈ [0, T], (1) where B is an m-dimensional standard Brownian motion, b(·, ·; ·) : R+ × R^d × Θ1 → R^d is the drift and σ(·, ·; ·) : R+ × R^d × Θ2 → R^{d×m} is the diffusion coefficient.", "title": "" }, { "docid": "462256d2d428f8c77269e4593518d675", "text": "This paper is devoted to the modeling of real textured images by functional minimization and partial differential equations. Following the ideas of Yves Meyer in a total variation minimization framework of L. Rudin, S. Osher, and E. Fatemi, we decompose a given (possibly textured) image f into a sum of two functions u+v, where u ∈ BV is a function of bounded variation (a cartoon or sketchy approximation of f), while v is a function representing the texture or noise. To model v we use the space of oscillating functions introduced by Yves Meyer, which is in some sense the dual of the BV space. The new algorithm is very simple, making use of differential equations and is easily solved in practice. Finally, we implement the method by finite differences, and we present various numerical results on real textured images, showing the obtained decomposition u+v, but we also show how the method can be used for texture discrimination and texture segmentation.", "title": "" }, { "docid": "21130eded44790720e79a750ecdf3847", "text": "Enabled by Web 2.0 technologies social media provide an unparalleled platform for consumers to share their product experiences and opinions---through word-of-mouth (WOM) or consumer reviews. It has become increasingly important to understand how WOM content and metrics thereof are related to consumer purchases and product sales. By integrating network analysis with text sentiment mining techniques, we propose product comparison networks as a novel construct, computed from consumer product reviews. To test the validity of these product ranking measures, we conduct an empirical study based on a digital camera dataset from Amazon.com. The results demonstrate significant linkage between network-based measures and product sales, which is not fully captured by existing review measures such as numerical ratings. The findings provide important insights into the business impact of social media and user-generated content, an emerging problem in business intelligence research.
From a managerial perspective, our results suggest that WOM in social media also constitutes a competitive landscape for firms to understand and manipulate.", "title": "" }, { "docid": "33cab0ec47af5e40d64e34f8ffc7dd6f", "text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.", "title": "" }, { "docid": "75639f4119e862382732b1ee597a9bd3", "text": "People enjoy food photography because they appreciate food. Behind each meal there is a story described in a complex recipe and, unfortunately, by simply looking at a food image we do not have access to its preparation process. Therefore, in this paper we introduce an inverse cooking system that recreates cooking recipes given food images. Our system predicts ingredients as sets by means of a novel architecture, modeling their dependencies without imposing any order, and then generates cooking instructions by attending to both image and its inferred ingredients simultaneously. We extensively evaluate the whole system on the large-scale Recipe1M dataset and show that (1) we improve performance w.r.t. previous baselines for ingredient prediction; (2) we are able to obtain high quality recipes by leveraging both image and ingredients; (3) our system is able to produce more compelling recipes than retrieval-based approaches according to human judgment.", "title": "" }, { "docid": "8cfc2b5947a130d72486748b1d086e7e", "text": "The Legal Knowledge Interchange Format (LKIF), being developed in the European ESTRELLA project, defines a knowledge representation language for arguments, rules, ontologies, and cases in XML. In this article, the syntax and argumentation-theoretic semantics of the LKIF rule language is presented and illustrated with an example based on German family law. This example is then applied to show how LKIF rules can be used with the Carneades argumentation system to construct, evaluate and visualize arguments about a legal case.", "title": "" }, { "docid": "181eafc11f3af016ca0926672bdb5a9d", "text": "The conventional wisdom is that backprop nets with excess hi dden units generalize poorly. We show that nets with excess capacity ge neralize well when trained with backprop and early stopping. Experim nts suggest two reasons for this: 1) Overfitting can vary significant ly i different regions of the model. Excess capacity allows better fit to reg ions of high non-linearity, and backprop often avoids overfitting the re gions of low non-linearity. 2) Regardless of size, nets learn task subco mponents in similar sequence. 
Big nets pass through stages similar to th ose learned by smaller nets. Early stopping can stop training the large n et when it generalizes comparably to a smaller net. We also show that co njugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linea rity.", "title": "" }, { "docid": "6524efda795834105bae7d65caf15c53", "text": "PURPOSE\nThis paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.\n\n\nMETHOD\nOur qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.\n\n\nRESULTS\nAt the time of their stroke a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to the work during the period that they were involved in the study. Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months and nine returned in under three months and in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke or who had not returned to work during the period of the study also endorsed these views. However they felt that there were a variety of barriers and practical problems that prevented them working and in some cases had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke, for some participants work was a cause of stress and therefore potentially risky, for others it was a way of demonstrating recovery from stroke. The value and meaning varied between participants and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and that working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers or facilitator to return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.\n\n\nCONCLUSION\nParticipants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. 
Individuals who returned to work recognized the barriers but had found ways of managing them.", "title": "" }, { "docid": "a08d783229b59342cdb015e051450f94", "text": "We consider the problem of estimating the remaining useful life (RUL) of a system or a machine from sensor data. Many approaches for RUL estimation based on sensor data make assumptions about how machines degrade. Additionally, sensor data from machines is noisy and often suffers from missing values in many practical settings. We propose Embed-RUL: a novel approach for RUL estimation from sensor data that does not rely on any degradation-trend assumptions, is robust to noise, and handles missing values. Embed-RUL utilizes a sequence-to-sequence model based on Recurrent Neural Networks (RNNs) to generate embeddings for multivariate time series subsequences. The embeddings for normal and degraded machines tend to be different, and are therefore found to be useful for RUL estimation. We show that the embeddings capture the overall pattern in the time series while filtering out the noise, so that the embeddings of two machines with similar operational behavior are close to each other, even when their sensor readings have significant and varying levels of noise content. We perform experiments on publicly available turbofan engine dataset and a proprietary real-world dataset, and demonstrate that Embed-RUL outperforms the previously reported [24] state-of-the-art on several metrics.", "title": "" } ]
scidocsrr
611f59a91654d05d1352b7a7790e5f24
Identifying Important Citations Using Contextual Information from Full Text
[ { "docid": "d34b81ac6c521cbf466b4b898486a201", "text": "We introduce the novel task of identifying important citations in scholarly literature, i.e., citations that indicate that the cited work is used or extended in the new effort. We believe this task is a crucial component in algorithms that detect and follow research topics and in methods that measure the quality of publications. We model this task as a supervised classification problem at two levels of detail: a coarse one with classes (important vs. non-important), and a more detailed one with four importance classes. We annotate a dataset of approximately 450 citations with this information, and release it publicly. We propose a supervised classification approach that addresses this task with a battery of features that range from citation counts to where the citation appears in the body of the paper, and show that, our approach achieves a precision of 65% for a recall of 90%.", "title": "" }, { "docid": "101d36f875c1bdee99f14208fe016a5f", "text": "We are investigating automatic generation of a review (or survey) article in a specific subject domain. In a research paper, there are passages where the author describes the essence of a cited paper and the differences between the current paper and the cited paper (we call them citing areas). These passages can be considered as a kind of summary of the cited paper from the current author’s viewpoint. We can know the state of the art in a specific subject domain from the collection of citing areas. Further, if these citing areas are properly classified and organized, they can act as a kind of a review article. In our previous research, we proposed the automatic extraction of citing areas. Then, with the information in the citing areas, we automatically identified the types of citation relationships that indicate the reasons for citation (we call them citation types). Citation types offer a useful clue for organizing citing areas. In addition, to support writing a review article, it is necessary to take account of the contents of the papers together with the citation links and citation types. In this paper, we propose several methods for classifying papers automatically. We found that our proposed methods BCCT-C, the bibliographic coupling considering only type C citations, which pointed out the problems or gaps in related works, are more effective than others. We also implemented a prototype system to support writing a review article, which is based on our proposed method.", "title": "" }, { "docid": "e181f73c36c1d8c9463ef34da29d9e03", "text": "This paper examines prospects and limitations of citation studies in the humanities. We begin by presenting an overview of bibliometric analysis, noting several barriers to applying this method in the humanities. Following that, we present an experimental tool for extracting and classifying citation contexts in humanities journal articles. This tool reports the bibliographic information about each reference, as well as three features about its context(s): frequency, locationin-document, and polarity. We found that extraction was highly successful (above 85%) for three of the four journals, and statistics for the three citation figures were broadly consistent with previous research. We conclude by noting several limitations of the sentiment classifier and suggesting future areas for refinement. 
", "title": "" }, { "docid": "659deeead04953483a3ed6c5cc78cd76", "text": "We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.", "title": "" } ]
[ { "docid": "2220633d6343df0ebb2d292358ce182b", "text": "This paper presents a system for fully automatic recognition and reconstruction of 3D objects in image databases. We pose the object recognition problem as one of finding consistent matches between all images, subject to the constraint that the images were taken from a perspective camera. We assume that the objects or scenes are rigid. For each image, we associate a camera matrix, which is parameterised by rotation, translation and focal length. We use invariant local features to find matches between all images, and the RANSAC algorithm to find those that are consistent with the fundamental matrix. Objects are recognised as subsets of matching images. We then solve for the structure and motion of each object, using a sparse bundle adjustment algorithm. Our results demonstrate that it is possible to recognise and reconstruct 3D objects from an unordered image database with no user input at all.", "title": "" }, { "docid": "286c9943e00495f8e638c744e53f7bc4", "text": "Text-based passwords alone are subject to dictionary attac ks s users tend to choose weak passwords in favor of memorability, as we ll as phishing attacks. Many recognition-based graphical password schemes alone, in order to offer sufficient security, require a number of rounds of veri fication, introducing usability issues. We suggest a hybrid user authentication a pproach combining text passwords, recognition-based graphical passwords, a nd a two-step process, to provide increased security with fewer rounds than such gr aphical passwords alone. A variation of this two-step authentication method, which we have implemented and deployed, is in use in the real world.", "title": "" }, { "docid": "3d3f5b45b939f926d1083bab9015e548", "text": "Industry is facing an era characterised by unpredictable market changes and by a turbulent competitive environment. The key to compete in such a context is to achieve high degrees of responsiveness by means of high flexibility and rapid reconfiguration capabilities. The deployment of modular solutions seems to be part of the answer to face these challenges. Semantic modelling and ontologies may represent the needed knowledge representation to support flexibility and modularity of production systems, when designing a new system or when reconfiguring an existing one. Although numerous ontologies for production systems have been developed in the past years, they mainly focus on discrete manufacturing, while logistics aspects, such as those related to internal logistics and warehousing, have not received the same attention. The paper aims at offering a representation of logistics aspects, reflecting what has become a de-facto standard terminology in industry and among researchers in the field. Such representation is to be used as an extension to the already-existing production systems ontologies that are more focused on manufacturing processes. The paper presents the structure of the hierarchical relations within the examined internal logistics elements, namely Storage and Transporters, structuring them in a series of classes and sub-classes, suggesting also the relationships and the attributes to be considered to complete the modelling. Finally, the paper proposes an industrial example with a miniload system to show how such a modelling of internal logistics elements could be instanced in the real world. © 2017 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "23cc8b190e9de5177cccf2f918c1ad45", "text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.", "title": "" }, { "docid": "fad96b214957401078d254640734535c", "text": "Vision-based driver assistance systems are designed for, and implemented in modern vehicles, for improving safety and better comfort. This report reviews areas of research on vision-based driver assistance systems and provides an extensive bibliography for the discussed subjects.", "title": "" }, { "docid": "3c4e3d86df819aea592282b171191d0d", "text": "Memory forensic analysis collects evidence for digital crimes and malware attacks from the memory of a live system. It is increasingly valuable, especially in cloud computing. However, memory analysis on on commodity operating systems (such as Microsoft Windows) faces the following key challenges: (1) a partial knowledge of kernel data structures; (2) difficulty in handling ambiguous pointers; and (3) lack of robustness by relying on soft constraints that can be easily violated by kernel attacks. To address these challenges, we present MACE, a memory analysis system that can extract a more complete view of the kernel data structures for closed-source operating systems and significantly improve the robustness by only leveraging pointer constraints (which are hard to manipulate) and evaluating these constraint globally (to even tolerate certain amount of pointer attacks). We have evaluated MACE on 100 memory images for Windows XP SP3 and Windows 7 SP0. Overall, MACE can construct a kernel object graph from a memory image in just a few minutes, and achieves over 95% recall and over 96% precision. Our experiments on real-world rootkit samples and synthetic attacks further demonstrate that MACE outperforms other external memory analysis tools with respect to wider coverage and better robustness.", "title": "" }, { "docid": "8da5c73ee05e567f86a37be9839316ff", "text": "In this paper, adaptive dynamic surface control (DSC) is developed for a class of pure-feedback nonlinear systems with unknown dead zone and perturbed uncertainties using neural networks. The explosion of complexity in traditional backstepping design is avoided by utilizing dynamic surface control and introducing integral-type Lyapunov function. It is proved that the proposed design method is able to guarantee semi-global uniform ultimate boundedness of all signals in the closed-loop system, with arbitrary small tracking error by appropriately choosing design constants. 
Simulation results demonstrate the effectiveness of the proposed approach. c © 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2d12d91005d1de356a61186cbde8b444", "text": "Research into the perceptual and cognitive effects of playing video games is an area of increasing interest for many investigators. Over the past decade, expert video game players (VGPs) have been shown to display superior performance compared to non-video game players (nVGPs) on a range of visuospatial and attentional tasks. A benefit of video game expertise has recently been shown for task switching, suggesting that VGPs also have superior cognitive control abilities compared to nVGPs. In two experiments, we examined which aspects of task switching performance this VGP benefit may be localized to. With minimal trial-to-trial interference from minimally overlapping task set rules, VGPs demonstrated a task switching benefit compared to nVGPs. However, this benefit disappeared when proactive interference between tasks was increased, with substantial stimulus and response overlap in task set rules. We suggest that VGPs have no generalized benefit in task switching-related cognitive control processes compared to nVGPs, with switch cost reductions due instead to a specific benefit in controlling selective attention.", "title": "" }, { "docid": "f0ef9541461cd9d9e42ea355ea31ac41", "text": "We introduce and create a framework for deriving probabilistic models of Information Retrieval. The models are nonparametric models of IR obtained in the language model approach. We derive term-weighting models by measuring the divergence of the actual term distribution from that obtained under a random process. Among the random processes we study the binomial distribution and Bose--Einstein statistics. We define two types of term frequency normalization for tuning term weights in the document--query matching process. The first normalization assumes that documents have the same length and measures the information gain with the observed term once it has been accepted as a good descriptor of the observed document. The second normalization is related to the document length and to other statistics. These two normalization methods are applied to the basic models in succession to obtain weighting formulae. Results show that our framework produces different nonparametric models forming baseline alternatives to the standard tf-idf model.", "title": "" }, { "docid": "f7e45feaa48b8d7741ac4cdb3ef4749b", "text": "Classification problems refer to the assignment of some alt ern tives into predefined classes (groups, categories). Such problems often arise in several application fields. For instance, in assessing credit card applications the loan officer must evaluate the charact eristics of each applicant and decide whether an application should be accepted or rejected. Simil ar situations are very common in fields such as finance and economics, production management (fault diagnosis) , medicine, customer satisfaction measurement, data base management and retrieval, etc.", "title": "" }, { "docid": "d29c4e8598bbe2406ae314402f200f41", "text": "A big step forward to improve power system monitoring and performance, continued load growth without a corresponding increase in transmission resources has resulted in reduced operational margins for many power systems worldwide and has led to operation of power systems closer to their stability limits and to power exchange in new patterns. 
These issues, as well as the on-going worldwide trend towards deregulation of the entire industry on the one hand and the increased need for accurate and better network monitoring on the other hand, force power utilities exposed to this pressure to demand new solutions for wide area monitoring, protection and control. Wide-area monitoring, protection, and control require communicating the specific-node information to a remote station but all information should be time synchronized so that to neutralize the time difference between information. It gives a complete simultaneous snap shot of the power system. The conventional system is not able to satisfy the time-synchronized requirement of power system. Phasor Measurement Unit (PMU) is enabler of time-synchronized measurement, it communicate the synchronized local information to remote station.", "title": "" }, { "docid": "8e5a2a30f74ef7bd722b83e352a7f4a1", "text": "Bias-corrected approximate 100(1-α)% pointwise and simultaneous confidence and prediction intervals for least squares support vector machines are proposed. A simple way of determining the bias without estimating higher order derivatives is formulated. A variance estimator is developed that works well in the homoscedastic and heteroscedastic case. In order to produce simultaneous confidence intervals, a simple Šidák correction and a more involved correction (based on upcrossing theory) are used. The obtained confidence intervals are compared to a state-of-the-art bootstrap-based method. Simulations show that the proposed method obtains similar intervals compared to the bootstrap at a lower computational cost.", "title": "" }, { "docid": "5c7c9b6a96fe597494dc7b8cbcbaf073", "text": "We study the scheduling of human-robot teams where the human and robotic agents share decision-making authority over scheduling decisions. Our goal is to design AI scheduling techniques that account for how people make decisions under different control schema.", "title": "" }, { "docid": "1255bb7d89a30314dc41dbcf7ac9a174", "text": "Gainesville, Florida, 10 March 2 012. Today, the Mobile Location- Based Services Summit hosted a panel entitled \"What Was Wrong with First-Generation Location-Based Services?\" The panel chair, Sumi Helal of the University of Florida, invited two world-class experts in LBS history and technology to discuss the topic: Paolo Bellavista of the University of Bologna and Axel Kupper of the University of Munich. The panel discussed the popularity of today's LBSs and analyzed their distinguishing aspects in comparison with first-generation LBSs. The panel was anything but controversial, with all panelists in total agreement on what initially went wrong and why today's LBSs work. They analyzed how the failure unfolded to set the stage for a major paradigm shift in LBS business and technology and noted the milestones that shaped today's LBSs.", "title": "" }, { "docid": "658c7ae98ea4b0069a7a04af1e462307", "text": "Exploiting packetspsila timing information for covert communication in the Internet has been explored by several network timing channels and watermarking schemes. Several of them embed covert information in the inter-packet delay. These channels, however, can be detected based on the perturbed traffic pattern, and their decoding accuracy could be degraded by jitter, packet loss and packet reordering events. In this paper, we propose a novel TCP-based timing channel, named TCPScript to address these shortcomings. 
TCPScript embeds messages in ldquonormalrdquo TCP data bursts and exploits TCPpsilas feedback and reliability service to increase the decoding accuracy. Our theoretical capacity analysis and extensive experiments have shown that TCPScript offers much higher channel capacity and decoding accuracy than an IP timing channel and JitterBug. On the countermeasure, we have proposed three new metrics to detect aggressive TCPScript channels.", "title": "" }, { "docid": "274186e87674920bfe98044aa0208320", "text": "Message routing in mobile delay tolerant networks inherently relies on the cooperation between nodes. In most existing routing protocols, the participation of nodes in the routing process is taken as granted. However, in reality, nodes can be unwilling to participate. We first show in this paper the impact of the unwillingness of nodes to participate in existing routing protocols through a set of experiments. Results show that in the presence of even a small proportion of nodes that do not forward messages, performance is heavily degraded. We then analyze two major reasons of the unwillingness of nodes to participate, i.e., their rational behavior (also called selfishness) and their wariness of disclosing private mobility information. Our main contribution in this paper is to survey the existing related research works that overcome these two issues. We provide a classification of the existing approaches for protocols that deal with selfish behavior. We then conduct experiments to compare the performance of these strategies for preventing different types of selfish behavior. For protocols that preserve the privacy of users, we classify the existing approaches and provide an analytical comparison of their security guarantees.", "title": "" }, { "docid": "edf350dfe9680f40a38f7dd2fde42fbb", "text": "Multimodal sentiment analysis is drawing an increasing amount of attention these days. It enables mining of opinions in video reviews which are now available aplenty on online platforms. However, multimodal sentiment analysis has only a few high-quality data sets annotated for training machine learning algorithms. These limited resources restrict the generalizability of models, where, for example, the unique characteristics of a few speakers (e.g., wearing glasses) may become a confounding factor for the sentiment classification task. In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis. In our experiments, we show that our SAL approach improves prediction accuracy significantly in all three modalities (verbal, acoustic, visual), as well as in their fusion. Our results show that SAL, even when trained on one dataset, achieves good generalization across two new test datasets.", "title": "" }, { "docid": "acec9743d22434e704ac5a2399baabb9", "text": "We propose a Bayesian approximate inference method for learning the dependence structure of a Gaussian graphical model. Using pseudo-likelihood, we derive an analytical expression to approximate the marginal likelihood for an arbitrary graph structure without invoking any assumptions about decomposability. The majority of the existing methods for learning Gaussian graphical models are either restricted to decomposable graphs or require specification of a tuning parameter that may have a substantial impact on learned structures. 
By combining a simple sparsity inducing prior for the graph structures with a default reference prior for the model parameters, we obtain a fast and easily applicable scoring function that works well for even high-dimensional data. We demonstrate the favourable performance of our approach by large-scale comparisons against the leading methods for learning non-decomposable Gaussian graphical models. A theoretical justification for our method is provided by showing that it yields a consistent estimator of the graph structure.", "title": "" }, { "docid": "27fd27cf86b68822b3cfb73cff2e2cb6", "text": "Patients with Liver disease have been continuously increasing because of excessive consumption of alcohol, inhale of harmful gases, intake of contaminated food, pickles and drugs. Automatic classification tools may reduce burden on doctors. This paper evaluates the selected classification algorithms for the classification of some liver patient datasets. The classification algorithms considered here are Naïve Bayes classifier, C4.5, Back propagation Neural Network algorithm, and Support Vector Machines. These algorithms are evaluated based on four criteria: Accuracy, Precision, Sensitivity and Specificity.", "title": "" }, { "docid": "5443a07fe5f020972cbdce8f5996a550", "text": "The training of severely disabled individuals on the use of electric power wheelchairs creates many challenges, particularly in the case of children. The adjustment of equipment and training on a per-patient basis in an environment with limited specialists and resources often leads to a reduced amount of training time per patient. Virtual reality rehabilitation has recently been proven an effective way to supplement patient rehabilitation, although some important challenges remain including high setup/equipment costs and time-consuming continual adjustments to the simulation as patients improve. We propose a design for a flexible, low-cost rehabilitation system that uses virtual reality training and games to engage patients in effective instruction on the use of powered wheelchairs. We also propose a novel framework based on Bayesian networks for self-adjusting adaptive training in virtual rehabilitation environments. Preliminary results from a user evaluation and feedback from our rehabilitation specialist collaborators support the effectiveness of our approach.", "title": "" } ]
scidocsrr
7c0d83022c64e3185988c56fbe8a11df
A Framework for Personal Mobile Commerce Pattern Mining and Prediction
[ { "docid": "efbf875699a14277d4b7daa3cb43f02b", "text": "The increasing pervasiveness of location-acquisition technologies (GPS, GSM networks, etc.) enables people to conveniently log their location history into spatial-temporal data, thus giving rise to the necessity as well as opportunity to discovery valuable knowledge from this type of data. In this paper, we propose the novel notion of individual life pattern, which captures individual's general life style and regularity. Concretely, we propose the life pattern normal form (the LP-normal form) to formally describe which kind of life regularity can be discovered from location history; then we propose the LP-Mine framework to effectively retrieve life patterns from raw individual GPS data. Our definition of life pattern focuses on significant places of individual life and considers diverse properties to combine the significant places. LP-Mine is comprised of two phases: the modelling phase and the mining phase. The modelling phase pre-processes GPS data into an available format as the input of the mining phase. The mining phase applies separate strategies to discover different types of pattern. Finally, we conduct extensive experiments using GPS data collected by volunteers in the real world to verify the effectiveness of the framework.", "title": "" } ]
[ { "docid": "d752270f54bea465d40e39a06d7b4297", "text": "This paper proposes to address the word sense ambiguity issue in an unsupervised manner, where word sense representations are learned along a word sense selection mechanism given contexts. Prior work focused on designing a single model to deliver both mechanisms, and thus suffered from either coarse-grained representation learning or inefficient sense selection. The proposed modular approach, MUSE, implements flexible modules to optimize distinct mechanisms, achieving the first purely sense-level representation learning system with linear-time sense selection. We leverage reinforcement learning to enable joint training on the proposed modules, and introduce various exploration techniques on sense selection for better robustness. The experiments on benchmark data show that the proposed approach achieves the state-of-the-art performance on synonym selection as well as on contextual word similarities in terms of MaxSimC.", "title": "" }, { "docid": "7456af2a110a0f05b39d7d72e64ab553", "text": "Initially mobile phones were developed only for voice communication but now days the scenario has changed, voice communication is just one aspect of a mobile phone. There are other aspects which are major focus of interest. Two such major factors are web browser and GPS services. Both of these functionalities are already implemented but are only in the hands of manufacturers not in the hands of users because of proprietary issues, the system does not allow the user to access the mobile hardware directly. But now, after the release of android based open source mobile phone a user can access the hardware directly and design customized native applications to develop Web and GPS enabled services and can program the other hardware components like camera etc. In this paper we will discuss the facilities available in android platform for implementing LBS services (geo-services).", "title": "" }, { "docid": "22255906a7f1d30c9600728a6dc9ad9f", "text": "The next major step in the evolution of LTE targets the rapidly increasing demand for mobile broadband services and traffic volumes. One of the key technologies is a new carrier type, referred to in this article as a Lean Carrier, an LTE carrier with minimized control channel overhead and cell-specific reference signals. The Lean Carrier can enhance spectral efficiency, increase spectrum flexibility, and reduce energy consumption. This article provides an overview of the motivations and main use cases of the Lean Carrier. Technical challenges are highlighted, and design options are discussed; finally, a performance evaluation quantifies the benefits of the Lean Carrier.", "title": "" }, { "docid": "a3832a55f964bbf2f5f5138cc048346c", "text": "Association Rule Mining is a data mining method that enables to find out frequent item set, interesting pattern, interesting correlation among set of items in a transactional database or data repositories. This paper establishes the preparatory of basic concept about Association Rule Mining. Broadly, Association Rule Mining can be classified into Apriori, Partitioning, and Frequent Pattern Tree Algorithms. This study is to show the Benefits and Limitations. Keywords— Data Mining, Association rule, Apriori, FP Tree, and Partitioning", "title": "" }, { "docid": "49ff711b6c91c9ec42e16ce2f3bb435b", "text": "In this letter, a wideband three-section branch-line hybrid with harmonic suppression is designed using a novel transmission line model. 
The proposed topology is constructed using a coupled line, two series transmission lines, and open-ended stubs. The required design equations are obtained by applying even- and odd-mode analysis. To support these equations, a three-section branch-line hybrid working at 0.9 GHz is fabricated and tested. The physical area of the prototype is reduced by 87.7% of the conventional hybrid and the fractional bandwidth is greater than 52%. In addition, the proposed technique can eliminate second harmonic by a level better than 15 dB.", "title": "" }, { "docid": "1ba4e36597e7beaf6591185c1c799afd", "text": "A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience of resources, as services via the Internet. Because cloud provides a finite pool of virtualized on-demand resources, optimally scheduling them has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents taxonomy at two levels of scheduling cloud resources. It then paints a landscape of the scheduling problem and solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated and invited, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration with the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.", "title": "" }, { "docid": "5f109b71bf1e39030db2594e54718ce5", "text": "Following the hierarchical Bayesian framework for blind deconvolution problems, in this paper, we propose the use of simultaneous autoregressions as prior distributions for both the image and blur, and gamma distributions for the unknown parameters (hyperparameters) of the priors and the image formation noise. We show how the gamma distributions on the unknown hyperparameters can be used to prevent the proposed blind deconvolution method from converging to undesirable image and blur estimates and also how these distributions can be inferred in realistic situations. We apply variational methods to approximate the posterior probability of the unknown image, blur, and hyperparameters and propose two different approximations of the posterior distribution. One of these approximations coincides with a classical blind deconvolution method. The proposed algorithms are tested experimentally and compared with existing blind deconvolution methods", "title": "" }, { "docid": "ac46e6176377612544bb74c064feed67", "text": "The existence and use of standard test collections in information retrieval experimentation allows results to be compared between research groups and over time. Such comparisons, however, are rarely made. Most researchers only report results from their own experiments, a practice that allows lack of overall improvement to go unnoticed. In this paper, we analyze results achieved on the TREC Ad-Hoc, Web, Terabyte, and Robust collections as reported in SIGIR (1998–2008) and CIKM (2004–2008). 
Dozens of individual published experiments report effectiveness improvements, and often claim statistical significance. However, there is little evidence of improvement in ad-hoc retrieval technology over the past decade. Baselines are generally weak, often being below the median original TREC system. And in only a handful of experiments is the score of the best TREC automatic run exceeded. Given this finding, we question the value of achieving even a statistically significant result over a weak baseline. We propose that the community adopt a practice of regular longitudinal comparison to ensure measurable progress, or at least prevent the lack of it from going unnoticed. We describe an online database of retrieval runs that facilitates such a practice.", "title": "" }, { "docid": "7b55b39902d40295ea14088dddaf77e0", "text": "Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.", "title": "" }, { "docid": "936353c90f0e0ce7946a11b4a60d494c", "text": "This paper deals with multi-class classification problems. Many methods extend binary classifiers to operate a multi-class task, with strategies such as the one-vs-one and the one-vs-all schemes. However, the computational cost of such techniques is highly dependent on the number of available classes. We present a method for multi-class classification, with a computational complexity essentially independent of the number of classes. To this end, we exploit recent developments in multifunctional optimization in machine learning. We show that in the proposed algorithm, labels only appear in terms of inner products, in the same way as input data emerge as inner products in kernel machines via the so-called the kernel trick. Experimental results on real data show that the proposed method reduces efficiently the computational time of the classification task without sacrificing its generalization ability.", "title": "" }, { "docid": "ce1ff59a3b327af3708440414b5eb964", "text": "In recent years, all major automotive companies have launched initiatives towards cars that assist people in making driving decisions. The ultimate goal of all these efforts are cars that can drive themselves. The benefit of such a technology could be enormous. At present, some 42,000 people die every year in traffic accidents in the U.S., mostly because of human error. Self-driving cars could make people safer and more productive. Self-driving cars is a true AI challenge. To endow cars with the ability to make decisions on behalf of their drivers, they have to sense, perceive, and act. Recent work in this field has extensively built on probabilistic representations and machine learning methods. 
The speaker will report on past work on the DARPA Grand Challenge, and discuss ongoing work on the Urban Challenge, DARPA’s follow-up program on self-driving cars.", "title": "" }, { "docid": "6d3410de121ffe037eafd5f30daa7252", "text": "One of the more important issues in the development of larger scale complex systems (product development period of two or more years) is accommodating changes to requirements. Requirements gathered for larger scale systems evolve during lengthy development periods due to changes in software and business environments, new user needs and technological advancements. Agile methods, which focus on accommodating change even late in the development lifecycle, can be adopted for the development of larger scale systems. However, as currently applied, these practices are not always suitable for the development of such systems. We propose a soft-structured framework combining the principles of agile and conventional software development that addresses the issue of rapidly changing requirements for larger scale systems. The framework consists of two parts: (1) a soft-structured requirements gathering approach that reflects the agile philosophy i.e., the Agile Requirements Generation Model and (2) a tailored development process that can be applied to either small or larger scale systems.", "title": "" }, { "docid": "1608c56c79af07858527473b2b0262de", "text": "The field weakening control strategy of interior permanent magnet synchronous motor for electric vehicles was studied in the paper. A field weakening control method based on gradient descent of voltage limit according to the ellipse and modified current setting were proposed. The field weakening region was determined by the angle between the constant torque direction and the voltage limited ellipse decreasing direction. The direction of voltage limited ellipse decreasing was calculated by using the gradient descent method. The current reference was modified by the field weakening direction and the magnitude of the voltage error according to the field weakening region. A simulink model was also founded by Matlab/Simulink, and the validity of the proposed strategy was proved by the simulation results.", "title": "" }, { "docid": "c8233fcbc4d07dbd076a4d7a4fdf3b0c", "text": "A 15-b l-Msample/s digitally self-calibrated pipeline analog-to-digital converter (ADC) is presented. A radix 1.93, 1 b per stage design is employed. The digital self-calibration accounts for capacitor mismatch, comparator offset, charge injection, finite op-amp gain, and capacitor nonlinearity contributing to DNL. A THD of –90 dB was measured with a 9.8756-kHz sine-wave input. The DNL was measured to be within +0.25 LSB at 15 b, and the INL was measured to be within +1.25 LSB at 15 b. The die area is 9.3 mm x 8.3 mm and operates on +4-V power supply with 1.8-W power dissipation. The ADC is fabricated in an 11-V, 4-GHz, 2.4-pm BiCMOS process.", "title": "" }, { "docid": "efb041a04f4294a828cafc0c6ea88ac5", "text": "Number of movies are released every week. There is a large amount of data related to the movies is available over the internet, because of that much data available, it is an interesting data mining topic. The prediction of movies is complex problem. Every viewer, producer, director’s production houses all are curious about the movies that how it will perform in the theatre. 
Many work has been done relating to movies using social networking, blogs articles but much less has been explored by the data and attributes related to a movie which is continuous and in different dimensions. We have used IMDB for our experimentation. We created dataset and then transformed it and applied machine learning approaches to build efficient models that can predict the movies popularity.", "title": "" }, { "docid": "f2b3643ca7a9a1759f038f15847d7617", "text": "Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of large-scale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. Results are presented on images from the publicly available Berkeley Segmentation dataset.", "title": "" }, { "docid": "1de30db68b41c0e29320397ca464bb75", "text": "In software development, bug reports provide crucial information to developers. However, these reports widely differ in their quality. We conducted a survey among developers and users of APACHE, ECLIPSE, and MOZILLA to find out what makes a good bug report.\n The analysis of the 466 responses revealed an information mismatch between what developers need and what users supply. Most developers consider steps to reproduce, stack traces, and test cases as helpful, which are at the same time most difficult to provide for users. Such insight is helpful to design new bug tracking tools that guide users at collecting and providing more helpful information.\n Our CUEZILLA prototype is such a tool and measures the quality of new bug reports; it also recommends which elements should be added to improve the quality. We trained CUEZILLA on a sample of 289 bug reports, rated by developers as part of the survey. In our experiments, CUEZILLA was able to predict the quality of 31--48% of bug reports accurately.", "title": "" }, { "docid": "197f4782bc11e18b435f4bc568b9de79", "text": "Protected-module architectures (PMAs) have been proposed to provide strong isolation guarantees, even on top of a compromised system. Unfortunately, Intel SGX – the only publicly available high-end PMA – has been shown to only provide limited isolation. An attacker controlling the untrusted page tables, can learn enclave secrets by observing its page access patterns. Fortifying existing protected-module architectures in a real-world setting against side-channel attacks is an extremely difficult task as system software (hypervisor, operating system, . . . ) needs to remain in full control over the underlying hardware. Most state-of-the-art solutions propose a reactive defense that monitors for signs of an attack. 
Such approaches unfortunately cannot detect the most novel attacks, suffer from false-positives, and place an extraordinary heavy burden on enclave-developers when an attack is detected. We present Heisenberg, a proactive defense that provides complete protection against page table based side channels. We guarantee that any attack will either be prevented or detected automatically before any sensitive information leaks. Consequently, Heisenberg can always securely resume enclave execution – even when the attacker is still present in the system. We present two implementations. Heisenberg-HW relies on very limited hardware features to defend against page-table-based attacks. We use the x86/SGX platform as an example, but the same approach can be applied when protected-module architectures are ported to different platforms as well. Heisenberg-SW avoids these hardware modifications and can readily be applied. Unfortunately, it’s reliance on Intel Transactional Synchronization Extensions (TSX) may lead to significant performance overhead under real-life conditions.", "title": "" }, { "docid": "b0be609048c8497f69991c7acc76dc9c", "text": "We propose a novel recurrent neural network-based approach to simultaneously handle nested named entity recognition and nested entity mention detection. The model learns a hypergraph representation for nested entities using features extracted from a recurrent neural network. In evaluations on three standard data sets, we show that our approach significantly outperforms existing state-of-the-art methods, which are feature-based. The approach is also efficient: it operates linearly in the number of tokens and the number of possible output labels at any token. Finally, we present an extension of our model that jointly learns the head of each entity mention.", "title": "" } ]
scidocsrr
e36e96392a43f1abbd82341feee681d5
Blockchain based trust & authentication for decentralized sensor networks
[ { "docid": "9f6e103a331ab52b303a12779d0d5ef6", "text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.", "title": "" } ]
[ { "docid": "a400a4c5c108b1c3bfff999429fd9478", "text": "Chemical genetic studies on acetyl-CoA carboxylases (ACCs), rate-limiting enzymes in long chain fatty acid biosynthesis, have greatly advanced the understanding of their biochemistry and molecular biology and promoted the use of ACCs as targets for herbicides in agriculture and for development of drugs for diabetes, obesity and cancers. In mammals, ACCs have both biotin carboxylase (BC) and carboxyltransferase (CT) activity, catalyzing carboxylation of acetyl-CoA to malonyl-CoA. Several classes of small chemicals modulate ACC activity, including cellular metabolites, natural compounds, and chemically synthesized products. This article reviews chemical genetic studies of ACCs and the use of ACCs for targeted therapy of cancers.", "title": "" }, { "docid": "77f0b791691135b90cf231d6061a0a5f", "text": "The hyperlink structure of Wikipedia forms a rich semantic network connecting entities and concepts, enabling it as a valuable source for knowledge harvesting. Wikipedia, as crowd-sourced data, faces various data quality issues which significantly impacts knowledge systems depending on it as the information source. One such issue occurs when an anchor text in a Wikipage links to a wrong Wikipage, causing the error link problem. While much of previous work has focused on leveraging Wikipedia for entity linking, little has been done to detect error links.\n In this paper, we address the error link problem, and propose algorithms to detect and correct error links. We introduce an efficient method to generate candidate error links based on iterative ranking in an Anchor Text Semantic Network. This greatly reduces the problem space. A more accurate pairwise learning model was used to detect error links from the reduced candidate error link set, while suggesting correct links in the same time. This approach is effective when data sparsity is a challenging issue. The experiments on both English and Chinese Wikipedia illustrate the effectiveness of our approach. We also provide a preliminary analysis on possible causes of error links in English and Chinese Wikipedia.", "title": "" }, { "docid": "d2a4efcd82d2c55fe243de6d023c5013", "text": "This paper examines a popular stock message board and finds slight daily predictability using supervised learning algorithms when combining daily sentiment with historical price information. Additionally, with the profit potential in trading stocks, it is of no surprise that a number of popular financial websites are attempting to capture investor sentiment by providing an aggregate of this negative and positive online emotion. We question if the existence of dishonest posters are capitalizing on the popularity of the boards by writing sentiment in line with their trading goals as a means of influencing others, and therefore undermining the purpose of the boards. 
We exclude these posters to determine if predictability increases, but find no discernible difference.", "title": "" }, { "docid": "1fd8b2621cdac10dcbaf1dd4b46f4aaf", "text": "BACKGROUND\nOf the few exercise intervention studies focusing on pediatric populations, none have confined the intervention to the scheduled physical education curriculum.\n\n\nOBJECTIVE\nTo examine the effect of an 8-month school-based jumping program on the change in areal bone mineral density (aBMD), in grams per square centimeter, of healthy third- and fourth-grade children.\n\n\nSTUDY DESIGN\nTen elementary schools were randomized to exercise (n = 63) and control groups (n = 81). Exercise groups did 10 tuck jumps 3 times weekly and incorporated jumping, hopping, and skipping into twice weekly physical education classes. Control groups did regular physical education classes. At baseline and after 8 months of intervention, we measured aBMD and lean and fat mass by dual-energy x-ray absorptiometry (Hologic QDR-4500). Calcium intake, physical activity, and maturity were estimated by questionnaire.\n\n\nRESULTS\nThe exercise group showed significantly greater change in femoral trochanteric aBMD (4.4% vs 3.2%; P <.05). There were no group differences at other sites. Results were similar after controlling for covariates (baseline aBMD change in height, change in lean, calcium, physical activity, sex, and ethnicity) in hierarchical regression.\n\n\nCONCLUSIONS\nAn easily implemented school-based jumping intervention augments aBMD at the trochanteric region in the prepubertal and early pubertal skeleton.", "title": "" }, { "docid": "9228218e663951e54f31d697997c80f9", "text": "In this paper, we describe a simple set of \"recipes\" for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.", "title": "" }, { "docid": "9836c71624933bb2edde6d30ab1b6273", "text": "Many people believe that sexual orientation (homosexuality vs. heterosexuality) is determined by education and social constraints. There are, however, a large number of studies indicating that prenatal factors have an important influence on this critical feature of human sexuality. Sexual orientation is a sexually differentiated trait (over 90% of men are attracted to women and vice versa). In animals and men, many sexually differentiated characteristics are organized during early life by sex steroids, and one can wonder whether the same mechanism also affects human sexual orientation. Two types of evidence support this notion. First, multiple sexually differentiated behavioral, physiological, or even morphological traits are significantly different in homosexual and heterosexual populations. 
Because some of these traits are known to be organized by prenatal steroids, including testosterone, these differences suggest that homosexual subjects were, on average, exposed to atypical endocrine conditions during development. Second, clinical conditions associated with significant endocrine changes during embryonic life often result in an increased incidence of homosexuality. It seems therefore that the prenatal endocrine environment has a significant influence on human sexual orientation but a large fraction of the variance in this behavioral characteristic remains unexplained to date. Genetic differences affecting behavior either in a direct manner or by changing embryonic hormone secretion or action may also be involved. How these biological prenatal factors interact with postnatal social factors to determine life-long sexual orientation remains to be determined.", "title": "" }, { "docid": "ccff1c7fa149a033b49c3a6330d4e0f3", "text": "Stroke is the leading cause of permanent adult disability in the U.S., frequently resulting in chronic motor impairments. Rehabilitation of the upper limb, particularly the hand, is especially important as arm and hand deficits post-stroke limit the performance of activities of daily living and, subsequently, functional independence. Hand rehabilitation is challenging due to the complexity of motor control of the hand. New instrumentation is needed to facilitate examination of the hand. Thus, a novel actuated exoskeleton for the index finger, the FingerBot, was developed to permit the study of finger kinetics and kinematics under a variety of conditions. Two such novel environments, one applying a spring-like extension torque proportional to angular displacement at each finger joint and another applying a constant extension torque at each joint, were compared in 10 stroke survivors with the FingerBot. Subjects attempted to reach targets located throughout the finger workspace. The constant extension torque assistance resulted in a greater workspace area (p < 0.02) and a larger active range of motion for the metacarpophalangeal joint (p < 0.01) than the spring-like assistance. Additionally, accuracy in terms of reaching the target was greater with the constant extension assistance as compared to no assistance. The FingerBot can be a valuable tool in assessing various hand rehabilitation paradigms following stroke.", "title": "" }, { "docid": "29e030bb4d8547d7615b8e3d17ec843d", "text": "This Paper examines the enforcement of occupational safety and health (OSH) regulations; it validates the state of enforcement of OSH regulations by extracting the salient issues that influence enforcement of OSH regulations in Nigeria. It’s the duty of the Federal Ministry of Labour and Productivity (Inspectorate Division) to enforce the Factories Act of 1990, while the Labour, Safety, Health and Welfare Bill of 2012 empowers the National Council for Occupational Safety and Health of Nigeria to administer the proceeding regulations on its behalf. Sadly enough, the impact of the enforcement authority is ineffective, as the key stakeholders pay less attention to OSH regulations; thus, rendering the OSH scheme dysfunctional and unenforceable, at the same time impeding OSH development. For optimum OSH in Nigeria, maximum enforcement and compliance with the regulations must be in place. This paper, which is based on conceptual analysis, reviews literature gathered through desk literature search. 
It identified issues to OSH enforcement such as: political influence, bribery and corruption, insecurity, lack of governmental commitment, inadequate legislation inter alia. While recommending ways to improve the enforcement of OSH regulations, it states that self-regulatory style of enforcing OSH regulations should be adopted by organisations. It also recommends that more OSH inspectors be recruited; local government authorities empowered to facilitate the enforcement of OSH regulations. Moreover, the study encourages organisations to champion OSH enforcement, as it is beneficial to them; it concludes that the burden of OSH improvement in Nigeria is on the government, educational authorities, organisations and trade unions.", "title": "" }, { "docid": "1e8195deeecb793c65b02924f2da3ef2", "text": "This paper provides an introductory survey of a class of optimization problems known as bilevel programming. We motivate this class through a simple application, and then proceed with the general formulation of bilevel programs. We consider various cases (linear, linear-quadratic, nonlinear), describe their main properties and give an overview of solution approaches.", "title": "" }, { "docid": "1dd1d5304cad393ade793b3435858ce4", "text": "With today’s ubiquity and popularity of social network applications, the ability to analyze and understand large networks in an efficient manner becomes critically important. However, as networks become larger and more complex, reasoning about social dynamics via simple statistics is not a feasible option. To overcome these limitations, we can rely on visual metaphors. Visualization nowadays is no longer a passive process that produces images from a set of numbers. Recent years have witnessed a convergence of social network analytics and visualization, coupled with interaction, that is changing the way analysts understand and characterize social networks. In this chapter, we discuss the main goal of visualization and how different metaphors are aimed towards elucidating different aspects of social networks, such as structure and semantics. We also describe a number of methods where analytics and visualization are interwoven towards providing a better comprehension of social structure and dynamics.", "title": "" }, { "docid": "89a00db08d8a439ab1528943c38904b2", "text": "In biomedical application, conventional hard robots have been widely used for a long time. However, when they come in contact with human body especially for rehabilitation purposes, the hard and stuff nature of the robots have received a lot of drawbacks as they interfere with movement. Recently, soft robots are drawing attention due to their high customizability and compliance, especially soft actuators. In this paper, we present a soft pneumatic bending actuator and characterize the performance of the actuator such as radius of curvature and force output during actuation. The characterization was done by a simple measurement system that we developed. This work serves as a guideline for designing soft bending actuators with application-specific requirements, for example, soft exoskeleton for rehabilitation. Keywords— Soft Robots, Actuators, Wearable, Hand Exoskeleton, Rehabilitation.", "title": "" }, { "docid": "7265c5e3f64b0a19592e7b475649433c", "text": "A power transformer outage has a dramatic financial consequence not only for electric power systems utilities but also for interconnected customers. 
The service reliability of this important asset largely depends upon the condition of the oil-paper insulation. Therefore, by keeping the qualities of oil-paper insulation system in pristine condition, the maintenance planners can reduce the decline rate of internal faults. Accurate diagnostic methods for analyzing the condition of transformers are therefore essential. Currently, there are various electrical and physicochemical diagnostic techniques available for insulation condition monitoring of power transformers. This paper is aimed at the description, analysis and interpretation of modern physicochemical diagnostics techniques for assessing insulation condition in aged transformers. Since fields and laboratory experiences have shown that transformer oil contains about 70% of diagnostic information, the physicochemical analyses of oil samples can therefore be extremely useful in monitoring the condition of power transformers.", "title": "" }, { "docid": "bb547f90a98aa25d0824dc63b9de952d", "text": "When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.", "title": "" }, { "docid": "16a18f742d67e4dfb660b4ce3b660811", "text": "Container-based virtualization has become the de-facto standard for deploying applications in data centers. However, deployed containers frequently include a wide-range of tools (e.g., debuggers) that are not required for applications in the common use-case, but they are included for rare occasions such as in-production debugging. As a consequence, containers are significantly larger than necessary for the common case, thus increasing the build and deployment time. CNTR1 provides the performance benefits of lightweight containers and the functionality of large containers by splitting the traditional container image into two parts: the “fat” image — containing the tools, and the “slim” image — containing the main application. At run-time, CNTR allows the user to efficiently deploy the “slim” image and then expand it with additional tools, when and if necessary, by dynamically attaching the “fat” image. To achieve this, CNTR transparently combines the two container images using a new nested namespace, without any modification to the application, the container manager, or the operating system. We have implemented CNTR in Rust, using FUSE, and incorporated a range of optimizations. CNTR supports the full Linux filesystem API, and it is compatible with all container implementations (i.e., Docker, rkt, LXC, systemd-nspawn). Through extensive evaluation, we show that CNTR incurs reasonable performance overhead while reducing, on average, by 66.6% the image size of the Top-50 images available on Docker Hub.", "title": "" }, { "docid": "6037693a098f8f2713b2316c75447a50", "text": "Presently, monoclonal antibodies (mAbs) therapeutics have big global sales and are starting to receive competition from biosimilars. We previously reported that the nano-surface and molecular-orientation limited (nSMOL) proteolysis which is optimal method for bioanalysis of antibody drugs in plasma. The nSMOL is a Fab-selective limited proteolysis, which utilize the difference of protease nanoparticle diameter (200 nm) and antibody resin pore diameter (100 nm). 
In this report, we have demonstrated that the full validation for chimeric antibody Rituximab bioanalysis in human plasma using nSMOL proteolysis. The immunoglobulin fraction was collected using Protein A resin from plasma, which was then followed by the nSMOL proteolysis using the FG nanoparticle-immobilized trypsin under a nondenaturing condition at 50°C for 6 h. After removal of resin and nanoparticles, Rituximab signature peptides (GLEWIGAIYPGNGDTSYNQK, ASGYTFTSYNMHWVK, and FSGSGSGTSYSLTISR) including complementarity-determining region (CDR) and internal standard P14R were simultaneously quantified by multiple reaction monitoring (MRM). This quantification of Rituximab using nSMOL proteolysis showed lower limit of quantification (LLOQ) of 0.586 µg/mL and linearity of 0.586 to 300 µg/mL. The intra- and inter-assay precision of LLOQ, low quality control (LQC), middle quality control (MQC), and high quality control (HQC) was 5.45-12.9% and 11.8, 5.77-8.84% and 9.22, 2.58-6.39 and 6.48%, and 2.69-7.29 and 4.77%, respectively. These results indicate that nSMOL can be applied to clinical pharmacokinetics study of Rituximab, based on the precise analysis.", "title": "" }, { "docid": "2438479795a9673c36138212b61c6d88", "text": "Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning.", "title": "" }, { "docid": "fdb0c8d2a4c4bbe68b7cffe58adbd074", "text": "Endowing a chatbot with personality is challenging but significant to deliver more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent to a pre-specified personality or profile. We present a method that uses generic conversation data from social media (without speaker identities) to generate profile-coherent responses. The central idea is to detect whether a profile should be used when responding to a user post (by a profile detector), and if necessary, select a key-value pair from the profile to generate a response forward and backward (by a bidirectional decoder) so that a personalitycoherent response can be generated. Furthermore, in order to train the bidirectional decoder with generic dialogue data, a position detector is designed to predict a word position from which decoding should start given a profile value. 
Manual and automatic evaluation shows that our model can deliver more coherent, natural, and diversified responses.", "title": "" }, { "docid": "0a3cac4df8679fcc9b53a32b3dcaa695", "text": "This paper describes the design of a simple, low-cost microcontroller based heart rate measuring device with LCD output. Heart rate of the subject is measured from the finger using optical sensors and the rate is then averaged and displayed on a text based LCD.", "title": "" }, { "docid": "1deeae749259ff732ad3206dc4a7e621", "text": "In traditional active learning, there is only one labeler that always returns the ground truth of queried labels. However, in many applications, multiple labelers are available to offer diverse qualities of labeling with different costs. In this paper, we perform active selection on both instances and labelers, aiming to improve the classification model most with the lowest cost. While the cost of a labeler is proportional to its overall labeling quality, we also observe that different labelers usually have diverse expertise, and thus it is likely that labelers with a low overall quality can provide accurate labels on some specific instances. Based on this fact, we propose a novel active selection criterion to evaluate the cost-effectiveness of instance-labeler pairs, which ensures that the selected instance is helpful for improving the classification model, and meanwhile the selected labeler can provide an accurate label for the instance with a relative low cost. Experiments on both UCI and real crowdsourcing data sets demonstrate the superiority of our proposed approach on selecting cost-effective queries.", "title": "" }, { "docid": "101bcd956dcdb0fff3ecf78aa841314a", "text": "HCI research has increasingly examined how sensing technologies can help people capture and visualize data about their health-related behaviors. Yet, few systems help people reflect more fundamentally on the factors that influence behaviors such as physical activity (PA). To address this research gap, we take a novel approach, examining how such reflections can be stimulated through a medium that generations of families have used for reflection and teaching: storytelling. Through observations and interviews, we studied how 13 families interacted with a low-fidelity prototype, and their attitudes towards this tool. Our prototype used storytelling and interactive prompts to scaffold reflection on factors that impact children's PA. We contribute to HCI research by characterizing how families interacted with a story-driven reflection tool, and how such a tool can encourage critical processes for behavior change. Informed by the Transtheoretical Model, we present design implications for reflective informatics systems.", "title": "" } ]
scidocsrr
adfb5416be2b38bc4afa93c8a5160db9
Towards a Gamification Framework for Software Process Improvement Initiatives: Construction and Validation
[ { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "44bd9d0b66cb8d4f2c4590b4cb724765", "text": "AIM\nThis paper is a description of inductive and deductive content analysis.\n\n\nBACKGROUND\nContent analysis is a method that may be used with either qualitative or quantitative data and in an inductive or deductive way. Qualitative content analysis is commonly used in nursing studies but little has been published on the analysis process and many research books generally only provide a short description of this method.\n\n\nDISCUSSION\nWhen using content analysis, the aim was to build a model to describe the phenomenon in a conceptual form. Both inductive and deductive analysis processes are represented as three main phases: preparation, organizing and reporting. The preparation phase is similar in both approaches. The concepts are derived from the data in inductive content analysis. Deductive content analysis is used when the structure of analysis is operationalized on the basis of previous knowledge.\n\n\nCONCLUSION\nInductive content analysis is used in cases where there are no previous studies dealing with the phenomenon or when it is fragmented. A deductive approach is useful if the general aim was to test a previous theory in a different situation or to compare categories at different time periods.", "title": "" }, { "docid": "f25cfe1f277071a033b9665dd893005d", "text": "This paper presents a review of the literature on gamification design frameworks. Gamification, understood as the use of game design elements in other contexts for the purpose of engagement, has become a hot topic in the recent years. However, there's also a cautionary tale to be extracted from Gartner's reports on the topic: many gamification-based solutions fail because, mostly, they have been created on a whim, or mixing bits and pieces from game components, without a clear and formal design process. The application of a definite design framework aims to be a path to success. Therefore, before starting the gamification of a process, it is very important to know which frameworks or methods exist and their main characteristics. The present review synthesizes the process of gamification design for a successful engagement experience. This review categorizes existing approaches and provides an assessment of their main features, which may prove invaluable to developers of gamified solutions at different levels and scopes.", "title": "" }, { "docid": "2ccb76e0cda888491ebb37bb316c5490", "text": "For any Software Process Improvement (SPI) initiative to succeed human factors, in particular, motivation and commitment of the people involved should be kept in mind. In fact, Organizational Change Management (OCM) has been identified as an essential knowledge area for any SPI initiative. However, enough attention is still not given to the human factors and therefore, the high degree of failures in the SPI initiatives is directly linked to a lack of commitment and motivation. Gamification discipline allows us to define mechanisms that drive people’s motivation and commitment towards the development of tasks in order to encourage and accelerate the acceptance of an SPI initiative. In this paper, a gamification framework oriented to both organization needs and software practitioners groups involved in an SPI initiative is defined. 
This framework tries to take advantage of the transverse nature of gamification in order to apply its Critical Success Factors (CSF) to the organizational change management of an SPI. Gamification framework guidelines have been validated by some qualitative methods. Results show some limitations that threaten the reliability of this validation. These require further empirical validation of a software organization.", "title": "" } ]
[ { "docid": "b7147d00aff93ee7b3351bd87f8b365c", "text": "Vending Machine as we all know is a machine which can vend different products which is more like an automated process with no requirement of man handling which we normally see in fast moving cities because of fast paced life. This paper compares different aspects like area, timing constraint, speed, power dissipation of a vending machine with 2 different design styles algorithm while installation. FSM based algorithm has been utilized to simulate model, synthesize the machine on the stratix III family of FPGA provided with quartus design tool which is logic device design software from Altera.", "title": "" }, { "docid": "1c104704a868e3e40583f1797b0e8439", "text": "Mobile e-commerce or M-commerce describes online sales transaction that uses wireless or mobile electronic devices. These wireless devices interact with computer networks that have the ability to conduct online merchandise purchases. The rapid growth of mobile commerce is being driven by number of factors – increasing mobile user base, rapid adoption of online commerce and technological advances. These systems provide the potential for organizations and users to perform various commerce-related tasks without regard to time and location. Owing to wireless nature of these devices, there are many issues that affect the functioning of m-commerce. This paper identifies and discusses these issues which include technological issues and application issues pertaining to M-commerce. Keywords— mobile, e-commerce, M-commerce", "title": "" }, { "docid": "6f87969a98451881a9c9da9c8a05f219", "text": "The possibility of filtering light cloud cover in satellite imagery to expose objects beneath the clouds is discussed. A model of the cloud distortion process is developed and a transformation is introduced which makes the signal and noise additive so that optimum linear filtering techniques can be applied. This homomorphic filtering can be done in the two-dimensional image plane, or it can be extended to include the spectral dimension on multispectral data. The three-dimensional filter is especially promising because clouds tend to follow a common spectral response. The noise statistics can be estimated directly from the noisy data. Results from a computer simulation and from Landsat data are shown.", "title": "" }, { "docid": "47afccb5e7bcdade764666f3b5ab042e", "text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.", "title": "" }, { "docid": "8df196edbb812198ebe1f86e81f38481", "text": "Ever since the formulation of Rhetorical Structure Theory (RST) by Mann and Thompson, researchers have debated about what is the ‘right’ number of relations. 
One proposal is based on the discourse markers (connectives) signalling the presence of a particular relationship. In this paper, I discuss the adequacy of such a proposal, in the light of two different corpus studies: a study of conversations, and a study of newspaper articles. The two corpora were analyzed in terms of rhetorical relations, and later coded for external signals of those relations. The conclusion in both studies is that there are a high number of relations (between 60% and 70% of the total, on average) that are not signalled. A comparison between the two corpora suggests that genre-specific factors may affect which relations are signalled, and which are not.", "title": "" }, { "docid": "a2673b70bf6c7cf50f2f4c4db2845e19", "text": "This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems, and the associated Build It Break It, The Language Edition shared task. The goal of this workshop was to bring together researchers in NLP and linguistics with a shared task aimed at testing the generalizability of NLP systems beyond the distributions of their training data. We describe the motivation, setup, and participation of the shared task, provide discussion of some highlighted results, and discuss lessons learned.", "title": "" }, { "docid": "2342c92f91c243474a53323a476ae3d9", "text": "Gesture recognition has emerged recently as a promising application in our daily lives. Owing to low cost, prevalent availability, and structural simplicity, RFID shall become a popular technology for gesture recognition. However, the performance of existing RFID-based gesture recognition systems is constrained by unfavorable intrusiveness to users, requiring users to attach tags on their bodies. To overcome this, we propose GRfid, a novel device-free gesture recognition system based on phase information output by COTS RFID devices. Our work stems from the key insight that the RFID phase information is capable of capturing the spatial features of various gestures with low-cost commodity hardware. In GRfid, after data are collected by hardware, we process the data by a sequence of functional blocks, namely data preprocessing, gesture detection, profiles training, and gesture recognition, all of which are well-designed to achieve high performance in gesture recognition. We have implemented GRfid with a commercial RFID reader and multiple tags, and conducted extensive experiments in different scenarios to evaluate its performance. The results demonstrate that GRfid can achieve an average recognition accuracy of 96.5 and 92.8 percent in the identical-position and diverse-positions scenario, respectively. Moreover, experiment results show that GRfid is robust against environmental interference and tag orientations.", "title": "" }, { "docid": "4003b1a03be323c78e98650895967a07", "text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. 
Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.", "title": "" }, { "docid": "f7e19e14c90490e1323e47860d21ec4d", "text": "There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery — including DNA-sequencing technologies and analysis algorithms — need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision.", "title": "" }, { "docid": "5af6896d3ffa131e837a7bad57643bde", "text": "With the advent of commodity 3D capturing devices and better 3D modeling tools, 3D shape content is becoming increasingly prevalent. Therefore, the need for shape retrieval algorithms to handle large-scale shape repositories is more and more important. This track provides a benchmark to evaluate large-scale 3D shape retrieval based on the ShapeNet dataset. It is a continuation of the SHREC 2016 large-scale shape retrieval challenge with a goal of measuring progress with recent developments in deep learning methods for shape retrieval. We use ShapeNet Core55, which provides more than 50 thousands models over 55 common categories in total for training and evaluating several algorithms. Eight participating teams have submitted a variety of retrieval methods which were evaluated on several standard information retrieval performance metrics. The approaches vary in terms of the 3D representation, using multi-view projections, point sets, volumetric grids, or traditional 3D shape descriptors. Overall performance on the shape retrieval task has improved significantly compared to the iteration of this competition in SHREC 2016. We release all data, results, and evaluation code for the benefit of the community and to catalyze future research into large-scale 3D shape retrieval (website: https://www.shapenet.org/shrec17).", "title": "" }, { "docid": "8d5dd3f590dee87ea609278df3572f6e", "text": "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. 
This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.", "title": "" }, { "docid": "4bc74a746ef958a50bb8c542aa25860f", "text": "A new approach to super resolution line spectrum estimation in both temporal and spatial domain using a coprime pair of samplers is proposed. Two uniform samplers with sample spacings MT and NT are used where M and N are coprime and T has the dimension of space or time. By considering the difference set of this pair of sample spacings (which arise naturally in computation of second order moments), sample locations which are O(MN) consecutive multiples of T can be generated using only O(M + N) physical samples. In order to efficiently use these O(MN) virtual samples for super resolution spectral estimation, a novel algorithm based on the idea of spatial smoothing is proposed, which can be used for estimating frequencies of sinusoids buried in noise as well as for estimating Directions-of-Arrival (DOA) of impinging signals on a sensor array. This technique allows us to construct a suitable positive semidefinite matrix on which subspace based algorithms like MUSIC can be applied to detect O(MN) spectral lines using only O(M + N) physical samples.", "title": "" }, { "docid": "6859a7d2838708a2361e2e0b0cf1819c", "text": "In edge computing, content and service providers aim at enhancing user experience by providing services closer to the user. At the same time, infrastructure providers such as access ISPs aim at utilizing their infrastructure by selling edge resources to these content and service providers. In this context, auctions are widely used to set a price that reflects supply and demand in a fair way. In this work, we propose RAERA, the first robust auction scheme for edge resource allocation that is suitable to work with the market uncertainty typical for edge resources---here, customers typically have different valuation distribution for a wide range of heterogeneous resources. Additionally, RAERA encourages truthful bids and allows the infrastructure provider to maximize its break-even profit. Our preliminary evaluations highlight that REARA offers a time dependent fair price. Sellers can achieve higher revenue in the range of 5%-15% irrespective of varying demands and the buyers pay up to 20% lower than their top bid amount.", "title": "" }, { "docid": "e78f8f96af1c589487273c1fecfa0f7c", "text": "BACKGROUND\nAtrial fibrillation (AFib) is the most common form of heart arrhythmia and a potent risk factor for stroke. Nonvitamin K antagonist oral anticoagulants (NOACs) are routinely prescribed to manage AFib stroke risk; however, nonadherence to treatment is a concern. Additional tools that support self-care and medication adherence may benefit patients with AFib.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the perceived usability and usefulness of a mobile app designed to support self-care and treatment adherence for AFib patients who are prescribed NOACs.\n\n\nMETHODS\nA mobile app to support AFib patients was previously developed based on early stage interview and usability test data from clinicians and patients. 
An exploratory pilot study consisting of naturalistic app use, surveys, and semistructured interviews was then conducted to examine patients' perceptions and everyday use of the app.\n\n\nRESULTS\nA total of 12 individuals with an existing diagnosis of nonvalvular AFib completed the 4-week study. The average age of participants was 59 years. All participants somewhat or strongly agreed that the app was easy to use, and 92% (11/12) reported being satisfied or very satisfied with the app. Participant feedback identified changes that may improve app usability and usefulness for patients with AFib. Areas of usability improvement were organized by three themes: app navigation, clarity of app instructions and design intent, and software bugs. Perceptions of app usefulness were grouped by three key variables: core needs of the patient segment, patient workflow while managing AFib, and the app's ability to support the patient's evolving needs.\n\n\nCONCLUSIONS\nThe results of this study suggest that mobile tools that target self-care and treatment adherence may be helpful to AFib patients, particularly those who are newly diagnosed. Additionally, participant feedback provided insight into the varied needs and health experiences of AFib patients, which may improve the design and targeting of the intervention. Pilot studies that qualitatively examine patient perceptions of usability and usefulness are a valuable and often underutilized method for assessing the real-world acceptability of an intervention. Additional research evaluating the AFib Connect mobile app over a longer period, and including a larger, more diverse sample of AFib patients, will be helpful for understanding whether the app is perceived more broadly to be useful and effective in supporting patient self-care and medication adherence.", "title": "" }, { "docid": "6a0f60881dddc5624787261e0470b571", "text": "Title of Dissertation: AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES Marco David Adelfio, Doctor of Philosophy, 2015 Dissertation directed by: Professor Hanan Samet Department of Computer Science Data tables on the Web hold large quantities of information, but are difficult to search, browse, and merge using existing systems. This dissertation presents a collection of techniques for extracting, processing, and querying tables that contain geographic data, by harnessing the coherence of table structures for retrieval tasks. Data tables, including spreadsheets, HTML tables, and those found in rich document formats, are the standard way of communicating structured data for typical computer users. Notably, geographic tables (i.e., those containing names of locations) constitute a large fraction of publicly-available data tables and are ripe for exposure to Internet users who are increasingly comfortable interacting with geographic data using web-based maps. Of particular interest is the creation of a large repository of geographic data tables that would enable novel queries such as “find vacation itineraries geographically similar to mine” for use in trip planning or “find demographic datasets that cover regions X, Y, and Z” for sociological research. In support of these goals, this dissertation identifies several methods for using the structure and context of data tables to improve the interpretation of the contents, even in the presence of ambiguity. 
First, a method for identifying functional components of data tables is presented, capitalizing on techniques for sequence labeling that are used in natural language processing. Next, a novel automated method for converting place references to physical latitude/longitude values, a process known as geotagging, is applied to tables with high accuracy. A classification procedure for identifying a specific class of geographic table, the travel itinerary, is also described, which borrows inspiration from optimization techniques for the traveling salesman problem (TSP). Finally, methods for querying spatially similar tables are introduced and several mechanisms for visualizing and interacting with the extracted geographic data are explored. AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES", "title": "" }, { "docid": "fab5740f2d6392378155e2d71382dc8e", "text": "Education provides a plethora of tools that can be used (alone or combined) for achieving better results. One of the most recent technological advances that can be used as an educational tool is Augmented Reality, a technology that can combine virtual and physical objects in order to enhance the real world. However, little is known about this technology and its possible applications in primary and secondary education. This paper consists a literature review focused on AR and its current and future incorporation in modern education via various context aware technologies (e.g. tablets, smartphones) which can provide opportunities for more interactive and joyful educational experiences. Also, it is described the possibility of implementing AR in Open Course Project situations, such as the one which is available at the Eastern Macedonia and Thrace Institute of Technology. Its purpose is to inform “creators” and stimulate “users” so that the benefits of this promising technology may be diffused throughout the educational process.", "title": "" }, { "docid": "054ed84aa377673d1327dedf26c06c59", "text": "App Stores, such as Google Play or the Apple Store, allow users to provide feedback on apps by posting review comments and giving star ratings. These platforms constitute a useful electronic mean in which application developers and users can productively exchange information about apps. Previous research showed that users feedback contains usage scenarios, bug reports and feature requests, that can help app developers to accomplish software maintenance and evolution tasks. However, in the case of the most popular apps, the large amount of received feedback, its unstructured nature and varying quality can make the identification of useful user feedback a very challenging task. In this paper we present a taxonomy to classify app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques: (1) Natural Language Processing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify app reviews into the proposed categories. We show that the combined use of these techniques allows to achieve better results (a precision of 75% and a recall of 74%) than results obtained using each technique individually (precision of 70% and a recall of 67%).", "title": "" }, { "docid": "d8c5ff196db9acbea12e923b2dcef276", "text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. 
Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.", "title": "" }, { "docid": "f5519eff0c13e0ee42245fdf2627b8ae", "text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.", "title": "" } ]
scidocsrr
db4152c47c5cd5ad07b084b3ccf1beab
Collaborative Control for a Robotic Wheelchair: Evaluation of Performance, Attention, and Workload
[ { "docid": "20baadf2e55bcdeced824920c5ae6811", "text": "Moving beyond the stimulus contained in observable agent behaviour, i.e. understanding the underlying intent of the observed agent is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot–human interaction, decision support and intelligent tutoring. This review paper examines approaches for performing action recognition and prediction of intent from a multi-disciplinary perspective, in both single robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.", "title": "" } ]
[ { "docid": "2e6149db737759e4033b60c9041bdecf", "text": "We propose IMAGINATION (IMAge Generation for INternet AuthenticaTION), a system for the generation of attack-resistant, user-friendly, image-based CAPTCHAs. In our system, we produce controlled distortions on randomly chosen images and present them to the user for annotation from a given list of words. The distortions are performed in a way that satisfies the incongruous requirements of low perceptual degradation and high resistance to attack by content-based image retrieval systems. Word choices are carefully generated to avoid ambiguity as well as to avoid attacks based on the choices themselves. Preliminary results demonstrate the attack-resistance and user-friendliness of our system compared to text-based CAPTCHAs.", "title": "" }, { "docid": "c12a5822429d1dcfd78221663fff4519", "text": "In this paper we discuss the stability properties of convolutional neural networks. Convolutional neural networks are widely used in machine learning. In classification they are mainly used as feature extractors. Ideally, we expect similar features when the inputs are from the same class. That is, we hope to see a small change in the feature vector with respect to a deformation on the input signal. This can be established mathematically, and the key step is to derive the Lipschitz properties. Further, we establish that the stability results can be extended for more general networks. We give a formula for computing the Lipschitz bound, and compare it with other methods to show it is closer to the optimal value.", "title": "" }, { "docid": "ac8aa79e25628f68d51bf7c157428a74", "text": "In this article, we explore the relevance and contribution of new signals in a broader interpretation of multimedia for personal health. We present how core multimedia research is becoming an important enabler for applications with the potential for significant societal impact.", "title": "" }, { "docid": "a6e0bbc761830bc74d58793a134fa75b", "text": "With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data has a skewed data distribution, interesting event detection also needs to address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. Till now, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it is still impracticable for a single model to work well for different applications. Hence, in this paper, a new ensemble deep learning framework is proposed which can be utilized in various scenarios and datasets. The proposed framework is able to handle the over-fitting issue as well as the information losses caused by single models. Moreover, it alleviates the imbalanced data problem in real-world multimedia data. The whole framework includes a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on the performance metrics for the imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and also as the weak learners in the ensemble module. The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). The extensive experimental results illustrate the advantage and effectiveness of the proposed framework. 
It also demonstrates that the proposed framework outperforms several well-known deep learning methods, as well as the conventional features integrated with different classifiers.", "title": "" }, { "docid": "44672e9dc60639488800ad4ae952f272", "text": "The GPS technology and new forms of urban geography have changed the paradigm for mobile services. As such, the abundant availability of GPS traces has enabled new ways of doing taxi business. Indeed, recent efforts have been made on developing mobile recommender systems for taxi drivers using Taxi GPS traces. These systems can recommend a sequence of pick-up points for the purpose of maximizing the probability of identifying a customer with the shortest driving distance. However, in the real world, the income of taxi drivers is strongly correlated with the effective driving hours. In other words, it is more critical for taxi drivers to know the actual driving routes to minimize the driving time before finding a customer. To this end, in this paper, we propose to develop a cost-effective recommender system for taxi drivers. The design goal is to maximize their profits when following the recommended routes for finding passengers. Specifically, we first design a net profit objective function for evaluating the potential profits of the driving routes. Then, we develop a graph representation of road networks by mining the historical taxi GPS traces and provide a Brute-Force strategy to generate optimal driving route for recommendation. However, a critical challenge along this line is the high computational cost of the graph based approach. Therefore, we develop a novel recursion strategy based on the special form of the net profit function for searching optimal candidate routes efficiently. Particularly, instead of recommending a sequence of pick-up points and letting the driver decide how to get to those points, our recommender system is capable of providing an entire driving route, and the drivers are able to find a customer for the largest potential profit by following the recommendations. This makes our recommender system more practical and profitable than other existing recommender systems. Finally, we carry out extensive experiments on a real-world data set collected from the San Francisco Bay area and the experimental results clearly validate the effectiveness of the proposed recommender system.", "title": "" }, { "docid": "c80a60778e5c8e3349ce13475176a118", "text": "Future homes will be populated with large numbers of robots with diverse functionalities, ranging from chore robots to elder care robots to entertainment robots. While household robots will offer numerous benefits, they also have the potential to introduce new security and privacy vulnerabilities into the home. Our research consists of three parts. First, to serve as a foundation for our study, we experimentally analyze three of today's household robots for security and privacy vulnerabilities: the WowWee Rovio, the Erector Spykee, and the WowWee RoboSapien V2. Second, we synthesize the results of our experimental analyses and identify key lessons and challenges for securing future household robots. Finally, we use our experiments and lessons learned to construct a set of design questions aimed at facilitating the future development of household robots that are secure and preserve their users' privacy.", "title": "" }, { "docid": "4f3066f6d45bc48cfe655642f384e09a", "text": "There are two competing theories of facial expression recognition. 
Some researchers have suggested that it is an example of categorical perception. In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, surprise expressions lie between happiness and fear expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.", "title": "" }, { "docid": "974ce5e1213491d9c7a5afc1a98ebc64", "text": "A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.", "title": "" }, { "docid": "95fb51b0b6d8a3a88edfc96157233b10", "text": "Various types of video can be captured with fisheye lenses; their wide field of view is particularly suited to surveillance video. However, fisheye lenses introduce distortion, and this changes as objects in the scene move, making fisheye video difficult to interpret. Current still fisheye image correction methods are either limited to small angles of view, or are strongly content dependent, and therefore unsuitable for processing video streams. We present an efficient and robust scheme for fisheye video correction, which minimizes time-varying distortion and preserves salient content in a coherent manner. Our optimization process is controlled by user annotation, and takes into account a wide set of measures addressing different aspects of natural scene appearance. Each is represented as a quadratic term in an energy minimization problem, leading to a closed-form solution via a sparse linear system. We illustrate our method with a range of examples, demonstrating coherent natural-looking video output. 
The visual quality of individual frames is comparable to those produced by state-of-the-art methods for fisheye still photograph correction.", "title": "" }, { "docid": "9f6f00bf0872c54fbf2ec761bf73f944", "text": "Nanoscience emerged in the late 1980s and is developed and applied in China since the middle of the 1990s. Although nanotechnologies have been less developed in agronomy than other disciplines, due to less investment, nanotechnologies have the potential to improve agricultural production. Here, we review more than 200 reports involving nanoscience in agriculture, livestock, and aquaculture. The major points are as follows: (1) nanotechnologies used for seeds and water improved plant germination, growth, yield, and quality. (2) Nanotechnologies could increase the storage period for vegetables and fruits. (3) For livestock and poultry breeding, nanotechnologies improved animals immunity, oxidation resistance, and production and decreased antibiotic use and manure odor. For instance, the average daily gain of pig increased by 9.9–15.3 %, the ratio of feedstuff to weight decreased by 7.5–10.3 %, and the diarrhea rate decreased by 55.6–66.7 %. (4) Nanotechnologies for water disinfection in fishpond increased water quality and increased yields and survivals of fish and prawn. (5) Nanotechnologies for pesticides increased pesticide performance threefold and reduced cost by 50 %. (6) Nano urea increased the agronomic efficiency of nitrogen fertilization by 44.5 % and the grain yield by 10.2 %, versus normal urea. (7) Nanotechnologies are widely used for rapid detection and diagnosis, notably for clinical examination, food safety testing, and animal epidemic surveillance. (8) Nanotechnologies may also have adverse effects that are so far not well known.", "title": "" }, { "docid": "63fec4e13fe2a909f9125a973295bf47", "text": "The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. 
In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task.", "title": "" }, { "docid": "363cdcc34c855e712707b5b920fbd113", "text": "This paper presents the design and experimental validation of an anthropomorphic underactuated robotic hand with 15 degrees of freedom and a single actuator. First, the force transmission design of underactuated fingers is revisited. An optimal geometry of the tendon-driven fingers is then obtained. Then, underactuation between the fingers is addressed using differential mechanisms. Tendon routings are proposed and verified experimentally. Finally, a prototype of a 15-degree-of-freedom hand is built and tested. The results demonstrate the feasibility of a humanoid hand with many degrees of freedom and one single degree of actuation.", "title": "" }, { "docid": "7e08ddffc3a04c6dac886e14b7e93907", "text": "The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves `1-norm and trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound generalization error in the link prediction problem. We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets.", "title": "" }, { "docid": "30dfcf624badf766c3c7070548a47af4", "text": "The primary purpose of this paper is to stimulate discussion about a research agenda for a new interdisciplinary field. This field-the study of coordination-draws upon a variety of different disciplines including computer science, organization theory, management science, economics, and psychology. Work in this new area will include developing a body of scientific theory, which we will call \"coordination theory,\" about how the activities of separate actors can be coordinated. One important use for coordination theory will be in developing and using computer and communication systems to help people coordinate their activities in new ways. We will call these systems \"coordination technology.\" Rationale There are four reasons why work in this area is timely: (1) In recent years, large numbers of people have acquired direct access to computers. These computers are now beginning to be connected to each other. Therefore, we now have, for the first time, an opportunity for vastly larger numbers of people to use computing and communications capabilities to help coordinate their work. For example, specialized new software has been developed to (a) support multiple authors working together on the same document, (b) help people display and manipulate information more effectively in face-to-face meetings, and (c) help people intelligently route and process electronic messages. It already appears likely that there will be commercially successful products of this new type (often called \"computer supported cooperative work\" or \"groupware\"), and to some observers these applications herald a paradigm shift in computer usage as significant as the earlier shifts to time-sharing and personal computing. 
It is less clear whether the continuing development of new computer applications in this area will depend solely on the intuitions of successful designers or whether it will also be guided by a coherent underlying theory of how people coordinate their activities now and how they might do so differently with computer support. (2) In the long run, the dramatic improvements in the costs and capabilities of information technologies are changing, by orders of magnitude, the constraints on how certain kinds of communication and coordination can occur. At the same time, there is a pervasive feeling in American business that the pace of change is accelerating and that we need to create more flexible and adaptive organizations. Together, these changes may soon lead us across a threshold where entirely new ways of organizing human activities become desirable. For example, new capabilities for communicating information faster, less expensively, and …", "title": "" }, { "docid": "893408bc41eb46a75fc59e23f74339cf", "text": "We discuss cutting stock problems (CSPs) from the perspective of the paper industry and the financial impact they make. Exact solution approaches and heuristics have been used for decades to support cutting stock decisions in that industry. We have developed polylithic solution techniques integrated in our ERP system to solve a variety of cutting stock problems occurring in real-world settings. Among them is the simultaneous minimization of the number of rolls and the number of patterns while not allowing any overproduction. For two cases, CSPs minimizing underproduction and CSPs with master rolls of different widths and availability, we have developed new column generation approaches. The methods are numerically tested using real-world data instances. An assembly of currently solved and unsolved standard and non-standard CSPs at the forefront of research is put in perspective.", "title": "" }, { "docid": "8f957dab2aa6b186b61bc309f3f2b5c3", "text": "Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective, and propose a novel method, Relay Backpropagation, which encourages the propagation of effective information through the network in the training stage. By virtue of the method, we achieved first place in the ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large-scale datasets demonstrate that the effectiveness of our method is not restricted to a specific dataset or network architecture.", "title": "" }, { "docid": "f1c1a0baa9f96d841d23e76b2b00a68d", "text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field.
2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08", "title": "" }, { "docid": "ece8f2f4827decf0c440ca328ee272b4", "text": "We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formulated as a constrained optimization problem that is computationally inexpensive to solve. We discuss various properties of the algorithm and provide proof of convergence for two different optimization criteria We demonstrate the performance and the speed of the algorithm on linear classifiers learned from real-world datasets, including a medical dataset on detection of lung cancer from medical images. The ability to convert SVM's and other \"black-box\" classifiers into a set of human-understandable rules, is critical not only for physician acceptance, but also to reducing the regulatory barrier for medical-decision support systems based on such classifiers.", "title": "" }, { "docid": "e96eaf2bde8bf50605b67fb1184b760b", "text": "In response to your recent publication comparing subjective effects of D9-tetrahydrocannabinol and herbal cannabis (Wachtel et al. 2002), a number of comments are necessary. The first concerns the suitability of the chosen “marijuana” to assay the issues at hand. NIDA cannabis has been previously characterized in a number of studies (Chait and Pierri 1989; Russo et al. 2002), as a crude lowgrade product (2–4% THC) containing leaves, stems and seeds, often 3 or more years old after processing, with a stale odor lacking in terpenoids. This contrasts with the more customary clinical cannabis employed by patients in Europe and North America, composed solely of unseeded flowering tops with a potency of up to 20% THC. Cannabis-based medicine extracts (CBME) (Whittle et al. 2001), employed in clinical trials in the UK (Notcutt 2002; Robson et al. 2002), are extracted from flowering tops with abundant glandular trichomes, and retain full terpenoid and flavonoid components. In the study at issue (Wachtel et al. 2002), we are informed that marijuana contained 2.11% THC, 0.30% cannabinol (CBN), and 0.05% (CBD). The concentration of the latter two cannabinoids is virtually inconsequential. Thus, we are not surprised that no differences were seen between NIDA marijuana with essentially only one cannabinoid, and pure, synthetic THC. In comparison, clinical grade cannabis and CBME customarily contain high quantities of CBD, frequently equaling the percentage of THC (Whittle et al. 2001). Carlini et al. (1974) determined that cannabis extracts produced effects “two or four times greater than that expected from their THC content, based on animal and human studies”. Similarly, Fairbairn and Pickens (1981) detected the presence of unidentified “powerful synergists” in cannabis extracts, causing 330% greater activity in mice than THC alone. The clinical contribution of other CBD and other cannabinoids, terpenoids and flavonoids to clinical cannabis effects has been espoused as an “entourage effect” (Mechoulam and Ben-Shabat 1999), and is reviewed in detail by McPartland and Russo (2001). Briefly summarized, CBD has anti-anxiety effects (Zuardi et al. 1982), anti-psychotic benefits (Zuardi et al. 
1995), modulates metabolism of THC by blocking its conversion to the more psychoactive 11-hydroxy-THC (Bornheim and Grillo 1998), prevents glutamate excitotoxicity, serves as a powerful anti-oxidant (Hampson et al. 2000), and has notable anti-inflammatory and immunomodulatory effects (Malfait et al. 2000). Terpenoid cannabis components probably also contribute significantly to clinical effects of cannabis and boil at comparable temperatures to THC (McPartland and Russo 2001). Cannabis essential oil demonstrates serotonin receptor binding (Russo et al. 2000). Its terpenoids include myrcene, a potent analgesic (Rao et al. 1990) and anti-inflammatory (Lorenzetti et al. 1991), betacaryophyllene, another anti-inflammatory (Basile et al. 1988) and gastric cytoprotective (Tambe et al. 1996), limonene, a potent inhalation antidepressant and immune stimulator (Komori et al. 1995) and anti-carcinogenic (Crowell 1999), and alpha-pinene, an anti-inflammatory (Gil et al. 1989) and bronchodilator (Falk et al. 1990). Are these terpenoid effects significant? A dried sample of drug-strain cannabis buds was measured as displaying an essential oil yield of 0.8% (Ross and ElSohly 1996), or a putative 8 mg per 1000 mg cigarette. Buchbauer et al. (1993) demonstrated that 20–50 mg of essential oil in the ambient air in mouse cages produced measurable changes in behavior, serum levels, and bound to cortical cells. Similarly, Komori et al. (1995) employed a gel of citrus fragrance with limonene to produce a significant antidepressant benefit in humans, obviating the need for continued standard medication in some patients, and also improving CD4/8 immunologic ratios. These data would", "title": "" } ]
scidocsrr
df6282669a1b9768a3ff9d7b0a67976f
Using Recurrent Neural Network for Learning Expressive Ontologies
[ { "docid": "82f18b2c38969f556ff4464ecb99f837", "text": "Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models— plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs)—can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models’ ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.", "title": "" } ]
[ { "docid": "14a45e3e7aadee56b7d2e28c692aba9f", "text": "Radiation therapy as a mode of cancer treatment is well-established. Telecobalt and telecaesium units were used extensively during the early days. Now, medical linacs offer more options for treatment delivery. However, such systems are prohibitively expensive and beyond the reach of majority of the worlds population living in developing and under-developed countries. In India, there is shortage of cancer treatment facilities, mainly due to the high cost of imported machines. Realizing the need of technology for affordable radiation therapy machines, Bhabha Atomic Research Centre (BARC), the premier nuclear research institute of Government of India, started working towards a sophisticated telecobalt machine. The Bhabhatron is the outcome of the concerted efforts of BARC and Panacea Medical Technologies Pvt. Ltd., India. It is not only less expensive, but also has a number of advanced features. It incorporates many safety and automation features hitherto unavailable in the most advanced telecobalt machine presently available. This paper describes various features available in Bhabhatron-II. The authors hope that this machine has the potential to make safe and affordable radiation therapy accessible to the common people in India as well as many other countries.", "title": "" }, { "docid": "020c262a2af47f69858b9d66ea317aba", "text": "Process models are often used to visualize and communicate workflows to involved stakeholders. Unfortunately, process modeling notations can be complex and need specific knowledge to be understood. Storyboards, as a visual language to illustrate workflows as sequences of images, provide natural visualization features that allow for better communication, to provide insight to people from non-process modeling expert domains. This paper proposes a visualization approach using a 3D virtual world environment to visualize storyboards for business process models. A prototype was built to present its applicability via generating output with examples of five major process model patterns and two non-trivial use cases. Illustrative results for the approach show the promise of using a 3D virtual world to visualize complex process models in an unambiguous and intuitive manner.", "title": "" }, { "docid": "c757cc329886c1192b82f36c3bed8b7f", "text": "Though much research has been conducted on Subjectivity and Sentiment Analysis (SSA) during the last decade, little work has focused on Arabic. In this work, we focus on SSA for both Modern Standard Arabic (MSA) news articles and dialectal Arabic microblogs from Twitter. We showcase some of the challenges associated with SSA on microblogs. We adopted a random graph walk approach to extend the Arabic SSA lexicon using ArabicEnglish phrase tables, leading to improvements for SSA on Arabic microblogs. We used different features for both subjectivity and sentiment classification including stemming, part-of-speech tagging, as well as tweet specific features. Our classification features yield results that surpass Arabic SSA results in the literature.", "title": "" }, { "docid": "36b46a2bf4b46850f560c9586e91d27b", "text": "Promoting pro-environmental behaviour amongst urban dwellers is one of today's greatest sustainability challenges. The aim of this study is to test whether an information intervention, designed based on theories from environmental psychology and behavioural economics, can be effective in promoting recycling of food waste in an urban area. 
To this end we developed and evaluated an information leaflet, mainly guided by insights from nudging and community-based social marketing. The effect of the intervention was estimated through a natural field experiment in Hökarängen, a suburb of Stockholm city, Sweden, and was evaluated using a difference-in-difference analysis. The results indicate a statistically significant increase in food waste recycled compared to a control group in the research area. The data analysed was on the weight of food waste collected from sorting stations in the research area, and the collection period stretched for almost 2 years, allowing us to study the short- and long term effects of the intervention. Although the immediate positive effect of the leaflet seems to have attenuated over time, results show that there was a significant difference between the control and the treatment group, even 8 months after the leaflet was distributed. Insights from this study can be used to guide development of similar pro-environmental behaviour interventions for other urban areas in Sweden and abroad, improving chances of reaching environmental policy goals.", "title": "" }, { "docid": "c274c85ec3749151f18adaaabeb992b5", "text": "Using SDN to configure and control a multi-site network involves writing code that handles low-level details. We describe preliminary work on a framework that takes a network description and set of policies as input, and handles all the details of deriving routes and installing flow rules in switches. The paper describes key software components and reports preliminary results.", "title": "" }, { "docid": "8e0d5c838647f3999c5bf6d351413dd1", "text": "We present the results of the first large-scale study of the uniqueness of Web browsing histories, gathered from a total of 368, 284 Internet users who visited a history detection demonstration website. Our results show that for a majority of users (69%), the browsing history is unique and that users for whom we could detect at least 4 visited websites were uniquely identified by their histories in 97% of cases. We observe a significant rate of stability in browser history fingerprints: for repeat visitors, 38% of fingerprints are identical over time, and differing ones were correlated with original history contents, indicating static browsing preferences (for history subvectors of size 50). We report a striking result that it is enough to test for a small number of pages in order to both enumerate users’ interests and perform an efficient and unique behavioral fingerprint; we show that testing 50 web pages is enough to fingerprint 42% of users in our database, increasing to 70% with 500 web pages. Finally, we show that indirect history data, such as information about categories of visited websites can also be effective in fingerprinting users, and that similar fingerprinting can be performed by common script providers such as Google or Facebook.", "title": "" }, { "docid": "f29cee48c229ba57d58a07650633bec4", "text": "In this work, we improve the performance of intra-sentential zero anaphora resolution in Japanese using a novel method of recognizing subject sharing relations. In Japanese, a large portion of intrasentential zero anaphora can be regarded as subject sharing relations between predicates, that is, the subject of some predicate is also the unrealized subject of other predicates. 
We develop an accurate recognizer of subject sharing relations for pairs of predicates in a single sentence, and then construct a subject shared predicate network, which is a set of predicates that are linked by the subject sharing relations recognized by our recognizer. We finally combine our zero anaphora resolution method exploiting the subject shared predicate network and a state-ofthe-art ILP-based zero anaphora resolution method. Our combined method achieved a significant improvement over the the ILPbased method alone on intra-sentential zero anaphora resolution in Japanese. To the best of our knowledge, this is the first work to explicitly use an independent subject sharing recognizer in zero anaphora resolution.", "title": "" }, { "docid": "f8cc65321723e9bd54b5aea4052542fc", "text": "Falls in elderly is a major health problem and a cost burden to social services. Thus automatic fall detectors are needed to support the independence and security of the elderly. The goal of this research is to develop a real-time portable wireless fall detection system, which is capable of automatically discriminating between falls and Activities of Daily Life (ADL). The fall detection system contains a portable fall-detection terminal and a monitoring centre, both of which communicate with ZigBee protocol. To extract the features of falls, falls data and ADL data obtained from young subjects are analyzed. Based on the characteristics of falls, an effective fall detection algorithm using tri-axis accelerometers is introduced, and the results show that falls can be distinguished from ADL with a sensitivity over 95% and a specificity of 100%, for a total set of 270 movements.", "title": "" }, { "docid": "63020af866e49c52e7da33972d50bdd1", "text": "Internet of Things (IOT) plays a vital role in connecting the surrounding environmental things to the network and made easy to access those un-internet things from any remote location. It’s inevitable for the people to update with the growing technology. And generally people are facing problems on parking vehicles in parking slots in a city. In this study we design a Smart Parking System (SPS) which enables the user to find the nearest parking area and gives availability of parking slots in that respective parking area. And it mainly focus on reducing the time in finding the parking lots and also it avoids the unnecessary travelling through filled parking lots in a parking area. Thus it reduces the fuel consumption which in turn reduces carbon footprints in an atmosphere.", "title": "" }, { "docid": "8db6d52ee2778d24c6561b9158806e84", "text": "Surface fuctionalization plays a crucial role in developing efficient nanoparticulate drug-delivery systems by improving their therapeutic efficacy and minimizing adverse effects. Here we propose a simple layer-by-layer self-assembly technique capable of constructing mesoporous silica nanoparticles (MSNs) into a pH-responsive drug delivery system with enhanced efficacy and biocompatibility. In this system, biocompatible polyelectrolyte multilayers of alginate/chitosan were assembled on MSN's surface to achieve pH-responsive nanocarriers. The functionalized MSNs exhibited improved blood compatibility over the bare MSNs in terms of low hemolytic and cytotoxic activity against human red blood cells. As a proof-of-concept, the anticancer drug doxorubicin (DOX) was loaded into nanocarriers to evaluate their use for the pH-responsive drug release both in vitro and in vivo. 
The DOX release from nanocarriers was pH dependent, and the release rate was much faster at lower pH than at higher pH. The in vitro evaluation on HeLa cells showed that the DOX-loaded nanocarriers provided a sustained intracellular DOX release and a prolonged DOX accumulation in the nucleus, thus resulting in a prolonged therapeutic efficacy. In addition, the pharmacokinetic and biodistribution studies in healthy rats showed that DOX-loaded nanocarriers had a longer systemic circulation time and a slower plasma elimination rate than free DOX. The histological results also revealed that the nanocarriers had good tissue compatibility. Thus, the biocompatible multilayer-functionalized MSNs hold substantial potential to be further developed as effective and safe drug-delivery carriers.", "title": "" }, { "docid": "082517b83d9a9cdce3caef62a579bf2e", "text": "To enable autonomous driving, a semantic knowledge of the environment is unavoidable. We therefore introduce a multiclass classifier to determine the classes of an object relying solely on radar data. This is a challenging problem as objects of the same category often have a diverse appearance in radar data. As classification methods, a random forest classifier and a deep convolutional neural network are evaluated. To get good results despite the limited training data available, we introduce a hybrid approach using an ensemble consisting of the two classifiers. Further, we show that the accuracy can be improved significantly by allowing a lower detection rate.", "title": "" }, { "docid": "2e11a8170ec8b2547548091443d46cc6", "text": "This chapter presents the theory of the Core Elements of the Gaming Experience (CEGE). The CEGE are the necessary but not sufficient conditions to provide a positive experience while playing video-games. This theory, formulated using qualitative methods, is presented with the aim of studying the gaming experience objectively. The theory is abstracted using a model and implemented in a questionnaire. This chapter discusses the formulation of the theory, introduces the model, and shows the use of the questionnaire in an experiment to differentiate between two different experiences. 4.1 The Experience of Playing Video-games The experience of playing video-games is usually understood as the subjective relation between the user and the video-game beyond the actual implementation of the game. The implementation is bound by the speed of the microprocessors of the gaming console, the ergonomics of the controllers, and the usability of the interface. Experience is more than that; it is also considered a personal relationship. Understanding this relationship as personal is problematic under a scientific scope. Personal and subjective knowledge does not allow a theory to be generalised or falsified (Popper 1994). In this chapter, we propose a theory for understanding the experience of playing video-games, or gaming experience, that can be used to assess and compare different experiences. This section introduces the approach taken towards understanding the gaming experience under the aforementioned perspective. It begins by presenting an
Calvillo-Gámez et al. overview of video-games and user experience in order to familiarise the reader with such concepts. Last, the objective and overview of the whole chapter are presented. 4.1.", "title": "" }, { "docid": "428d522f59dbef1c52421abcaaa7a0c2", "text": "We devise new coding methods to minimize Phase Change Memory write energy. Our method minimizes the energy required for memory rewrites by utilizing the differences between PCM read, set, and reset energies. We develop an integer linear programming method and employ dynamic programming to produce codes for uniformly distributed data. We also introduce data-aware coding schemes to efficiently address the energy minimization problem for stochastic data. Our evaluations show that the proposed methods result in up to 32% and 44% reduction in memory energy consumption for uniform and stochastic data respectively.", "title": "" }, { "docid": "f3bfb1542c5254997fadcc8533007972", "text": "For most entity disambiguation systems, the secret recipes are feature representations for mentions and entities, most of which are based on Bag-of-Words (BoW) representations. Commonly, BoW has several drawbacks: (1) It ignores the intrinsic meaning of words/entities; (2) It often results in high-dimension vector spaces and expensive computation; (3) For different applications, methods of designing handcrafted representations may be quite different, lacking of a general guideline. In this paper, we propose a different approach named EDKate. We first learn low-dimensional continuous vector representations for entities and words by jointly embedding knowledge base and text in the same vector space. Then we utilize these embeddings to design simple but effective features and build a two-layer disambiguation model. Extensive experiments on real-world data sets show that (1) The embedding-based features are very effective. Even a single one embedding-based feature can beat the combination of several BoW-based features. (2) The superiority is even more promising in a difficult set where the mention-entity prior cannot work well. (3) The proposed embedding method is much better than trivial implementations of some off-the-shelf embedding algorithms. (4) We compared our EDKate with existing methods/systems and the results are also positive.", "title": "" }, { "docid": "c7363f755e28b23599b598e3c74fdfc6", "text": "We propose a new fluid control technique that uses scale-dependent force control to preserve small-scale fluid detail. Control particles define local force fields and can be generated automatically from either a physical simulation or a sequence of target shapes. We use a multi-scale decomposition of the velocity field and apply control forces only to the coarse-scale components of the flow. Small-scale detail is thus preserved in a natural way avoiding the artificial viscosity often introduced by force-based control methods. We demonstrate the effectiveness of our method for both Lagrangian and Eulerian fluid simulation environments.", "title": "" }, { "docid": "1f247e127866e62029310218c380bc31", "text": "Human Resource is the most important asset for any organization and it is the resource of achieving competitive advantage. Managing human resources is very challenging as compared to managing technology or capital and for its effective management, organization requires effective HRM system. HRM system should be backed up by strong HRM practices. 
HRM practices refer to organizational activities directed at managing the group of human resources and ensuring that the resources are employed towards the fulfillment of organizational goals. The purpose of this study is to explore the contribution of Human Resource Management (HRM) practices including selection, training, career planning, compensation, performance appraisal, job definition and employee participation on perceived employee performance. This research describes why human resource management (HRM) decisions are likely to have an important and unique influence on organizational performance. This research forum will help advance research on the link between HRM and organizational performance. Unresolved questions in need of future study are identified, and several suggestions are made to help researchers studying these questions build a more cumulative body of knowledge that will have key implications for both theory and practice. This study comprehensively evaluated the links between systems of High Performance Work Practices and firm performance. Results based on a national sample of firms indicate that these practices have an economically and statistically significant impact on employee performance. Support for predictions that the impact of High Performance Work Practices on firm performance is in part contingent on their interrelationships and links with competitive strategy was limited.", "title": "" }, { "docid": "2dd6fff23e32efc7d6ead42d0dbc4ff0", "text": "Recent technological advances in wheat genomics provide new opportunities to uncover genetic variation in traits of breeding interest and enable genome-based breeding to deliver wheat cultivars for the projected food requirements for 2050. There has been tremendous progress in development of whole-genome sequencing resources in wheat and its progenitor species during the last 5 years. High-throughput genotyping is now possible in wheat not only for routine gene introgression but also for high-density genome-wide genotyping. This is a major transition phase to enable genome-based breeding to achieve progressive genetic gains to parallel to projected wheat production demands. These advances have intrigued wheat researchers to practice less pursued analytical approaches which were not practiced due to the short history of genome sequence availability. Such approaches have been successful in gene discovery and breeding applications in other crops and animals for which genome sequences have been available for much longer. These strategies include, (i) environmental genome-wide association studies in wheat genetic resources stored in genbanks to identify genes for local adaptation by using agroclimatic traits as phenotypes, (ii) haplotype-based analyses to improve the statistical power and resolution of genomic selection and gene mapping experiments, (iii) new breeding strategies for genome-based prediction of heterosis patterns in wheat, and (iv) ultimate use of genomics information to develop more efficient and robust genome-wide genotyping platforms to precisely predict higher yield potential and stability with greater precision. Genome-based breeding has potential to achieve the ultimate objective of ensuring sustainable wheat production through developing high yielding, climate-resilient wheat cultivars with high nutritional quality.", "title": "" }, { "docid": "1b18b2b05e6fe19060039cd02ddb6131", "text": "Objective assessment of image quality is fundamentally important in many image processing tasks. 
In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain.", "title": "" }, { "docid": "aec48ddea7f21cabb9648eec07c31dcd", "text": "High voltage Marx generator implementation using IGBT (Insulated Gate Bipolar Transistor) stacks is proposed in this paper. To protect the Marx generator at the moment of breakdown, AOCP (Active Over-Current Protection) part is included. The Marx generator is composed of 12 stages and each stage is made of IGBT stacks, two diode stacks, and capacitors. IGBT stack is used as a single switch. Diode stacks and inductors are used to charge the high voltage capacitor at each stage without power loss. These are also used to isolate input and high voltage negative output in high voltage generation mode. The proposed Marx generator implementation uses IGBT stack with a simple driver and has modular design. This system structure gives compactness and easiness to implement the total system. Some experimental and simulated results are included to verify the system performances in this paper.", "title": "" }, { "docid": "b908987c5bae597683f177beb2bba896", "text": "This paper presents a novel task of cross-language authorship attribution (CLAA), an extension of authorship attribution task to multilingual settings: given data labelled with authors in language X , the objective is to determine the author of a document written in language Y , where X 6= Y . We propose a number of cross-language stylometric features for the task of CLAA, such as those based on sentiment and emotional markers. We also explore an approach based on machine translation (MT) with both lexical and cross-language features. We experimentally show that MT could be used as a starting point to CLAA, since it allows good attribution accuracy to be achieved. The cross-language features provide acceptable accuracy while using jointly with MT, though do not outperform lexical", "title": "" } ]
scidocsrr
b3c9fb37a02b5ef9b6845204a320e0d7
Healing of hymenal injuries in prepubertal and adolescent girls: a descriptive study.
[ { "docid": "f66854fd8e3f29ae8de75fc83d6e41f5", "text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.", "title": "" } ]
[ { "docid": "dff035a6e773301bd13cd0b71d874861", "text": "Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data. A number of approaches have been proposed to extract representative features from 3D skeletal data, most commonly hard wired geometric or bio-inspired shape context features. We propose a hierarchial dynamic framework that first extracts high level skeletal joints features and then uses the learned representation for estimating emission probability to infer action sequences. Currently gaussian mixture models are the dominant technique for modeling the emission distribution of hidden Markov models. We show that better action recognition using skeletal features can be achieved by replacing gaussian mixture models by deep neural networks that contain many layers of features to predict probability distributions over states of hidden Markov models. The framework can be easily extended to include a ergodic state to segment and recognize actions simultaneously.", "title": "" }, { "docid": "4d2be7aac363b77c6abd083947bc28c7", "text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "title": "" }, { "docid": "a39c9399742571ca389813ffb7e7657e", "text": "Developed agriculture needs to find new ways to improve efficiency. One approach is to utilise available information technologies in the form of more intelligent machines to reduce and target energy inputs in more effective ways than in the past. Precision Farming has shown benefits of this approach but we can now move towards a new generation of equipment. The advent of autonomous system architectures gives us the opportunity to develop a complete new range of agricultural equipment based on small smart machines that can do the right thing, in the right place, at the right time in the right way.", "title": "" }, { "docid": "263e8b756862ab28d313578e3f6acbb1", "text": "Goal posts detection is a critical robot soccer ability which is needed to be accurate, robust and efficient. A goal detection method using Hough transform to get the detailed goal features is presented in this paper. In the beginning, the image preprocessing and Hough transform implementation are described in detail. A new modification on the θ parameter range in Hough transform is explained and applied to speed up the detection process. Line processing algorithm is used to classify the line detected, and then the goal feature extraction method, including the line intersection calculation, is done. Finally, the goal distance from the robot body is estimated using triangle similarity. The experiment is performed on our university humanoid robot with the goal dimension of 225 cm in width and 110 cm in height, in yellow color. 
The result shows that the goal detection method, including the modification in Hough transform, is able to extract the goal features seen by the robot correctly, with the lowest speed of 5 frames per second. Additionally, the goal distance estimation is accomplished with maximum error of 20 centimeters.", "title": "" }, { "docid": "c14763b69b668ec8a999467e2a03ca73", "text": "Item Response Theory is based on the application of related mathematical models to testing data. Because it is generally regarded as superior to classical test theory, it is the preferred method for developing scales, especially when optimal decisions are demanded, as in so-called high-stakes tests. The term item is generic: covering all kinds of informative item. They might be multiple choice questions that have incorrect and correct responses, but are also commonly statements on questionnaires that allow respondents to indicate level of agreement (a rating or Likert scale), or patient symptoms scored as present/absent, or diagnostic information in complex systems. IRT is based on the idea that the probability of a correct/keyed response to an item is a mathematical function of person and item parameters. The person parameter is construed as (usually) a single latent trait or dimension. Examples include general intelligence or the strength of an attitude.", "title": "" }, { "docid": "e49aa0d0f060247348f8b3ea0a28d3c6", "text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "title": "" }, { "docid": "0e4722012aeed8dc356aa8c49da8c74f", "text": "The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. 
Finally, we evaluate the overheads imposed by our security enhancements.", "title": "" }, { "docid": "b594a4fafc37a18773b1144dfdbb965d", "text": "Deep generative modelling for robust human body analysis is an emerging problem with many interesting applications, since it enables analysis-by-synthesis and unsupervised learning. However, the latent space learned by such models is typically not human-interpretable, resulting in less flexible models. In this work, we adopt a structured semi-supervised variational auto-encoder approach and present a deep generative model for human body analysis where the pose and appearance are disentangled in the latent space, allowing for pose estimation. Such a disentanglement allows independent manipulation of pose and appearance and hence enables applications such as pose-transfer without being explicitly trained for such a task. In addition, the ability to train in a semi-supervised setting relaxes the need for labelled data. We demonstrate the merits of our generative model on the Human3.6M and ChictopiaPlus datasets.", "title": "" }, { "docid": "6f518559d8c99ea1e6368ec8c108cabe", "text": "This paper introduces an integrated Local Interconnect Network (LIN) transceiver which sets a new performance benchmark in terms of electromagnetic compatibility (EMC). The proposed topology succeeds in an extraordinary high robustness against RF disturbances which are injected into the BUS and in very low electromagnetic emissions (EMEs) radiated by the LIN network without adding any external components for filtering. In order to evaluate the circuits superior EMC performance, it was designed using a HV-BiCMOS technology for automotive applications, the EMC behavior was measured and the results were compared with a state of the art topology.", "title": "" }, { "docid": "0bdd3930b9d0a2ab510fa00f2620e5e9", "text": "Solid objects in the real world do not pass through each other when they collide. Enforcing this property of 'solidness' is important in many interactive graphics applications; for example, solidness makes virtual reality more believable, and solidness is essential for the correctness of vehicle simulators. These applications use a collision-detection algorithm to enforce the solidness of objects. Unfortunately, previous collision-detection algorithms do not adequately address the needs of interactive applications. To work in these applications, a collision-detection algorithm must run at real-time rates, even when many objects can collide, and it must tolerate objects whose motion is specified 'on the fly' by a user. This dissertation describes a new collision-detection algorithm that meets these criteria through approximation and graceful degradation, elements of time-critical computing. The algorithm is not only fast but also interruptible, allowing an application to trade accuracy for speed as needed. The algorithm uses two forms of approximate geometry. The first is a four-dimensional structure called a space-time bound. This structure provides a conservative estimate of where an object may be in the immediate future based on estimates of the object's acceleration. The algorithm uses space-time bounds to focus its attention on objects that are likely to collide. The second form of approximate geometry is a sphere-tree. This structure contains a hierarchy of unions of spheres, each union approximating the three-dimensional surface of an object at a different level of detail. 
Sphere-trees allow the algorithm to quickly find approximate contacts between objects, and they allow the application to interrupt the algorithm in order to meet real-time performance goals. Automatically building sphere-trees is an interesting problem in itself, and this dissertation describes several approaches. The simplest approach is based on octrees, and more sophisticated approaches use simulated annealing and approximate medial-axis surfaces. Several of the steps in these algorithms are themselves significant. One step is a simple algorithm for checking whether a union of two-dimensional shapes covers a polygon. Another step builds Voronoi diagrams for three-dimensional data points, and does so more robustly and accurately than previous algorithms. An implementation of the collision-detection algorithm runs significantly faster than a previous algorithm in empirical tests. In particular, this implementation allows real-time performance in a sample application (a simple flight simulator) that is too slow with the previous algorithm; in some cases, the performance improves by more than two orders of magnitude. Experience with this sample application suggests that time-critical computing is not always simple to apply, but it provides enough benefits that it deserves further exploration in other contexts.", "title": "" }, { "docid": "54b2845699e97617e713821959913622", "text": "Machine learning (ML) algorithms have garnered increased interest as they demonstrate improved ability to extract meaningful trends from large, diverse, and noisy data sets. While research is advancing the state-of-the-art in ML algorithms, it is difficult to drastically improve the real-world performance of these algorithms. Porting new and existing algorithms from single-node systems to multi-node clusters, or from architecturally homogeneous systems to heterogeneous systems, is a promising optimization technique. However, performing optimized ports is challenging for domain experts who may lack experience in distributed and heterogeneous software development. This work explores how challenges in ML application development on heterogeneous, distributed systems shaped the development of the HadoopCL2 (HCL2) programming system. ML applications guide this work because they exhibit features that make application development difficult: large & diverse datasets, complex algorithms, and the need for domain-specific knowledge. The goal of this work is a general, MapReduce programming system that outperforms existing programming systems. This work evaluates the performance and portability of HCL2 against five ML applications from the Mahout ML framework on two hardware platforms. HCL2 demonstrates speedups of greater than 20x relative to Mahout for three computationally heavy algorithms and maintains minor performance improvements for two I/O bound algorithms.", "title": "" }, { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. 
For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "e4c2fcc09b86dc9509a8763e7293cfe9", "text": "This paper investigates the use of particle (sub-word) n-grams for language modelling. One linguistics-based and two data-driven algorithms are presented and evaluated in terms of perplexity for Russian and English. Interpolating word trigram and particle 6-gram models gives up to a 7.5% perplexity reduction over the baseline word trigram model for Russian. Lattice rescoring experiments are also performed on 1997 DARPA Hub4 evaluation lattices where the interpolated model gives a 0.4% absolute reduction in word error rate over the baseline word trigram model.", "title": "" }, { "docid": "771ee12eec90c042b5c2320680ddb290", "text": "1. SUMMARY In the past decade educators have developed a myriad of tools to help novices learn to program. Different tools emerge as new features or combinations of features are employed. In this panel we consider the features of recent tools that have garnered significant interest in the computer science education community. These including narrative tools which support programming to tell a story (e.g., Alice [6], Jeroo [8]), visual programming tools which support the construction of programs through a drag-and-drop interface (e.g., JPie [3], Alice [6], Karel Universe), flow-model tools (e.g., Raptor [1], Iconic Programmer [2], VisualLogic) which construct programs through connecting program elements to represent order of computation, specialized output realizations (e.g., Lego Mindstorms [5], JES [7]) that provide execution feedback in nontextual ways, like multimedia or kinesthetic robotics, and tiered language tools (e.g., ProfessorJ [4], RoboLab) in which novices can use more sophisticated versions of a language as their expertise develops.", "title": "" }, { "docid": "90315eafb385aef259441f03ef6649b7", "text": "Although there has been considerable progress in the development of engineering principles for synthetic biology, a substantial challenge is the construction of robust circuits in a noisy cellular environment. Such an environment leads to considerable intercellular variability in circuit behaviour, which can hinder functionality at the colony level. Here we engineer the synchronization of thousands of oscillating colony ‘biopixels’ over centimetre-length scales through the use of synergistic intercellular coupling involving quorum sensing within a colony and gas-phase redox signalling between colonies. We use this platform to construct a liquid crystal display (LCD)-like macroscopic clock that can be used to sense arsenic via modulation of the oscillatory period. Given the repertoire of sensing capabilities of bacteria such as Escherichia coli, the ability to coordinate their behaviour over large length scales sets the stage for the construction of low cost genetic biosensors that are capable of detecting heavy metals and pathogens in the field.", "title": "" }, { "docid": "a9f75cfc4afc60112988320c849bbaf5", "text": "Spin-transfer torque random access memory (STT-RAM), as a promising nonvolatile memory technology, faces challenges of high write energy and low density. 
The recently developed magnetoelectric random access memory (MeRAM) enables the possibility of overcoming these challenges by the use of voltage-controlled magnetic anisotropy (VCMA) effect and achieves high density, fast speed, and low energy simultaneously. As both STT-RAM and MeRAM suffer from the reliability problem of write errors, we implement a fast Landau-Lifshitz-Gilbert equation-based simulator to capture their write error rate (WER) under process and temperature variation. We utilize a multi-write peripheral circuit to minimize WER and design reliable STT-RAM and MeRAM. With the same acceptable WER, MeRAM shows advantages of 83% faster write speed, 67.4% less write energy, 138% faster read speed, and 28.2% less read energy compared with STT-RAM. Benefiting from the VCMA effect, MeRAM also achieves twice the density of STT-RAM with a 32 nm technology node, and this density difference is expected to increase with technology scaling down.", "title": "" }, { "docid": "59d39dd0a5535be81c695a7fbd4005c1", "text": "Over the last decade, accumulating evidence has suggested a causative link between mitochondrial dysfunction and major phenotypes associated with aging. Somatic mitochondrial DNA (mtDNA) mutations and respiratory chain dysfunction accompany normal aging, but the first direct experimental evidence that increased mtDNA mutation levels contribute to progeroid phenotypes came from the mtDNA mutator mouse. Recent evidence suggests that increases in aging-associated mtDNA mutations are not caused by damage accumulation, but rather are due to clonal expansion of mtDNA replication errors that occur during development. Here we discuss the caveats of the traditional mitochondrial free radical theory of aging and highlight other possible mechanisms, including insulin/IGF-1 signaling (IIS) and the target of rapamycin pathways, that underlie the central role of mitochondria in the aging process.", "title": "" }, { "docid": "473968c14db4b189af126936fd5486ca", "text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.", "title": "" }, { "docid": "37927017353dc0bab9c081629d33d48c", "text": "Generating a secret key between two parties by extracting the shared randomness in the wireless fading channel is an emerging area of research. Previous works focus mainly on single-antenna systems. Multiple-antenna devices have the potential to provide more randomness for key generation than single-antenna ones. However, the performance of key generation using multiple-antenna devices in a real environment remains unknown. Different from the previous theoretical work on multiple-antenna key generation, we propose and implement a shared secret key generation protocol, Multiple-Antenna KEy generator (MAKE) using off-the-shelf 802.11n multiple-antenna devices. We also conduct extensive experiments and analysis in real indoor and outdoor mobile environments. 
Using the shared randomness extracted from measured Received Signal Strength Indicator (RSSI) to generate keys, our experimental results show that using laptops with three antennas, MAKE can increase the bit generation rate by more than four times over single-antenna systems. Our experiments validate the effectiveness of using multi-level quantization when there is enough mutual information in the channel. Our results also show the trade-off between bit generation rate and bit agreement ratio when using multi-level quantization. We further find that even if an eavesdropper has multiple antennas, she cannot gain much more information about the legitimate channel.", "title": "" } ]
scidocsrr
6f07f3dd1a6a372c59dcc346008771d5
A Sequential Neural Information Diffusion Model with Structure Attention
[ { "docid": "825640f8ce425a34462b98869758e289", "text": "Recurrent neural networks scale poorly due to the intrinsic difficulty in parallelizing their state computations. For instance, the forward pass computation of ht is blocked until the entire computation of ht−1 finishes, which is a major bottleneck for parallel computing. In this work, we propose an alternative RNN implementation by deliberately simplifying the state computation and exposing more parallelism. The proposed recurrent unit operates as fast as a convolutional layer and 5-10x faster than cuDNN-optimized LSTM. We demonstrate the unit’s effectiveness across a wide range of applications including classification, question answering, language modeling, translation and speech recognition. We open source our implementation in PyTorch and CNTK1.", "title": "" }, { "docid": "957170b015e5acd4ab7ce076f5a4c900", "text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.", "title": "" } ]
[ { "docid": "49880a6cad6b00b9dfbd517c6675338e", "text": "Associations between large cavum septum pellucidum and functional psychosis disorders, especially schizophrenia, have been reported. We report a case of late-onset catatonia associated with enlarged CSP and cavum vergae. A 66-year-old woman was presented with altered mental status and stereotypic movement. She was initially treated with aripiprazole and lorazepam. After 4 weeks, she was treated with electroconvulsive therapy. By 10 treatments, echolalia vanished, and catatonic behavior was alleviated. Developmental anomalies in the midline structure may increase susceptibility to psychosis, even in the elderly.", "title": "" }, { "docid": "3ce09ec0f516894d027583d27814294f", "text": "This paper provides a model of the use of computer algebra experimentation in algebraic graph theory. Starting from the semisymmetric cubic graph L on 112 vertices, we embed it into another semisymmetric graph N of valency 15 on the same vertex set. In order to consider systematically the links between L and N a number of combinatorial structures are involved and related coherent configurations are investigated. In particular, the construction of the incidence double cover of directed graphs is exploited. As a natural by-product of the approach presented here, a number of new interesting (mostly non-Schurian) association schemes on 56, 112 and 120 vertices are introduced and briefly discussed. We use computer algebra system GAP (including GRAPE and nauty), as well as computer package COCO.", "title": "" }, { "docid": "d353db098a7ca3bd9dc73b803e7369a2", "text": "DevOps community advocates collaboration between development and operations staff during software deployment. However this collaboration may cause a conceptual deficit. This paper proposes a Unified DevOps Model (UDOM) in order to overcome the conceptual deficit. Firstly, the origin of conceptual deficit is discussed. Secondly, UDOM model is introduced that includes three sub-models: application and data model, workflow execution model and infrastructure model. UDOM model can help to scale down deployment time, mitigate risk, satisfy customer requirements, and improve productivity. Finally, this paper can be a roadmap for standardization DevOps terminologies, concepts, patterns, cultures, and tools.", "title": "" }, { "docid": "2da7166b9ec1ca7da168ac4fc5f056e6", "text": "Can an algorithm create original and compelling fashion designs to serve as an inspirational assistant? To help answer this question, we design and investigate different image generation models associated with different loss functions to boost creativity in fashion generation. The dimensions of our explorations include: (i) different Generative Adversarial Networks architectures that start from noise vectors to generate fashion items, (ii) novel loss functions that encourage creativity, inspired from Sharma-Mittal divergence, a generalized mutual information measure for the widely used relative entropies such as Kullback-Leibler, and (iii) a generation process following the key elements of fashion design (disentangling shape and texture components). A key challenge of this study is the evaluation of generated designs and the retrieval of best ones, hence we put together an evaluation protocol associating automatic metrics and human experimental studies that we hope will help ease future research. We show that our proposed creativity losses yield better overall appreciation than the one employed in Creative Adversarial Networks. 
In the end, about 61% of our images are thought to be created by human designers rather than by a computer while also being considered original per our human subject experiments, and our proposed loss scores the highest compared to existing losses in both novelty and likability. Figure 1: Training generative adversarial models with appropriate losses leads to realistic and creative 512× 512 fashion images.", "title": "" }, { "docid": "8c89db7cda2547a9f84dec7a0990cd59", "text": "In this paper, a changeable winding brushless DC (BLDC) motor for the expansion of the speed region is described. The changeable winding BLDC motor is driven by a large number of phase turns at low speeds and by a reduced number of turns at high speeds. For this reason, the section where the winding changes is very important. Ideally, the time at which the windings are to be converted should be same as the time at which the voltage changes. However, if this timing is not exactly synchronized, a large current is generated in the motor, and the demagnetization of the permanent magnet occurs. In addition, a large torque ripple is produced. In this paper, we describe the demagnetization of the permanent magnet in a fault situation when the windings change, and we suggest a design process to solve this problem.", "title": "" }, { "docid": "f92351eac81d6d28c3fd33ea96b75f91", "text": "There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to automation of one the riskiest autonomous manœuvres involving vehicles – overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manœuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is the generation of action on the vehicle’s actuators, i.e., the steering wheel and throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles – a motorbike, car, and truck – with encouraging results. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cb0b7879f61630b467aa595d961bfcef", "text": "UNLABELLED\nGlucagon-like peptide 1 (GLP-1[7-36 amide]) is an incretin hormone primarily synthesized in the lower gut (ileum, colon/rectum). Nevertheless, there is an early increment in plasma GLP-1 immediately after ingesting glucose or mixed meals, before nutrients have entered GLP-1 rich intestinal regions. The responsible signalling pathway between the upper and lower gut is not clear. It was the aim of this study to see, whether small intestinal resection or colonectomy changes GLP-1[7-36 amide] release after oral glucose. 
In eight healthy controls, in seven patients with inactive Crohn's disease (no surgery), in nine patients each after primarily jejunal or ileal small intestinal resections, and in six colonectomized patients not different in age (p = 0.10), body-mass-index (p = 0.24), waist-hip-ratio (p = 0.43), and HbA1c (p = 0.22), oral glucose tolerance tests (75 g) were performed in the fasting state. GLP-1[7-36 amide], insulin C-peptide, GIP and glucagon (specific (RIAs) were measured over 240 min.\n\n\nSTATISTICS\nRepeated measures ANOVA, t-test (significance: p < 0.05). A clear and early (peak: 15-30 min) GLP-1[7-36 amide] response was observed in all subjects, without any significant difference between gut-resected and control groups (p = 0.95). There were no significant differences in oral glucose tolerance (p = 0.21) or in the suppression of pancreatic glucagon (p = 0.36). Colonectomized patients had a higher insulin (p = 0.011) and C-peptide (p = 0.0023) response in comparison to all other groups. GIP responses also were higher in the colonectomized patients (p = 0.0005). Inactive Crohn's disease and resections of the small intestine as well as proctocolectomy did not change overall GLP-1[7-36 amide] responses and especially not the early increment after oral glucose. This may indicate release of GLP-1[7-36 amide] after oral glucose from the small number of GLP-1[7-36 amide] producing L-cells in the upper gut rather than from the main source in the ileum, colon and rectum. Colonectomized patients are characterized by insulin hypersecretion, which in combination with their normal oral glucose tolerance possibly indicates a reduced insulin sensitivity in this patient group. GIP may play a role in mediating insulin hypersecretion in these patients.", "title": "" }, { "docid": "e99c8800033f33caa936a6ff8dd79995", "text": "Terms of service of on-line platforms too often contain clauses that are potentially unfair to the consumer. We present an experimental study where machine learning is employed to automatically detect such potentially unfair clauses. Results show that the proposed system could provide a valuable tool for lawyers and consumers alike.", "title": "" }, { "docid": "3ff55193d10980cbb8da5ec757b9161c", "text": "The growth of social web contributes vast amount of user generated content such as customer reviews, comments and opinions. This user generated content can be about products, people, events, etc. This information is very useful for businesses, governments and individuals. While this content meant to be helpful analyzing this bulk of user generated content is difficult and time consuming. So there is a need to develop an intelligent system which automatically mine such huge content and classify them into positive, negative and neutral category. Sentiment analysis is the automated mining of attitudes, opinions, and emotions from text, speech, and database sources through Natural Language Processing (NLP). The objective of this paper is to discover the concept of Sentiment Analysis in the field of Natural Language Processing, and presents a comparative study of its techniques in this field. Keywords— Natural Language Processing, Sentiment Analysis, Sentiment Lexicon, Sentiment Score.", "title": "" }, { "docid": "4ed4bab7f0ef009ed1bb2e803c3c7833", "text": "Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. 
Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.", "title": "" }, { "docid": "5066a15ddba96311302889267b228301", "text": "This correspondence describes a publicly available database of eye-tracking data, collected on a set of standard video sequences that are frequently used in video compression, processing, and transmission simulations. A unique feature of this database is that it contains eye-tracking data for both the first and second viewings of the sequence. We have made available the uncompressed video sequences and the raw eye-tracking data for each sequence, along with different visualizations of the data and a preliminary analysis based on two well-known visual attention models.", "title": "" }, { "docid": "6df80f85e102b94c1b29b8e0dca6cab4", "text": "With the shortage of the energy and ever increasing of the oil price, research on the renewable and green energy sources, especially the solar arrays and the fuel cells, becomes more and more important. How to achieve high step-up and high efficiency DC/DC converters is the major consideration in the renewable grid-connected power applications due to the low voltage of PV arrays and fuel cells. The topology study with high step-up conversion is concentrated and most topologies recently proposed in these applications are covered and classified. The advantages and disadvantages of these converters are discussed and the major challenges of high step-up converters in renewable energy applications are summarized. This paper would like to make a clear picture on the general law and framework for the next generation non-isolated high step-up DC/DC converters.", "title": "" }, { "docid": "6b930b924ea560a4cbdff108f5d0c4af", "text": "Abstract A blockchain constitutes a distributed ledger that records transactions across a network of agents. Blockchain’s value proposition requires that agents eventually agree on the ledger’s contents since payments possess risk otherwise. Restricted blockchains ensure this consensus by appointing a central authority to dictate payment validity. Permissionless blockchains (e.g. Bitcoin, Ethereum), however, admit no central authority and therefore face a non-trivial issue of inducing consensus endogenously. 
Nakamoto (2008) provided a temporary solution to the problem by invoking an economic mechanism known as Proof-of-Work (PoW). PoW, however, lacks sustainability, so, in recent years, a variety of alternatives have been proposed. This paper studies the most famous such alternative, Proof-of-Stake (PoS). I provide the first formal economic model of PoS and demonstrate that PoS induces consensus in equilibrium. My result arises because I endogenize blockchain coin prices. Propagating disagreement introduces the prospect of default and thereby reduces blockchain coin value which implies that stake-holders face an implicit cost from delaying consensus. PoS randomly selects a stake-holder to update the blockchain and provides her an explicit monetary incentive, a “block reward,” for her service. In the event of disagreement, block rewards constitute a perverse incentive, but I demonstrate that restricting updating ability to large stake-holders induces an equilibrium in which consensus obtains as soon as possible. I also demonstrate that consensus obtains eventually almost surely in any equilibrium so long as the blockchain employs a modest block reward schedule. My work reveals the economic viability of permissionless blockchains.", "title": "" }, { "docid": "712140b99a4765908ca26018b61f270f", "text": "Accelerated by electric mobility and new requirements regarding quantities, quality and innovative motor designs, production technologies for windings of electrical drives gain in importance. Especially the demand for increasing slot fill ratios and a product design allowing manufacturers of electric drives to produce big quantities in a good quality impels innovations in the design of windings. The hairpin winding is a result of this development combining high slot fill ratios with new potentials for an economical production of the winding for electric drives. This is achieved by a new method for the production of the winding: The winding is assembled by mounting preformed elements of insulated copper wire to the stator simplifying the elaborate winding process on the one hand. On the other hand it becomes necessary to join these elements mechanically and electrically to manufacture the winding. Due to this, contacting technologies gain in importance regarding the production of hairpin windings. The new challenge consists of the high number of contact points that have to be produced in a small amount of space. On account of its process stability, high process velocities and the possibility of realizing a remote joining process, the laser welding shows big potentials for the realization of a contacting process for hairpin windings that is capable of series production. This paper describes challenges and possibilities for the application of infrared lasers in the field of hairpin winding production.", "title": "" }, { "docid": "f12cbeb6a202ea8911a67abe3ffa6ccc", "text": "In order to enhance the study of the kinematics of any robot arm, parameter design is directed according to certain necessities for the robot, and its forward and inverse kinematics are discussed. The DH convention Method is used to form the kinematical equation of the resultant structure. In addition, the Robotics equations are modeled in MATLAB to create a 3D visual simulation of the robot arm to show the result of the trajectory planning algorithms. 
The simulation has detected the movement of each joint of the robot arm, and tested the parameters, thus accomplishing the predetermined goal which is drawing a sine wave on a writing board.", "title": "" }, { "docid": "6f6733c35f78b00b771cf7099c953954", "text": "This paper proposes an asymmetrical pulse width modulation (APWM) with frequency tracking control of full bridge series resonant inverter for induction heating application. In this method, APWM is used as power regulation, and phased locked loop (PLL) is used to attain zero-voltage-switching (ZVS) over a wide load range. The complete closed loop control model is obtained using small signal analysis. The validity of the proposed control is verified by simulation results.", "title": "" }, { "docid": "467c2a106b6fd5166f3c2a44d655e722", "text": "AutoVis is a data viewer that responds to content – text, relational tables, hierarchies, streams, images – and displays the information appropriately (that is, as an expert would). Its design rests on the grammar of graphics, scagnostics and a modeler based on the logic of statistical analysis. We distinguish an automatic visualization system (AVS) from an automated visualization system. The former automatically makes decisions about what is to be visualized. The latter is a programming system for automating the production of charts, graphs and visualizations. An AVS is designed to provide a first glance at data before modeling and analysis are done. AVS is designed to protect researchers from ignoring missing data, outliers, miscodes and other anomalies that can violate statistical assumptions or otherwise jeopardize the validity of models. The design of this system incorporates several unique features: (1) a spare interface – analysts simply drag a data source into an empty window, (2) a graphics generator that requires no user definitions to produce graphs, (3) a statistical analyzer that protects users from false conclusions, and (4) a pattern recognizer that responds to the aspects (density, shape, trend, and so on) that professional statisticians notice when investigating data sets.", "title": "" }, { "docid": "1f95e7fcd4717429259aa4b9581cf308", "text": "This project is mainly focused to develop system for animal researchers & wild life photographers to overcome so many challenges in their day life today. When they engage in such situation, they need to be patiently waiting for long hours, maybe several days in whatever location and under severe weather conditions until capturing what they are interested in. Also there is a big demand for rare wild life photo graphs. The proposed method makes the task automatically use microcontroller controlled camera, image processing and machine learning techniques. First with the aid of microcontroller and four passive IR sensors system will automatically detect the presence of animal and rotate the camera toward that direction. Then the motion detection algorithm will get the animal into middle of the frame and capture by high end auto focus web cam. Then the captured images send to the PC and are compared with photograph database to check whether the animal is exactly the same as the photographer choice. If that captured animal is the exactly one who need to capture then it will automatically capture more. Though there are several technologies available none of these are capable of recognizing what it captures. There is no detection of animal presence in different angles. 
Most of available equipment uses a set of PIR sensors and whatever it disturbs the IR field will automatically be captured and stored. Night time images are black and white and have less details and clarity due to infrared flash quality. If the infrared flash is designed for best image quality, range will be sacrificed. The photographer might be interested in a specific animal but there is no facility to recognize automatically whether captured animal is the photographer’s choice or not.", "title": "" }, { "docid": "2247a7972e853221e0e04c9761847c04", "text": "Recently, as real-time Ethernet based protocols, especially EtherCAT have become more widely used in various fields such as automation systems and motion control, many studies on their design have been conducted. In this paper, we describe a method for the design of an EtherCAT slave module we developed and its application to a closed loop motor drive. Our EtherCAT slave module consists of the ARM Cortex-M3 as the host controller and ET1100 as the EtherCAT slave controller. These were interfaced with a parallel interface instead of the SPI used by many researchers and developers. To measure the performance of this device, 32-axis closed loop step motor drives were used and the experimental results in the test environment are described.", "title": "" }, { "docid": "d7538c23aa43edce6cfde8f2125fd3bb", "text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. The holographic-laser-drawing technique has enabled three things; (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels were demonstrated.", "title": "" } ]
scidocsrr
eea25ca3dd998aef0d0af43226c204cc
Alzheimer's disease diagnostics by adaptation of 3D convolutional network
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "262d91525f42ead887c8f8d50a5782fd", "text": "Over the past decade, machine learning techniques especially predictive modeling and pattern recognition in biomedical sciences from drug delivery system [7] to medical imaging has become one of the important methods which are assisting researchers to have deeper understanding of entire issue and to solve complex medical problems. Deep learning is power learning machine learning algorithm in classification while extracting high-level features. In this paper, we used convolutional neural network to classify Alzheimer’s brain from normal healthy brain. The importance of classifying this kind of medical data is to potentially develop a predict model or system in order to recognize the type disease from normal subjects or to estimate the stage of the disease. Classification of clinical data such as Alzheimer’s disease has been always challenging and most problematic part has been always selecting the most discriminative features. Using Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimer’s subjects from normal controls where the accuracy of test data on trained data reached 96.85%. This experiment suggests us the shift and scale invariant features extracted by CNN followed by deep learning classification is most powerful method to distinguish clinical data from healthy data in fMRI. This approach also enables us to expand our methodology to predict more complicated systems.", "title": "" }, { "docid": "0ae198d0edf78a7ca15e61af44fd754d", "text": "For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. 
Specifically, we use Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.", "title": "" } ]
[ { "docid": "38c78be386aa3827f39825f9e40aa3cc", "text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.", "title": "" }, { "docid": "ffa10972e49ef7d51754afa3b758d952", "text": "We introduce a new approach to tackle the problem of offensive language in online social media. Our approach uses unsupervised text style transfer to translate offensive sentences into non-offensive ones. We propose a new method for training encoderdecoders using non-parallel data that combines a collaborative classifier, attention and the cycle consistency loss. Experimental results on data from Twitter and Reddit show that our method outperforms a state-of-the-art text style transfer system in two out of three quantitative metrics and produces reliable non-offensive transferred sentences.", "title": "" }, { "docid": "101e93562935c799c3c3fa62be98bf09", "text": "This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots, and in particular learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line , we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation and benefits from human demonstration properties to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.", "title": "" }, { "docid": "e8b29527805a29dfe12c22643345e440", "text": "Highly cited articles are interesting because of the potential association between high citation counts and high quality research. This study investigates the 82 most highly cited Information Science and Library Science’ (IS&LS) articles (the top 0.1%) in the Web of Science from the perspectives of disciplinarity, annual citation patterns, and first author citation profiles. First, the relative frequency of these 82 articles was much lower for articles solely in IS&LS than for those in IS&LS and at least one other subject, suggesting that that the promotion of interdisciplinary research in IS&LS may be conducive to improving research quality. 
Second, two thirds of the first authors had an h-index in IS&LS of less than eight, show that much significant research is produced by researchers without a high overall IS&LS research productivity. Third, there is a moderate correlation (0.46) between citation ranking and the number of years between peak year and year of publication. This indicates that high quality ideas and methods in IS&LS often are deployed many years after being published.", "title": "" }, { "docid": "6a8a849bc8272a7b73259e732e3be81b", "text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.", "title": "" }, { "docid": "c155ce2743c59f4ce49fdffe74d94443", "text": "The theta oscillation (5-10Hz) is a prominent behavior-specific brain rhythm. This review summarizes studies showing the multifaceted role of theta rhythm in cognitive functions, including spatial coding, time coding and memory, exploratory locomotion and anxiety-related behaviors. We describe how activity of hippocampal theta rhythm generators - medial septum, nucleus incertus and entorhinal cortex, links theta with specific behaviors. We review evidence for functions of the theta-rhythmic signaling to subcortical targets, including lateral septum. Further, we describe functional associations of theta oscillation properties - phase, frequency and amplitude - with memory, locomotion and anxiety, and outline how manipulations of these features, using optogenetics or pharmacology, affect associative and innate behaviors. We discuss work linking cognition to the slope of the theta frequency to running speed regression, and emotion-sensitivity (anxiolysis) to its y-intercept. Finally, we describe parallel emergence of theta oscillations, theta-mediated neuronal activity and behaviors during development. This review highlights a complex interplay of neuronal circuits and synchronization features, which enables an adaptive regulation of multiple behaviors by theta-rhythmic signaling.", "title": "" }, { "docid": "116b5f129e780a99a1d78ec02a1fb092", "text": "We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. 
Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting Cast, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that Cast family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.", "title": "" }, { "docid": "48f7920d14f9cd68a5e5da3b16e11b70", "text": "About 35 years ago in Scientific American, I wrote about dramatic new studies of the brain. Three patients who were seeking relief from epilepsy had undergone surgery that severed the corpus callosum--the superhighway of neurons connecting the halv es of the brain. By working with these patients, my colleagues Roger W. Sperry, Joseph E. Bogen, P.J. Vogel and I witnessed what happened when the left and the right hemispheres were unable to communicate with each other.", "title": "" }, { "docid": "8ba00268d677a407077a79dbcaa5aa5c", "text": "BACKGROUND\nBroad or excessive malar protrusion is a trait associated with aggression and old age in Asian cultures. Although various methods of shifting the zygoma have been introduced for reduction malarplasty, soft tissue sagging and inadequate bony union still remain great challenges. We have devised an arch-lifting technique that helps overcome these issues. An analysis of surgical outcomes is presented herein.\n\n\nMETHODS\nA total of 54 patients subjected to lifting malar reductions between January 2013 and November 2014 were retrospectively reviewed. The reduction procedure entailed an L-shaped osteotomy of the zygomaticomaxillary junction via an intraoral approach. In addition, a prefabricated U-shaped microplate was applied for arch fixation in the lifted position. The follow-up period ranged from 6 to 18 months (average, 9.2 months), during which medical records, photographs, and facial bone computed tomography (CT) images were obtained to assess the postoperative results.\n\n\nRESULTS\nPatients were generally satisfied with aesthetic outcomes, thus rating the procedure excellent in terms of zygomatic prominence, midfacial width, symmetry, and resistance to cheek drooping. There were no major complications, such as facial nerve damage or trismus. An inadequate bony contact occurred in two instances due to unanticipated trauma, with immediate reduction and fixation thereafter. Minor wound infections developed in three patients but responded well to antibiotics.\n\n\nCONCLUSION\nZygomatic reduction procedures must consider the dynamics of the adjacent muscles, which are stabilized through arch fixation. The use of our arch-lifting technique for reduction malarplasty efficiently elevates the zygomatic complex, thereby ensuring an adequate bone-to-bone contact. Predictable and accurate outcomes are thereby achieved.", "title": "" }, { "docid": "57580f1e26a1f922e5bc3c4e2bd13c54", "text": "Shotguns are usually used to fire multiple pellets, but they are capable of firing single projectiles. Shotgun slug injuries are rare, severe, and fully comparable to those inflicted by high-velocity projectiles. A case of gunshot suicide of a 59-year-old man with a shotgun loaded with shotgun slugs is presented. 
The first two shots were fired into the heart region, but did not hit the vital organs of the victim’s thorax and did not cause immediate incapacitation. The man was able to reload and refire. The third shot was fired into the region of right temple; the last shot caused severe cerebrocranial gunshot injury and was fatal. The victim did not pull aside his clothing to expose his skin before shooting into the heart region.", "title": "" }, { "docid": "8e16b62676e5ef36324c738ffd5f737d", "text": "Virtualization technology has shown immense popularity within embedded systems due to its direct relationship with cost reduction, better resource utilization, and higher performance measures. Efficient hypervisors are required to achieve such high performance measures in virtualized environments, while taking into consideration the low memory footprints as well as the stringent timing constraints of embedded systems. Although there are a number of open-source hypervisors available such as Xen, Linux KVM and OKL4 Micro visor, this is the first paper to present the open-source embedded hypervisor Extensible Versatile hyper Visor (Xvisor) and compare it against two of the commonly used hypervisors KVM and Xen in-terms of comparison factors that affect the whole system performance. Experimental results on ARM architecture prove Xvisor's lower CPU overhead, higher memory bandwidth, lower lock synchronization latency and lower virtual timer interrupt overhead and thus overall enhanced virtualized embedded system performance.", "title": "" }, { "docid": "085307dca53722f902dd3651e7383521", "text": "BACKGROUND\nExposure to drugs and toxins is a major cause for patients' visits to the emergency department (ED).\n\n\nMETHODS\nRecommendations for the use of clinical laboratory tests were prepared by an expert panel of analytical toxicologists and ED physicians specializing in clinical toxicology. These recommendations were posted on the world wide web and presented in open forum at several clinical chemistry and clinical toxicology meetings.\n\n\nRESULTS\nA menu of important stat serum and urine toxicology tests was prepared for clinical laboratories who provide clinical toxicology services. For drugs-of-abuse intoxication, most ED physicians do not rely on results of urine drug testing for emergent management decisions. This is in part because immunoassays, although rapid, have limitations in sensitivity and specificity and chromatographic assays, which are more definitive, are more labor-intensive. Ethyl alcohol is widely tested in the ED, and breath testing is a convenient procedure. Determinations made within the ED, however, require oversight by the clinical laboratory. Testing for toxic alcohols is needed, but rapid commercial assays are not available. The laboratory must provide stat assays for acetaminophen, salicylates, co-oximetry, cholinesterase, iron, and some therapeutic drugs, such as lithium and digoxin. Exposure to other heavy metals requires laboratory support for specimen collection but not for emergent testing.\n\n\nCONCLUSIONS\nImprovements are needed for immunoassays, particularly for amphetamines, benzodiazepines, opioids, and tricyclic antidepressants. Assays for new drugs of abuse must also be developed to meet changing abuse patterns. 
As no clinical laboratory can provide services to meet all needs, the National Academy of Clinical Biochemistry Committee recommends establishment of regional centers for specialized toxicology testing.", "title": "" }, { "docid": "951213cd4412570709fb34f437a05c72", "text": "In this paper, we present directional skip-gram (DSG), a simple but effective enhancement of the skip-gram model by explicitly distinguishing left and right context in word prediction. In doing so, a direction vector is introduced for each word, whose embedding is thus learned by not only word co-occurrence patterns in its context, but also the directions of its contextual words. Theoretical and empirical studies on complexity illustrate that our model can be trained as efficient as the original skip-gram model, when compared to other extensions of the skip-gram model. Experimental results show that our model outperforms others on different datasets in semantic (word similarity measurement) and syntactic (partof-speech tagging) evaluations, respectively.", "title": "" }, { "docid": "aae55ac310ba13906a948a45fcfe27ae", "text": "This overview addresses homocysteine and folate metabolism. Its functions and complexity are described, leading to explanations why disturbed homocysteine and folate metabolism is implicated in many different diseases, including congenital birth defects like congenital heart disease, cleft lip and palate, late pregnancy complications, different kinds of neurodegenerative and psychiatric diseases, osteoporosis and cancer. In addition, the inborn errors leading to hyperhomocysteinemia and homocystinuria are described. These extreme human hyperhomocysteinemia models provide knowledge about which part of the homocysteine and folate pathways are linked to which disease. For example, the very high risk for arterial and venous occlusive disease in patients with severe hyperhomocysteinemia irrespective of the location of the defect in remethylation or transsulphuration indicates that homocysteine itself or one of its “direct” derivatives is considered toxic for the cardiovascular system. Finally, common diseases associated with elevated homocysteine are discussed with the focus on cardiovascular disease and neural tube defects.", "title": "" }, { "docid": "f05718832e9e8611b4cd45b68d0f80e3", "text": "Conflict occurs frequently in any workplace; health care is not an exception. The negative consequences include dysfunctional team work, decreased patient satisfaction, and increased employee turnover. Research demonstrates that training in conflict resolution skills can result in improved teamwork, productivity, and patient and employee satisfaction. Strategies to address a disruptive physician, a particularly difficult conflict situation in healthcare, are addressed.", "title": "" }, { "docid": "6927647b1e1f6bf9bcf65db50e9f8d6e", "text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. 
This paper describes the current status of food image analysis and overviews problems that still need to be addressed.", "title": "" }, { "docid": "4a163c071d54c641ef4c24a9c1b2299c", "text": "Current taint tracking systems suffer from high overhead and a lack of generality. In this paper, we solve both of these issues with an extensible system that is an order of magnitude more efficient than previous software taint tracking systems and is fully general to dynamic data flow tracking problems. Our system uses a compiler to transform untrusted programs into policy-enforcing programs, and our system can be easily reconfigured to support new analyses and policies without modifying the compiler or runtime system. Our system uses a sound and sophisticated static analysis that can dramatically reduce the amount of data that must be dynamically tracked. For server programs, our system's average overhead is 0.65% for taint tracking, which is comparable to the best hardware-based solutions. For a set of compute-bound benchmarks, our system produces no runtime overhead because our compiler can prove the absence of vulnerabilities, eliminating the need to dynamically track taint. After modifying these benchmarks to contain format string vulnerabilities, our system's overhead is less than 13%, which is over 6X lower than the previous best solutions. We demonstrate the flexibility and power of our system by applying it to file disclosure vulnerabilities, a problem that taint tracking cannot handle. To prevent such vulnerabilities, our system introduces an average runtime overhead of 0.25% for three open source server programs.", "title": "" }, { "docid": "cf0f63001493acd328a80c80430a5b44", "text": "Random forest classification is a well known machine learning technique that generates classifiers in the form of an ensemble (\"forest\") of decision trees. The classification of an input sample is determined by the majority classification by the ensemble. Traditional random forest classifiers can be highly effective, but classification using a random forest is memory bound and not typically suitable for acceleration using FPGAs or GP-GPUs due to the need to traverse large, possibly irregular decision trees. Recent work at Lawrence Livermore National Laboratory has developed several variants of random forest classifiers, including the Compact Random Forest (CRF), that can generate decision trees more suitable for acceleration than traditional decision trees. Our paper compares and contrasts the effectiveness of FPGAs, GP-GPUs, and multi-core CPUs for accelerating classification using models generated by compact random forest machine learning classifiers. Taking advantage of training algorithms that can produce compact random forests composed of many, small trees rather than fewer, deep trees, we are able to regularize the forest such that the classification of any sample takes a deterministic amount of time. This optimization then allows us to execute the classifier in a pipelined or single-instruction multiple thread (SIMT) fashion. We show that FPGAs provide the highest performance solution, but require a multi-chip / multi-board system to execute even modest sized forests. GP-GPUs offer a more flexible solution with reasonably high performance that scales with forest size. Finally, multi-threading via Open MP on a shared memory system was the simplest solution and provided near linear performance that scaled with core count, but was still significantly slower than the GP-GPU and FPGA.", "title": "" } ]
scidocsrr
626d7fee4a7fe819d54948c271e99787
A Comprehensive Survey on Educational Data Mining and Use of Data Mining Techniques for Improving Teaching and Predicting Student Performance
[ { "docid": "79eafa032a3f0cb367a008e5a7345dd5", "text": "Data Mining techniques are widely used in educational field to find new hidden patterns from student’s data. The hidden patterns that are discovered can be used to understand the problem arise in the educational field. This paper surveys the three elements needed to make prediction on Students’ Academic Performances which are parameters, methods and tools. This paper also proposes a framework for predicting the performance of first year bachelor students in computer science course. Naïve Bayes Classifier is used to extract patterns using the Data Mining Weka tool. The framework can be used as a basis for the system implementation and prediction of Students’ Academic Performance in Higher Learning Institutions.", "title": "" }, { "docid": "885542ef60e8c2dbcfe73d7158244f82", "text": "Three decades of active research on the teaching of introductory programming has had limited effect on classroom practice. Although relevant research exists across several disciplines including education and cognitive science, disciplinary differences have made this material inaccessible to many computing educators. Furthermore, computer science instructors have not had access to a comprehensive survey of research in this area. This paper collects and classifies this literature, identifies important work and mediates it to computing educators and professional bodies.\n We identify research that gives well-supported advice to computing academics teaching introductory programming. Limitations and areas of incomplete coverage of existing research efforts are also identified. The analysis applies publication and research quality metrics developed by a previous ITiCSE working group [74].", "title": "" } ]
[ { "docid": "63872db3bc792911d10d28ecf39ae79e", "text": "Stock market prediction has always been one of the hottest topics in research, as well as a great challenge due to its complex and volatile nature. However, most of the existing methods neglect the impact from mass media that will greatly affect the behavior of investors. In this paper we present a system that combines the information from both related news releases and technical indicators to enhance the predictability of the daily stock price trends. The performance shows that this system can achieve higher accuracy and return than a single source system.", "title": "" }, { "docid": "8c52c67dde20ce0a50ea22aaa4f917a5", "text": "This paper presents the vision of the Artificial Vision and Intelligent Systems Laboratory (VisLab) on future automated vehicles, ranging from sensor selection up to their extensive testing. VisLab's design choices are explained using the BRAiVE autonomous vehicle prototype as an example. BRAiVE, which is specifically designed to develop, test, and demonstrate advanced safety applications with different automation levels, features a high integration level and a low-cost sensor suite, which are mainly based on vision, as opposed to many other autonomous vehicle implementations based on expensive and invasive sensors. The importance of performing extensive tests to validate the design choices is considered to be a hard requirement, and different tests have been organized, including an intercontinental trip from Italy to China. This paper also presents the test, the main challenges, and the vehicles that have been specifically developed for this test, which was performed by four autonomous vehicles based on BRAiVE's architecture. This paper also includes final remarks on VisLab's perspective on future vehicles' sensor suite.", "title": "" }, { "docid": "9622a07c07b8a39d1c9cc6be28f77b17", "text": "This paper describes an intelligent system to help people share and filter information communicated by computer-based messaging systems. The system exploits concepts from artificial intelligence such as frames, production rules, and inheritance networks, but it avoids the unsolved problems of natural language understanding by providing users with a rich set of semi-structured message templates. A consistent set of “direct manipulation” editors simplifies the use of the system by individuals, and an incremental enhancement path simplifies the adoption of the system by groups.\nOne of the key problems that arises when any group of people cooperates to solve problems or make decisions is how to share information. Thus one of the central goals of designing good “organizational interfaces” (Malone, 1985) should be to help people share information in groups and organizations. In this paper, we will describe a prototype system, called the Information Lens, that focuses on one aspect of this problem: how to help people share the many diverse kinds of qualitative information that are communicated via electronic messaging systems.\nIt is already a common experience in mature computer-based messaging communities for people to feel flooded with large quantities of electronic “junk mail” (Denning, 1982; Palme, 1984; Wilson, 1984; Hiltz & Turoff, 1985), and the widespread availability of inexpensive communication capability has the potential to overwhelm people with even more messages that are of little or no value to them. 
At the same time, it is also a common experience for people to be ignorant of facts that would facilitate their work and that are known elsewhere in their organization. The system we will describe helps solve both these problems: it helps people filter, sort, and prioritize messages that are already addressed to them, and it also helps them find useful messages they would not otherwise have received.\n The most common previous approach to structuring information sharing in electronic messaging environments is to let users implicitly specify their general areas of interest by associating themselves with centralized distribution lists or conference topics related to particular subjects (e.g., Hiltz & Turoff, 1978). Since these methods of disseminating information are often targeted for relatively large audiences, however, it is usually impossible for all the information distributed to be of interest to all recipients.\n The Information Lens system uses much more detailed representations of message contents and receivers' interests to provide more sophisticated filtering possibilities. One of the key ideas behind this system is that many of the unsolved problems of natural language understanding can be avoided by using semi-structured templates (or frames) for different types of messages. These templates are used by the senders of messages to facilitate composing messages in the first place. Then, the same templates are used by the receivers of messages to facilitate constructing a set of rules to be used for filtering and categorizing messages of different types.", "title": "" }, { "docid": "ca8bfb8c6c40385613ef60c36c594c9a", "text": "The many functional partnerships and interactions that occur between proteins are at the core of cellular processing and their systematic characterization helps to provide context in molecular systems biology. However, known and predicted interactions are scattered over multiple resources, and the available data exhibit notable differences in terms of quality and completeness. The STRING database (http://string-db.org) aims to provide a critical assessment and integration of protein-protein interactions, including direct (physical) as well as indirect (functional) associations. The new version 10.0 of STRING covers more than 2000 organisms, which has necessitated novel, scalable algorithms for transferring interaction information between organisms. For this purpose, we have introduced hierarchical and self-consistent orthology annotations for all interacting proteins, grouping the proteins into families at various levels of phylogenetic resolution. Further improvements in version 10.0 include a completely redesigned prediction pipeline for inferring protein-protein associations from co-expression data, an API interface for the R computing environment and improved statistical analysis for enrichment tests in user-provided networks.", "title": "" }, { "docid": "5ae61b2cecb61ecc70c2ec2049426841", "text": "Advances in multiple-valued logic (MVL) have been inspired, in large part, by advances in integrated circuit technology. Multiple-valued logic has matured to the point where four-valued logic is now part of commercially available VLSI IC's. Besides reduction in chip area, MVL offers other benefits such as the potential for circuit test. This paper describes the historical and technical background of MVL, and areas of present and future application. 
It is intended, as well, to serve as a tutorial for the nonspecialist.", "title": "" }, { "docid": "500a9d141bc6bbd0972703413abef637", "text": "It is found that some “important” twitter users’ words can influence the stock prices of certain stocks. The stock price of Tesla – a famous electric automobile company – for example, recently seen a huge rise after Elon Musk, the CEO of Tesla, updated his twitter about the self-driving motors. Besides, the Dow Jones and S&P 500 indexes dropped by about one percent after the Twitter account of Associated Press falsely posted the message about an explosion in the White House.", "title": "" }, { "docid": "6ad344c7049abad62cd53dacc694c651", "text": "Primary syphilis with oropharyngeal manifestations should be kept in mind, though. Lips and tongue ulcers are the most frequently reported lesions and tonsillar ulcers are much more rare. We report the case of a 24-year-old woman with a syphilitic ulcer localized in her left tonsil.", "title": "" }, { "docid": "060ba80e2f3aeef5a3a8d69a14005645", "text": "This paper presents an application of dynamically driven recurrent networks (DDRNs) in online electric vehicle (EV) battery analysis. In this paper, a nonlinear autoregressive with exogenous inputs (NARX) architecture of the DDRN is designed for both state of charge (SOC) and state of health (SOH) estimation. Unlike other techniques, this estimation strategy is subject to the global feedback theorem (GFT) which increases both computational intelligence and robustness while maintaining reasonable simplicity. The proposed technique requires no model or knowledge of battery's internal parameters, but rather uses the battery's voltage, charge/discharge currents, and ambient temperature variations to accurately estimate battery's SOC and SOH simultaneously. The presented method is evaluated experimentally using two different batteries namely lithium iron phosphate (<inline-formula> <tex-math notation=\"LaTeX\">$\\text{LiFePO}_4$</tex-math></inline-formula>) and lithium titanate (<inline-formula> <tex-math notation=\"LaTeX\">$\\text{LTO}$</tex-math></inline-formula>) both subject to dynamic charge and discharge current profiles and change in ambient temperature. Results highlight the robustness of this method to battery's nonlinear dynamic nature, hysteresis, aging, dynamic current profile, and parametric uncertainties. The simplicity and robustness of this method make it suitable and effective for EVs’ battery management system (BMS).", "title": "" }, { "docid": "18c2f9a8f3b0709eb2cb1f973f86f655", "text": "SELF's debugging system provides complete source-level debugging (expected behavior) with globally optimized code. It shields the debugger from optimizations performed by the compiler by dynamically deoptimizing code on demand. Deoptimization only affects the procedure activations that are actively being debugged; all other code runs at full speed. Deoptimization requires the compiler to supply debugging information at discrete interrupt points; the compiler can still perform extensive optimizations between interrupt points without affecting debuggability. At the same time, the inability to interrupt between interrupt points is invisible to the user. Our debugging system also handles programming changes during debugging. Again, the system provides expected behavior: it is possible to change a running program and immediately observe the effects of the change. 
Dynamic deoptimization transforms old compiled code (which may contain inlined copies of the old version of the changed procedure) into new versions reflecting the current source-level state. To the best of our knowledge, SELF is the first practical system providing full expected behavior with globally optimized code.", "title": "" }, { "docid": "4ee6894fade929db82af9cb62fecc0f9", "text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.", "title": "" }, { "docid": "72223fa147c6a30b5645e16a71429b0c", "text": "This project explains the design and implementation of an electronic system based on GSM (Global System for Mobile communication), cloud computing and Internet of Things (IoT) for sensing the climatic parameters in the greenhouse. Based on the characteristics of accurate perception, efficient transmission and intelligent synthesis of Internet of Things and cloud computing, the system can obtain real-time environmental information for crop growth and then be transmitted. The system can monitor a variety of environmental parameters in greenhouse effectively and meet the actual agricultural production requirements. Devices such as temperature sensor, light sensor, relative humidity sensor and soil moisture sensor are integrated to demonstrate the proposed system. This research focuses on developing a system that can automatically measure and monitor changes of temperature, light, Humidity and moisture level in the greenhouse. The quantity and quality of production in greenhouses can be increased. The procedure used in our system provides the owner with the details online irrespective of their presence onsite. The main system collects environmental parameters inside greenhouse tunnel every 30 seconds. The parameters that are collected by a network of sensors are being logged and stored online using cloud computing and Internet of Things (IoT) together called as CloudIoT. KeywordsCloud computing, GSM modem, Internet of Things, lm35 sensor, moisture sensor, temperature sensor, humidity sensor, solar panel", "title": "" }, { "docid": "99ffc7cd601d1c43bbf7e3537632e95c", "text": "Despite numerous advances in IT security, many computer users are still vulnerable to security-related risks because they do not comply with organizational policies and procedures. In a network setting, individual risk can extend to all networked users. 
Endpoint security refers to the set of organizational policies, procedures, and practices directed at securing the endpoint of the network connections – the individual end user. As such, the challenges facing IT managers in providing effective endpoint security are unique in that they often rely heavily on end user participation. But vulnerability can be minimized through modification of desktop security programs and increased vigilance on the part of the system administrator or CSO. The cost-prohibitive nature of these measures generally dictates targeting high-risk users on an individual basis. It is therefore important to differentiate between individuals who are most likely to pose a security risk and those who will likely follow most organizational policies and procedures.", "title": "" }, { "docid": "8a1d0d2767a35235fa5ac70818ec92e7", "text": "This work demonstrates two 94 GHz SPDT quarter-wave shunt switches using saturated SiGe HBTs. A new mode of operation, called reverse saturation, using the emitter at the RF output node of the switch, is utilized to take advantage of the higher emitter doping and improved isolation from the substrate. The switches were designed in a 180 nm SiGe BiCMOS technology featuring 90 nm SiGe HBTs (selective emitter shrink) with fT/fmax of 250/300+ GHz. The forward-saturated switch achieves an insertion loss and isolation at 94 GHz of 1.8 dB and 19.3 dB, respectively. The reverse-saturated switch achieves a similar isolation, but reduces the insertion loss to 1.4 dB. This result represents a 30% improvement in insertion loss in comparison to the best CMOS SPDT at 94 GHz.", "title": "" }, { "docid": "7a56ca5ad5483aef5b886836c24bbb3b", "text": "Recent extensions to the standard Difference-of-Gaussians (DoG) edge detection operator have rendered it less susceptible to noise and increased its aesthetic appeal for stylistic depiction applications. Despite these advances, the technical subtleties and stylistic potential of the DoG operator are often overlooked. This paper reviews the DoG operator, including recent improvements, and offers many new results spanning a variety of styles, including pencil-shading, pastel, hatching, and binary black-and-white images. Additionally, we demonstrate a range of subtle artistic effects, such as ghosting, speed-lines, negative edges, indication, and abstraction, and we explain how all of these are obtained without, or only with slight modifications to an extended DoG formulation. In all cases, the visual quality achieved by the extended DoG operator is comparable to or better than those of systems dedicated to a single style.", "title": "" }, { "docid": "00eaa437ad2821482644ee75cfe6d7b3", "text": "A 65nm digitally-modulated polar transmitter incorporates a fully-integrated 2.4GHz efficient switching Inverse Class D power amplifier. Low power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54Mbps WLAN standard with 18% average efficiency.", "title": "" }, { "docid": "ea29b3421c36178680ae63c16b9cecad", "text": "Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. 
Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.", "title": "" }, { "docid": "5ff0113fad326f21bd07843733eebb02", "text": "Outlier detection is an essential aspect of data quality control as it allows analysts and engineers the ability to identify data quality problems through the use of their own data as a tool. However, traditional data quality control methods are based on users’ experience or previously established business rules, and this limits performance in addition to being a very time consuming process and low accuracy. Utilizing big data, we can leverage computing resources and advanced techniques to overcome these challenges and provide greater value to the business. In this paper, we first review relevant works and discuss machine learning techniques, tools, and statistical quality models. Second, we offer a creative data profiling framework based on deep learning and statistical model algorithms for improving data quality. Third, authors use public Arkansas officials’ salaries, one of the open datasets available from the state of Arkansas’ official website, to demonstrate how to identify outlier data for improving data quality via machine learning. Finally, we discuss future works.", "title": "" }, { "docid": "e50253a714afe5ad36439ab821604ce8", "text": "INTRODUCTION\nAn approach to building a hybrid simulation of patient flow is introduced with a combination of data-driven methods for automation of model identification. The approach is described with a conceptual framework and basic methods for combination of different techniques. The implementation of the proposed approach for simulation of the acute coronary syndrome (ACS) was developed and used in an experimental study.\n\n\nMETHODS\nA combination of data, text, process mining techniques, and machine learning approaches for the analysis of electronic health records (EHRs) with discrete-event simulation (DES) and queueing theory for the simulation of patient flow was proposed. The performed analysis of EHRs for ACS patients enabled identification of several classes of clinical pathways (CPs) which were used to implement a more realistic simulation of the patient flow. The developed solution was implemented using Python libraries (SimPy, SciPy, and others).\n\n\nRESULTS\nThe proposed approach enables more a realistic and detailed simulation of the patient flow within a group of related departments. 
An experimental study shows an improved simulation of patient length of stay for ACS patient flow obtained from EHRs in Almazov National Medical Research Centre in Saint Petersburg, Russia.\n\n\nCONCLUSION\nThe proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for the implementation of a simulation of complex and diverse scenarios within a flow of patients for different purposes: decision making, training, management optimization, and others.", "title": "" }, { "docid": "9cb703cf5394a77bd15c0ad356928f04", "text": "Studies were undertaken to evaluate locally available subtrates for use in a culture medium for Phytophthora infestans (Mont.) de Bary employing a protocol similar to that used for the preparation of rye A agar. Test media preparations were assessed for growth, sporulation, oospore formation, and long-term storage of P. infestans. Media prepared from grains and fresh produce available in Thailand and Asian countries such as black bean (BB), red kidney bean (RKB), black sesame (BSS), sunflower (SFW) and sweet corn supported growth and sporulation of representative isolates compared with rye A, V8 and oat meal media. Oospores were successfully formed on BB and RKB media supplemented with β-sitosterol. The BB, RKB, BSS and SFW media maintained viable fungal cultures with sporulation ability for 8 months, similar to the rye A medium. Three percent and 33% of 135 isolates failed to grow on V8 and SFW media, respectively.", "title": "" }, { "docid": "f365988f4b131e39a59e00a39d428bc3", "text": "The ethanol and water extracts of Sansevieria trifasciata leaves showed dose-dependent and significant (P < 0.05) increase in pain threshold in tail-immersion test. Moreover, both the extracts (100 - 200 mg/kg) exhibited a dose-dependent inhibition of writhing and also showed a significant (P < 0.001) inhibition of both phases of the formalin pain test. The ethanol extract (200 mg/kg) significantly (P < 0.01) reversed yeast-induced fever. Preliminary phytochemical screening of the extracts showed the presence of alkaloids, flavonoids, saponins, glycosides, terpenoids, tannins, proteins and carbohydrates.", "title": "" } ]
scidocsrr
35acd1125604011e93fc78e2604ea45a
Image-Based Human Age Estimation by Manifold Learning and Locally Adjusted Robust Regression
[ { "docid": "b63591acc9a15a52029860806e2b1060", "text": "Age Specific Human-Computer Interaction (ASHCI) has vast potential applications in daily life. However, automatic age estimation technique is still underdeveloped. One of the main reasons is that the aging effects on human faces present several unique characteristics which make age estimation a challenging task that requires non-standard classification approaches. According to the speciality of the facial aging effects, this paper proposes the AGES (AGing pattErn Subspace) method for automatic age estimation. The basic idea is to model the aging pattern, which is defined as a sequence of personal aging face images, by learning a representative subspace. The proper aging pattern for an unseen face image is then determined by the projection in the subspace that can best reconstruct the face image, while the position of the face image in that aging pattern will indicate its age. The AGES method has shown encouraging performance in the comparative experiments either as an age estimator or as an age range estimator.", "title": "" }, { "docid": "1e2768be2148ff1fd102c6621e8da14d", "text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.", "title": "" }, { "docid": "83a644ac25c7db156d787629060fb32a", "text": "In this paper we study face recognition across ages within a real passport photo verification task. First, we propose using the gradient orientation pyramid for this task. Discarding the gradient magnitude and utilizing hierarchical techniques, we found that the new descriptor yields a robust and discriminative representation. With the proposed descriptor, we model face verification as a two-class problem and use a support vector machine as a classifier. The approach is applied to two passport data sets containing more than 1,800 image pairs from each person with large age differences. Although simple, our approach outperforms previously tested Bayesian technique and other descriptors, including the intensity difference and gradient with magnitude. In addition, it works as well as two commercial systems. Second, for the first time, we empirically study how age differences affect recognition performance. Our experiments show that, although the aging process adds difficulty to the recognition task, it does not surpass illumination or expression as a confounding factor.", "title": "" } ]
[ { "docid": "66382b88e0faa573251d5039ccd65d6c", "text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.", "title": "" }, { "docid": "a924ccb5a5465c1542fea5ac34749dd9", "text": "Self-awareness facilitates a proper assessment of cost-constrained cyber-physical systems, allocating limited resources where they are most needed. Together, situation awareness and attention are key enablers for self-awareness in efficient distributed sensing and computing networks.", "title": "" }, { "docid": "7c237153bbd9e43a93bccfdf5579ecfa", "text": "Over the last decade, efforts from industries and research communities have been made in addressing the security of Supervisory Control and Data Acquisition (SCADA) systems. However, the SCADA security deployed for critical infrastructures is still a challenging issue today. This paper gives an overview of the complexity of SCADA security. Products and applications in control network security are reviewed. Furthermore, new developments in SCADA security, especially the trend in technical and theoretical studies are presented. Some important topics on SCADA security are identified and highlighted and this can be served as the guide for future works in this area.", "title": "" }, { "docid": "0c1cd807339481f3a0b6da1fbe96950c", "text": "Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27x. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. 
Correcting these weaknesses further increases performance by 4.30x.", "title": "" }, { "docid": "8bf9455d2eea2a2a5213ba8bb58e224c", "text": "Visual servoing is a well-known task in robotics. However, there are still challenges when multiple visual sources are combined to accurately guide the robot or occlusions appear. In this paper we present a novel visual servoing approach using hybrid multi-camera input data to lead a robot arm accurately to dynamically moving target points in the presence of partial occlusions. The approach uses four RGBD sensors as Eye-to-Hand (EtoH) visual input, and an arm-mounted stereo camera as Eye-in-Hand (EinH). A Master supervisor task selects between using the EtoH or the EinH, depending on the distance between the robot and target. The Master also selects the subset of EtoH cameras that best perceive the target. When the EinH sensor is used, if the target becomes occluded or goes out of the sensor's view-frustum, the Master switches back to the EtoH sensors to re-track the object. Using this adaptive visual input data, the robot is then controlled using an iterative planner that uses position, orientation and joint configuration to estimate the trajectory. Since the target is dynamic, this trajectory is updated every time-step. Experiments show good performance in four different situations: tracking a ball, targeting a bulls-eye, guiding a straw to a mouth and delivering an item to a moving hand. The experiments cover both simple situations such as a ball that is mostly visible from all cameras, and more complex situations such as the mouth which is partially occluded from some of the sensors.", "title": "" }, { "docid": "058a9737def3c1dc46218afe02e8d9b1", "text": "Covering point process theory, random geometric graphs, and coverage processes, this rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance, and make good design choices for future wireless architectures and protocols that efficiently manage interference effects. Practical engineering applications are integrated with mathematical theory, with an understanding of probability the only prerequisite. At the same time, stochastic geometry is connected to percolation theory and the theory of random geometric graphs, and is accompanied by a brief introduction to the R statistical computing language. Combining theory and hands-on analytical techniques, this is a comprehensive guide to the spatial stochastic models essential for modeling and analysis of wireless network performance.", "title": "" }, { "docid": "5d247482bb06e837bf04c04582f4bfa2", "text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.", "title": "" }, { "docid": "60a3538ec6a64af6f8fd447ed0fb79f5", "text": "Several Pinned Photodiode (PPD) CMOS Image Sensors (CIS) are designed, manufactured, characterized and exposed biased to ionizing radiation up to 10 kGy(SiO2 ). 
In addition to the usually reported dark current increase and quantum efficiency drop at short wavelengths, several original radiation effects are shown: an increase of the pinning voltage, a decrease of the buried photodiode full well capacity, a large change in charge transfer efficiency, the creation of a large number of Total Ionizing Dose (TID) induced Dark Current Random Telegraph Signal (DC-RTS) centers active in the photodiode (even when the Transfer Gate (TG) is accumulated) and the complete depletion of the Pre-Metal Dielectric (PMD) interface at the highest TID leading to a large dark current and the loss of control of the TG on the dark current. The proposed mechanisms at the origin of these degradations are discussed. It is also demonstrated that biasing (i.e., operating) the PPD CIS during irradiation does not enhance the degradations compared to sensors grounded during irradiation.", "title": "" }, { "docid": "3e4a715c040ebb38674c057de6efc680", "text": "Agricultural data have a major role in the planning and success of rural development activities. Agriculturalists, planners, policy makers, government officials, farmers and researchers require relevant information to trigger decision making processes. This paper presents our approach towards extracting named entities from real-world agricultural data from different areas of agriculture using Conditional Random Fields (CRFs). Specifically, we have created a Named Entity tagset consisting of 19 fine grained tags. To the best of our knowledge, there is no specific tag set and annotated corpus available for the agricultural domain. We have performed several experiments using different combination of features and obtained encouraging results. Most of the issues observed in an error analysis have been addressed by post-processing heuristic rules, which resulted in a significant improvement of our system’s accuracy.", "title": "" }, { "docid": "78b358d12e94a100fc17beabcb34a43d", "text": "Model-free reinforcement learning has been shown to be a promising data driven approach for automatic dialogue policy optimization, but a relatively large amount of dialogue interactions is needed before the system reaches reasonable performance. Recently, Gaussian process based reinforcement learning methods have been shown to reduce the number of dialogues needed to reach optimal performance, and pre-training the policy with data gathered from different dialogue systems has further reduced this amount. Following this idea, a dialogue system designed for a single speaker can be initialised with data from other speakers, but if the dynamics of the speakers are very different the model will have a poor performance. When data gathered from different speakers is available, selecting the data from the most similar ones might improve the performance. We propose a method which automatically selects the data to transfer by defining a similarity measure between speakers, and uses this measure to weight the influence of the data from each speaker in the policy model. The methods are tested by simulating users with different severities of dysarthria interacting with a voice enabled environmental control system.", "title": "" }, { "docid": "970a76190e980afe51928dcaa6d594c8", "text": "Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. 
Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online.", "title": "" }, { "docid": "e9782d003112c64c3dc41c1f2a5c641e", "text": "Osgood-Schlatter's disease is a well known entity affecting the adolescent knee. Radiologic examination of the knee has been an integral part of the diagnosis of this condition for decades. However, the soft tissue changes have not been appreciated sufficiently. Emphasis is placed on the use of optimum radiographic technique and xeroradiography in the examination of the soft tissues of the knee.", "title": "" }, { "docid": "4a4a11d2779eab866ff32c564e54b69d", "text": "Although backpropagation neural networks generally predict better than decision trees do for pattern classiication problems, they are often regarded as black boxes, i.e., their predictions cannot be explained as those of decision trees. In many applications, more often than not, explicit knowledge is needed by human experts. This work drives a symbolic representation for neural networks to make explicit each prediction of a neural network. An algorithm is proposed and implemented to extract symbolic rules from neural networks. Explicitness of the extracted rules is supported by comparing the symbolic rules generated by decision trees methods. Empirical study demonstrates that the proposed algorithm generates high quality rules from neural networks comparable with those of decision trees in terms of predictive accuracy, number of rules and average number of conditions for a rule. The symbolic rules from nerual networks preserve high predictive accuracy of original networks. An early and shorter version of this paper has been accepted for presentation at IJCAI'95.", "title": "" }, { "docid": "4301e426c7fac17358a68b815a03d2e3", "text": "What exactly is “nonconscious consumer psychology?” We use the term to describe a category of consumption behavior that is driven by processes that occur outside a consumer's conscious awareness. In other words, individuals engage in consumptionrelated cognition, motivation, decision making, emotion, and behavior without recognizing the role that nonconscious processes played in shaping them. A growing literature has documented that a wide range of consumption behaviors are strongly influenced by factors outside of people's conscious awareness. For instance, consumers are often unaware they have been exposed to an environmental cue that triggers a given consumption behavior, or are unaware of a mental process that is occurring outside conscious awareness, or are even unaware of the consumption-related outcome of such a nonconscious process (Chartrand, 2005). Such processes are often adaptive and highly functional, but at times can lead to undesirable outcomes for consumers. 
By shining a light on a wide range of nonconscious consumer psychology we hope to facilitate increased reliance on our unconscious systems in certain situations and equip consumers to defend themselves when unconscious processes can lead to negative outcomes. What exactly then is a nonconscious psychological process for a consumer? We define it as a subset of automatic processing (Bargh & Chartrand, 1999). An automatic process is one that, once set into motion, has no need of conscious intervention (Bargh, 1989). The labeling of automatic processes in social and cognitive psychology, including those set forth in dual process models, implies that processes are either automatic or they are not. Labels such as automatic/controlled, implicit/explicit, conscious/ nonconscious, spontaneous/deliberative, and System 1/System 2 by their dichotomous nature suggest that consumers are either in conscious decision making mode or have their unconscious driving their decision making entirely. However, there are", "title": "" }, { "docid": "7d43cf2e0fcc795f6af4bdbcfb56d13e", "text": "Vehicular Ad hoc Networks is a special kind of mobile ad hoc network to provide communication among nearby vehicles and between vehicles and nearby fixed equipments. VANETs are mainly used for improving efficiency and safety of (future) transportation. There are chances of a number of possible attacks in VANET due to open nature of wireless medium. In this paper, we have classified these security attacks and logically organized/represented in a more lucid manner based on the level of effect of a particular security attack on intelligent vehicular traffic. Also, an effective solution is proposed for DOS based attacks which use the redundancy elimination mechanism consists of rate decreasing algorithm and state transition mechanism as its components. This solution basically adds a level of security to its already existing solutions of using various alternative options like channel-switching, frequency-hopping, communication technology switching and multiple-radio transceivers to counter affect the DOS attacks. Proposed scheme enhances the security in VANETs without using any cryptographic scheme.", "title": "" }, { "docid": "cf073f910b70151eab2e066e13e96b94", "text": "Paying health care providers to meet quality goals is an idea with widespread appeal, given the common perception that quality of care in the United States remains unacceptably low despite a decade of benchmarking and public reporting. There has been little critical analysis of the design of the current generation of quality incentive programs. In this paper we examine public reports of paying for quality over the past five years and assess each of the identified programs in terms of key design features, including the market share of payers, the structure of the reward system, the amount of revenue at stake, and the targeted domains of health care quality.", "title": "" }, { "docid": "951213cd4412570709fb34f437a05c72", "text": "In this paper, we present directional skip-gram (DSG), a simple but effective enhancement of the skip-gram model by explicitly distinguishing left and right context in word prediction. In doing so, a direction vector is introduced for each word, whose embedding is thus learned by not only word co-occurrence patterns in its context, but also the directions of its contextual words. 
Theoretical and empirical studies on complexity illustrate that our model can be trained as efficient as the original skip-gram model, when compared to other extensions of the skip-gram model. Experimental results show that our model outperforms others on different datasets in semantic (word similarity measurement) and syntactic (partof-speech tagging) evaluations, respectively.", "title": "" }, { "docid": "37927017353dc0bab9c081629d33d48c", "text": "Generating a secret key between two parties by extracting the shared randomness in the wireless fading channel is an emerging area of research. Previous works focus mainly on single-antenna systems. Multiple-antenna devices have the potential to provide more randomness for key generation than single-antenna ones. However, the performance of key generation using multiple-antenna devices in a real environment remains unknown. Different from the previous theoretical work on multiple-antenna key generation, we propose and implement a shared secret key generation protocol, Multiple-Antenna KEy generator (MAKE) using off-the-shelf 802.11n multiple-antenna devices. We also conduct extensive experiments and analysis in real indoor and outdoor mobile environments. Using the shared randomness extracted from measured Received Signal Strength Indicator (RSSI) to generate keys, our experimental results show that using laptops with three antennas, MAKE can increase the bit generation rate by more than four times over single-antenna systems. Our experiments validate the effectiveness of using multi-level quantization when there is enough mutual information in the channel. Our results also show the trade-off between bit generation rate and bit agreement ratio when using multi-level quantization. We further find that even if an eavesdropper has multiple antennas, she cannot gain much more information about the legitimate channel.", "title": "" }, { "docid": "c82cecc94eadfa9a916d89a9ee3fac21", "text": "In this paper, we develop a supply chain network model consisting of manufacturers and retailers in which the demands associated with the retail outlets are random. We model the optimizing behavior of the various decision-makers, derive the equilibrium conditions, and establish the finite-dimensional variational inequality formulation. We provide qualitative properties of the equilibrium pattern in terms of existence and uniqueness results and also establish conditions under which the proposed computational procedure is guaranteed to converge. Finally, we illustrate the model through several numerical examples for which the equilibrium prices and product shipments are computed. This is the first supply chain network equilibrium model with random demands for which modeling, qualitative analysis, and computational results have been obtained.", "title": "" }, { "docid": "e3326903fe350778242c039856601dfa", "text": "A review was conducted on the use of thermochemical biomass gasification for producing biofuels, biopower and chemicals. The upstream processes for gasification are similar to other biomass processing methods. However, challenges remain in the gasification and downstream processing for viable commercial applications. The challenges with gasification are to understand the effects of operating conditions on gasification reactions for reliably predicting and optimizing the product compositions, and for obtaining maximal efficiencies. 
Product gases can be converted to biofuels and chemicals such as Fischer-Tropsch fuels, green gasoline, hydrogen, dimethyl ether, ethanol, methanol, and higher alcohols. Processes and challenges for these conversions are also summarized.", "title": "" } ]
scidocsrr
84741c1e90fb2b70e5890ffc63ebf038
SeGAN: Segmenting and Generating the Invisible
[ { "docid": "1d8cd516cec4ef74d72fa283059bf269", "text": "Current high-quality object detection approaches use the same scheme: salience-based object proposal methods followed by post-classification using deep convolutional features. This spurred recent research in improving object proposal methods [18, 32, 15, 11, 2]. However, domain agnostic proposal generation has the principal drawback that the proposals come unranked or with very weak ranking, making it hard to trade-off quality for running time. Also, it raises the more fundamental question of whether high-quality proposal generation requires careful engineering or can be derived just from data alone. We demonstrate that learning-based proposal methods can effectively match the performance of hand-engineered methods while allowing for very efficient runtime-quality trade-offs. Using our new multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with 0.5 mAP for a single model and 0.52 mAP for an ensemble of two models. MSC-Multibox significantly improves the proposal quality over its predecessor Multibox [4] method: AP increases from 0.42 to 0.53 for the ILSVRC detection challenge. Finally, we demonstrate improved bounding-box recall compared to Multiscale Combinatorial Grouping [18] with less proposals on the Microsoft-COCO [14] data set.", "title": "" } ]
[ { "docid": "5b0a1e4752c67b002ce16395640dbc1a", "text": "Cut-and-Paste Text Summarization", "title": "" }, { "docid": "ab08118b53dd5eee3579260e8b23a9c5", "text": "We have trained a deep (convolutional) neural network to predict the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials. On randomly generated potentials, for which there is no analytic form for either the potential or the ground-state energy, the neural network model was able to predict the ground-state energy to within chemical accuracy, with a median absolute error of 1.49 mHa. We also investigate the performance of the model in predicting other quantities such as the kinetic energy and the first excited-state energy of random potentials. While we demonstrated this approach on a simple, tractable problem, the transferability and excellent performance of the resulting model suggests further applications of deep neural networks to problems of electronic structure.", "title": "" }, { "docid": "abaf590dfff79cd3282b36db369c8a32", "text": "Classifying a visual concept merely from its associated online textual source, such as a Wikipedia article, is an attractive research topic in zero-shot learning because it alleviates the burden of manually collecting semantic attributes. Recent work has pursued this approach by exploring various ways of connecting the visual and text domains. In this paper, we revisit this idea by going further to consider one important factor: the textual representation is usually too noisy for the zero-shot learning application. This observation motivates us to design a simple yet effective zero-shot learning method that is capable of suppressing noise in the text. Specifically, we propose an l2,1-norm based objective function which can simultaneously suppress the noisy signal in the text and learn a function to match the text document and visual features. We also develop an optimization algorithm to efficiently solve the resulting problem. By conducting experiments on two large datasets, we demonstrate that the proposed method significantly outperforms those competing methods which rely on online information sources but with no explicit noise suppression. Furthermore, we make an in-depth analysis of the proposed method and provide insight as to what kind of information in documents is useful for zero-shot learning.", "title": "" }, { "docid": "91e32e80a6a2f2a504776b9fd86425ca", "text": "We propose a method for semi-supervised semantic segmentation using an adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross entropy loss of the proposed model. In addition, the fully convolutional discriminator enables semi-supervised learning through discovering the trustworthy regions in predicted results of unlabeled images, thereby providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images to enhance the segmentation model. 
Experimental results on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "2489fb3b63d40b3f851de5d1b5da4f45", "text": "HANDEXOS is an exoskeleton device for supporting th e human hand and performing teleoperation activities. It could be used to opera te both in remote-manipulation mode and directly in microgravity environments. In manipulation mode, crew or operators within the space ship could tele-control the endeffector of a robot in the space during the executi on of extravehicular activities (EVA) by means of an advanced upper limb exoskeleton. The ch oice of an appropriate man-machine interface (MMI) is important to allow a correct and dexterous grasp of objects of regular and irregular shapes in the space. Many different t chnologies have been proposed, from conventional joysticks to exoskeletons, but the ari sing number of more and more dexterous space manipulators such as Robonaut [1] or Eurobot [2] leads researchers to design novel MMIs with the aim to be more capable to exploit all functional advantages offered by new space robots. From this point of view exoskeletons better suite for execution of remote control-task than conventional joysticks, facilitat ing commanding of three dimensional trajectories and saving time in crew’s operation a nd training [3]. Moreover, it’s important to point out that in micro gravity environments the astronauts spend most time doing motor exercises, so HANDEXOS can be useful in supporting such motor practice, assisting human operators in overco ming physical limitations deriving from the fatigue in performing EVA. It is the goal of this paper to provide a detailed description of HANDEXOS mechanical design and to present the results of the preliminar y simulations derived from the implementation of the exoskeleton/human finger dyna mic model for different actuation solutions.", "title": "" }, { "docid": "09c50033443696a183dcdb1e0fc93cf0", "text": "In this paper, we introduce a novel FPGA architecture with memristor-based reconfiguration (mrFPGA). The proposed architecture is based on the existing CMOS-compatible memristor fabrication process. The programmable interconnects of mrFPGA use only memristors and metal wires so that the interconnects can be fabricated over logic blocks, resulting in significant reduction of overall area and interconnect delay but without using a 3D die-stacking process. Using memristors to build up the interconnects can also provide capacitance shielding from unused routing paths and reduce interconnect delay further. Moreover we propose an improved architecture that allows adaptive buffer insertion in interconnects to achieve more speedup. Compared to the fixed buffer pattern in conventional FPGAs, the positions of inserted buffers in mrFPGA are optimized on demand. A complete CAD flow is provided for mrFPGA, with an advanced P&R tool named mrVPR that was developed for mrFPGA. The tool can deal with the novel routing structure of mrFPGA, the memristor shielding effect, and the algorithm for optimal buffer insertion. We evaluate the area, performance and power consumption of mrFPGA based on the 20 largest MCNC benchmark circuits. Results show that mrFPGA achieves 5.18x area savings, 2.28x speedup and 1.63x power savings. Further improvement is expected with combination of 3D technologies and mrFPGA.", "title": "" }, { "docid": "93de57d8f47722229c2ac2fd02db12c1", "text": "This paper presents a high-speed LVDS I/O interface for mobile DRAMs. 
A data rate of 6Gbps/pin and a transmit-jitter of 57.31ps pk-pk were demonstrated, in which an 800MHz clock and a 200mV swing were used. The power consumption by I/O circuit is 6.2mW/pin when a 10pf load is connected to the I/O, and output supply voltage is 1.2V. The proposed mobile DRAM has 6 data pins and 4 address/command pins for a multi-chip package (MCP). The transmitter uses a feed-back LVDS output driver and a common-mode feed-back controller achieving the reduction of driver currents and the constant common-mode as half voltage level. To achieve a low-transmit jitter, we use a driver with a double step pre-emphasis. The receiver employs a shared preamplifier scheme, which ensures transmit power reduction. The proposed DRAM with LVDS I/O was fabricated using an 80-nm DRAM process. It exhibits 161.1mV times 150ps rms eye-windows on the given channel", "title": "" }, { "docid": "6aa1c48fcde6674990a03a1a15b5dc0e", "text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications with band-notched function. The proposed antenna is composed of two offset microstrip-fed antenna elements with UWB performance. To achieve high isolation and polarization diversity, the antenna elements are placed perpendicular to each other. A parasitic T-shaped strip between the radiating elements is employed as a decoupling structure to further suppress the mutual coupling. In addition, the notched band at 5.5 GHz is realized by etching a pair of L-shaped slits on the ground. The antenna prototype with a compact size of 38.5 × 38.5 mm2 has been fabricated and measured. Experimental results show that the antenna has an impedance bandwidth of 3.08-11.8 GHz with reflection coefficient less than -10 dB, except the rejection band of 5.03-5.97 GHz. Besides, port isolation, envelope correlation coefficient and radiation characteristics are also investigated. The results indicate that the MIMO antenna is suitable for band-notched UWB applications.", "title": "" }, { "docid": "e767659e0d8a778dacda0f6642a3d292", "text": "We present a new self-organizing neural network model that has two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches (e.g., the Kohonen feature map) is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process that also includes occasional removal of units. The second variant of the model is a supervised learning method that results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible--in contrast to earlier approaches--to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks that generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented that are better than any results previously published.", "title": "" }, { "docid": "84f2072f32d2a29d372eef0f4622ddce", "text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data.
The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure", "title": "" }, { "docid": "f5ef7795ec28c8de19bfde30a2499350", "text": "DevOps and continuous development are getting popular in the software industry. Adopting these modern approaches in regulatory environments, such as medical device software, is not straightforward because of the demand for regulatory compliance. While DevOps relies on continuous deployment and integration, regulated environments require strict audits and approvals before releases. Therefore, the use of modern development approaches in regulatory environments is rare, as is the research on the topic. However, as software is more and more predominant in medical devices, modern software development approaches become attractive. This paper discusses the fit of DevOps for regulated medical device software development. We examine two related standards, IEC 62304 and IEC 82304-1, for obstacles and benefits of using DevOps for medical device software development. We found these standards to set obstacles for continuous delivery and integration. Respectively, development tools can help fulfilling the requirements of traceability and documentation of these standards.", "title": "" }, { "docid": "dc22d5dbb59b7e9b4a857e1e3dddd234", "text": "Issuer Delisting; Order Granting the Application of General Motors Corporation to Withdraw its Common Stock, $1 2/3 par value, from Listing and Registration on the Chicago Stock Exchange, Inc. File No. 1-00043 April 4, 2006 On March 2, 2006, General Motors Corporation, a Delaware corporation (\"Issuer\"), filed an application with the Securities and Exchange Commission (\"Commission\"), pursuant to Section 12(d) of the Securities Exchange Act of 1934 (\"Act\") and Rule 12d2-2(d) thereunder, to withdraw its common stock, $1 2/3 par value (\"Security\"), from listing and registration on the Chicago Stock Exchange, Inc. (\"CHX\"). Notice of such application requesting comments was published in the Federal Register on March 10, 2006. No comments were received. As discussed below, the Commission is granting the application. The Administrative Committee of the Issuer's Board of Directors (\"Board\") approved a resolution on September 9, 2005, to delist the Security from listing and registration on CHX. The Issuer stated that the purposes for seeking to delist the Security from CHX are to avoid dual regulatory oversight and dual listing fees. The Security is traded, and will continue to trade, on the New York Stock Exchange (\"NYSE\"). In addition, the Issuer stated that CHX advised the Issuer that the Security will continue to trade on CHX under unlisted trading privileges. The Issuer stated in its application that it has complied with applicable rules of CHX by providing CHX with the required documents governing the withdrawal of securities from listing and registration on CHX. 
The Issuer's application relates solely to the", "title": "" }, { "docid": "b32cd3e2763400dfc96c61e489673a6b", "text": "This paper presents a hybrid cascaded multilevel inverter for electric vehicles (EV) / hybrid electric vehicles (HEV) and utility interface applications. The inverter consists of a standard 3-leg inverter (one leg for each phase) and H-bridge in series with each inverter leg. It can use only a single DC power source to supply a standard 3-leg inverter along with three full H-bridges supplied by capacitors or batteries. Both fundamental frequency and high switching frequency PWM methods are used for the hybrid multilevel inverter. An experimental 5 kW prototype inverter is built and tested. The above two switching control methods are validated and compared experimentally.", "title": "" }, { "docid": "504b603449202c385bcd93c2a2934736", "text": "Understanding computer-mediated discussions : positivist and interpretive analyses of group support system use p. 47 Participation in groupware-mediated communities of practice : a socio-political analysis of knowledge working p. 82 Paradox lost? : firm-level evidence on the returns to information systems spending p. 109 Information technology and the nature of managerial work : from the productivity paradox to the icarus paradox? p. 128 IT value : the great divide between qualitative and quantitative and individual and organizational measures p. 152 The impact of information technology investment on enterprise performance : a case study p. 183 Competing though EDI at Brun Passot : achievements in France and ambitions for the single European market p. 199 Business value of information technology : a study of electronic data interchange p. 215 The performance impacts of quick response and strategic alignment in specialty retailing p. 235 Coordination and visualization : the role of electronic networks and personal relationships p. 257 Do electronic marketplaces lower the price of goods? p. 288 Next-generation trading in futures markets : a comparison of open outcry and order matching systems p. 298 Trust, technology and transaction costs : can theories transcend culture in a globalized world? p. 311 Reengineering the Dutch flower auctions : a framework for analyzing exchange organizations p. 341 Cross-cultural software production and use : a structurational analysis p. 372 Key issues in information systems management : an international perspective p. 392 The global digital divide : a sociological assessment of trends and causes p. 412 Information technology and transitions in the public service : a comparison of Scandinavia and the United States p. 430", "title": "" }, { "docid": "11ebaec69512af393fc30e96be2f7e20", "text": "Grammar induction is the task of learning syntactic structure in a setting where that structure is hidden. Grammar induction from words alone is interesting because it is similiar to the problem that a child learning a language faces. Previous work has typically assumed richer but cognitively implausible input, such as POS tag annotated data, which makes that work less relevant to human language acquisition. We show that grammar induction from words alone is in fact feasible when the model is provided with sufficient training data, and present two new streaming or mini-batch algorithms for PCFG inference that can learn from millions of words of training data. We compare the performance of these algorithms to a batch algorithm that learns from less data. 
The minibatch algorithms outperform the batch algorithm, showing that cheap inference with more data is better than intensive inference with less data. Additionally, we show that the harmonic initialiser, which previous work identified as essential when learning from small POStag annotated corpora (Klein and Manning, 2004), is not superior to a uniform initialisation.", "title": "" }, { "docid": "5a4c9b6626d2d740246433972ad60f16", "text": "We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows:", "title": "" }, { "docid": "18487821406b5a262a72e1cb46a05d2b", "text": "This study presents the applicability of an ensemble of artificial neural networks (ANNs) and learning paradigms for weather forecasting in southern Saskatchewan, Canada. The proposed ensemble method for weather forecasting has advantages over other techniques like linear combination. Generally, the output of an ensemble is a weighted sum, which are weight-fixed, with the weights being determined from the training or validation data. In the proposed approach, weights are determined dynamically from the respective certainties of the network outputs. The more certain a network seems to be of its decision, the higher the weight. The proposed ensemble model performance is contrasted with multi-layered perceptron network (MLPN), Elman recurrent neural network (ERNN), radial basis function network (RBFN), Hopfield model (HFM) predictive models and regression techniques. The data of temperature, wind speed and relative humidity are used to train and test the different models. With each model, 24-h-ahead forecasts are made for the winter, spring, summer and fall seasons. Moreover, the performance and reliability of the seven models are then evaluated by a number of statistical measures. Among the direct approaches employed, empirical results indicate that HFM is relatively less accurate and RBFN is relatively more reliable for the weather forecasting problem. In comparison, the ensemble of neural networks produced the most accurate forecasts.", "title": "" }, { "docid": "a29a51df4eddfa0239903986f4011532", "text": "In recent years additive manufacturing, or threedimensional (3D) printing, it is becoming increasingly widespread and used also in the medical and biomedical field [1]. 3D printing is a technology that allows to print, in plastic or other material, solid objects of any shape from its digital model. The printing process takes place by overlapping layers of material corresponding to cross sections of the final product. The 3D models can be created de novo, with a 3D modeling software, or it is possible to replicate an existing object with the use of a 3D scanner. In the past years, the development of appropriate software packages allowed to generate 3D printable anatomical models from computerized tomography, magnetic resonance imaging and ultrasound scans [2,3]. Up to now there have been 3D printed objects of nearly any size (from nanostructures to buildings) and material. Plastics, metals, ceramics, graphene and even derivatives of human tissues. 
The so-called “bio-printers”, in fact, allow to print one above the other thin layers of cells immersed in a gelatinous matrix. Recent advances of 3D bioprinting enabled researchers to print biocompatible scaffolds and human tissues such as skin, bone, cartilage, vessels and are driving to the design and 3D printing of artificial organs like liver and kidney [4]. Dentistry, prosthetics, craniofacial reconstructive surgery, neurosurgery and orthopedic surgery are among the disciplines that have already shown versatility and possible applications of 3D printing in adults and children [2,5]. Only a few experiences have instead been reported in newborn and infants. 3D printed individualized bioresorbable airway splints have been used for the treatment of three infants with severe tracheobronchomalacia, ensuring resolution of pulmonary and extrapulmonary symptoms [6,7]. A 3D model of a complex congenital heart defects have been used for preoperative planning of intraoperative procedures, allowing surgeons to repair a complex defect in a single intervention [8]. As already shown for children with obstructive sleep apnea and craniofacial anomalies [9]. personalized 3D printed masks could improve CPAP effectiveness and comfort also in term and preterm neonates. Neonatal emergency transport services and rural hospitals could also benefit from this technology, making possible to print medical devices spare parts, surgical and medical instruments wherever not readily available. It is envisaged that 3D printing, in the next future, will give its contribute toward the individualization of neonatal care, although further multidisciplinary studies are still needed to evaluate safety, possible applications and realize its full potential.", "title": "" }, { "docid": "9accdf3edad1e9714282e58758d3c382", "text": "We present initial results from and quantitative analysis of two leading open source hypervisors, Xen and KVM. This study focuses on the overall performance, performance isolation, and scalability of virtual machines running on these hypervisors. Our comparison was carried out using a benchmark suite that we developed to make the results easily repeatable. Our goals are to understand how the different architectural decisions taken by different hypervisor developers affect the resulting hypervisors, to help hypervisor developers realize areas of improvement for their hypervisors, and to help users make informed decisions about their choice of hypervisor.", "title": "" } ]
scidocsrr
a75b16191965143012f3cc558f668eb6
Music emotion recognition using chord progressions
[ { "docid": "1c56fb7d4c5998c6bfab1cb35fe21681", "text": "With the growth of digital music, the development of music recommendation is helpful for users. The existing recommendation approaches are based on the users' preference on music. However, sometimes, recommending music according to the emotion is needed. In this paper, we propose a novel model for emotion-based music recommendation, which is based on the association discovery from film music. We investigated the music feature extraction and modified the affinity graph for association discovery between emotions and music features. Experimental result shows that the proposed approach achieves 85% accuracy in average.", "title": "" }, { "docid": "c692dd35605c4af62429edef6b80c121", "text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.", "title": "" } ]
[ { "docid": "00b2befc6cfa60d0d7799673de232461", "text": "During the last decade, various machine learning and data mining techniques have been applied to Intrusion Detection Systems (IDSs) which have played an important role in defending critical computer systems and networks from cyber attacks. Unsupervised anomaly detection techniques have received a particularly great amount of attention because they enable construction of intrusion detection models without using labeled training data (i.e., with instances preclassified as being or not being an attack) in an automated manner and offer intrinsic ability to detect unknown attacks; i.e., 0-day attacks. Despite the advantages, it is still not easy to deploy them into a real network environment because they require several parameters during their building process, and thus IDS operators and managers suffer from tuning and optimizing the required parameters based on changes of their network characteristics. In this paper, we propose a new anomaly detection method by which we can automatically tune and optimize the values of parameters without predefining them. We evaluated the proposed method over real traffic data obtained from Kyoto University honeypots. The experimental results show that the performance of the proposed method is superior to that of the previous one. 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "1184260e77b2f6eaab97c0b9e2a43afc", "text": "In pervasive and ubiquitous computing systems, human activity recognition has immense potential in a large number of application domains. Current activity recognition techniques (i) do not handle variations in sequence, concurrency and interleaving of complex activities; (ii) do not incorporate context; and (iii) require large amounts of training data. There is a lack of a unifying theoretical framework which exploits both domain knowledge and data-driven observations to infer complex activities. In this article, we propose, develop and validate a novel Context-Driven Activity Theory (CDAT) for recognizing complex activities. We develop a mechanism using probabilistic and Markov chain analysis to discover complex activity signatures and generate complex activity definitions. We also develop a Complex Activity Recognition (CAR) algorithm. It achieves an overall accuracy of 95.73% using extensive experimentation with real-life test data. CDAT utilizes context and links complex activities to situations, which reduces inference time by 32.5% and also reduces training data by 66%.", "title": "" }, { "docid": "8a6a26094a9752010bb7297ecc80cd15", "text": "This paper provides standard instructions on how to protect short text messages with one-time pad encryption. The encryption is performed with nothing more than a pencil and paper, but provides absolute message security. If properly applied, it is mathematically impossible for any eavesdropper to decrypt or break the message without the proper key.", "title": "" }, { "docid": "4d1be9aebf7534cce625b95bde4696c6", "text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. 
We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.", "title": "" }, { "docid": "32dd24b2c3bcc15dd285b2ffacc1ba43", "text": "In this paper, we present for the first time the realization of a 77 GHz chip-to-rectangular waveguide transition realized in an embedded Wafer Level Ball Grid Array (eWLB) package. The chip is contacted with a coplanar waveguide (CPW). For the transformation of the transverse electromagnetic (TEM) mode of the CPW line to the transverse electric (TE) mode of the rectangular waveguide an insert is used in the eWLB package. This insert is based on radio-frequency (RF) printed circuit board (PCB) technology. Micro vias formed in the insert are used to realize the sidewalls of the rectangular waveguide structure in the fan-out area of the eWLB package. The redistribution layers (RDLs) on the top and bottom surface of the package form the top and bottom wall, respectively. We present two possible variants of transforming the TEM mode to the TE mode. The first variant uses a via realized in the rectangular waveguide structure. The second variant uses only the RDLs of the eWLB package for mode conversion. We present simulation and measurement results of both variants. We obtain an insertion loss of 1.5 dB and return loss better than 10 dB. The presented results show that this approach is an attractive candidate for future low loss and highly integrated RF systems.", "title": "" }, { "docid": "e289e25a86e743a189fd5fec1d911f74", "text": "Congestion avoidance mechanisms allow a network to operate in the optimal region of low delay and high throughput, thereby, preventing the network from becoming congested. This is different from the traditional congestion control mechanisms that allow the network to recover from the congested state of high delay and low throughput. Both congestion avoidance and congestion control mechanisms are basically resource management problems. They can be formulated as system control problems in which the system senses its state and feeds this back to its users who adjust their controls. The key component of any congestion avoidance scheme is the algorithm (or control function) used by the users to increase or decrease their load (window or rate). We abstractly characterize a wide class of such increase/decrease algorithms and compare them using several different performance metrics. They key metrics are efficiency, fairness, convergence time, and size of oscillations. It is shown that a simple additive increase and multiplicative decrease algorithm satisfies the sufficient conditions for convergence to an efficient and fair state regardless of the starting state of the network. 
This is the algorithm finally chosen for implementation in the congestion avoidance scheme recommended for Digital Networking Architecture and OSI Transport Class 4 Networks.", "title": "" }, { "docid": "80c43c621e3eef1e39ac43fd34d74dc7", "text": "A novel stealth Vivaldi antenna is proposed in this paper that covers the entire X-band from 8 to 12 GHz. Based on the difference of the current distribution on the metal patch when the antenna radiates or scatters, the shape of the patch is modified so that the electromagnetic scattering is significantly decreased in a wide angle range within the entire operating band. Maximally 14dBsm radar cross section (RCS) reduction is achieved with satisfactory radiation performance maintained.", "title": "" }, { "docid": "3509f0bb534fbb5da5b232b91d81c8e9", "text": "BACKGROUND\nBlighia sapida is a woody perennial multipurpose fruit tree species native to the Guinean forests of West Africa. The fleshy arils of the ripened fruits are edible. Seeds and capsules of the fruits are used for soap-making and all parts of the tree have medicinal properties. Although so far overlooked by researchers in the region, the tree is highly valued by farmers and is an important component of traditional agroforestry systems in Benin. Fresh arils, dried arils and soap are traded in local and regional markets in Benin providing substantial revenues for farmers, especially women. Recently, ackee has emerged as high-priority species for domestication in Benin but information necessary to elaborate a clear domestication strategy is still very sketchy. This study addresses farmers' indigenous knowledge on uses, management and perception of variation of the species among different ethnic groups taking into account also gender differences.\n\n\nMETHODS\n240 randomly selected persons (50% women) belonging to five different ethnic groups, 5 women active in the processing of ackee fruits and 6 traditional healers were surveyed with semi-structured interviews. Information collected refer mainly to the motivation of the respondents to conserve ackee trees in their land, the local uses, the perception of variation, the preference in fruits traits, the management practices to improve the production and regenerate ackee.\n\n\nRESULTS\nPeople have different interests on using ackee, variable knowledge on uses and management practices, and have reported nine differentiation criteria mainly related to the fruits. Ackee phenotypes with preferred fruit traits are perceived by local people to be more abundant in managed in-situ and cultivated stands than in unmanaged wild stands, suggesting that traditional management has initiated a domestication process. As many as 22 diseases have been reported to be healed with ackee. In general, indigenous knowledge about ackee varies among ethnic and gender groups.\n\n\nCONCLUSIONS\nWith the variation observed among ethnic groups and gender groups for indigenous knowledge and preference in fruits traits, a multiple breeding sampling strategy is recommended during germplasm collection and multiplication. This approach will promote sustainable use and conservation of ackee genetic resources.", "title": "" }, { "docid": "5569fa921ab298e25a70d92489b273fc", "text": "We present Centiman, a system for high performance, elastic transaction processing in the cloud. 
Centiman provides serializability on top of a key-value store with a lightweight protocol based on optimistic concurrency control (OCC).\n Centiman is designed for the cloud setting, with an architecture that is loosely coupled and avoids synchronization wherever possible. Centiman supports sharded transaction validation; validators can be added or removed on-the-fly in an elastic manner. Processors and validators scale independently of each other and recover from failure transparently to each other. Centiman's loosely coupled design creates some challenges: it can cause spurious aborts and it makes it difficult to implement common performance optimizations for read-only transactions. To deal with these issues, Centiman uses a watermark abstraction to asynchronously propagate information about transaction commits through the system.\n In an extensive evaluation we show that Centiman provides fast elastic scaling, low-overhead serializability for read-heavy workloads, and scales to millions of operations per second.", "title": "" }, { "docid": "5691a43e4ea629e2cb2d5df928813247", "text": "Due to the inherent uncertainty involved in renewable energy forecasting, uncertainty quantification is a key input to maintain acceptable levels of reliability and profitability in power system operation. A proposal is formulated and evaluated here for the case of solar power generation, when only power and meteorological measurements are available, without sky-imaging and information about cloud passages. Our empirical investigation reveals that the distribution of forecast errors do not follow any of the common parametric densities. This therefore motivates the proposal of a nonparametric approach to generate very short-term predictive densities, i.e., for lead times between a few minutes to one hour ahead, with fast frequency updates. We rely on an Extreme Learning Machine (ELM) as a fast regression model, trained in varied ways to obtain both point and quantile forecasts of solar power generation. Four probabilistic methods are implemented as benchmarks. Rival approaches are evaluated based on a number of test cases for two solar power generation sites in different climatic regions, allowing us to show that our approach results in generation of skilful and reliable probabilistic forecasts in a computationally efficient manner.", "title": "" }, { "docid": "2c8c8511e1391d300bfd4b0abd5ecea4", "text": "In 2009, we reported on a new Intelligent Tutoring Systems (ITS) technology, example-tracing tutors, that can be built without programming using the Cognitive Tutor Authoring Tools (CTAT). Creating example-tracing tutors was shown to be 4–8 times as cost-effective as estimates for ITS development from the literature. Since 2009, CTAT and its associated learning management system, the Tutorshop, have been extended and have been used for both research and real-world instruction. As evidence that example-tracing tutors are an effective and mature ITS paradigm, CTAT-built tutors have been used by approximately 44,000 students and account for 40 % of the data sets in DataShop, a large open repository for educational technology data sets. We review 18 example-tracing tutors built since 2009, which have been shown to be effective in helping students learn in real educational settings, often with large pre/post effect sizes. These tutors support a variety of pedagogical approaches, beyond step-based problem solving, including collaborative learning, educational games, and guided invention activities. 
CTAT and other ITS authoring tools illustrate that non-programmer approaches to building ITS are viable and useful and will likely play a key role in making ITS widespread.", "title": "" }, { "docid": "e47c5a856956aff71eab83a61a7bdd24", "text": "To effectively reduce output ripple of switched-capacitor DC-DC converters which generate variable output voltages, a novel feedback control scheme is presented. The proposed scheme uses pulse density and width modulation (PDWM) to reduce the output ripple with low output voltage. The prototype chip was implemented using 65nm CMOS process. The switched-capacitor DC-DC converter has 0.2-V to 0.47-V output voltage and delivers 0.25-mA to 10-mA output current from a 1-V input supply with a peak efficiency of 87%. Compared with the conventional pulse density modulation (PDM), the proposed switched-capacitor DC-DC converter with PDWM reduces the output ripple by 57% in the low output voltage region with the efficiency penalty of 2%.", "title": "" }, { "docid": "1fa056e87c10811b38277d161c81c2ac", "text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.", "title": "" }, { "docid": "03ab3aeee4eb4505956a0c516cab26dd", "text": "The present study investigated the effect of 21 days of horizontal bed rest on cutaneous cold and warm sensitivity, and on behavioural temperature regulation. Healthy male subjects (N = 10) were accommodated in a hospital ward for the duration of the study and were under 24-h medical care. All activities (eating, drinking, hygiene, etc.) were conducted in the horizontal position. On the 1st and 22nd day of bed rest, cutaneous temperature sensitivity was tested by applying cold and warm stimuli of different magnitudes to the volar region of the forearm via a Peltier element thermode. Behavioural thermoregulation was assessed by having the subjects regulate the temperature of the water within a water-perfused suit (T wps) they were wearing. A control unit established a sinusoidal change in T wps, such that it varied from 27 to 42°C. The subjects could alter the direction of the change of T wps, when they perceived it as thermally uncomfortable. The magnitude of the oscillations towards the end of the trial was assumed to represent the upper and lower boundaries of the thermal comfort zone. The cutaneous threshold for detecting cold stimulus decreased (P < 0.05) from 1.6 (1.0)°C on day 1 to 1.0 (0.3)°C on day 22. No effect was observed on the ability to detect warm stimuli or on the regulated T wps. We conclude that although cold sensitivity increased after bed rest, it was not of sufficient magnitude to cause any alteration in behavioural thermoregulatory responses.", "title": "" }, { "docid": "ab069d82c7cc70f2640c36e31761d1a8", "text": "Since the end of the CoNLL-2014 shared task on grammatical error correction (GEC), research into language model (LM) based approaches to GEC has largely stagnated. 
In this paper, we re-examine LMs in GEC and show that it is entirely possible to build a simple system that not only requires minimal annotated data (∼1000 sentences), but is also fairly competitive with several state-of-the-art systems. This approach should be of particular interest for languages where very little annotated training data exists, although we also hope to use it as a baseline to motivate future research.", "title": "" }, { "docid": "a4ad254998fb765f3048158915855413", "text": "The ability to detect small objects and the speed of the object detector are very important for the application of autonomous driving, and in this paper, we propose an effective yet efficient one-stage detector, which gained the second place in the Road Object Detection competition of CVPR2018 workshop Workshop of Autonomous Driving(WAD). The proposed detector inherits the architecture of SSD and introduces a novel Comprehensive Feature Enhancement(CFE) module into it. Experimental results on this competition dataset as well as the MSCOCO dataset demonstrate that the proposed detector (named CFENet) performs much better than the original SSD and the stateof-the-art method RefineDet especially for small objects, while keeping high efficiency close to the original SSD. Specifically, the single scale version of the proposed detector can run at the speed of 21 fps, while the multi-scale version with larger input size achieves the mAP 29.69, ranking second on the leaderboard.", "title": "" }, { "docid": "9f9268761bd2335303cfe2797d7e9eaa", "text": "CYBER attacks have risen in recent times. The attack on Sony Pictures by hackers, allegedly from North Korea, has caught worldwide attention. The President of the United States of America issued a statement and “vowed a US response after North Korea’s alleged cyber-attack”.This dangerous malware termed “wiper” could overwrite data and stop important execution processes. An analysis by the FBI showed distinct similarities between this attack and the code used to attack South Korea in 2013, thus confirming that hackers re-use code from already existing malware to create new variants. This attack along with other recently discovered attacks such as Regin, Opcleaver give one clear message: current cyber security defense mechanisms are not sufficient enough to thwart these sophisticated attacks. Today’s defense mechanisms are based on scanning systems for suspicious or malicious activity. If such an activity is found, the files under suspect are either quarantined or the vulnerable system is patched with an update. These scanning methods are based on a variety of techniques such as static analysis, dynamic analysis and other heuristics based techniques, which are often slow to react to new attacks and threats. Static analysis is based on analyzing an executable without executing it, while dynamic analysis executes the binary and studies its behavioral characteristics. Hackers are familiar with these standard methods and come up with ways to evade the current defense mechanisms. They produce new malware variants that easily evade the detection methods. These variants are created from existing malware using inexpensive easily available “factory toolkits” in a “virtual factory” like setting, which then spread over and infect more systems. Once a system is compromised, it either quickly looses control and/or the infection spreads to other networked systems. 
While security techniques constantly evolve to keep up with new attacks, hackers too change their ways and continue to evade defense mechanisms. As this never-ending billion dollar “cat and mouse game” continues, it may be useful to look at avenues that can bring in novel alternative and/or orthogonal defense approaches to counter the ongoing threats. The hope is to catch these new attacks using orthogonal and complementary methods which may not be well known to hackers, thus making it more difficult and/or expensive for them to evade all detection schemes. This paper focuses on such orthogonal approaches from Signal and Image Processing that complement standard approaches.", "title": "" }, { "docid": "e483d914e00fa46a6be188fabd396165", "text": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.", "title": "" }, { "docid": "db3c5c93daf97619ad927532266b3347", "text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.", "title": "" }, { "docid": "d88e4d9bba66581be16c9bd59d852a66", "text": "After five decades characterized by empiricism and several pitfalls, some of the basic mechanisms of action of ozone in pulmonary toxicology and in medicine have been clarified. The present knowledge allows to understand the prolonged inhalation of ozone can be very deleterious first for the lungs and successively for the whole organism. 
On the other hand, a small ozone dose, well calibrated against the potent antioxidant capacity of blood, can trigger several useful biochemical mechanisms and reactivate the antioxidant system. In detail, the ozone therapy approach involves blood cells and the endothelium, first ex vivo and then during the infusion of the ozonated blood back into the donor; by transferring the ozone messengers to billions of cells, it generates a therapeutic effect. Thus, in spite of a common prejudice, single ozone doses can be used therapeutically in selected human diseases without any toxicity or side effects. Moreover, the versatility and breadth of the beneficial effects of ozone applications have become evident in orthopedics and in cutaneous and mucosal infections, as well as in dentistry.", "title": "" } ]
scidocsrr
b676e1b01dea17be42edf8cde04ca128
Learning to Resolve Natural Language Ambiguities: A Unified Approach
[ { "docid": "cce513c48e630ab3f072f334d00b67dc", "text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press", "title": "" } ]
[ { "docid": "e78af59ab9e02fc5116118652b9dab81", "text": "The purpose of this study was to calculate and validate reference standards for the 20-m shuttle run test (SR) in youths aged 10-18 years. Reference standards based on the number of completed SR laps were calculated by LMS method in a reference group of 5559 students. Cut-off values for SR laps were determined and tested by ROC curve analysis in a validation group (633 students), from which waist circumference, HDL-cholesterol, triglycerides, fasting glucose and mean arterial pressure were assessed to calculate a metabolic risk score, later dichotomized in low and high metabolic risk (HMRS). The accuracy of SR laps standards was significant for girls (AUC = 0.66; 95% CI = 0.58-0.74; p < .001) and boys (AUC = 0.71; 95% CI = 0.62-0.79; p < .001) for identifying subjects at HMRS. The 40th percentile was the best cut-off for SR laps in girls (SENS = 0.569; 1-SPEC = 0.330) and boys (SENS = 0.634; 1-SPEC = 0.266). New SR laps reference standards are able to discriminate metabolic risk levels, and may provide a valuable tool for early prevention of cardiovascular risk factors.", "title": "" }, { "docid": "ab96da9c77ada044e1a0e7a1993dbf1f", "text": "Industrial penicillin production with the filamentous fungus Penicillium chrysogenum is based on an unprecedented effort in microbial strain improvement. To gain more insight into penicillin synthesis, we sequenced the 32.19 Mb genome of P. chrysogenum Wisconsin54-1255 and identified numerous genes responsible for key steps in penicillin production. DNA microarrays were used to compare the transcriptomes of the sequenced strain and a penicillinG high-producing strain, grown in the presence and absence of the side-chain precursor phenylacetic acid. Transcription of genes involved in biosynthesis of valine, cysteine and α-aminoadipic acid—precursors for penicillin biosynthesis—as well as of genes encoding microbody proteins, was increased in the high-producing strain. Some gene products were shown to be directly controlling β-lactam output. Many key cellular transport processes involving penicillins and intermediates remain to be characterized at the molecular level. Genes predicted to encode transporters were strongly overrepresented among the genes transcriptionally upregulated under conditions that stimulate penicillinG production, illustrating potential for future genomics-driven metabolic engineering.", "title": "" }, { "docid": "b29d979edaf08c0dafe9864f28519a3a", "text": "We study the problem of bilingual lexicon induction (BLI) in a setting where some translation resources are available, but unknown translations are sought for certain, possibly domain-specific terminology. We frame BLI as a classification problem for which we design a neural network based classification architecture composed of recurrent long short-term memory and deep feed forward networks. The results show that wordand character-level representations each improve state-of-the-art results for BLI, and the best results are obtained by exploiting the synergy between these wordand character-level representations in the classification model.", "title": "" }, { "docid": "98cb849504f344253bc879704c698f1e", "text": "Serverless computing provides a small runtime container to execute lines of codes without infrastructure management which is similar to Platform as a Service (PaaS) but a functional level. 
Amazon started the event-driven compute named Lambda functions in 2014 with a 25 concurrent limitation, but it now supports at least a thousand of concurrent invocation to process event messages generated by resources like databases, storage and system logs. Other providers, i.e., Google, Microsoft, and IBM offer a dynamic scaling manager to handle parallel requests of stateless functions in which additional containers are provisioning on new compute nodes for distribution. However, while functions are often developed for microservices and lightweight workload, they are associated with distributed data processing using the concurrent invocations. We claim that the current serverless computing environments can support dynamic applications in parallel when a partitioned task is executable on a small function instance. We present results of throughput, network bandwidth, a file I/O and compute performance regarding the concurrent invocations. We deployed a series of functions for distributed data processing to address the elasticity and then demonstrated the differences between serverless computing and virtual machines for cost efficiency and resource utilization.", "title": "" }, { "docid": "4c0869847079b11ec8e0a6b9714b2d09", "text": "This paper provides a tutorial overview of the latest generation of passive optical network (PON) technology standards nearing completion in ITU-T. The system is termed NG-PON2 and offers a fiber capacity of 40 Gbit/s by exploiting multiple wavelengths at dense wavelength division multiplexing channel spacing and tunable transceiver technology in the subscriber terminals (ONUs). Here, the focus is on the requirements from network operators that are driving the standards developments and the technology selection prior to standardization. A prestandard view of the main physical layer optical specifications is also given, ahead of final ITU-T approval.", "title": "" }, { "docid": "89a9293fb0fcac7d55cfb44a8032ce71", "text": "Traditional spectral clustering methods cannot naturally learn the number of communities in a network and often fail to detect smaller community structure in dense networks because they are based upon external community connectivity properties such as graph cuts. We propose an algorithm for detecting community structure in networks called the leader-follower algorithm which is based upon the natural internal structure expected of communities in social networks. The algorithm uses the notion of network centrality in a novel manner to differentiate leaders (nodes which connect different communities) from loyal followers (nodes which only have neighbors within a single community). Using this approach, it is able to naturally learn the communities from the network structure and does not require the number of communities as an input, in contrast to other common methods such as spectral clustering. We prove that it will detect all of the communities exactly for any network possessing communities with the natural internal structure expected in social networks. More importantly, we demonstrate the effectiveness of the leader-follower algorithm in the context of various real networks ranging from social networks such as Facebook to biological networks such as an fMRI based human brain network. We find that the leader-follower algorithm finds the relevant community structure in these networks without knowing the number of communities beforehand. 
Also, because the leader-follower algorithm detects communities using their internal structure, we find that it can resolve a finer community structure in dense networks than common spectral clustering methods based on external community structure.", "title": "" }, { "docid": "9d530fbbdb4448175f655b6cc8b4d539", "text": "Cognitive big data: survey and review on big data research and its implications. What is really ‘new’ in big data? Artur Lugmayr Björn Stockleben Christoph Scheib Mathew Mailaparampil Article information: To cite this document: Artur Lugmayr Björn Stockleben Christoph Scheib Mathew Mailaparampil , (2017),\" Cognitive big data: survey and review on big data research and its implications. What is really ‘new’ in big data? \", Journal of Knowledge Management, Vol. 21 Iss 1 pp. Permanent link to this document: http://dx.doi.org/10.1108/JKM-07-2016-0307", "title": "" }, { "docid": "3d34dc15fa11e723a52b21dc209a939f", "text": "Valuable information can be hidden in images, however, few research discuss data mining on them. In this paper, we propose a general framework based on the decision tree for mining and processing image data. Pixel-wised image features were extracted and transformed into a database-like table which allows various data mining algorithms to make explorations on it. Each tuple of the transformed table has a feature descriptor formed by a set of features in conjunction with the target label of a particular pixel. With the label feature, we can adopt the decision tree induction to realize relationships between attributes and the target label from image pixels, and to construct a model for pixel-wised image processing according to a given training image dataset. Both experimental and theoretical analyses were performed in this study. Their results show that the proposed model can be very efficient and effective for image processing and image mining. It is anticipated that by using the proposed model, various existing data mining and image processing methods could be worked on together in different ways. Our model can also be used to create new image processing methodologies, refine existing image processing methods, or act as a powerful image filter.", "title": "" }, { "docid": "3976419e9f78dbff8ae235dd7aee2d8d", "text": "A generally intelligent agent must be able to teach itself how to solve problems in complex domains with minimal human supervision. Recently, deep reinforcement learning algorithms combined with self-play have achieved superhuman proficiency in Go, Chess, and Shogi without human data or domain knowledge. In these environments, a reward is always received at the end of the game; however, for many combinatorial optimization environments, rewards are sparse and episodes are not guaranteed to terminate. We introduce Autodidactic Iteration: a novel reinforcement learning algorithm that is able to teach itself how to solve the Rubik’s Cube with no human assistance. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge.", "title": "" }, { "docid": "7b28505834de4346ef3c43e77a9444d6", "text": "With the development of the modern aircraft, the large-scale thin walled parts have been used in aeronautics and astronautics. In NC milling process, the thin walled plates are very easy to deform which will influence the accuracy and quality. 
From the point of view of theoretically and numerical calculation, the paper proposes a new analytical deformation model suitable for static machining error prediction of low- rigidity components. The part deformation is predicted using a theoretical big deformation equations model, which is established on the basis of the equations of Von Karman when the linear load acts on thin-wall plates. The part big deformation is simulated using FE analysis. The simulating results shown that the diverse cutting forces, milling location and thickness of the plate may lead to various deformation results.", "title": "" }, { "docid": "518a1b8ab1cefb70fede0f26991a5c78", "text": "Abstract. It is shown how an arbitrary set of points in the hypercube can be Latinized, i.e., can be transformed into a point set that has the Latin hypercube property. The effect of Latinization on the star discrepancy and other uniformity measures of a point set is analyzed. For a few selected but representative point sampling methods, evidence is provided to show that Latinization lowers the star discrepancy measure. A novel point sampling method is presented based on centroidal Voronoi tessellations of the hypercube. These point sets have excellent volumetric distributions, but have poor star discrepancies. Evidence is given that the Latinization of CVT points sets greatly lowers their star discrepancy measure but still preserves superior volumetric uniformity. As a result, means for determining improved Latin hypercube point samples are given.", "title": "" }, { "docid": "b236003ad282e973b3ebf270894c2c07", "text": "Darier's disease is characterized by dense keratotic lesions in the seborrheic areas of the body such as scalp, forehead, nasolabial folds, trunk and inguinal region. It is a rare genodermatosis, an autosomal dominant inherited disease that may be associated with neuropsichiatric disorders. It is caused by ATPA2 gene mutation, presenting cutaneous and dermatologic expressions. Psychiatric symptoms are depression, suicidal attempts, and bipolar affective disorder. We report a case of Darier's disease in a 48-year-old female patient presenting severe cutaneous and psychiatric manifestations.", "title": "" }, { "docid": "b45608b866edf56dbafe633824719dd6", "text": "classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.", "title": "" }, { "docid": "09a3836f9dd429b6820daf3d2c9b2944", "text": "Students attendance in the classroom is very important task and if taken manually wastes a lot of time. There are many automatic methods available for this purpose i.e. biometric attendance. All these methods also waste time because students have to make a queue to touch their thumb on the scanning device. This work describes the efficient algorithm that automatically marks the attendance without human intervention. This attendance is recorded by using a camera attached in front of classroom that is continuously capturing images of students, detect the faces in images and compare the detected faces with the database and mark the attendance. 
The paper review the related work in the field of attendance system then describes the system architecture, software algorithm and results.", "title": "" }, { "docid": "4eb18be4c47035477e8466fb346cc60a", "text": "Technology computer aided design (TCAD) simulations are conducted on a 4T PPD pixel, on a conventional gated photodiode (PD), and finally on a radiation hardened pixel. Simulations consist in demonstrating that it is possible to reduce the dark current due to interface states brought by the adjacent gate (AG), by means of a sharing mechanism between the PD and the drain. The sharing mechanism is activated and controlled by polarizing the AG at a positive OFF voltage, and consequently the dark current is reduced and not compensated. The drawback of the dark current reduction is a reduction of the full well capacity of the PD, which is not a problem when the pixel saturation is limited by the readout chain. Some measurements performed on pixel arrays confirm the TCAD results.", "title": "" }, { "docid": "8292d5c1e13042aa42f1efb60058ef96", "text": "The epithelial-to-mesenchymal transition (EMT) is a vital control point in metastatic breast cancer (MBC). TWIST1, SNAIL1, SLUG, and ZEB1, as key EMT-inducing transcription factors (EMT-TFs), are involved in MBC through different signaling cascades. This updated meta-analysis was conducted to assess the correlation between the expression of EMT-TFs and prognostic value in MBC patients. A total of 3,218 MBC patients from fourteen eligible studies were evaluated. The pooled hazard ratios (HR) for EMT-TFs suggested that high EMT-TF expression was significantly associated with poor prognosis in MBC patients (HRs = 1.72; 95% confidence intervals (CIs) = 1.53-1.93; P = 0.001). In addition, the overexpression of SLUG was the most impactful on the risk of MBC compared with TWIST1 and SNAIL1, which sponsored fixed models. Strikingly, the increased risk of MBC was less associated with ZEB1 expression. However, the EMT-TF expression levels significantly increased the risk of MBC in the Asian population (HR = 2.11, 95% CI = 1.70-2.62) without any publication bias (t = 1.70, P = 0.11). These findings suggest that the overexpression of potentially TWIST1, SNAIL1 and especially SLUG play a key role in the aggregation of MBC treatment as well as in the improvement of follow-up plans in Asian MBC patients.", "title": "" }, { "docid": "0b407f1f4d771a34e6d0bc59bf2ef4c4", "text": "Social advertisement is one of the fastest growing sectors in the digital advertisement landscape: ads in the form of promoted posts are shown in the feed of users of a social networking platform, along with normal social posts; if a user clicks on a promoted post, the host (social network owner) is paid a fixed amount from the advertiser. In this context, allocating ads to users is typically performed by maximizing click-through-rate, i.e., the likelihood that the user will click on the ad. However, this simple strategy fails to leverage the fact the ads can propagate virally through the network, from endorsing users to their followers. In this paper, we study the problem of allocating ads to users through the viral-marketing lens. Advertisers approach the host with a budget in return for the marketing campaign service provided by the host. We show that allocation that takes into account the propensity of ads for viral propagation can achieve significantly better performance. 
However, uncontrolled virality could be undesirable for the host as it creates room for exploitation by the advertisers: hoping to tap uncontrolled virality, an advertiser might declare a lower budget for its marketing campaign, aiming at the same large outcome with a smaller cost. This creates a challenging trade-off: on the one hand, the host aims at leveraging virality and the network effect to improve advertising efficacy, while on the other hand the host wants to avoid giving away free service due to uncontrolled virality. We formalize this as the problem of ad allocation with minimum regret, which we show is NP-hard and inapproximable w.r.t. any factor. However, we devise an algorithm that provides approximation guarantees w.r.t. the total budget of all advertisers. We develop a scalable version of our approximation algorithm, which we extensively test on four real-world data sets, confirming that our algorithm delivers high quality solutions, is scalable, and significantly outperforms several natural baselines.", "title": "" }, { "docid": "ed2fadc060fb79693c5d182d3719b686", "text": "We are dealing with the problem of fine-grained vehicle make&model recognition and verification. Our contribution is showing that extracting additional data from the video stream - besides the vehicle image itself - and feeding it into the deep convolutional neural network boosts the recognition performance considerably. This additional information includes: 3D vehicle bounding box used for \"unpacking\" the vehicle image, its rasterized low-resolution shape, and information about the 3D vehicle orientation. Experiments show that adding such information decreases classification error by 26% (the accuracy is improved from 0.772 to 0.832) and boosts verification average precision by 208% (0.378 to 0.785) compared to baseline pure CNN without any input modifications. Also, the pure baseline CNN outperforms the recent state of the art solution by 0.081. We provide an annotated set \"BoxCars\" of surveillance vehicle images augmented by various automatically extracted auxiliary information. Our approach and the dataset can considerably improve the performance of traffic surveillance systems.", "title": "" }, { "docid": "bf305e88c6f2878c424eca1223a02a8d", "text": "The first plausible scheme of fully homomorphic encryption (FHE), introduced by Gentry in 2009, was considered a major breakthrough in the field of information security. FHE allows the evaluation of arbitrary functions directly on encrypted data on untrusted servers. However, previous implementations of FHE on general-purpose processors had very long latency, which makes it impractical for cloud computing. The most computationally intensive components in the Gentry-Halevi FHE primitives are the large-number modular multiplications and additions. In this paper, we attempt to use customized circuits to speedup the large number multiplication. Strassen's algorithm is employed in the design of an efficient, high-speed large-number multiplier. In particular, we propose an architecture design of an 768K-bit multiplier. As a key compoment, an 64K-point finite-field fast Fourier transform (FFT) processor is designed and prototyped on the Stratix-V FPGA. 
At 100 MHz, the FPGA implementation is about twice as fast as the same FFT algorithm executed on the NVIDIA C2050 GPU, which has 448 cores running at 1.15 GHz, while consuming much less power.", "title": "" }, { "docid": "e57f85949378039249f36999c5f9b76e", "text": "A good dialogue agent should have the ability to interact with users by both responding to questions and by asking questions, and importantly to learn from both types of interaction. In this work, we explore this direction by designing a simulator and a set of synthetic tasks in the movie domain that allow such interactions between a learner and a teacher. We investigate how a learner can benefit from asking questions in both offline and online reinforcement learning settings, and demonstrate that the learner improves when asking questions. Finally, real experiments with Mechanical Turk validate the approach. Our work represents a first step in developing such end-to-end learned interactive dialogue agents.", "title": "" } ]
scidocsrr
81c05160a1fdae91c7e8607538e0fc38
FINANCIAL TIME SERIES FORECASTING USING ARTIFICIAL NEURAL NETWORKS
[ { "docid": "41a19d0e799e1801bacfbab19b1da467", "text": "This paper presents a neural network model for technical analysis of stock market, and its application to a buying and selling timing prediction system for stock index. When the numbers of learning samples are uneven among categories, the neural network with normal learning has the problem that it tries to improve only the prediction accuracy of most dominant category. In this paper, a learning method is proposed for improving prediction accuracy of other categories, controlling the numbers of learning samples by using information about the importance of each category. Experimental simulation using actual price data is carried out to demonstrate the usefulness of the method.", "title": "" } ]
[ { "docid": "ed0b269f861775550edd83b1eb420190", "text": "The continuous innovation process of the Information and Communication Technology (ICT) sector shape the way businesses redefine their business models. Though, current drivers of innovation processes focus solely on a technical dimension, while disregarding social and environmental drivers. However, examples like Nokia, Yahoo or Hewlett-Packard show that even though a profitable business model exists, a sound strategic innovation process is needed to remain profitable in the long term. A sustainable business model innovation demands the incorporation of all dimensions of the triple bottom line. Nevertheless, current management processes do not take the responsible steps to remain sustainable and keep being in denial of the evolutionary direction in which the markets develop, because the effects are not visible in short term. The implications are of substantial effect and can bring the foundation of the company’s business model in danger. This work evaluates the decision process that lets businesses decide in favor of un-sustainable changes and points out the barriers that prevent the development towards a sustainable business model that takes the new balance of forces into account.", "title": "" }, { "docid": "45c04c80a5e4c852c4e84ba66bd420dd", "text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.", "title": "" }, { "docid": "2f94bd95e2b17b4ff517133544087fc9", "text": "MPEG DASH is a widely used standard for adaptive video streaming over HTTP. The conceptual architecture for DASH includes a web server and clients, which download media segments from the server. Clients select the resolution of video segments by using an Adaptive Bit-Rate (ABR) strategy; in particular, a throughput-based ABR is used in the case of live video applications. However, recent papers show that these strategies may suffer from the presence of proxies/caches in the network, which are instrumental in streaming video on a large scale. To face this issue, we propose to extend the MPEG DASH architecture with a Tracker functionality, enabling client-to-client sharing of control information. 
This extension paves the way to a novel family of Tracker-assisted strategies that allow a greater design flexibility, while solving the specific issue caused by proxies/caches; in addition, its utility goes beyond the problem at hand, as it can be used by other applications as well, e.g. for peer-to-peer streaming.", "title": "" }, { "docid": "edb0edc7962f8b09495240131681db9d", "text": "A new theory of motivation is described along with its applications to addiction and aversion. The theory assumes that many hedonic, affective, or emotional states are automatically opposed by central nervous system mechanisms which reduce the intensity of hedonic feelings, both pleasant and aversive. The opponent processes for most hedonic states are strengthened by use and are weakened by disuse. These simple assumptions lead to deductions of many known facts about acquired motivation. In addition, the theory suggests several new lines of research on motivation. It argues that the establishment of some types of acquired motivation does not depend on conditioning and is nonassociative in nature. The relationships between conditioning processes and postulated opponent processes are discussed. Finally, it is argued that the data on several types of acquired motivation, arising from either pleasurable or aversive stimulation, can be fruitfully reorganized and understood within the framework provided by the opponent-process model.", "title": "" }, { "docid": "f61567cb43dfa4941a8b87dedce0b051", "text": "A single layer wideband SIW-fed differential patch array is proposed in this paper. A SIW-CPS-CMS (substrate integrated waveguide - coupled lines - coupled microstrip line) transition is designed and has a bandwidth of about 50%, covering the E-band and W-band. The differential phase deviation between the coupled microstrip lines is less than 7.5° within the operation band. A 1×4 array and a 4×4 array are designed. The antenna is composed of SIW parallel power di vider network, SIW-CPS-CMS transition, and series differential-fed patch array. Simulated results show that the bandwidth of the 1×4 array and 4×4 array are 37% and 12%, and the realized gain are 10.5-12 dB and 17.2-20.2dB within the corresponding operation band, respectively. The features of single layer and wideband on impedance and gain of the proposed SIW-fed differential patch array make it a good candidate for automotive radar or other millimeter wave applications.", "title": "" }, { "docid": "83f14923970c83a55152464179e6bae9", "text": "Urine drug screening can detect cases of drug abuse, promote workplace safety, and monitor drugtherapy compliance. Compliance testing is necessary for patients taking controlled drugs. To order and interpret these tests, it is required to know of testing modalities, kinetic of drugs, and different causes of false-positive and false-negative results. Standard immunoassay testing is fast, cheap, and the preferred primarily test for urine drug screening. This method reliably detects commonly drugs of abuse such as opiates, opioids, amphetamine/methamphetamine, cocaine, cannabinoids, phencyclidine, barbiturates, and benzodiazepines. Although immunoassays are sensitive and specific to the presence of drugs/drug metabolites, false negative and positive results may be created in some cases. Unexpected positive test results should be checked with a confirmatory method such as gas chromatography/mass spectrometry. 
Careful attention to urine collection methods and performing the specimen integrity tests can identify some attempts by patients to produce false-negative test results.", "title": "" }, { "docid": "fdd7237680ee739b598cd508c4a2ed38", "text": "Rectovaginal Endometriosis (RVE) is a severe form of endometriosis classified by Kirtner as stage 4 [1,2]. It is less frequent than peritoneal or ovarian endometriosis affecting 3.8% to 37% of patients with endometriosis [3,4]. RVE infiltrates the rectum, vagina, and rectovaginal septum, up to obliteration of the pouch of Douglas [4]. Endometriotic nodules exceeding 30 mm in diameter have 17.9% risk of ureteral involvement [5], while 5.3% to 12% of patients have bowel endometriosis, most commonly found in the recto-sigmoid involving 74% of those patients [3,4].", "title": "" }, { "docid": "884aa1d674e431e2a781eb7e861f2541", "text": "A key question facing education policymakers in many emerging economies is whether to promote the local language, as opposed to English, in elementary schools. The dilemma is particularly strong in countries that underwent rapid globalization making English a lingua franca for international as well as domestic exchange. In this paper, we estimate the English premium in globalization globalizing economy, by exploiting an exogenous language policy intervention in India. English training was revoked from the primary grades of all public schools in the state of West Bengal. In a two-way fixed effects model we combine differences across birth cohorts and districts in the exposure to English education, to estimate the effect of the language policy on wage premium. In addition, since the policy was introduced only in the state of West Bengal, we combine other states with no such intervention to address the potential threat of differential district trends confounding our two-way estimates. Our results indicate a remarkably high English skill premium in the labor market. A 1% increase in the probability of learning English raises weekly wages by 1.6%. On the average this implies a 68% reduction in wages for those that do not learn English due to the change in language policy. We provide further evidence that occupational choice played a decisive role in determining the wage gap. JEL Classifications: H4, I2, J0, O1 1 We thank Sukkoo Kim, Sebastian Galiani, Charles Moul, Bruce Petersen, and Robert Pollak for their invaluable advice and support, Barry Chiswick for his helpful comments and seminar participants at the 2008 Canadian Economic Conference and NEUDC conference for the discussions. We also thank Daifeng He and Michael Plotzke for their feedback. We are grateful to the Bradley Foundation for providing research support and Center for Research in Economics and Strategy (CRES), in the Olin Business School, Washington University in St. Louis, for travel grants. All errors are ours. 2 Department of Economics, Washington University in St Louis, email: tchakrab@artsci.wustl.edu 3 Department of Economics, Washington University in St Louis, email: skapur@artsci.wustl.edu", "title": "" }, { "docid": "72142ddc1ad3906fd0b1320ab3a1e48f", "text": "The American Herbal Pharmacopoeia (AHP) today announced the release of a section of the soon-to-be-completed Cannabis Therapeutic Compendium Cannabis in the Management and Treatment of Seizures and Epilepsy. This scientific review is one of numerous scientific reviews that will encompass the broad range of science regarding the therapeutic effects and safety of cannabis. 
In recent months there has been considerable attention given to the potential benefit of cannabis for treating intractable seizure disorders including rare forms of epilepsy. For this reason, the author of the section, Dr. Ben Whalley, and AHP felt it important to release this section, in its near-finalized form, into the public domain for free dissemination. The full release of AHP's Therapeutic Compendium is scheduled for early 2014. Dr. Whalley is a Senior Lecturer in Pharmacology and Pharmacy Director of Research at the School of Pharmacy of the University of Reading in the United Kingdom. He is also a member of the UK Epilepsy Research Network. Dr. Whalley's research interests lie in investigating neuronal processes that underlie complex physiological functions such as neuronal hyperexcitability states and their consequential disorders such as epilepsy, ataxia and dystonias, as well as learning and memory. Since 2003, Dr. Whalley has authored and co-authored numerous scientific peer-reviewed papers on the potential effects of cannabis in relieving seizure disorders and investigating the underlying pathophysiological mechanisms of these disorders. The release of this comprehensive review is timely given the growing claims being made for cannabis to relieve even the most severe forms of seizures. According to Dr. Whalley: \" Recent announcements of regulated human clinical trials of pure components of cannabis for the treatment of epilepsy have raised hopes among patients with drug-resistant epilepsy, their caregivers, and clinicians. Also, claims in the media of the successful use of cannabis extracts for the treatment of epilepsies, particularly in children, have further highlighted the urgent need for new and effective treatments. \" However, Dr. Whalley added, \" We must bear in mind that the use of any new treatment, particularly in the critically ill, carries inherent risks. Releasing this section of the monograph into the public domain at this time provides clinicians, patients, and their caregivers with a single document that comprehensively summarizes the scientific knowledge to date regarding cannabis and epilepsy and so fully support informed, evidence-based decision making. \" This release also follows recommendations of the Epilepsy Foundation, which has called for increasing medical …", "title": "" }, { "docid": "dcda412c18e92650d9791023f13e4392", "text": "Graph can straightforwardly represent the relations between the objects, which inevitably draws a lot of attention of both academia and industry. Achievements mainly concentrate on homogeneous graph and bipartite graph. However, it is difficult to use existing algorithm in actual scenarios. Because in the real world, the type of the objects and the relations are diverse and the amount of the data can be very huge. Considering of the characteristics of \"black market\", we proposeHGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs.We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate scores of it, which fuse the structure information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistic or other anomaly detection algorithms in scores space. 
We also provide a technical solution for fraud detection in e-commerce scenario, which has been successfully applied in Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which has billion nodes and edges, demonstrate that HGsuspector is more accurate and fast than the most practical and state-of-the-art approach by far.", "title": "" }, { "docid": "cb7a9b816fc1b83670cb9fb377974e5d", "text": "BACKGROUND\nCare attendants constitute the main workforce in nursing homes, but their heavy workload, low autonomy, and indefinite responsibility result in high levels of stress and may affect quality of care. However, few studies have focused of this problem.\n\n\nOBJECTIVES\nThe aim of this study was to examine work-related stress and associated factors that affect care attendants in nursing homes and to offer suggestions for how management can alleviate these problems in care facilities.\n\n\nMETHODS\nWe recruited participants from nine nursing homes with 50 or more beds located in middle Taiwan; 110 care attendants completed the questionnaire. The work stress scale for the care attendants was validated and achieved good reliability (Cronbach's alpha=0.93). We also conducted exploratory factor analysis.\n\n\nRESULTS\nSix factors were extracted from the work stress scale: insufficient ability, stressful reactions, heavy workload, trouble in care work, poor management, and working time problems. The explained variance achieved 64.96%. Factors related to higher work stress included working in a hospital-based nursing home, having a fixed schedule, night work, feeling burden, inconvenient facility, less enthusiasm, and self-rated higher stress.\n\n\nCONCLUSION\nWork stress for care attendants in nursing homes is related to human resource management and quality of care. We suggest potential management strategies to alleviate work stress for these workers.", "title": "" }, { "docid": "beff14cfa1d0e5437a81584596e666ea", "text": "Graphene has exceptional optical, mechanical, and electrical properties, making it an emerging material for novel optoelectronics, photonics, and flexible transparent electrode applications. However, the relatively high sheet resistance of graphene is a major constraint for many of these applications. Here we propose a new approach to achieve low sheet resistance in large-scale CVD monolayer graphene using nonvolatile ferroelectric polymer gating. In this hybrid structure, large-scale graphene is heavily doped up to 3 × 10(13) cm(-2) by nonvolatile ferroelectric dipoles, yielding a low sheet resistance of 120 Ω/□ at ambient conditions. The graphene-ferroelectric transparent conductors (GFeTCs) exhibit more than 95% transmittance from the visible to the near-infrared range owing to the highly transparent nature of the ferroelectric polymer. Together with its excellent mechanical flexibility, chemical inertness, and the simple fabrication process of ferroelectric polymers, the proposed GFeTCs represent a new route toward large-scale graphene-based transparent electrodes and optoelectronics.", "title": "" }, { "docid": "3a3d6fecb580c2448c21838317aec3e2", "text": "The Vehicle Routing Problem with Time windows (VRPTW) is an extension of the capacity constrained Vehicle Routing Problem (VRP). The VRPTW is NP-Complete and instances with 100 customers or more are very hard to solve optimally. We represent the VRPTW as a multi-objective problem and present a genetic algorithm solution using the Pareto ranking technique. 
We use a direct interpretation of the VRPTW as a multi-objective problem, in which the two objective dimensions are number of vehicles and total cost (distance). An advantage of this approach is that it is unnecessary to derive weights for a weighted sum scoring formula. This prevents the introduction of solution bias towards either of the problem dimensions. We argue that the VRPTW is most naturally viewed as a multi-objective problem, in which both vehicles and cost are of equal value, depending on the needs of the user. A result of our research is that the multi-objective optimization genetic algorithm returns a set of solutions that fairly consider both of these dimensions. Our approach is quite effective, as it provides solutions competitive with the best known in the literature, as well as new solutions that are not biased toward the number of vehicles. A set of well-known benchmark data are used to compare the effectiveness of the proposed method for solving the VRPTW.", "title": "" }, { "docid": "60afd7bbb52b4e644258bf73466be036", "text": "This article describes the physiology of wound healing, discusses considerations and techniques for dermabrasion, and presents case studies and figures for a series of patients who underwent dermabrasion after surgeries for facial trauma.", "title": "" }, { "docid": "f50f7daeac03fbd41f91ff48c054955b", "text": "Neuronal signalling and communication underpin virtually all aspects of brain activity and function. Network science approaches to modelling and analysing the dynamics of communication on networks have proved useful for simulating functional brain connectivity and predicting emergent network states. This Review surveys important aspects of communication dynamics in brain networks. We begin by sketching a conceptual framework that views communication dynamics as a necessary link between the empirical domains of structural and functional connectivity. We then consider how different local and global topological attributes of structural networks support potential patterns of network communication, and how the interactions between network topology and dynamic models can provide additional insights and constraints. We end by proposing that communication dynamics may act as potential generative models of effective connectivity and can offer insight into the mechanisms by which brain networks transform and process information.", "title": "" }, { "docid": "f7cdf631c12567fd37b04419eb8e4daa", "text": "A multiple-beam photonic beamforming receiver is proposed and demonstrated. The architecture is based on a large port-count demultiplexer and fast tunable lasers to achieve a passive design, with independent beam steering for multiple beam operation. A single true time delay module with four independent beams is experimentally demonstrated, showing extremely smooth RF response in the -band, fast switching capabilities, and negligible crosstalk.", "title": "" }, { "docid": "cc2a7d6ac63f12b29a6d30f20b5547be", "text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. 
We have used the CyberDesk system in a desktop setting and are currently using it to build an intelligent home environment.", "title": "" }, { "docid": "4ce67aeca9e6b31c5021712f148108e2", "text": "Self-endorsing—the portrayal of potential consumers using products—is a novel advertising strategy made possible by the development of virtual environments. Three experiments compared self-endorsing to endorsing by an unfamiliar other. In Experiment 1, self-endorsing in online advertisements led to higher brand attitude and purchase intention than other-endorsing. Moreover, photographs were a more effective persuasion channel than text. In Experiment 2, participants wore a brand of clothing in a high-immersive virtual environment and preferred the brand worn by their virtual self to the brand worn by others. Experiment 3 demonstrated that an additional mechanism behind self-endorsing was the interactivity of the virtual representation. Evidence for self-referencing as a mediator is presented. In this context, consumers can experience presence while interacting with three-dimensional products on Web sites (Biocca et al. 2001; Edwards and Gangadharbatla 2001; Li, Daugherty, and Biocca 2001). When users feel a heightened sense of presence and perceive the virtual experience to be real, they are more easily persuaded by the advertisement (Kim and Biocca 1997). The differing degree, or the objectively measurable property of presence, is called immersion. Immersion is the extent to which media are capable of delivering a vivid illusion of reality using rich layers of sensory input (Slater and Wilbur 1997). Therefore, different levels of immersion (objective unit) lead to different experiences of presence (subjective unit), and both concepts are closely related to interactivity. Web sites are considered to be low-immersive virtual environments because of limited interactive capacity and lack of richness in sensory input, which decreases the sense of presence, whereas virtual reality is considered a high-immersive virtual environment because of its ability to reproduce perceptual richness, which heightens the sense of feeling that the virtual experience is real. Another differentiating aspect of virtual environments is that they offer plasticity of the appearance and behavior of virtual self-representations. It is well known that virtual selves may or may not be true replications of physical appearances (Farid 2009; Yee and Bailenson 2006), but users can also be faced with situations in which they are not controlling the behaviors of their own virtual representations (Fox and Bailenson 2009). In other words, a user can see him- or herself using (and perhaps enjoying) a product he or she has never physically used. Based on these unique features of virtual platforms, the current study aims to explore the effect of viewing a virtual representation that may or may not look like the self, endorsing a brand by use. We also manipulate the interactivity of endorsers within virtual environments to provide evidence for the mechanism behind self-endorsing. THE SELF-ENDORSED ADVERTISEMENT Recent studies have confirmed that positive connections between the self and brands can be created by subtle manipulations, such as mimicry of the self's nonverbal behaviors (Tanner et al. 2008). The slightest affiliation between the self and the other can lead to positive brand evaluations.
In a study by Ferraro, Bettman, and Chartrand (2009), an unfamiliar ingroup or out-group member was portrayed in a photograph with a water bottle bearing a brand name. The simple detail of the person wearing a baseball cap with the same school logo (i.e., in-group affiliation) triggered participants to choose the brand associated with the in-group member. Thus, the self–brand relationship significantly influences brand attitude, but self-endorsing has not received scientific attention to date, arguably because it was not easy to implement before the onset of virtual environments. Prior research has studied the effectiveness of different types of endorsers and their influence on the persuasiveness of advertisements (Friedman and Friedman 1979; Stafford, Stafford, and Day 2002), but the self was not considered in these investigations as a possible source of endorsement. However, there is the possibility that the currently sporadic use of self-endorsing (e.g., www.myvirtualmodel.com) will increase dramatically. For instance, personalized recommendations are being sent to consumers based on online “footsteps” of prior purchases (Tam and Ho 2006). Furthermore, Google has spearheaded keyword search advertising, which displays text advertisements in real-time based on search words ( Jansen, Hudson, and Hunter 2008), and Yahoo has begun to display video and image advertisements based on search words (Clifford 2009). Considering the availability of personal images on the Web due to the widespread employment of social networking sites, the idea of self-endorsing may spread quickly. An advertiser could replace the endorser shown in the image advertisement called by search words with the user to create a self-endorsed advertisement. Thus, the timely investigation of the influence of self-endorsing on users, as well as its mechanism, is imperative. Based on positivity biases related to the self (Baumeister 1998; Chambers and Windschitl 2004), self-endorsing may be a powerful persuasion tool. However, there may be instances when using the self in an advertisement may not be effective, such as when the virtual representation does not look like the consumer and the consumer fails to identify with the representation. Self-endorsed advertisements may also lose persuasiveness when movements of the representation are not synched with the actions of the consumer. Another type of endorser that researchers are increasingly focusing on is the typical user endorser. Typical endorsers have an advantage in that they appeal to the similarity of product usage with the average user. For instance, highly attractive models are not always effective compared with normally attractive models, even for beauty-enhancing products (i.e., acne treatment), when users perceive that the highly attractive models do not need those products (Bower and Landreth 2001). Moreover, with the advancement of the Internet, typical endorsers are becoming more influential via online testimonials (Lee, Park, and Han 2006; Wang 2005). In the current studies, we compared the influence of typical endorsers (i.e., other-endorsing) and self-endorsers on brand attitude and purchase intentions. 
In addition to investigating the effects of self-endorsing, this work extends results of earlier studies on the effectiveness of different types of endorsers and makes important theoretical contributions by studying self-referencing as an underlying mechanism of self-endorsing.", "title": "" }, { "docid": "3476f91f068102ccf35c3855102f4d1b", "text": "Verification and validation (V&V) are the primary means to assess accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence application areas, such as, nuclear reactor safety, underground storage of nuclear waste, and safety of nuclear weapons. Although the terminology is not uniform across engineering disciplines, code verification deals with the assessment of the reliability of the software coding and solution verification deals with the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. Some fields, such as nuclear reactor safety, place little emphasis on code verification benchmarks and great emphasis on validation benchmarks that are closely related to actual reactors operating near safety-critical conditions. This paper proposes recommendations for the optimum design and use of code verification benchmarks based on classical analytical solutions, manufactured solutions, and highly accurate numerical solutions. It is believed that these benchmarks will prove useful to both in-house developed codes, as well as commercially licensed codes. In addition, this paper proposes recommendations for the design and use of validation benchmarks with emphasis on careful design of building-block experiments, estimation of experiment measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that predictive capability of a computational model is built on both the measurement of achievement in V&V, as well as how closely related are the V&V benchmarks to the actual application of interest, e.g., the magnitude of extrapolation beyond a validation benchmark to a complex engineering system of interest.", "title": "" }, { "docid": "17287942eaf5c590b0d48b73eac7bc7c", "text": "The success of the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) has motivated researchers to extend the use of this bioinspired technique to other areas. One of them is multiobjective optimization. Despite the fact that the first proposal of a Multi-Objective Particle Swarm Optimizer (MOPSO) is over six years old, a considerable number of other algorithms have been proposed since then. This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature. As part of this review, we include a classification of the approaches, and we identify the main features of each proposal. In the last part of the paper, we list some of the topics within this field that we consider as promising areas of future research.", "title": "" } ]
scidocsrr
7971b64586ce88ece20007f3feb99a5c
Normalization as a canonical neural computation
[ { "docid": "1ca692464d5d7f4e61647bf728941519", "text": "During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.", "title": "" } ]
[ { "docid": "73e804508e6ff5d9709be369640a2985", "text": "Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.", "title": "" }, { "docid": "97107561103eec062d9a2d4ae28ffb9e", "text": "Development of loyalty in customers is a strategic goal of many firms and organizations and today, the main effort of many firms is allocated to retain customers and obtaining even more ones. Characteristics of loyal customers and method for formation of loyalty in customers in internet space are different to those in traditional one in some respects and study of them may be beneficial in improving performance of firms, organizations and shops involving in this field of business. Also it may help managers of these types of businesses to make efficient and effective decisions towards success of their organizations. Thus, present study aims to investigate the effects of e-service quality in three aspects of information, system and web-service on e-trust and e-satisfaction as key factors influencing creation of e-loyalty of Iranian customers in e-business context; Also it was tried to demonstrate moderating effect of situational factors e.g. time poverty, geographic distance, physical immobility and lack of transportation on e-loyalty level. Totally, 400 questionnaires were distributed to university students, that 382 questionnaires were used for the final analysis, which the results from analysis of them based on simple linear regression and multiple hierarchical regression show that customer loyalty to e-shops is directly influenced by e-trust in and e-satisfaction with e-shops which in turn are determined by e-service quality; also the obtained results shows that situational variables can moderate relationship between e-trust and/or e-satisfaction and e-loyalty. Therefore situational variables studied in present research can influence initiation of transaction of customer with online retailer and customer attitude importance and this in turn makes it necessary for managers to pay special attention to situational effects in examination of current attitude and behavior of customers.", "title": "" }, { "docid": "9164a2823f9f7c7ee17f4c3e843b090f", "text": "Clustering is a key data mining problem. Density and grid based technique is a popular way to mine clusters in a large multi-dimensional space wherein clusters are regarded as dense regions than their surroundings. The attribute values and ranges of these attributes characterize the clusters. Fine grid sizes lead to a huge amount of computation while coarse grid sizes result in loss in quality of clusters found. 
Also, varied grid sizes result in discovering clusters with different cluster descriptions. The technique of Adaptive grids enables to use grids based on the data distribution and does not require the user to specify any parameters like the grid size or the density thresholds. Further, clusters could be embedded in a subspace of a high dimensional space. We propose a modified bottom-up subspace clustering algorithm to discover clusters in all possible subspaces. Our method scales linearly with the data dimensionality and the size of the data set. Experimental results on a wide variety of synthetic and real data sets demonstrate the effectiveness of Adaptive grids and the effect of the modified subspace clustering algorithm. Our algorithm explores at-least an order of magnitude more number of subspaces than the original algorithm and the use of adaptive grids yields on an average of two orders of magnitude speedup as compared to the method with user specified grid size and threshold.", "title": "" }, { "docid": "5b162e19be006891a08d1787d468de41", "text": "Scheduling latency under Linux and its principal real-time variant, the PREEMPT RT patch, are typically measured using cyclictest, a tracing tool that treats the kernel as a black box and directly reports scheduling latency. LITMUS, a real-time extension of Linux focused on algorithmic improvements, is typically evaluated using Feather-Trace, a finedgrained tracing mechanism that produces a comprehensive overhead profile suitable for overhead-aware schedulability analysis. This difference in tracing tools and output has to date prevented a direct comparison. This paper reports on a port of cyclictest to LITMUS and a case study comparing scheduling latency on a 16-core Intel platform. The main conclusions are: (i) LITMUS introduces only minor overhead itself, but (ii) it also inherits mainline Linux’s severe limitations in the presence of I/O-bound background tasks.", "title": "" }, { "docid": "a1b9827493928d1c53ac1be8750bf928", "text": "Image-based localization is an important problem in robotics and an integral part of visual mapping and navigation systems. An approach to robustly match images to previously recorded ones must be able to cope with seasonal changes especially when it is supposed to work reliably over long periods of time. In this paper, we present a novel approach to visual localization of mobile robots in outdoor environments, which is able to deal with substantial seasonal changes. We formulate image matching as a minimum cost flow problem in a data association graph to effectively exploit sequence information. This allows us to deal with non-matching image sequences that result from temporal occlusions or from visiting new places. We present extensive experimental evaluations under substantial seasonal changes. Our approach achieves accurate matching across seasons and outperforms existing state-of-the-art methods such as FABMAP2 and SeqSLAM.", "title": "" }, { "docid": "f074965ee3a1d6122f1e68f49fd11d84", "text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. 
An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.", "title": "" }, { "docid": "056f5179fa5c0cdea06d29d22a756086", "text": "Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by secondorder quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: June 26, 2017", "title": "" }, { "docid": "9164dab8c4c55882f8caecc587c32eb1", "text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. In particular, we apply the densitybased clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).", "title": "" }, { "docid": "e6332297afd2883e41888be243b27d1d", "text": "The 2018 Nucleic Acids Research Database Issue contains 181 papers spanning molecular biology. Among them, 82 are new and 84 are updates describing resources that appeared in the Issue previously. The remaining 15 cover databases most recently published elsewhere. Databases in the area of nucleic acids include 3DIV for visualisation of data on genome 3D structure and RNArchitecture, a hierarchical classification of RNA families. Protein databases include the established SMART, ELM and MEROPS while GPCRdb and the newcomer STCRDab cover families of biomedical interest. In the area of metabolism, HMDB and Reactome both report new features while PULDB appears in NAR for the first time. 
This issue also contains reports on genomics resources including Ensembl, the UCSC Genome Browser and ENCODE. Update papers from the IUPHAR/BPS Guide to Pharmacology and DrugBank are highlights of the drug and drug target section while a number of proteomics databases including proteomicsDB are also covered. The entire Database Issue is freely available online on the Nucleic Acids Research website (https://academic.oup.com/nar). The NAR online Molecular Biology Database Collection has been updated, reviewing 138 entries, adding 88 new resources and eliminating 47 discontinued URLs, bringing the current total to 1737 databases. It is available at http://www.oxfordjournals.org/nar/database/c/.", "title": "" }, { "docid": "22646672196b49cc0fde4b6c6e187fd1", "text": "There is a tremendous increase in the research of data mining. Data mining is the process of extraction of data from large database. Knowledge Discovery in database (KDD) is another name of data mining. Privacy protection has become a necessary requirement in many data mining applications due to emerging privacy legislation and regulations. One of the most important topics in research community is Privacy Preserving Data Mining (PPDM). Privacy preserving data mining (PPDM) deals with protecting the privacy of individual data or sensitive knowledge without sacrificing the utility of the data. The Success of Privacy Preserving data mining algorithms is measured in terms of its performance, data utility, level of uncertainty or resistance to data mining algorithms etc. In this paper we will review on various privacy preserving techniques like Data perturbation, condensation etc.", "title": "" }, { "docid": "fc9910cdff91f69d06d8510ff57fec8e", "text": "Microplastics, plastics particles <5 mm in length, are a widespread pollutant of the marine environment. Oral ingestion of microplastics has been reported for a wide range of marine biota, but uptake into the body by other routes has received less attention. Here, we test the hypothesis that the shore crab (Carcinus maenas) can take up microplastics through inspiration across the gills as well as ingestion of pre-exposed food (common mussel Mytilus edulis). We used fluorescently labeled polystyrene microspheres (8-10 μm) to show that ingested microspheres were retained within the body tissues of the crabs for up to 14 days following ingestion and up to 21 days following inspiration across the gill, with uptake significantly higher into the posterior versus anterior gills. Multiphoton imaging suggested that most microspheres were retained in the foregut after dietary exposure due to adherence to the hairlike setae and were found on the external surface of gills following aqueous exposure. Results were used to construct a simple conceptual model of particle flow for the gills and the gut. These results identify ventilation as a route of uptake of microplastics into a common marine nonfilter feeding species.", "title": "" }, { "docid": "5aed256aaca0a1f2fe8a918e6ffb62bd", "text": "Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes “classifiers” for the unseen classes. 
Then, we define an auxiliary task of synthesizing “exemplars” for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. On five visual recognition benchmark datasets, we demonstrate the superior performances of our proposed frameworks in various scenarios of both conventional and generalized ZSL. Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https: //github.com/pujols/Zero-shot-learning-journal. Soravit Changpinyo Google AI E-mail: schangpi@google.com Wei-Lun Chao Cornell University, Department of Computer Science E-mail: weilunchao760414@gmail.com Boqing Gong Tencent AI Lab E-mail: boqinggo@outlook.com Fei Sha University of Southern California, Department of Computer Science E-mail: feisha@usc.edu", "title": "" }, { "docid": "785a6d08ef585302d692864d09b026fe", "text": "Linear Discriminant Analysis (LDA) is a well-known method for dimensionality reduction and classification. LDA in the binaryclass case has been shown to be equivalent to linear regression with the class label as the output. This implies that LDA for binary-class classifications can be formulated as a least squares problem. Previous studies have shown certain relationship between multivariate linear regression and LDA for the multi-class case. Many of these studies show that multivariate linear regression with a specific class indicator matrix as the output can be applied as a preprocessing step for LDA. However, directly casting LDA as a least squares problem is challenging for the multi-class case. In this paper, a novel formulation for multivariate linear regression is proposed. The equivalence relationship between the proposed least squares formulation and LDA for multi-class classifications is rigorously established under a mild condition, which is shown empirically to hold in many applications involving high-dimensional data. Several LDA extensions based on the equivalence relationship are discussed.", "title": "" }, { "docid": "5f3e426c67716d96da444f2a37024dc4", "text": "Automated Border Control (ABC) systems are being increasingly used to perform a fast, accurate, and reliable verification of the travelers' identity. These systems use biometric technologies to verify the identity of the person crossing the border. In this context, fingerprint verification systems are widely adopted due to their high accuracy and user acceptance. Matching score normalization methods can improve the performance of fingerprint recognition in ABC systems and mitigate the effect of non-idealities typical of this scenario without modifying the existing biometric technologies. However, privacy protection regulations restrict the use of biometric data captured in ABC systems and can compromise the applicability of these techniques. Cohort score normalization methods based only on impostor scores provide a suitable solution, due to their limited use of sensible data and to their promising performance. In this paper, we propose a privacy-compliant and adaptive normalization approach for enhancing fingerprint recognition in ABC systems. The proposed approach computes cohort scores from an external public dataset and uses computational intelligence to learn and improve the matching score distribution. 
The use of a public dataset permits to apply cohort normalization strategies in contexts in which privacy protection regulations restrict the storage of biometric data. We performed a technological and a scenario evaluation using a commercial matcher currently adopted in real ABC systems and we used data simulating different conditions typical of ABC systems, obtaining encouraging results.", "title": "" }, { "docid": "207d3e95d3f04cafa417478ed9133fcc", "text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess the land use change detection by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that built-up area has been increased from 28 to 255 km by more than 30% and agricultural land reduced by 33%. Future prediction is done by using the Markov chain analysis. Information on urban growth, land use and land cover change study is very useful to local government and urban planners for the betterment of future plans of sustainable development of the city. 2015 The Gulf Organisation for Research and Development. Production and hosting by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "01b147cb417ceedf40dadcb3ee31a1b2", "text": "BACKGROUND\nPurposeful and timely rounding is a best practice intervention to routinely meet patient care needs, ensure patient safety, decrease the occurrence of patient preventable events, and proactively address problems before they occur. The Institute for Healthcare Improvement (IHI) endorsed hourly rounding as the best way to reduce call lights and fall injuries, and increase both quality of care and patient satisfaction. Nurse knowledge regarding purposeful rounding and infrastructure supporting timeliness are essential components for consistency with this patient centred practice.\n\n\nOBJECTIVES\nThe project aimed to improve patient satisfaction and safety through implementation of purposeful and timely nursing rounds. Goals for patient satisfaction scores and fall volume were set. Specific objectives were to determine current compliance with evidence-based criteria related to rounding times and protocols, improve best practice knowledge among staff nurses, and increase compliance with these criteria.\n\n\nMETHODS\nFor the objectives of this project the Joanna Briggs Institute's Practical Application of Clinical Evidence System and Getting Research into Practice audit tool were used. 
Direct observation of staff nurses on a medical surgical unit in the United States was employed to assess timeliness and utilization of a protocol when rounding. Interventions were developed in response to baseline audit results. A follow-up audit was conducted to determine compliance with the same criteria. For the project aims, pre- and post-intervention unit-level data related to nursing-sensitive elements of patient satisfaction and safety were compared.\n\n\nRESULTS\nRounding frequency at specified intervals during awake and sleeping hours nearly doubled. Use of a rounding protocol increased substantially to 64% compliance from zero. Three elements of patient satisfaction had substantive rate increases but the hospital's goals were not reached. Nurse communication and pain management scores increased modestly (5% and 11%, respectively). Responsiveness of hospital staff increased moderately (15%) with a significant sub-element increase in toileting (41%). Patient falls decreased by 50%.\n\n\nCONCLUSIONS\nNurses have the ability to improve patient satisfaction and patient safety outcomes by utilizing nursing round interventions which serve to improve patient communication and staff responsiveness. Having a supportive infrastructure and an organized approach, encompassing all levels of staff, to meet patient needs during their hospital stay was a key factor for success. Hard-wiring of new practices related to workflow takes time as staff embrace change and understand how best practice interventions significantly improve patient outcomes.", "title": "" }, { "docid": "d244509f1f38b93d2c04b4b4fa8070a4", "text": "Recent research has shown the usefulness of using collective user interaction data (e.g., query logs) to recommend query modification suggestions for Intranet search. However, most of the query suggestion approaches for Intranet search follow an ``one size fits all'' strategy, whereby different users who submit an identical query would get the same query suggestion list. This is problematic, as even with the same query, different users may have different topics of interest, which may change over time in response to the user's interaction with the system.\n We address the problem by proposing a personalised query suggestion framework for Intranet search. For each search session, we construct two temporal user profiles: a click user profile using the user's clicked documents and a query user profile using the user's submitted queries. We then use the two profiles to re-rank the non-personalised query suggestion list returned by a state-of-the-art query suggestion method for Intranet search. Experimental results on a large-scale query logs collection show that our personalised framework significantly improves the quality of suggested queries.", "title": "" }, { "docid": "1be58e70089b58ca3883425d1a46b031", "text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. 
Experimental results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different properties.", "title": "" }, { "docid": "bef4cf486ddc37d8ff4d5ed7a2b72aba", "text": "We propose an on-line algorithm for simultaneous localization and mapping of dynamic environments. Our algorithm is capable of differentiating static and dynamic parts of the environment and representing them appropriately on the map. Our approach is based on maintaining two occupancy grids. One grid models the static parts of the environment, and the other models the dynamic parts of the environment. The union of the two grid maps provides a complete description of the environment over time. We also maintain a third map containing information about static landmarks detected in the environment. These landmarks provide the robot with localization. Results in simulation and real robot experiments show the efficiency of our approach and also show how the differentiation of dynamic and static entities in the environment and SLAM can be mutually beneficial.", "title": "" }, { "docid": "d15dbbbcc98ebb9a4576e6f3e4667bc8", "text": "A new, rapid technique for the propagation of amaryllis (Hippeastrum spp. hybrids) by means of tissue culture is reported. Leaf bases, scapes, peduncles, inner bulb scales and ovaries were cultured successfully in vitro and plantlets were induced readily at various concentrations of growth regulators. Some plantlets also were produced in the absence of growth regulators. The most productive tissues for propagation were inverted scapes and peduncles, cultured in a modified Murashige and Skoog salt solution with added organic constituents and 1 mg per l (4.5μM) 2,4-dichlorophenoxyacetic acid (2,4-D) and 1 mg per l (4.4μM) 6-benzylaminopurine (BAP). Plantlets induced axenically also grew roots on the generalized shoot-inducing medium so that no special rooting medium was required. Although friable callus was obtained from ovary tissue cultured on a medium containing 2 mg per l (11μM) naphthaleneacetic acid and 4 mg per l (18μM) BAP, it produced shoots after 8 weeks of further subculture on the same medium. An average of 10 rooted plantlets was obtained from each scape or peduncle explant on the shoot-propagating medium. Thus, if 45 explants are obtained from each bulb, 450 rudimentary plantlets could be obtained from each mother bulb in 8 weeks of culture. This is a substantial increase over present propagation methods.", "title": "" } ]
scidocsrr
62cd0a44f153ff70e7aea69a91c1cc66
Bayesian Optimization with Gradients
[ { "docid": "48b3ee93758294ffa7b24584c53cbda1", "text": "Engineering design problems requiring the construction of a cheap-to-evaluate 'surrogate' model f that emulates the expensive response of some black box f come in a variety of forms, but they can generally be distilled down to the following template. Here ffx is some continuous quality, cost or performance metric of a product or process defined by a k-vector of design variables x ∈ D ⊂ R k. In what follows we shall refer to D as the design space or design domain. Beyond the assumption of continuity, the only insight we can gain into f is through discrete observations or samples x ii → y ii = ffx ii i = 1 n. These are expensive to obtain and therefore must be used sparingly. The task is to use this sparse set of samples to construct an approximation f , which can then be used to make a cheap performance prediction for any design x ∈ D. Much of this book is made up of recipes for constructing f , given a set of samples. Excepting a few pathological cases, the mathematical formulations of these modelling approaches are well-posed, regardless of how the sampling plan X = x 1 x 2 x nn determines the spatial arrangement of the observations we have built them upon. Some models do require a minimum number n of data points but, once we have passed this threshold, we can use them to build an unequivocally defined surrogate. However, a well-posed model does not necessarily generalize well, that is it may still be poor at predicting unseen data, and this feature does depend on the sampling plan X. For example, measuring the performance of a design at the extreme values of its parameters may leave a great deal of interesting behaviour undiscovered, say, in the centre of the design space. Equally, spraying points liberally in certain parts of the inside of the domain, forcing the surrogate model to make far-reaching extrapolations elsewhere, may lead us to (false) global conclusions based on patchy, local knowledge of the objective landscape. Of course, we do not always have a choice in the matter. We may be using data obtained by someone else for some other purpose or the available observations may come from a variety of external sources and we may not be able to add to them. The latter situation often occurs in conceptual design, where we …", "title": "" } ]
[ { "docid": "5cb4a7a6486eaba444b88b7a48e9cea8", "text": "UNLABELLED\nThis Guideline is an official statement of the European Society of Gastrointestinal Endoscopy (ESGE). The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system 1 2 was adopted to define the strength of recommendations and the quality of evidence.\n\n\nMAIN RECOMMENDATIONS\n1 ESGE recommends endoscopic en bloc resection for superficial esophageal squamous cell cancers (SCCs), excluding those with obvious submucosal involvement (strong recommendation, moderate quality evidence). Endoscopic mucosal resection (EMR) may be considered in such lesions when they are smaller than 10 mm if en bloc resection can be assured. However, ESGE recommends endoscopic submucosal dissection (ESD) as the first option, mainly to provide an en bloc resection with accurate pathology staging and to avoid missing important histological features (strong recommendation, moderate quality evidence). 2 ESGE recommends endoscopic resection with a curative intent for visible lesions in Barrett's esophagus (strong recommendation, moderate quality evidence). ESD has not been shown to be superior to EMR for excision of mucosal cancer, and for that reason EMR should be preferred. ESD may be considered in selected cases, such as lesions larger than 15 mm, poorly lifting tumors, and lesions at risk for submucosal invasion (strong recommendation, moderate quality evidence). 3 ESGE recommends endoscopic resection for the treatment of gastric superficial neoplastic lesions that possess a very low risk of lymph node metastasis (strong recommendation, high quality evidence). EMR is an acceptable option for lesions smaller than 10 - 15 mm with a very low probability of advanced histology (Paris 0-IIa). However, ESGE recommends ESD as treatment of choice for most gastric superficial neoplastic lesions (strong recommendation, moderate quality evidence). 4 ESGE states that the majority of colonic and rectal superficial lesions can be effectively removed in a curative way by standard polypectomy and/or by EMR (strong recommendation, moderate quality evidence). ESD can be considered for removal of colonic and rectal lesions with high suspicion of limited submucosal invasion that is based on two main criteria of depressed morphology and irregular or nongranular surface pattern, particularly if the lesions are larger than 20 mm; or ESD can be considered for colorectal lesions that otherwise cannot be optimally and radically removed by snare-based techniques (strong recommendation, moderate quality evidence).", "title": "" }, { "docid": "e5206df50c9a1477928df7a21e054489", "text": "Reasoning about the relationships between object pairs in images is a crucial task for holistic scene understanding. Most of the existing works treat this task as a pure visual classification task: each type of relationship or phrase is classified as a relation category based on the extracted visual features. However, each kind of relationships has a wide variety of object combination and each pair of objects has diverse interactions. Obtaining sufficient training samples for all possible relationship categories is difficult and expensive. In this work, we propose a natural language guided framework to tackle this problem. We propose to use a generic bi-directional recurrent neural network to predict the semantic connection between the participating objects in the relationship from the aspect of natural language. 
The proposed simple method achieves the state-of-the-art on the Visual Relationship Detection (VRD) and Visual Genome datasets, especially when predicting unseen relationships (e.g., recall improved from 76.42% to 89.79% on VRD zeroshot testing set).", "title": "" }, { "docid": "74c7ffaf4064218920f503a31a0f97b0", "text": "In this paper, we present a new method for the control of soft robots with elastic behavior, piloted by several actuators. The central contribution of this work is the use of the Finite Element Method (FEM), computed in real-time, in the control algorithm. The FEM based simulation computes the nonlinear deformations of the robots at interactive rates. The model is completed by Lagrange multipliers at the actuation zones and at the end-effector position. A reduced compliance matrix is built in order to deal with the necessary inversion of the model. Then, an iterative algorithm uses this compliance matrix to find the contribution of the actuators (force and/or position) that will deform the structure so that the terminal end of the robot follows a given position. Additional constraints, like rigid or deformable obstacles, or the internal characteristics of the actuators are integrated in the control algorithm. We illustrate our method using simulated examples of both serial and parallel structures and we validate it on a real 3D soft robot made of silicone.", "title": "" }, { "docid": "ed9f79cab2dfa271ee436b7d6884bc13", "text": "This study conducts a phylogenetic analysis of extant African papionin craniodental morphology, including both quantitative and qualitative characters. We use two different methods to control for allometry: the previously described narrow allometric coding method, and the general allometric coding method, introduced herein. The results of this study strongly suggest that African papionin phylogeny based on molecular systematics, and that based on morphology, are congruent and support a Cercocebus/Mandrillus clade as well as a Papio/Lophocebus/Theropithecus clade. In contrast to previous claims regarding papionin and, more broadly, primate craniodental data, this study finds that such data are a source of valuable phylogenetic information and removes the basis for considering hard tissue anatomy \"unreliable\" in phylogeny reconstruction. Among highly sexually dimorphic primates such as papionins, male morphologies appear to be particularly good sources of phylogenetic information. In addition, we argue that the male and female morphotypes should be analyzed separately and then added together in a concatenated matrix in future studies of sexually dimorphic taxa. Character transformation analyses identify a series of synapomorphies uniting the various papionin clades that, given a sufficient sample size, should potentially be useful in future morphological analyses, especially those involving fossil taxa.", "title": "" }, { "docid": "8e3b1f49ca8a5afe20a9b66e0088a56a", "text": "Describing the contents of images is a challenging task for machines to achieve. It requires not only accurate recognition of objects and humans, but also their attributes and relationships as well as scene information. It would be even more challenging to extend this process to identify falls and hazardous objects to aid elderly or users in need of care. This research makes initial attempts to deal with the above challenges to produce multi-sentence natural language description of image contents. 
It employs a local region based approach to extract regional image details and combines multiple techniques including deep learning and attribute learning through the use of machine learned features to create high level labels that can generate detailed description of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection and sentence generation. We have also further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which poses great challenges to many existing methods. Our system is evaluated on a subset from Flickr8k and Pascal VOC 2012 and achieves an impressive average BLEU score of 46 and outperforms related research by a significant margin of 10 BLEU score when evaluated with a small dataset of images containing falls and hazardous objects. It also shows impressive performance when evaluated using a subset of IAPR TC-12 dataset.", "title": "" }, { "docid": "d59cc1c197099db86aba4d9f79cb6267", "text": "The rapid growth of the Internet as an environment for information exchange and the lack of enforceable standards regarding the information it contains has lead to numerous information qual ity problems. A major issue is the inability of Search Engine technology to wade through the vast expanse of questionable content and return \"quality\" results to a user's query. This paper attempts to address some of the issues involved in determining what quality is, as it pertains to information retrieval on the Internet. The IQIP model is presented as an approach to managing the choice and implementation of quality related algorithms of an Internet crawling Search Engine.", "title": "" }, { "docid": "50ebb851bb0fceeddd39fdee66941e6c", "text": "Machine learning involves optimizing a loss function on unlabeled data points given examples of labeled data points, where the loss function measures the performance of a learning algorithm. We give an overview of techniques, called reductions, for converting a problem of minimizing one loss function into a problem of minimizing another, simpler loss function. This tutorial discusses how to create robust reductions that perform well in practice. The reductions discussed here can be used to solve any supervised learning problem with a standard binary classification or regression algorithm available in any machine learning toolkit. We also discuss common design flaws in folklore reductions.", "title": "" }, { "docid": "b0eea601ef87dbd1d7f39740ea5134ae", "text": "Syndromal classification is a well-developed diagnostic system but has failed to deliver on its promise of the identification of functional pathological processes. Functional analysis is tightly connected to treatment but has failed to develop testable. replicable classification systems. Functional diagnostic dimensions are suggested as a way to develop the functional classification approach, and experiential avoidance is described as 1 such dimension. A wide range of research is reviewed showing that many forms of psychopathology can be conceptualized as unhealthy efforts to escape and avoid emotions, thoughts, memories, and other private experiences. 
It is argued that experiential avoidance, as a functional diagnostic dimension, has the potential to integrate the efforts and findings of researchers from a wide variety of theoretical paradigms, research interests, and clinical domains and to lead to testable new approaches to the analysis and treatment of behavioral disorders. Steven C. Haves, Kelly G. Wilson, Elizabeth V. Gifford, and Victoria M. Follette. Department of Psychology. University of Nevada: Kirk Strosahl, Mental Health Center, Group Health Cooperative, Seattle, Washington. Preparation of this article was supported in part by Grant DA08634 from the National Institute on Drug Abuse. Correspondence concerning this article should be addressed to Steven C. Hayes, Department of Psychology, Mailstop 296, College of Arts and Science. University of Nevada, Reno, Nevada 89557-0062. The process of classification lies at the root of all scientific behavior. It is literally impossible to speak about a truly unique event, alone and cut off from all others, because words themselves are means of categorization (Brunei, Goodnow, & Austin, 1956). Science is concerned with refined and systematic verbal formulations of events and relations among events. Because \"events\" are always classes of events, and \"relations\" are always classes of relations, classification is one of the central tasks of science. The field of psychopathology has seen myriad classification systems (Hersen & Bellack, 1988; Sprock & Blashfield, 1991). The differences among some of these approaches are both long-standing and relatively unchanging, in part because systems are never free from a priori assumptions and guiding principles that provide a framework for organizing information (Adams & Cassidy, 1993). In the present article, we briefly examine the differences between two core classification strategies in psychopathology syndromal and functional. We then articulate one possible functional diagnostic dimension: experiential avoidance. Several common syndromal categories are examined to see how this dimension can organize data found among topographical groupings. Finally, the utility and implications of this functional dimensional category are examined. Comparing Syndromal and Functional Classification Although there are many purposes to diagnostic classification, most researchers seem to agree that the ultimate goal is the development of classes, dimensions, or relational categories that can be empirically wedded to treatment strategies (Adams & Cassidy, 1993: Hayes, Nelson & Jarrett, 1987: Meehl, 1959). Syndromal classification – whether dimensional or categorical – can be traced back to Wundt and Galen and, thus, is as old as scientific psychology itself (Eysenck, 1986). Syndromal classification starts with constellations of signs and symptoms to identify the disease entities that are presumed to give rise to these constellations. Syndromal classification thus starts with structure and, it is hoped, ends with utility. The attempt in functional classification, conversely, is to start with utility by identifying functional processes with clear treatment implications. It then works backward and returns to the issue of identifiable signs and symptoms that reflect these processes. These differences are fundamental. 
Syndromal Classification The economic and political dominance of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (e.g., 4th ed.; DSM -IV; American Psychiatric Association, 1994) has lead to a worldwide adoption of syndromal classification as an analytic strategy in psychopathology. The only widely used alternative, the International Classification of Diseases (ICD) system, was a source document for the original DSM, and continuous efforts have been made to ensure their ongoing compatibility (American Psychiatric Association 1994). The immediate goal of syndromal classification (Foulds. 1971) is to identify collections of signs (what one sees) and symptoms (what the client's complaint is). The hope is that these syndromes will lead to the identification of disorders with a known etiology, course, and response to treatment. When this has been achieved, we are no longer speaking of syndromes but of diseases. Because the construct of disease involves etiology and response to treatment, these classifications are ultimately a kind of functional unit. Thus, the syndromal classification approach is a topographically oriented classification strategy for the identification of functional units of abnormal behavior. When the same topographical outcome can be established by diverse processes, or when very different topographical outcomes can come from the same process, the syndromal model has a difficult time actually producing its intended functional units (cf. Bandura, 1982; Meehl, 1978). Some medical problems (e.g., cancer) have these features, and in these areas medical researchers no longer look to syndromal classification as a quick route to an understanding of the disease processes involved. The link between syndromes (topography of signs and symptoms) and diseases (function) has been notably weak in psychopathology. After over 100 years of effort, almost no psychological diseases have been clearly identified. With the exception of general paresis and a few clearly neurological disorders, psychiatric syndromes have remained syndromes indefinitely. In the absence of progress toward true functional entities, syndromal classification of psychopathology has several down sides. Symptoms are virtually non-falsifiable, because they depend only on certain formal features. Syndromal categories tend to evolve changing their names frequently and splitting into ever finer subcategories but except for political reasons (e.g., homosexuality as a disorder) they rarely simply disappear. As a result, the number of syndromes within the DSM system has increased exponentially (Follette, Houts, & Hayes, 1992). Increasingly refined topographical distinctions can always be made without the restraining and synthesizing effect of the identification of common etiological processes. In physical medicine, syndromes regularly disappear into disease categories. A wide variety of symptoms can be caused by a single disease, or a common symptom can be explained by very different diseases entities. For example, \"headaches\" are not a disease, because they could be due to influenza, vision problems, ruptured blood vessels, or a host of other factors. These etiological factors have very different treatment implications. Note that the reliability of symptom detection is not what is at issue. Reliably diagnosing headaches does not translate into reliably diagnosing the underlying functional entity, which after all is the crucial factor for treatment decisions. 
In the same way, the increasing reliability of DSM diagnoses is of little consolation in and of itself. The DSM system specifically eschews the primary importance of functional processes: \"The approach taken in DSM-III is atheoretical with regard to etiology or patho-physiological process\" (American Psychiatric Association, 1980, p. 7). This spirit of etiological agnosticism is carried forward in the most recent DSM incarnation. It is meant to encourage users from widely varying schools of psychology to use the same classification system. Although integration is a laudable goal, the price paid may have been too high (Follette & Hayes, 1992). For example, the link between syndromal categories and biological markers or change processes has been consistently disappointing. To date, compellingly sensitive and specific physiological markers have not been identified for any psychiatric syndrome (Hoes, 1986). Similarly, the link between syndromes and differential treatment has long been known to be weak (see Hayes et al., 1987). We still do not have compelling evidence that syndromal classification contributes substantially to treatment outcome (Hayes et al., 1987). Even in those few instances and not others, mechanisms of change are often unclear of unexamined (Follette, 1995), in part because syndromal categories give researchers few leads about where even to look. Without attention to etiology, treatment utility, and pathological process, the current syndromal system seems unlikely to evolve rapidly into a functional, theoretically relevant system. Functional Classification In a functional approach to classification, the topographical characteristics of any particular individual's behavior is not the basis for classification; instead, behaviors and sets of behaviors are organized by the functional processes that are thought to have produced and maintained them. This functional method is inherently less direct and naive than a syndromal approach, as it requires the application of pre-existing information about psychological processes to specific response forms. It thus integrates at least rudimentary forms of theory into the classification strategy, in sharp contrast with the atheoretical goals of the DSM system. Functional Diagnostic Dimensions as a Method of Functional Classification Classical functional analysis is the most dominant example of a functional classification system. It consists of six steps (Hayes & Follette, 1992) -Step 1: identify potentially relevant characterist", "title": "" }, { "docid": "c1ff6ca38eef3530fc02fc0edee05379", "text": "Open-source software frameworks such as Apache Hadoop and Robot Operating System (ROS) are helped researchers to reduce arduous engineering work releasing them to concentrate more on the core research. The impact of extensive knowledge repositories of robot operating system interconnected with a large distributed data storage framework entails enormous developmental step in the future robotic systems era. Hadoop and ROS integrate the key frameworks presented in this paper to obtain the open source architecture. The main subject of research, an approach to apply service-oriented robotic (SOR) cloud enabling PaaS (Platform-as-a-Service) mobile multi-robot architecture, is presented. Moreover, the focus is based on cloudcommunicating service robots with lightweight workload requirements for the purpose of offloading heavy computation into a cloud. 
In an integrated experiment, the Ubuntu Linux environment is experimented after installing Hadoop cloud computing platform and ROS. Keywords-robot operating system (ROS); Apache Hadoop; open source; service-oriented robot; cloud robotics; mobile robotics __________________________________________________*****_________________________________________________", "title": "" }, { "docid": "3e0a52bc1fdf84279dee74898fcd93bf", "text": "A variety of abnormal imaging findings of the petrous apex are encountered in children. Many petrous apex lesions are identified incidentally while images of the brain or head and neck are being obtained for indications unrelated to the temporal bone. Differential considerations of petrous apex lesions in children include “leave me alone” lesions, infectious or inflammatory lesions, fibro-osseous lesions, neoplasms and neoplasm-like lesions, as well as a few rare miscellaneous conditions. Some lesions are similar to those encountered in adults, and some are unique to children. Langerhans cell histiocytosis (LCH) and primary and metastatic pediatric malignancies such as neuroblastoma, rhabomyosarcoma and Ewing sarcoma are more likely to be encountered in children. Lesions such as petrous apex cholesterol granuloma, cholesteatoma and chondrosarcoma are more common in adults and are rarely a diagnostic consideration in children. We present a comprehensive pictorial review of CT and MRI appearances of pediatric petrous apex lesions.", "title": "" }, { "docid": "c4b12a7b060f6da3ad61247722151cf1", "text": "The API Economy trend is nowadays a concrete opportunity to go beyond the traditional development of vertical ICT solutions and to unlock additional business value by enabling innovative collaboration patterns between different players, e.g., companies, public authorities and researchers. Thus, an effective API Economy initiative has to be comprehensive, focusing not only on technical issues but also on other complementary dimensions.\n This paper illustrates a successful API Economy initiative, the E015 Digital Ecosystem developed for Expo Milano 2015, showing how a comprehensive approach to information systems interoperability and Service-Oriented Architectures can foster synergetic collaboration between industry and academia in particular, hence enabling the development of value-added solutions for the end-users.", "title": "" }, { "docid": "856d7881b30b18d9ca219af504d2d500", "text": "Complementary security systems are widely deployed in networks to protect digital assets. Alert correlation is essential to understanding the security threats and taking appropriate actions. This paper proposes a novel correlation approach based on triggering events and common resources. One of the key concepts in our approach is triggering events, which are the (low-level) events that trigger alerts. By grouping alerts that share \"similar\" triggering events, a set of alerts can be partitioned into different clusters such that the alerts in the same cluster may correspond to the same attack. Our approach further examines whether the alerts in each cluster are consistent with relevant network and host configurations, which help analysts to partially identify the severity of alerts and clusters. The other key concept in our approach is input and output resources. Intuitively, input resources are the necessary resources for an attack to succeed, and output resources are the resources that an attack supplies if successful. 
This paper proposes to model each attack through specifying input and output resources. By identifying the \"common\" resources between output resources of one attack and input resources of another, it discovers causal relationships between alert clusters and builds attack scenarios. The experimental results demonstrate the usefulness of the proposed techniques.", "title": "" }, { "docid": "293ee71024fc9f973ec523457133c7ef", "text": "This paper describes our system used in the Aspect Based Sentiment Analysis (ABSA) task of SemEval 2016. Our system uses Maximum Entropy classifier for the aspect category detection and for the sentiment polarity task. Conditional Random Fields (CRF) are used for opinion target extraction. We achieve state-of-the-art results in 9 experiments among the constrained systems and in 2 experiments among the unconstrained systems.", "title": "" }, { "docid": "8218ce22ac1cccd73b942a184c819d8c", "text": "The extended SMAS facelift techniques gave plastic surgeons the ability to correct the nasolabial fold and medial cheek. Retensioning the SMAS transmits the benefit through the multilinked fibrous support system of the facial soft tissues. The effect is to provide a recontouring of the ptotic soft tissues, which fills out the cheeks as it reduces nasolabial fullness. Indirectly, dermal tightening occurs to a lesser but more natural degree than with traditional facelift surgery. Although details of current techniques may be superseded, the emerging surgical principles are becoming more clearly defined. This article presents these principles and describes the author's current surgical technique.", "title": "" }, { "docid": "7d57caa810120e1590ad277fb8113222", "text": "Cancer is increasing the total number of unexpected deaths around the world. Until now, cancer research could not significantly contribute to a proper solution for the cancer patient, and as a result, the high death rate is uncontrolled. The present research aim is to extract the significant prevention factors for particular types of cancer. To find out the prevention factors, we first constructed a prevention factor data set with an extensive literature review on bladder, breast, cervical, lung, prostate and skin cancer. We subsequently employed three association rule mining algorithms, Apriori, Predictive apriori and Tertius algorithms in order to discover most of the significant prevention factors against these specific types of cancer. Experimental results illustrate that Apriori is the most useful association rule-mining algorithm to be used in the discovery of prevention factors.", "title": "" }, { "docid": "a2f15d76368aa2b9c3e34eef5b6d925f", "text": "OBJECTIVES\nTo review the sonographic features of spinal anomalies in first-trimester fetuses presenting for screening for chromosomal abnormalities.\n\n\nMETHODS\nFetuses with a spinal abnormality diagnosed prenatally or postnatally that underwent first-trimester sonographic evaluation at our institution had their clinical information retrieved and their sonograms reviewed.\n\n\nRESULTS\nA total of 21 fetuses complied with the entry criteria including eight with body stalk anomaly, seven with spina bifida, two with Vertebral, Anal, Cardiac, Tracheal, Esophageal, Renal, and Limb (VACTERL) association, and one case each of isolated kyphoscoliosis, tethered cord, iniencephaly, and sacrococcygeal teratoma. 
One fetus with body stalk anomaly and another with VACTERL association also had a myelomeningocele, making a total of nine cases of spina bifida in our series. Five of the nine (56%) cases with spina bifida, one of the two cases with VACTERL association, and the cases with tethered cord and sacrococcygeal teratoma were undiagnosed in the first trimester. Although increased nuchal translucency was found in seven (33%) cases, chromosomal analysis revealed only one case of aneuploidy in this series.\n\n\nCONCLUSIONS\nFetal spinal abnormalities diagnosed in the first trimester are usually severe and frequently associated with other major defects. The diagnosis of small defects is difficult and a second-trimester scan is still necessary to detect most cases of spina bifida.", "title": "" }, { "docid": "aa6156d21ddb525fca036040aeb3db37", "text": "The rapid development of the Internet-of-Things requires hardware that is both energy-efficient and flexible, and an ultra-low-power Field-Programmable-Gate-Array (FPGA) is a very promising solution. This paper presents a near/sub-threshold FPGA with low-swing global interconnect, folded switch box (SB), per-path voltage scaling, and power-gating. A fully programmable 512-look-up-table FPGA chip is fabricated in 130nm CMOS. When implementing a 4bit-adder, the measured energy of the proposed FPGA is 15% less than the normalized energy of the state-of-the-art. When implementing fifteen selected low-power applications, the estimated energy of the proposed FPGA is on average 75x lower than Microsemi IGLOO.", "title": "" }, { "docid": "c84a0f630b4fb2e547451d904e1c63a5", "text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.", "title": "" }, { "docid": "ab152b8a696519abb4406dd8f7c15407", "text": "While real scenes produce a wide range of brightness variations, vision systems use low dynamic range image detectors that typically provide 8 bits of brightness data at each pixel. The resulting low quality images greatly limit what vision can accomplish today. This paper proposes a very simple method for significantly enhancing the dynamic range of virtually any imaging system. The basic principle is to simultaneously sample the spatial and exposure dimensions of image irradiance. One of several ways to achieve this is by placing an optical mask adjacent to a conventional image detector array. The mask has a pattern with spatially varying transmittance, thereby giving adjacent pixels on the detector different exposures to the scene. The captured image is mapped to a high dynamic range image using an efficient image reconstruction algorithm. 
The end result is an imaging system that can measure a very wide range of scene radiances and produce a substantially larger number of brightness levels, with a slight reduction in spatial resolution. We conclude with several examples of high dynamic range images computed using spatially varying pixel exposures. 1 High Dynamic Range Imaging Any real-world scene has a significant amount of brightness variation within it. The human eye has a remarkable dynamic range that enables it to detect subtle contrast variations and interpret scenes under a large variety of illumination conditions [Blackwell, 1946]. In contrast, a typical video camera, or a digital still camera, provides only about 8 bits (256 levels) of brightness information at each pixel. As a result, virtually any image captured by a conventional imaging system ends up being too dark in some areas and possibly saturated in others. In computational vision, it is such low quality images that we are left with the task of interpreting. Clearly, the low dynamic range of existing image detectors poses a severe limitation on what computational vision can accomplish. This paper presents a very simple modification that can be made to any conventional imaging system to dramatically increases its dynamic range. The availability of extra bits of data at each image pixel is expected to enhance the robustness of vision algorithms. This work was supported in part by an ONR/DARPA MURI grant under ONR contract No. N00014-97-1-0553 and in part by a David and Lucile Packard Fellowship. Tomoo Mitsunaga is supported by the Sony Corporation. 2 Existing Approaches First, we begin with a brief summary of existing techniques for capturing a high dynamic range image with a low dynamic range image detector. 2.1 Sequential Exposure Change The most obvious approach is to sequentially capture multiple images of the same scene using different exposures. The exposure for each image is controlled by either varying the F-number of the imaging optics or the exposure time of the image detector. Clearly, a high exposure image will be saturated in the bright scene areas but capture the dark regions well. In contrast, a low exposure image will have less saturation in bright regions but end up being too dark and noisy in the dark areas. The complementary nature of these images allows one to combine them into a single high dynamic range image. Such an approach has been employed in [Azuma and Morimura, 1996], [Saito, 1995], [Konishi et al., 1995], [Morimura, 1993], [Ikeda, 1998], [Takahashi et al., 1997], [Burt and Kolczynski, 1993], [Madden, 1993] [Tsai, 1994]. In [Mann and Picard, 1995], [Debevec and Malik, 1997] and [Mitsunaga and Nayar, 1999] this approach has been taken one step further by using the acquired images to compute the radiometric response function of the imaging system. The above methods are of course suited only to static scenes; the imaging system, the scene objects and their radiances must remain constant during the sequential capture of images under different exposures. 2.2 Multiple Image Detectors The stationary scene restriction faced by sequential capture is remedied by using multiple imaging systems. This approach has been taken by several investigators [Doi et al., 1986], [Saito, 1995], [Saito, 1996], [Kimura, 1998], [Ikeda, 1998]. Beam splitters are used to generate multiple copies of the optical image of the scene. 
Each copy is detected by an image detector whose exposure is preset by using an optical attenuator or by changing the exposure time of the detector. This approach has the advantage of producing high dynamic range images in real time. Hence, the scene objects and the imaging system are free to move during the capture process. The disadvantage of course is that this approach is expensive as it requires multiple image detectors, precision optics for the alignment of all the acquired images and additional hardware for the capture and processing of multiple images.", "title": "" } ]
scidocsrr
c5d5d57e84d3291a2b4b3470bbd25c4d
Database resources of the National Center for Biotechnology Information
[ { "docid": "06b9f83845f3125272115894676b5e5d", "text": "For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy alignment algorithm with particularly good performance and show that it computes the same alignment as does a certain dynamic programming algorithm, while executing over 10 times faster on appropriate data. An implementation of this algorithm is currently used in a program that assembles the UniGene database at the National Center for Biotechnology Information.", "title": "" } ]
[ { "docid": "a13ca3d83e6ec1693bd9ad53323d2f63", "text": "BACKGROUND\nThis study examined longitudinal patterns of heroin use, other substance use, health, mental health, employment, criminal involvement, and mortality among heroin addicts.\n\n\nMETHODS\nThe sample was composed of 581 male heroin addicts admitted to the California Civil Addict Program (CAP) during the years 1962 through 1964; CAP was a compulsory drug treatment program for heroin-dependent criminal offenders. This 33-year follow-up study updates information previously obtained from admission records and 2 face-to-face interviews conducted in 1974-1975 and 1985-1986; in 1996-1997, at the latest follow-up, 284 were dead and 242 were interviewed.\n\n\nRESULTS\nIn 1996-1997, the mean age of the 242 interviewed subjects was 57.4 years. Age, disability, years since first heroin use, and heavy alcohol use were significant correlates of mortality. Of the 242 interviewed subjects, 20.7% tested positive for heroin (with additional 9.5% urine refusal and 14.0% incarceration, for whom urinalyses were unavailable), 66.9% reported tobacco use, 22.1% were daily alcohol drinkers, and many reported illicit drug use (eg, past-year heroin use was 40.5%; marijuana, 35.5%; cocaine, 19.4%; crack, 10.3%; amphetamine, 11.6%). The group also reported high rates of health problems, mental health problems, and criminal justice system involvement. Long-term heroin abstinence was associated with less criminality, morbidity, psychological distress, and higher employment.\n\n\nCONCLUSIONS\nWhile the number of deaths increased steadily over time, heroin use patterns were remarkably stable for the group as a whole. For some, heroin addiction has been a lifelong condition associated with severe health and social consequences.", "title": "" }, { "docid": "aef051dd5cc521359f2f40b01ae80e35", "text": "Despite the promising progress made in recent years, person re-identification (re-ID) remains a challenging task due to the complex variations in human appearances from different camera views. For this challenging problem, a large variety of algorithms have been developed in the fully supervised setting, requiring access to a large amount of labeled training data. However, the main bottleneck for fully supervised re-ID is the limited availability of labeled training samples. To address this problem, we propose a self-trained subspace learning paradigm for person re-ID that effectively utilizes both labeled and unlabeled data to learn a discriminative subspace where person images across disjoint camera views can be easily matched. The proposed approach first constructs pseudo-pairwise relationships among unlabeled persons using the k-nearest neighbors algorithm. Then, with the pseudo-pairwise relationships, the unlabeled samples can be easily combined with the labeled samples to learn a discriminative projection by solving an eigenvalue problem. In addition, we refine the pseudo-pairwise relationships iteratively, which further improves learning performance. A multi-kernel embedding strategy is also incorporated into the proposed approach to cope with the non-linearity in a person’s appearance and explore the complementation of multiple kernels. In this way, the performance of person re-ID can be greatly enhanced when training data are insufficient. 
Experimental results on six widely used datasets demonstrate the effectiveness of our approach, and its performance can be comparable to the reported results of most state-of-the-art fully supervised methods while using much fewer labeled data.", "title": "" }, { "docid": "10fff590f9c8e99ebfd1b4b4e453241f", "text": "Object-oriented programming has many advantages over conventional procedural programming languages for constructing highly flexible, adaptable, and extensible systems. Therefore a transformation of procedural programs to object-oriented architectures becomes an important process to enhance the reuse of procedural programs. Moreover, it would be useful to assist by automatic methods the software developers in transforming procedural code into an equivalent object-oriented one. In this paper we aim at introducing an agglomerative hierarchical clustering algorithm that can be used for assisting software developers in the process of transforming procedural code into an object-oriented architecture. We also provide a code example showing how our approach works, emphasizing, this way, the potential of our proposal.", "title": "" }, { "docid": "26f24eb20be055de6e7c2d6e87b3df8f", "text": "The Internet is the latest in a series of technological breakthroughs in interpersonal communication, following the telegraph, telephone, radio, and television. It combines innovative features of its predecessors, such as bridging great distances and reaching a mass audience. However, the Internet has novel features as well, most critically the relative anonymity afforded to users and the provision of group venues in which to meet others with similar interests and values. We place the Internet in its historical context, and then examine the effects of Internet use on the user's psychological well-being, the formation and maintenance of personal relationships, group memberships and social identity, the workplace, and community involvement. The evidence suggests that while these effects are largely dependent on the particular goals that users bring to the interaction-such as self-expression, affiliation, or competition-they also interact in important ways with the unique qualities of the Internet communication situation.", "title": "" }, { "docid": "3fe9dfb8334111ea56d40010ff7a70fa", "text": "1 Summary. The paper presents the LINK application, which is a decision-support system dedicated for operational and investigational activities of homeland security services. The paper briefly discusses issues of criminal analysis, possibilities of utilizing spatial (geographical) information together with crime mapping and spatial analyses. LINK – ŚRODOWISKO ANALIZ KRYMINALNYCH WYKORZYSTUJĄCE NARZĘRZIA ANALIZ GEOPRZESTRZENNYCH Streszczenie. Artykuł prezentuje system LINK będący zintegrowanym środowi-skiem wspomagania analizy kryminalnej przeznaczonym do działań operacyjnych i śledczych służb bezpieczeństwa wewnętrznego. W artykule omówiono problemy analizy kryminalnej, możliwość wykorzystania informacji o charakterze przestrzen-nym oraz narzędzia i metody analiz geoprzestrzennych.", "title": "" }, { "docid": "dc4d11c0478872f3882946580bb10572", "text": "An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved-and in some cases entirely new-forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. 
The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define \"neurosecurity\"-a version of computer science security principles and methods applied to neural engineering-and discuss why neurosecurity should be a critical consideration in the design of future neural devices.", "title": "" }, { "docid": "d4c7efe10b1444d0f9cb6032856ba4e1", "text": "This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on micromechanics based design methodology of strain-hardening cement based composites. As example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified. * To appear in Fiber Reinforced Concrete: Present and the Future, Eds: N. Banthia, A. Bentur, and A. Mufti, Canadian Society of Civil Engineers, 1997.", "title": "" }, { "docid": "2bd6dab3aa836728f606732652e4a46d", "text": "A method called the eigensystem realization algorithm is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular-value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system and noise modes. For illustration of the algorithm, an example is shown using experimental data from the Galileo spacecraft.", "title": "" }, { "docid": "6b659a4bc83f173b8e6e4adf41da6e67", "text": "Pervasive smart meters that continuously measure power usage by consumers within a smart (power) grid are providing utilities and power systems researchers with unprecedented volumes of information through streams that need to be processed and analyzed in near realtime. We introduce the use of Cloud platforms to perform scalable, latency sensitive stream processing for eEngineering applications in the smart grid domain. One unique aspect of our work is the use of adaptive rate control to throttle the rate of generation of power events by smart meters, which meets accuracy requirements of smart grid applications while consuming 50% lesser bandwidth resources in the Cloud.", "title": "" }, { "docid": "6e527b021720cc006ec18a996abf36b5", "text": "Flow cytometry is a sophisticated instrument measuring multiple physical characteristics of a single cell such as size and granularity simultaneously as the cell flows in suspension through a measuring device. Its working depends on the light scattering features of the cells under investigation, which may be derived from dyes or monoclonal antibodies targeting either extracellular molecules located on the surface or intracellular molecules inside the cell. 
This approach makes flow cytometry a powerful tool for detailed analysis of complex populations in a short period of time. This review covers the general principles and selected applications of flow cytometry such as immunophenotyping of peripheral blood cells, analysis of apoptosis and detection of cytokines. Additionally, this report provides a basic understanding of flow cytometry technology essential for all users as well as the methods used to analyze and interpret the data. Moreover, recent progresses in flow cytometry have been discussed in order to give an opinion about the future importance of this technology.", "title": "" }, { "docid": "012a194f9296a510f209e0cd33f2f3da", "text": "Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game where a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes that provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to their potential for neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article is to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments.", "title": "" }, { "docid": "ead7484035be253c2d879992bc7ef632", "text": "Solutions are urgently required for the growing number of infections caused by antibiotic-resistant bacteria. Bacteriocins, which are antimicrobial peptides produced by certain bacteria, might warrant serious consideration as alternatives to traditional antibiotics. These molecules exhibit significant potency against other bacteria (including antibiotic-resistant strains), are stable and can have narrow or broad activity spectra. Bacteriocins can even be produced in situ in the gut by probiotic bacteria to combat intestinal infections. Although the application of specific bacteriocins might be curtailed by the development of resistance, an understanding of the mechanisms by which such resistance could emerge will enable researchers to develop strategies to minimize this potential problem.", "title": "" }, { "docid": "5e9dce428a2bcb6f7bc0074d9fe5162c", "text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. 
The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.", "title": "" }, { "docid": "70f396f6904d7012e5af1099bbb11e2f", "text": "A 6-day-old, male, Kilis goat kid with complaints of poor sucking reflex, dysuria, and swelling on the scrotal area was referred and it began to urinate when the sac was pressed on. On the clinical examination of the kid, it was observed that the urethral orifice and process narrowed down. Skin laid between anus-scrotum did not close fully on the ventral line. The most important finding was the penile urethral dilatation, which caused the fluctuating swelling on the scrotal region. Phimosis and two ectopic testis were also found on the right and left side in front of the preputium. There were no pathological changes in the hematological and urine analyses. Urethral diverticulum was treated by urethrostomy and hypoplasia of penis was noted during operation. No treatment for hypoplasia penis, phimosis and ectopic testis was performed. Postoperatively, kid healed and urination via urethral fistula without any complications was observed.", "title": "" }, { "docid": "9a2038b1dc1a48081ba30a4535c498c2", "text": "Social media sites (e.g., Twitter) have been used for surveillance of drug safety at the population level, but studies that focus on the effects of medications on specific sets of individuals have had to rely on other sources of data. Mining social media data for this information would require the ability to distinguish indications of personal medication intake in this media. Towards that end, this paper presents an annotated corpus that can be used to train machine learning systems to determine whether a tweet that mentions a medication indicates that the individual posting has taken that medication (at a specific time). To demonstrate the utility of the corpus as a training set, we present baseline results of supervised classification.", "title": "" }, { "docid": "97f8b8ee60e3f03e64833a16aaf5e743", "text": "OBJECTIVE\nA pilot randomized controlled trial (RCT) of the effectiveness of occupational therapy using a sensory integration approach (OT-SI) was conducted with children who had sensory modulation disorders (SMDs). This study evaluated the effectiveness of three treatment groups. In addition, sample size estimates for a large scale, multisite RCT were calculated.\n\n\nMETHOD\nTwenty-four children with SMD were randomly assigned to one of three treatment conditions; OT-SI, Activity Protocol, and No Treatment. Pretest and posttest measures of behavior, sensory and adaptive functioning, and physiology were administered.\n\n\nRESULTS\nThe OT-SI group, compared to the other two groups, made significant gains on goal attainment scaling and on the Attention subtest and the Cognitive/Social composite of the Leiter International Performance Scale-Revised. 
Compared to the control groups, OT-SI improvement trends on the Short Sensory Profile, Child Behavior Checklist, and electrodermal reactivity were in the hypothesized direction.\n\n\nCONCLUSION\nFindings suggest that OT-SI may be effective in ameliorating difficulties of children with SMD.", "title": "" }, { "docid": "d7cf6950e58d7971eda60ea7a3b172d9", "text": "Affect detection is a key component in developing intelligent educational interfaces that are capable of responding to the affective needs of students. In this paper, computer vision and machine learning techniques were used to detect students' affect as they used an educational game designed to teach fundamental principles of Newtonian physics. Data were collected in the real-world environment of a school computer lab, which provides unique challenges for detection of affect from facial expressions (primary channel) and gross body movements (secondary channel) - up to thirty students at a time participated in the class, moving around, gesturing, and talking to each other. Results were cross validated at the student level to ensure generalization to new students. Classification was successful at levels above chance for off-task behavior (area under receiver operating characteristic curve or (AUC = .816) and each affective state including boredom (AUC =.610), confusion (.649), delight (.867), engagement (.679), and frustration (.631) as well as a five-way overall classification of affect (.655), despite the noisy nature of the data. Implications and prospects for affect-sensitive interfaces for educational software in classroom environments are discussed.", "title": "" }, { "docid": "b1932eb235932a45a6bd533876ee3867", "text": "Received: 22 October 2009 Revised: 5 September 2011 Accepted: 16 September 2011 Abstract Enterprise Content Management (ECM) focuses on managing all types of content being used in organizations. It is a convergence of previous approaches that focus on managing only particular types of content, as for example documents or web pages. In this paper, we present an overview of previous research by categorizing the existing literature. We show that scientific literature on ECM is limited and there is no consensus on the definition of ECM. Therefore, the literature review surfaced several ECM definitions that we merge into a more consistent and comprehensive definition of ECM. The Functional ECM Framework (FEF) provides an overview of the potential functionalities of ECM systems (ECMSs). We apply the FEF in three case studies. The FEF can serve to communicate about ECMSs, to understand them and to direct future research. It can also be the basis for a more formal reference architecture and it can be used as an assessment tool by practitioners for comparing the functionalities provided by existing ECMSs. European Journal of Information Systems (2011) advance online publication, 25 October 2011; doi:10.1057/ejis.2011.41", "title": "" }, { "docid": "c09391a25defcb797a7c8da3f429fafa", "text": "BACKGROUND\nTo examine the postulated relationship between Ambulatory Care Sensitive Conditions (ACSC) and Primary Health Care (PHC) in the US context for the European context, in order to develop an ACSC list as markers of PHC effectiveness and to specify which PHC activities are primarily responsible for reducing hospitalization rates.\n\n\nMETHODS\nTo apply the criteria proposed by Solberg and Weissman to obtain a list of codes of ACSC and to consider the PHC intervention according to a panel of experts. 
Five selection criteria: i) existence of prior studies; ii) hospitalization rate at least 1/10,000 or 'risky health problem'; iii) clarity in definition and coding; iv) potentially avoidable hospitalization through PHC; v) hospitalization necessary when health problem occurs. Fulfilment of all criteria was required for developing the final ACSC list. A sample of 248,050 discharges corresponding to 2,248,976 inhabitants of Catalonia in 1996 provided hospitalization rate data. A Delphi survey was performed with a group of 44 experts reviewing 113 ICD diagnostic codes (International Classification of Diseases, 9th Revision, Clinical Modification), previously considered to be ACSC.\n\n\nRESULTS\nThe five criteria selected 61 ICD as a core list of ACSC codes and 90 ICD for an expanded list.\n\n\nCONCLUSIONS\nA core list of ACSC as markers of PHC effectiveness identifies health conditions amenable to specific aspects of PHC and minimizes the limitations attributable to variations in hospital admission policies. An expanded list should be useful to evaluate global PHC performance and to analyse market responsibility for ACSC by PHC and Specialist Care.", "title": "" }, { "docid": "8c428f4a51091f62f1af26c85dd588fc", "text": "In this study, we explored application of Word2Vec and Doc2Vec for sentiment analysis of clinical discharge summaries. We applied unsupervised learning since the data sets did not have sentiment annotations. Note that unsupervised learning is a more realistic scenario than supervised learning which requires an access to a training set of sentiment-annotated data. We aim to detect if there exists any underlying bias towards or against a certain disease. We used SentiWordNet to establish a gold sentiment standard for the data sets and evaluate performance of Word2Vec and Doc2Vec methods. We have shown that the Word2vec and Doc2Vec methods complement each other’s results in sentiment analysis of the data sets.", "title": "" } ]
scidocsrr
4106ab11f75c0839533b3a0757f6989b
A Knowledge Compilation Map
[ { "docid": "fcaab6be1862a55036fa360d01b7952d", "text": "The paper presents algorithm directional resolution, a variation on the original Davis-Putnam algorithm, and analyzes its worst-case behavior as a function of the topological structure of the theories. The notions of induced width and diversity are shown to play a key role in bounding the complexity of the procedure. The importance of our analysis lies in highlighting structure-based tractable classes of satisfiability and in providing theoretical guarantees on the time and space complexity of the algorithm. Contrary to previous assessments, we show that for many theories directional resolution could be an effective procedure. Our empirical tests confirm theoretical prediction, showing that on problems with special structures, like chains, directional resolution greatly outperforms one of the most effective satisfiability algorithms known to date, namely the popular Davis-Putnam procedure.", "title": "" } ]
[ { "docid": "122c3bb1eef57338f841d9ad6b2756c0", "text": "In this paper the concept of interval valued intuitionistic fuzzy soft rough sets is introduced. Also interval valued intuitionistic fuzzy soft rough set based multi criteria group decision making scheme is presented, which refines the primary evaluation of the whole expert group and enables us to select the optimal object in a most reliable manner. The proposed scheme is illustrated by an example regarding the candidate selection problem. 2010 AMS Classification: 54A40, 03E72, 03E02, 06D72", "title": "" }, { "docid": "e27f6b7f6d2ccd0fd3420ffeae63ac84", "text": "XML (eXtensible Markup Language) has emerged as a prevalent standard for document representation and exchange on the Web. It is often the case that XML documents contain information of different sensitivity degrees that must be selectively shared by (possibly large) user communities. There is thus the need for models and mechanisms enabling the specification and enforcement of access control policies for XML documents. Mechanisms are also required enabling a secure and selective dissemination of documents to users, according to the authorizations that these users have. In this article, we make several contributions to the problem of secure and selective dissemination of XML documents. First, we define a formal model of access control policies for XML documents. Policies that can be defined in our model take into account both user profiles, and document contents and structures. We also propose an approach, based on an extension of the Cryptolope#8482; approach [Gladney and Lotspiech 1997], which essentially allows one to send the same document to all users, and yet to enforce the stated access control policies. Our approach consists of encrypting different portions of the same document according to different encryption keys, and selectively distributing these keys to the various users according to the access control policies. We show that the number of encryption keys that have to be generated under our approach is minimal and we present an architecture to support document distribution.", "title": "" }, { "docid": "50442aa4ef1d7c89822d77a5b3a0ee85", "text": "The utilization of an AC induction motor (ACIM) ranges from consumer to automotive applications, with a variety of power and sizes. From the multitude of possible applications, some require the achievement of high speed while having a high torque value only at low speeds. Two applications needing this requirement are washing machines in consumer applications and traction in powertrain applications. These requirements impose a certain type of approach for induction motor control, which is known as “field weakening.”", "title": "" }, { "docid": "be4defd26cf7c7a29a85da2e15132be9", "text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. 
In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.", "title": "" }, { "docid": "7ab87738e0dc081d26a8cf223b957833", "text": "We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, linear discriminant analysis. We also explored feature selection techniques, including the use of AdaBoost for feature selection prior to classification by SVM or LDA. Best results were obtained by selecting a subset of Gabor filters using AdaBoost followed by classification with support vector machines. The system operates in real-time, and obtained 93% correct generalization to novel subjects for a 7-way forced choice on the Cohn-Kanade expression dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics. We applied the system to to fully automated recognition of facial actions (FACS). The present system classifies 17 action units, whether they occur singly or in combination with other actions, with a mean accuracy of 94.8%. We present preliminary results for applying this system to spontaneous facial expressions.", "title": "" }, { "docid": "0b9ed15b4aaefb22aa8f0bb2b6c8fa00", "text": "Most existing Multi-View Stereo (MVS) algorithms employ the image matching method using Normalized Cross-Correlation (NCC) to estimate the depth of an object. The accuracy of the estimated depth depends on the step size of the depth in NCC-based window matching. The step size of the depth must be small for accurate 3D reconstruction, while the small step significantly increases computational cost. To improve the accuracy of depth estimation and reduce the computational cost, this paper proposes an efficient image matching method for MVS. The proposed method is based on Phase-Only Correlation (POC), which is a high-accuracy image matching technique using the phase components in Fourier transforms. The advantages of using POC are (i) the correlation function is obtained only by one window matching and (ii) the accurate sub-pixel displacement between two matching windows can be estimated by fitting the analytical correlation peak model of the POC function. Thus, using POC-based window matching for MVS makes it possible to estimate depth accurately from the correlation function obtained only by one window matching. 
Through a set of experiments using the public MVS datasets, we demonstrate that the proposed method performs better in terms of accuracy and computational cost than the conventional method.", "title": "" }, { "docid": "ee8b20f685d4c025e1d113a676728359", "text": "Two experiments were conducted to evaluate the effects of increasing concentrations of glycerol in concentrate diets on total tract digestibility, methane (CH4) emissions, growth, fatty acid profiles, and carcass traits of lambs. In both experiments, the control diet contained 57% barley grain, 14.5% wheat dried distillers grain with solubles (WDDGS), 13% sunflower hulls, 6.5% beet pulp, 6.3% alfalfa, and 3% mineral-vitamin mix. Increasing concentrations (7, 14, and 21% dietary DM) of glycerol in the dietary DM were replaced for barley grain. As glycerol was added, alfalfa meal and WDDGS were increased to maintain similar concentrations of CP and NDF among diets. In Exp.1, nutrient digestibility and CH4 emissions from 12 ram lambs were measured in a replicated 4 × 4 Latin square experiment. In Exp. 2, lamb performance was evaluated in 60 weaned lambs that were blocked by BW and randomly assigned to 1 of the 4 dietary treatments and fed to slaughter weight. In Exp. 1, nutrient digestibility and CH4 emissions were not altered (P = 0.15) by inclusion of glycerol in the diets. In Exp.2, increasing glycerol in the diet linearly decreased DMI (P < 0.01) and tended (P = 0.06) to reduce ADG, resulting in a linearly decreased final BW. Feed efficiency was not affected by glycerol inclusion in the diets. Carcass traits and total SFA or total MUFA proportions of subcutaneous fat were not affected (P = 0.77) by inclusion of glycerol, but PUFA were linearly decreased (P < 0.01). Proportions of 16:0, 10t-18:1, linoleic acid (18:2 n-6) and the n-6/n-3 ratio were linearly reduced (P < 0.01) and those of 18:0 (stearic acid), 9c-18:1 (oleic acid), linearly increased (P < 0.01) by glycerol. When included up to 21% of diet DM, glycerol did not affect nutrient digestibility or CH4 emissions of lambs fed barley based finishing diets. Glycerol may improve backfat fatty acid profiles by increasing 18:0 and 9c-18:1 and reducing 10t-18:1 and the n-6/n-3 ratio.", "title": "" }, { "docid": "0444b38c0d20c999df4cb1294b5539c3", "text": "Decimal hardware arithmetic units have recently regained popularity, as there is now a high demand for high performance decimal arithmetic. We propose a novel method for carry-free addition of decimal numbers, where each equally weighted decimal digit pair of the two operands is partitioned into two weighted bit-sets. The arithmetic values of these bit-sets are evaluated, in parallel, for fast computation of the transfer digit and interim sum. In the proposed fully redundant adder (VS semi-redundant ones such as decimal carry-save adders) both operands and sum are redundant decimal numbers with overloaded decimal digit set [0, 15]. This adder is shown to improve upon the latest high performance similar works and outperform all the previous alike adders. However, there is a drawback that the adder logic cannot be efficiently adapted for subtraction. Nevertheless, this adder and its restricted-input varieties are shown to efficiently fit in the design of a parallel decimal multiplier. The two-to-one partial product reduction ratio that is attained via the proposed adder has lead to a VLSI-friendly recursive partial product reduction tree. 
Two alternative architectures for decimal multipliers are presented; one is slower, but area-improved, and the other one consumes more area, but is delay-improved. However, both are faster in comparison with previously reported parallel decimal multipliers. The area and latency comparisons are based on logical effort analysis under the same assumptions for all the evaluated adders and multipliers. Moreover, performance correctness of all the adders is checked via running exhaustive tests on the corresponding VHDL codes. For more reliable evaluation, we report the result of synthesizing these adders by Synopsys Design Compiler using TSMC 0.13mm standard CMOS process under various time constrains. & 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "924ad8ede64cf872d979098f41214528", "text": "BACKGROUND\nSurveys are popular methods to measure public perceptions in emergencies but can be costly and time consuming. We suggest and evaluate a complementary \"infoveillance\" approach using Twitter during the 2009 H1N1 pandemic. Our study aimed to: 1) monitor the use of the terms \"H1N1\" versus \"swine flu\" over time; 2) conduct a content analysis of \"tweets\"; and 3) validate Twitter as a real-time content, sentiment, and public attention trend-tracking tool.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nBetween May 1 and December 31, 2009, we archived over 2 million Twitter posts containing keywords \"swine flu,\" \"swineflu,\" and/or \"H1N1.\" using Infovigil, an infoveillance system. Tweets using \"H1N1\" increased from 8.8% to 40.5% (R(2) = .788; p<.001), indicating a gradual adoption of World Health Organization-recommended terminology. 5,395 tweets were randomly selected from 9 days, 4 weeks apart and coded using a tri-axial coding scheme. To track tweet content and to test the feasibility of automated coding, we created database queries for keywords and correlated these results with manual coding. Content analysis indicated resource-related posts were most commonly shared (52.6%). 4.5% of cases were identified as misinformation. News websites were the most popular sources (23.2%), while government and health agencies were linked only 1.5% of the time. 7/10 automated queries correlated with manual coding. Several Twitter activity peaks coincided with major news stories. Our results correlated well with H1N1 incidence data.\n\n\nCONCLUSIONS\nThis study illustrates the potential of using social media to conduct \"infodemiology\" studies for public health. 2009 H1N1-related tweets were primarily used to disseminate information from credible sources, but were also a source of opinions and experiences. Tweets can be used for real-time content analysis and knowledge translation research, allowing health authorities to respond to public concerns.", "title": "" }, { "docid": "f400b94dd5f4d4210bd6873b44697e3a", "text": "A system for monitoring and forecasting urban air pollution is presented in this paper. The system uses low-cost air-quality monitoring motes that are equipped with an array of gaseous and meteorological sensors. These motes wirelessly communicate to an intelligent sensing platform that consists of several modules. The modules are responsible for receiving and storing the data, preprocessing and converting the data into useful information, forecasting the pollutants based on historical information, and finally presenting the acquired information through different channels, such as mobile application, Web portal, and short message service. 
The focus of this paper is on the monitoring system and its forecasting module. Three machine learning (ML) algorithms are investigated to build accurate forecasting models for one-step and multi-step ahead of concentrations of ground-level ozone (O3), nitrogen dioxide (NO2), and sulfur dioxide (SO2). These ML algorithms are support vector machines, M5P model trees, and artificial neural networks (ANN). Two types of modeling are pursued: 1) univariate and 2) multivariate. The performance evaluation measures used are prediction trend accuracy and root mean square error (RMSE). The results show that using different features in multivariate modeling with M5P algorithm yields the best forecasting performances. For example, using M5P, RMSE is at its lowest, reaching 31.4, when hydrogen sulfide (H2S) is used to predict SO2. Contrarily, the worst performance, i.e., RMSE of 62.4, for SO2 is when using ANN in univariate modeling. The outcome of this paper can be significantly useful for alarming applications in areas with high air pollution levels.", "title": "" }, { "docid": "f1efe8868f19ccbb4cf2ab5c08961cdb", "text": "High peak-to-average power ratio (PAPR) has been one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) systems. In this letter, we propose a novel PAPR reduction scheme, known as PAPR reducing network (PRNet), based on the autoencoder architecture of deep learning. In the PRNet, the constellation mapping and demapping of symbols on each subcarrier is determined adaptively through a deep learning technique, such that both the bit error rate (BER) and the PAPR of the OFDM system are jointly minimized. We used simulations to show that the proposed scheme outperforms conventional schemes in terms of BER and PAPR.", "title": "" }, { "docid": "9c87c09676570500f6b87ed694aff1dc", "text": "The integration of Doubly Fed Induction Generator (DFIG) based wind farm into the power grid has become a major concern for power system engineers today. Voltage stability is a key factor to maintain DFIG-based wind farm in service during the grid disturbances. This paper investigates the implementation of STATCOM to overcome the voltage stability issue for DFIG-based wind farm connected to a distribution network. The study includes the implementation of a static synchronous compensator (STATCOM) as a dynamic reactive power compensator at the point of common coupling to maintain stable voltage by protecting DFIG-based wind farm interconnected to a distribution system from going offline during and after the disturbances. The developed system is simulated in MATLAB/Simulink and the results show that the STATCOM improves the transient voltage stability and therefore helps the wind turbine generator system to remain in service during grid faults.", "title": "" }, { "docid": "bd671831032f704a06344bd46ba8f694", "text": "There has been an increase in the attention paid to the strategic potential of information systems and a new willingness to accept the possibility that information systems can be the source of strategic gains. This belief is reflected in a host of publications, from the popular press to respected journals. Much of this has been supported by a very limited set of prominent and well publicized success stories, principally involving marketing and distribution, financial services, and the airlines. Unfortunately, there has been little attempt at an analysis that abstracts from these experiences to determine factors that determine strategic success. 
This can be attributed in part to the absence of attention paid to unsuccessful ventures in the use of information technology for competitive advantage. Although this paper relies on the same anecdotes, it augments them with data on a few unsuccessful attempts to exploit information technology and with economic theory where appropriate. General conditions that appear necessary for sustainable competitive advantage are developed.", "title": "" }, { "docid": "d78609519636e288dae4b1fce36cb7a6", "text": "Intelligent vehicles have increased their capabilities for highly and, even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to autonomously driving on complex environments. The main goal is focused on executing strategies to improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still need further efforts for a real environment implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the technique used by research teams, their contributions in motion planning, and a comparison among these techniques is also presented. Relevant works in the overtaking and obstacle avoidance maneuvers are presented, allowing the understanding of the gaps and challenges to be addressed in the next years. Finally, an overview of future research direction and applications is given.", "title": "" }, { "docid": "a8265b42dca4a70a017960fa064d728e", "text": "Community is an important attribute of Pocket Switched Networks (PSN), because mobile devices are carried by people who tend to belong to communities. We analysed community structure from mobility traces and used for forwarding algorithms [12], which shows significant impact of community. Here, we propose and evaluate three novel distributed community detection approaches with great potential to detect both static and temporal communities. We find that with suitable configuration of the threshold values, the distributed community detection can approximate their corresponding centralised methods up to 90% accuracy.", "title": "" }, { "docid": "792694fbea0e2e49a454ffd77620da47", "text": "Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system which in turn generates complex feedback loops between social and ecological systems. Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. 
nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical", "title": "" }, { "docid": "294d15a70c2b9abb9da21efc41c41c90", "text": "Intelligent fault diagnosis is a promising tool to deal with mechanical big data due to its ability in rapidly and efficiently processing collected signals and providing accurate diagnosis results. In traditional intelligent diagnosis methods, however, the features are manually extracted depending on prior knowledge and diagnostic expertise. Such processes take advantage of human ingenuity but are time-consuming and labor-intensive. Inspired by the idea of unsupervised feature learning that uses artificial intelligence techniques to learn features from raw data, a two-stage learning method is proposed for intelligent diagnosis of machines. In the first learning stage of the method, sparse filtering, an unsupervised two-layer neural network, is used to directly learn features from mechanical vibration signals. In the second stage, softmax regression is employed to classify the health conditions based on the learned features. The proposed method is validated by a motor bearing dataset and a locomotive bearing dataset, respectively. The results show that the proposed method obtains fairly high diagnosis accuracies and is superior to the existing methods for the motor bearing dataset. Because of learning features adaptively, the proposed method reduces the need of human labor and makes intelligent fault diagnosis handle big data more easily.", "title": "" }, { "docid": "14683ea24390341900318cbb75e8b85d", "text": "Through numerical simulations and calibrated experiments, the aim of the work presented in this paper is placed on the development of a polarity changeable magnetizing platform for a rotary magnetic encoder using a permanent magnet and thin silicon steel plate. Parameter studies are conducted via finite-element analyses to obtain optimized geometric design parameters for the magnetizer, leading to desirable magnetic flux density and effective magnetized patterns on the magnetic media.", "title": "" }, { "docid": "2959be17f8186f6db5c479d39cc928db", "text": "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. 
Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8% and 40.5% respectively on PASCAL VOC 2009.", "title": "" } ]
scidocsrr
8ed009a4d730c4f32609a178f64f471e
Scene Text Recognition: No Country for Old Men?
[ { "docid": "58e3444f3d35d0ad45e5637e7c53efb5", "text": "An efficient method for text localization and recognition in real-world images is proposed. Thanks to effective pruning, it is able to exhaustively search the space of all character sequences in real time (200ms on a 640x480 image). The method exploits higher-order properties of text such as word text lines. We demonstrate that the grouping stage plays a key role in the text localization performance and that a robust and precise grouping stage is able to compensate errors of the character detector. The method includes a novel selector of Maximally Stable Extremal Regions (MSER) which exploits region topology. Experimental validation shows that 95.7% characters in the ICDAR dataset are detected using the novel selector of MSERs with a low sensitivity threshold. The proposed method was evaluated on the standard ICDAR 2003 dataset where it achieved state-of-the-art results in both text localization and recognition.", "title": "" }, { "docid": "823d77bc2d761810467c8eea87c2dd31", "text": "This paper proposes a novel hybrid method to robustly and accurately localize texts in natural scene images. A text region detector is designed to generate a text confidence map, based on which text components can be segmented by local binarization approach. A Conditional Random Field (CRF) model, considering the unary component property as well as binary neighboring component relationship, is then presented to label components as \"text\" or \"non-text\". Last, text components are grouped into text lines with an energy minimization approach. Experimental results show that the proposed method gives promising performance comparing with the existing methods on ICDAR 2003 competition dataset.", "title": "" }, { "docid": "8c3639614a66f1ec04d7a57b51377124", "text": "Scene text extraction methodologies are usually based in classification of individual regions or patches, using a priori knowledge for a given script or language. Human perception of text, on the other hand, is based on perceptual organisation through which text emerges as a perceptually significant group of atomic objects. Therefore humans are able to detect text even in languages and scripts never seen before. In this paper, we argue that the text extraction problem could be posed as the detection of meaningful groups of regions. We present a method built around a perceptual organisation framework that exploits collaboration of proximity and similarity laws to create text-group hypotheses. Experiments demonstrate that our algorithm is competitive with state of the art approaches on a standard dataset covering text in variable orientations and two languages.", "title": "" } ]
[ { "docid": "e0b96837b0908aa859fa56a2b0a5701c", "text": "Being able to automatically describe the content of an image using properly formed English sentences is a challenging task, but it could have great impact by helping visually impaired people better understand their surroundings. Most modern mobile phones are able to capture photographs, making it possible for the visually impaired to make images of their environments. These images can then be used to generate captions that can be read out loud to the visually impaired, so that they can get a better sense of what is happening around them. In this paper, we present a deep recurrent architecture that automatically generates brief explanations of images. Our models use a convolutional neural network (CNN) to extract features from an image. These features are then fed into a vanilla recurrent neural network (RNN) or a Long Short-Term Memory (LSTM) network to generate a description of the image in valid English. Our models achieve comparable to state of the art performance, and generate highly descriptive captions that can potentially greatly improve the lives of visually impaired people.", "title": "" }, { "docid": "558218868956bcd05363825fb42ef75e", "text": "Imitation learning algorithms learn viable policies by imitating an expert’s behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert’s behavior is available as a fixed set of trajectories.We evaluate in terms of the expert’s cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-atRisk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus, the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.", "title": "" }, { "docid": "79560f7ec3c5f42fe5c5e0ad175fe6a0", "text": "The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We are addressing these challenges by defining resilience properties of ANN-based classifiers as the maximum amount of input or sensor perturbation which is still tolerated. This problem of computing maximum perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number (up to a certain limit) of computing cores in our experiments. 
We demonstrate the effectiveness and scalability of our approach by means of computing maximum resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.", "title": "" }, { "docid": "68bcc07055db2c714001ccb5f8dc96a7", "text": "The concept of an antipodal bipolar fuzzy graph of a given bipolar fuzzy graph is introduced. Characterizations of antipodal bipolar fuzzy graphs are presented when the bipolar fuzzy graph is complete or strong. Some isomorphic properties of antipodal bipolar fuzzy graph are discussed. The notion of self median bipolar fuzzy graphs of a given bipolar fuzzy graph is also introduced.", "title": "" }, { "docid": "8a35d273a4f45e64b43bf3a7d02db4ed", "text": "Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semisupervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of oversmoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.", "title": "" }, { "docid": "60999276e84cbd46d778c62439014598", "text": "Graph comprehension is constrained by the goals of the cognitive system that processes the graph and by the context in which the graph appears. In this paper we report the results of a study using a sentence-graph verification paradigm. We recorded participants’ reaction times to indicate whether the information contained in a simple bar graph matched a written description of the graph. Aside from the consistency of visual and verbal information, we manipulated whether the graph was ascending or descending, the relational term in the verbal description, and the labels of the bars of the graph. Our results showed that the biggest source of variance in people’s reaction times is whether the order in which the referents appear in the graph is the same as the order in which they appear in the sentence. The implications of this finding for contemporary theories of graph comprehension are discussed.", "title": "" }, { "docid": "f722d8576190722b55d701cc32f6f268", "text": "The analysis of network data is an area that is rapidly growing, both within and outside of the discipline of statistics. This review provides a concise summary of methods and models used in the statistical analysis of network data, including the Erdős-Renyi model, the exponential family class of network models and recently developed latent variable models. Many of the methods and models are illustrated by application to the well-known Zachary karate dataset. 
Software routines available for implementing methods are emphasised throughout. The aim of this paper is to provide a review with enough detail about many common classes of network model to whet the appetite and to point the way to further reading.", "title": "" }, { "docid": "3fca910955e8412bc6433e2063f93965", "text": "Channel rendezvous is a prerequisite for secondary users (SUs) to set up communications in cognitive radio networks (CRNs). It is expected that the rendezvous can be achieved within a short finite time for delay-sensitive applications and over all available channels to increase the robustness to unstable channels. Some existing works suffer from a small number of rendezvous channels and can only guarantee rendezvous under the undesired requirements such as synchronous clock, homogeneous available channels, predetermined roles and explicit SUs' identifiers (IDs). In this paper, to address these limitations, we employ the notion of Disjoint Set Cover (DSC) and propose a DSC-based Rendezvous (DSCR) algorithm. We first present an approximation algorithm to construct one DSC. The variant permutations of elements in the ingeniously constructed DSC are then utilized to regulate the order of accessing channels, enabling SUs to rendezvous on all available channels within a short duration. We derive the theoretical maximum and expected rendezvous latency and prove the full rendezvous degree of the DSCR algorithm. Extensive simulations show that the DSCR algorithm can significantly reduce the rendezvous latency compared to existing algorithms.", "title": "" }, { "docid": "85c687f7b01d635fa9f46d0dd61098d3", "text": "This paper provides a comprehensive survey of the technical achievements in the research area of Image Retrieval, especially Content-Based Image Retrieval, an area so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multi-dimensional indexing, and system design, three of the fundamental bases of Content-Based Image Retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identified, and future promising research directions are suggested.", "title": "" }, { "docid": "f315dca8c08645292c96aa1425d94a24", "text": "WebRTC has quickly become popular as a video conferencing platform, partly due to the fact that many browsers support it. WebRTC utilizes the Google Congestion Control (GCC) algorithm to provide congestion control for realtime communications over UDP. The performance during a WebRTC call may be influenced by several factors, including the underlying WebRTC implementation, the device and network characteristics, and the network topology. In this paper, we perform a thorough performance evaluation of WebRTC both in emulated synthetic network conditions as well as in real wired and wireless networks. Our evaluation shows that WebRTC streams have a slightly higher priority than TCP flows when competing with cross traffic.
In general, while in several of the considered scenarios WebRTC performed as expected, we observed important cases where there is room for improvement. These include the wireless domain and the newly added support for the video codecs VP9 and H.264 that does not perform as expected.", "title": "" }, { "docid": "5eda080188512f8d3c5f882c1114e1c8", "text": "Knowledge mapping is one of the most popular techniques used to identify knowledge in organizations. Using knowledge mapping techniques; a large and complex set of knowledge resources can be acquired and navigated more easily. Knowledge mapping has attracted the senior managers' attention as an assessment tool in recent years and is expected to measure deep conceptual understanding and allow experts in organizations to characterize relationships between concepts within a domain visually. Here the very critical issue is how to identify and choose an appropriate knowledge mapping technique. This paper aims to explore the different types of knowledge mapping techniques and give a general idea of their target contexts to have the way for choosing the appropriate map. It attempts to illustrate which techniques are appropriate, why and where they can be applied, and how these mapping techniques can be managed. The paper is based on the comprehensive review of papers on knowledge mapping techniques. In addition, this paper attempts to further clarify the differences among these knowledge mapping techniques and the main purpose for using each. Eventually, it is recommended that experts must understand the purpose for which the map is being developed before proceeding to activities related to any knowledge management dimensions; in order to the appropriate knowledge mapping technique .", "title": "" }, { "docid": "abaf3d722acb6a641a481cb5324bc765", "text": "Numerous studies have demonstrated a strong connection between the experience of stigma and the well-being of the stigmatized. But in the area of mental illness there has been controversy surrounding the magnitude and duration of the effects of labeling and stigma. One of the arguments that has been used to downplay the importance of these factors is the substantial body of evidence suggesting that labeling leads to positive effects through mental health treatment. However, as Rosenfield (1997) points out, labeling can simultaneously induce both positive consequences through treatment and negative consequences through stigma. In this study we test whether stigma has enduring effects on well-being by interviewing 84 men with dual diagnoses of mental disorder and substance abuse at two points in time--at entry into treatment, when they were addicted to drugs and had many psychiatric symptoms and then again after a year of treatment, when they were far less symptomatic and largely drug- and alcohol-free. We found a relatively strong and enduring effect of stigma on well-being. This finding indicates that stigma continues to complicate the lives of the stigmatized even as treatment improves their symptoms and functioning. It follows that if health professionals want to maximize the well-being of the people they treat, they must address stigma as a separate and important factor in its own right.", "title": "" }, { "docid": "ec3eac65e52b62af3ab1599e643c49ac", "text": "Topic models for text corpora comprise a popular family of methods that have inspired many extensions to encode properties such as sparsity, interactions with covariates, and the gradual evolution of topics. 
In this paper, we combine certain motivating ideas behind variations on topic models with modern techniques for variational inference to produce a flexible framework for topic modeling that allows for rapid exploration of different models. We first discuss how our framework relates to existing models, and then demonstrate that it achieves strong performance, with the introduction of sparsity controlling the trade off between perplexity and topic coherence. We have released our code and preprocessing scripts to support easy future comparisons and exploration.", "title": "" }, { "docid": "c528fce759ab23fb00c697c2b279ad19", "text": "Part of the long lasting cultural heritage of humanity is the art of classical poems, which are created by fitting words into certain formats and representations. Automatic poetry composition by computers is considered as a challenging problem which requires high Artificial Intelligence assistance. This study attracts more and more attention in the research community. In this paper, we formulate the poetry composition task as a natural language generation problem using recurrent neural networks. Given user specified writing intents, the system generates a poem via sequential language modeling. Unlike the traditional one-pass generation for previous neural network models, poetry composition needs polishing to satisfy certain requirements. Hence, we propose a new generative model with a polishing schema, and output a refined poem composition. In this way, the poem is generated incrementally and iteratively by refining each line. We run experiments based on large datasets of 61,960 classic poems in Chinese. A comprehensive evaluation, using perplexity and BLEU measurements as well as human judgments, has demonstrated the effectiveness of our proposed approach.", "title": "" }, { "docid": "73b4cceb1546a94260c75ae8bed8edd8", "text": "We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship – an anchor point x is similar to a set of positive points Y , and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. 
As a result, the proxy-loss improves on state-of-art results for three standard zero-shot learning datasets, by up to 15% points, while converging three times as fast as other triplet-based losses.", "title": "" }, { "docid": "42d27f1a6ad81e13c449a08a6ada34d6", "text": "Face detection of comic characters is a necessary step in most applications, such as comic character retrieval, automatic character classification and comic analysis. However, the existing methods were developed for simple cartoon images or small size comic datasets, and detection performance remains to be improved. In this paper, we propose a Faster R-CNN based method for face detection of comic characters. Our contribution is twofold. First, for the binary classification task of face detection, we empirically find that the sigmoid classifier shows a slightly better performance than the softmax classifier. Second, we build two comic datasets, JC2463 and AEC912, consisting of 3375 comic pages in total for characters face detection evaluation. Experimental results have demonstrated that the proposed method not only performs better than existing methods, but also works for comic images with different drawing styles.", "title": "" }, { "docid": "990067864c123b45e5c3d06ef1a0cf7d", "text": "BACKGROUND\nRetrospective single-centre series have shown the feasibility of sentinel lymph-node (SLN) identification in endometrial cancer. We did a prospective, multicentre cohort study to assess the detection rate and diagnostic accuracy of the SLN procedure in predicting the pathological pelvic-node status in patients with early stage endometrial cancer.\n\n\nMETHODS\nPatients with International Federation of Gynecology and Obstetrics (FIGO) stage I-II endometrial cancer had pelvic SLN assessment via cervical dual injection (with technetium and patent blue), and systematic pelvic-node dissection. All lymph nodes were histopathologically examined and SLNs were serial sectioned and examined by immunochemistry. The primary endpoint was estimation of the negative predictive value (NPV) of sentinel-node biopsy per hemipelvis. This is an ongoing study for which recruitment has ended. The study is registered with ClinicalTrials.gov, number NCT00987051.\n\n\nFINDINGS\nFrom July 5, 2007, to Aug 4, 2009, 133 patients were enrolled at nine centres in France. No complications occurred after injection of technetium colloid and no anaphylactic reactions were noted after patent blue injection. No surgical complications were reported during SLN biopsy, including procedures that involved conversion to open surgery. At least one SLN was detected in 111 of the 125 eligible patients. 19 of 111 (17%) had pelvic-lymph-node metastases. Five of 111 patients (5%) had an associated SLN in the para-aortic area. Considering the hemipelvis as the unit of analysis, NPV was 100% (95% CI 95-100) and sensitivity 100% (63-100). Considering the patient as the unit of analysis, three patients had false-negative results (two had metastatic nodes in the contralateral pelvic area and one in the para-aortic area), giving an NPV of 97% (95% CI 91-99) and sensitivity of 84% (62-95). All three of these patients had type 2 endometrial cancer. Immunohistochemistry and serial sectioning detected metastases undiagnosed by conventional histology in nine of 111 (8%) patients with detected SLNs, representing nine of the 19 patients (47%) with metastases. 
SLN biopsy upstaged 10% of patients with low-risk and 15% of those with intermediate-risk endometrial cancer.\n\n\nINTERPRETATION\nSLN biopsy with cervical dual labelling could be a trade-off between systematic lymphadenectomy and no dissection at all in patients with endometrial cancer of low or intermediate risk. Moreover, our study suggests that SLN biopsy could provide important data to tailor adjuvant therapy.\n\n\nFUNDING\nDirection Interrégionale de Recherche Clinique, Ile-de-France, Assistance Publique-Hôpitaux de Paris.", "title": "" }, { "docid": "be5b0dd659434e77ce47034a51fd2767", "text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks", "title": "" }, { "docid": "22eac984935d6040db2ab96eeb5d2bc9", "text": "Under frequency load shedding (UFLS) and under voltage load shedding (UVLS) are attracting more attention, as large disturbances occur more frequently than in the past. Usually, these two schemes work independently from each other, and are not designed in an integrated way to exploit their combined effect on load shedding. Besides, reactive power is seldom considered in the load shedding process. To fill this gap, we propose in this paper a new centralized, adaptive load shedding algorithm, which uses both voltage and frequency information provided by phasor measurement units (PMUs). The main contribution of the new method is the consideration of reactive power together with active power in the load shedding strategy. Therefore, this method addresses the combined voltage and frequency stability issues better than the independent approaches. The new method is tested on the IEEE 39-Bus system, in order to compare it with other methods. 
Simulation results show that, after large disturbance, this method can bring the system back to a new stable steady state that is better from the point of view of frequency and voltage stability, and loadability.", "title": "" }, { "docid": "4d14cef59071aeeb0d27b445f793c058", "text": "This study examined the motivation of young people in internet gaming using the dualistic model of passion. Path analysis was used to examine the relationships between the two types of passion: obsessive and harmonious passion, behavioral regulations, and flow. A total of 1074 male secondary school students from Singapore took part in the study. The results of the path analysis showed that external, introjected, and identified regulations positively predicted obsessive passion, while harmonious passion was predicted by identified and intrinsic regulations. Flow in digital gaming was predicted directly by harmonious passion, as well as indirectly through intrinsic regulation. This study supports the proposed dualistic model of passion in explaining young people’s motivation in internet gaming. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
169501ecb613c34287e0ff45354f5ad5
SALSA-TEXT: self attentive latent space based adversarial text generation
[ { "docid": "c81e823de071ae451420326e9fbb2e3d", "text": "Deep latent variable models, trained using variational autoencoders or generative adversarial networks, are now a key technique for representation learning of continuous structures. However, applying similar methods to discrete structures, such as text sequences or discretized images, has proven to be more challenging. In this work, we propose a flexible method for training deep latent variable models of discrete structures. Our approach is based on the recently-proposed Wasserstein autoencoder (WAE) which formalizes the adversarial autoencoder (AAE) as an optimal transport problem. We first extend this framework to model discrete sequences, and then further explore different learned priors targeting a controllable representation. This adversarially regularized autoencoder (ARAE) allows us to generate natural textual outputs as well as perform manipulations in the latent space to induce change in the output space. Finally we show that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic/human evaluation compared to existing methods.", "title": "" }, { "docid": "9b9181c7efd28b3e407b5a50f999840a", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts next token conditioned on its previously predicted ones that may be never observed in the training data. 
Such a discrepancy between training and inference can incur accumulatively along with the sequence and will become prominent as the length of sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. Generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and been mostly applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good as it is now and the future score as the entire sequence. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feedback the evaluation to guide the learning of the generative model.
To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three realworld tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently training deep belief nets (DBN). (Bengio et al. 2013) proposed denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, variational autoencoder (VAE) that combines deep learning with statistical inference intended to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to sequence discrete data generation problems, e.g. natural language generation (Huszár 2015). This is due to the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of efforts have been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data whereas (Bengio et al. 2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. 
Consequently, the GANs have great potential but are not practically feasible to discrete probabilistic models currently. As pointed out by (Bachman and Precup 2015), the sequence data generation can be formulated as a sequential decision making process, which can be potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence, for instance in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In", "title": "" }, { "docid": "548e1962ac4a2ea36bf90db116c4ff49", "text": "LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.", "title": "" }, { "docid": "f4c8fa37408d5341c2b54f92f0dfff4f", "text": "Generative adversarial networks are an effective approach for learning rich latent representations of continuous data, but have proven difficult to apply directly to discrete structured data, such as text sequences or discretized images. Ideally we could encode discrete structures in a continuous code space to avoid this problem, but it is difficult to learn an appropriate general-purpose encoder. In this work, we consider a simple approach for handling these two challenges jointly, employing a discrete structure autoencoder with a code space regularized by generative adversarial training. The model learns a smooth regularized code space while still being able to model the underlying data, and can be used as a discrete GAN with the ability to generate coherent discrete outputs from continuous samples. We demonstrate empirically how key properties of the data are captured in the model’s latent space, and evaluate the model itself on the tasks of discrete image generation, text generation, and semi-supervised learning.", "title": "" } ]
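The SeqGAN passage in the list above explains the core training idea: treat the generator as an RL policy, let the discriminator score complete sequences, and push that reward back to intermediate generation steps through Monte Carlo rollouts. The sketch below is only a minimal restatement of that policy-gradient update in PyTorch; the rollout function here is a self-contained toy stand-in rather than a real discriminator, and all names and sizes are assumptions, not the authors' released code.

```python
import torch

def seqgan_generator_loss(step_log_probs, step_rewards):
    """REINFORCE-style loss used by SeqGAN-like generators.

    step_log_probs: (batch, T) log pi(y_t | y_{1:t-1}) of the sampled tokens.
    step_rewards:   (batch, T) reward of each prefix; in SeqGAN this is the
                    discriminator's score of Monte Carlo rollout completions.
    """
    # Maximizing expected reward == minimizing the negative reward-weighted log-likelihood.
    return -(step_log_probs * step_rewards).sum(dim=1).mean()

def toy_rollout_rewards(batch, T, n_rollouts=8):
    """Self-contained stand-in for MC rollout scoring (random, for illustration only)."""
    rollout_scores = torch.rand(n_rollouts, batch, T)  # pretend discriminator outputs in [0, 1]
    return rollout_scores.mean(dim=0)                  # average over rollouts per step

if __name__ == "__main__":
    batch, T, vocab = 4, 6, 10                         # toy sizes (assumptions)
    logits = torch.randn(batch, T, vocab, requires_grad=True)
    sampled = torch.randint(vocab, (batch, T))         # pretend these tokens were sampled
    log_probs = torch.log_softmax(logits, dim=-1).gather(-1, sampled.unsqueeze(-1)).squeeze(-1)
    loss = seqgan_generator_loss(log_probs, toy_rollout_rewards(batch, T))
    loss.backward()
    print("loss:", float(loss), "grad shape:", tuple(logits.grad.shape))
```

In the actual method the per-step rewards come from rolling each prefix out to full length many times with the current generator and averaging the discriminator's scores, which is what lets a sequence-level signal guide token-level decisions.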
[ { "docid": "ab2f1f27b11a5a41ff6b2b79bc044c2f", "text": "ABSTACT: Trajectory tracking has been an extremely active research area in robotics in the past decade.In this paper, a kinematic model of two wheel mobile robot for reference trajectory tracking is analyzed and simulated. For controlling the wheeled mobile robot PID controllers are used. For finding the optimal parameters of PID controllers, in this work particle swarm optimization (PSO) is used. The proposed methodology is shown to be a successful solutionfor solving the problem.", "title": "" }, { "docid": "88048217d8d052dbe1d2b74145be76b5", "text": "Human learners, including infants, are highly sensitive to structure in their environment. Statistical learning refers to the process of extracting this structure. A major question in language acquisition in the past few decades has been the extent to which infants use statistical learning mechanisms to acquire their native language. There have been many demonstrations showing infants' ability to extract structures in linguistic input, such as the transitional probability between adjacent elements. This paper reviews current research on how statistical learning contributes to language acquisition. Current research is extending the initial findings of infants' sensitivity to basic statistical information in many different directions, including investigating how infants represent regularities, learn about different levels of language, and integrate information across situations. These current directions emphasize studying statistical language learning in context: within language, within the infant learner, and within the environment as a whole. WIREs Cogn Sci 2010 1 906-914 This article is categorized under: Linguistics > Language Acquisition Psychology > Language.", "title": "" }, { "docid": "0e672586c4be2e07c3e794ed1bb3443d", "text": "In this thesis, the multi-category dataset has been incorporated with the robust feature descriptor using the scale invariant feature transform (SIFT), SURF and FREAK along with the multi-category enabled support vector machine (mSVM). The multi-category support vector machine (mSVM) has been designed with the iterative phases to make it able to work with the multi-category dataset. The mSVM represents the training samples of main class as the primary class in every iterative phase and all other training samples are categorized as the secondary class for the support vector machine classification. The proposed model is made capable of working with the variations in the indoor scene image dataset, which are noticed in the form of the color, texture, light, image orientation, occlusion and color illuminations. Several experiments have been conducted over the proposed model for the performance evaluation of the indoor scene recognition system in the proposed model. The results of the proposed model have been obtained in the form of the various performance parameters of statistical errors, precision, recall, F1-measure and overall accuracy. The proposed model has clearly outperformed the existing models in the terms of the overall accuracy. 
The proposed model improvement has been recorded higher than ten percent for all of the evaluated parameters against the existing models based upon SURF, FREAK, etc.", "title": "" }, { "docid": "8bb0077bf14426f02a6339dd1be5b7f2", "text": "Astrocytes are thought to play a variety of key roles in the adult brain, such as their participation in synaptic transmission, in wound healing upon brain injury, and adult neurogenesis. However, to elucidate these functions in vivo has been difficult because of the lack of astrocyte-specific gene targeting. Here we show that the inducible form of Cre (CreERT2) expressed in the locus of the astrocyte-specific glutamate transporter (GLAST) allows precisely timed gene deletion in adult astrocytes as well as radial glial cells at earlier developmental stages. Moreover, postnatal and adult neurogenesis can be targeted at different stages with high efficiency as it originates from astroglial cells. Taken together, this mouse line will allow dissecting the molecular pathways regulating the diverse functions of astrocytes as precursors, support cells, repair cells, and cells involved in neuronal information processing.", "title": "" }, { "docid": "5a1f4efc96538c1355a2742f323b7a0e", "text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.", "title": "" }, { "docid": "81bd2987a3c5c82379ef69a6f065b17f", "text": "Although accumulating evidence highlights a crucial role of the insular cortex in feelings, empathy and processing uncertainty in the context of decision making, neuroscientific models of affective learning and decision making have mostly focused on structures such as the amygdala and the striatum. Here, we propose a unifying model in which insula cortex supports different levels of representation of current and predictive states allowing for error-based learning of both feeling states and uncertainty. This information is then integrated in a general subjective feeling state which is modulated by individual preferences such as risk aversion and contextual appraisal. Such mechanisms could facilitate affective learning and regulation of body homeostasis, and could also guide decision making in complex and uncertain environments.", "title": "" }, { "docid": "b6af904a2746862d76a4588d050f093c", "text": "This paper presents a fast algorithm for smooth digital elevation model interpolation and approximation from scattered elevation data. 
The global surface is reconstructed by subdividing it into overlapping local subdomains using a perfectly balanced binary tree. In each tree leaf, a smooth local surface is reconstructed using radial basis functions. Finally a hierarchical blending is done to create the final C1-continuous surface using a family of functions called Partition of Unity. We present two terrain data sets and show that our method is robust since the number of data points in the Partition of Unity blending areas is explicitly specified.", "title": "" }, { "docid": "0d1e889a69ea17e43c5f65bac38bba79", "text": "In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.", "title": "" }, { "docid": "394e99bd9c0b3b5a0765f49f2fc38c53", "text": "We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks.", "title": "" }, { "docid": "c451d86c6986fab1a1c4cd81e87e6952", "text": "Large-scale is a trend in person re-identification (re-id). It is important that real-time search be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper makes the attempt in integrating deep learning and hashing into one framework to evaluate the efficiency and accuracy for large-scale person re-id. We integrate spatial information for discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of the different identity. A triplet loss function is employed with a constraint that the Hamming distance of pedestrian images (or parts) with the same identity is smaller than ones with the different identity.
In the experiment, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.", "title": "" }, { "docid": "a7ca3ffcae09ad267281eb494532dc54", "text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%. In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.", "title": "" }, { "docid": "9f0cb11b8ec05933a10a9c82803f7ce4", "text": "From 2005 to 2012, injuries to children under five increased by 10%. Using the expansion of ATT’s 3G network, I find that smartphone adoption has a causal impact on child injuries. This effect is strongest amongst children ages 0-5, but not children ages 6-10, and in activities where parental supervision matters. I put this forward as indirect evidence that this increase is due to parents being distracted while supervising children, and not due to increased participation in accident-prone activities.", "title": "" }, { "docid": "5953dafaebde90a0f6af717883452d08", "text": "Compact high-voltage Marx generators have found wide ranging applications for driving resistive and capacitive loads. Parasitic or leakage capacitance in compact low-energy Marx systems has proved useful in driving resistive loads, but it can be detrimental when driving capacitive loads where it limits the efficiency of energy transfer to the load capacitance. In this paper, we show how manipulating network designs consisting of these parasitic elements along with internal and external components can optimize the performance of such systems.", "title": "" }, { "docid": "b492c624d1593515d55b3d9b6ac127a7", "text": "We introduce a type of Deep Boltzmann Machine (DBM) that is suitable for extracting distributed semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This enables an efficient pretraining algorithm and a state initialization scheme for fast inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks.", "title": "" }, { "docid": "39271e70afb7ea1b1876b57dfab1d745", "text": "This study examined the patterns or mechanism for conflict resolution in traditional African societies with particular reference to Yoruba and Igbo societies in Nigeria and Pondo tribe in South Africa. The paper notes that conflict resolution in traditional African societies provides opportunity to interact with the parties concerned, it promotes consensus-building, social bridge reconstructions and enactment of order in the society. 
The paper submits further that the western world placed more emphasis on the judicial system presided over by council of elders, kings’ courts, peoples (open place)", "title": "" }, { "docid": "275a5302219385f22706b483ecc77a74", "text": "This paper describes a bilingual text-to-speech (TTS) system, Microsoft Mulan, which switches between Mandarin and English smoothly and which maintains the sentence level intonation even for mixed-lingual texts. Mulan is constructed on the basis of the Soft Prediction Only prosodic strategy and the Prosodic-Constraint Orient unit-selection strategy. The unitselection module of Mulan is shared across languages. It is insensitive to language identity, even though the syllable is used as the smallest unit in Mandarin, and the phoneme in English. Mulan has a unique module, the language-dispatching module, which dispatches texts to the language-specific front-ends and merges the outputs of the two front-ends together. The mixed texts are “uttered” out with the same voice. According to our informal listening test, the speech synthesized with Mulan sounds quite natural. Sample waves can be heard at: http://research.microsoft.com/~echang/projects/tts/mulan.htm.", "title": "" }, { "docid": "d039154425d05fa996810b4a00364671", "text": "Community structure is an important area of research. It has received a considerable attention from the scientific community. Despite its importance, one of the key problems in locating information about community detection is the diverse spread of related articles across various disciplines. To the best of our knowledge, there is no current comprehensive review of recent literature which uses a scientometric analysis using complex networks analysis covering all relevant articles from the Web of Science (WoS). Here we present a visual survey of key literature using CiteSpace. The idea is to identify emerging trends besides using network techniques to examine the evolution of the domain. Towards that end, we identify the most influential, central, as well as active nodes using scientometric analyses. We examine authors, key articles, cited references, core subject categories, key journals, institutions, as well as countries. The exploration of the scientometric literature of the domain reveals that Yong Wang is a pivot node with the highest centrality. Additionally, we have observed that Mark Newman is the most highly cited author in the network. We have also identified that the journal, \"Reviews of Modern Physics\" has the strongest citation burst. In terms of cited documents, an article by Andrea Lancichinetti has the highest centrality score. We have also discovered that the origin of the key publications in this domain is from the United States. Whereas Scotland has the strongest and longest citation burst. Additionally, we have found that the categories of \"Computer Science\" and \"Engineering\" lead other categories based on frequency and centrality respectively.", "title": "" }, { "docid": "a68cec6fd069499099c8bca264eb0982", "text": "The anti-saccade task has emerged as an important task for investigating the flexible control that we have over behaviour. In this task, participants must suppress the reflexive urge to look at a visual target that appears suddenly in the peripheral visual field and must instead look away from the target in the opposite direction. A crucial step involved in performing this task is the top-down inhibition of a reflexive, automatic saccade. 
Here, we describe recent neurophysiological evidence demonstrating the presence of this inhibitory function in single-cell activity in the frontal eye fields and superior colliculus. Patients diagnosed with various neurological and/or psychiatric disorders that affect the frontal lobes or basal ganglia find it difficult to suppress the automatic pro-saccade, revealing a deficit in top-down inhibition.", "title": "" }, { "docid": "04d06629a3683536fb94228f6295a7d3", "text": "User profiling is an important step for solving the problem of personalized news recommendation. Traditional user profiling techniques often construct profiles of users based on static historical data accessed by users. However, due to the frequent updating of news repository, it is possible that a user’s finegrained reading preference would evolve over time while his/her long-term interest remains stable. Therefore, it is imperative to reason on such preference evaluation for user profiling in news recommenders. Besides, in content-based news recommenders, a user’s preference tends to be stable due to the mechanism of selecting similar content-wise news articles with respect to the user’s profile. To activate users’ reading motivations, a successful recommender needs to introduce ‘‘somewhat novel’’ articles to", "title": "" } ]
scidocsrr
f8b704d75e1aa835ad61212e9214ccad
Embedding Deep Metric for Person Re-identification: A Study Against Large Variations
[ { "docid": "d98186e7dde031b99330be009b600e43", "text": "This paper contributes a new high quality dataset for person re-identification, named \"Market-1501\". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiment, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.", "title": "" } ]
[ { "docid": "e7230519f0bd45b70c1cbd42f09cb9e8", "text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.", "title": "" }, { "docid": "f9b11e55be907175d969cd7e76803caf", "text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.", "title": "" }, { "docid": "c460ac78bb06e7b5381506f54200a328", "text": "Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristical and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. 
Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms.", "title": "" }, { "docid": "3891138c186fa72cdf8a19ef6be33638", "text": "In the past decade, internet of things (IoT) has been a focus of research. Security and privacy are the key issues for IoT applications, and still face some enormous challenges. In order to facilitate this emerging domain, we in brief review the research progress of IoT, and pay attention to the security. By means of deeply analyzing the security architecture and features, the security requirements are given. On the basis of these, we discuss the research status of key technologies including encryption mechanism, communication security, protecting sensor data and cryptographic algorithms, and briefly outline the challenges.", "title": "" }, { "docid": "caad330df7dd6feb957af45a5dcfc524", "text": "FPGA-based hardware accelerator for convolutional neural networks (CNNs) has obtained great attentions due to its higher energy efficiency than GPUs. However, it has been a challenge for FPGA-based solutions to achieve a higher throughput than GPU counterparts. In this paper, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline (temporal parallelism) stages. Experiment results show that the proposed architecture running at 90 MHz on a Xilinx Virtex-7 FPGA achieves a computing throughput of 7.663 TOPS with a power consumption of 8.2 W regardless of the batch size of input data. This is 8.3x faster and 75x more energy-efficient than a Titan X GPU for processing online individual requests (in small batch size). For processing static data (in large batch size), the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5x higher energy efficiency.", "title": "" }, { "docid": "e706c5071b87561f08ee8f9610e41e2e", "text": "Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been proposed to limit the information provided to the adversary by omitting probability scores, significantly impacting the utility of the provided service. In this work, we illustrate how a service provider can still provide useful, albeit misleading, class probability information, while significantly limiting the success of the attack. Our defense forces the adversary to discard the class probabilities, requiring significantly more queries before they can train a model with comparable performance. We evaluate several attack strategies, model architectures, and hyperparameters under varying adversarial models, and evaluate the efficacy of our defense against the strongest adversary. Finally, we quantify the amount of noise injected into the class probabilities to mesure the loss in utility, e.g., adding 1.26 nats per query on CIFAR-10 and 3.27 on MNIST. 
Our evaluation shows our defense can degrade the accuracy of the stolen model at least 20%, or require up to 64 times more queries while keeping the accuracy of the protected model almost intact.", "title": "" }, { "docid": "d034e1b08f704c7245a50bb383206001", "text": "Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: There are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceeds human design ability, making it a compelling domain for evolutionary optimization. Using the existing state of the art soft ordering architecture as the starting point, methods for evolving the modules of this architecture and for evolving the overall topology or routing between modules are evaluated in this paper. A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain. This result demonstrates how evolution can be instrumental in advancing deep neural network and complex system design in general.", "title": "" }, { "docid": "bb13ad5b41abbf80f7e7c70a9098cd15", "text": "OBJECTIVE\nThis study assessed the psychological distress in Spanish college women and analyzed it in relation to sociodemographic and academic factors.\n\n\nPARTICIPANTS AND METHODS\nThe authors selected a stratified random sampling of 1,043 college women (average age of 22.2 years). Sociodemographic and academic information were collected, and psychological distress was assessed with the Symptom Checklist-90-Revised.\n\n\nRESULTS\nThis sample of college women scored the highest on the depression dimension and the lowest on the phobic anxiety dimension. The sample scored higher than women of the general population on the dimensions of obsessive-compulsive, interpersonal sensitivity, paranoid ideation, psychoticism, and on the Global Severity Index. Scores in the sample significantly differed based on age, relationship status, financial independence, year of study, and area of study.\n\n\nCONCLUSION\nThe results indicated an elevated level of psychological distress among college women, and therefore college health services need to devote more attention to their mental health.", "title": "" }, { "docid": "9546f8a74577cc1119e48fae0921d3cf", "text": "Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. 
Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrate the potential for better utilization of long unlabeled text data.", "title": "" }, { "docid": "86d8a61771cd14a825b6fc652f77d1d6", "text": "The widespread of adult content on online social networks (e.g., Twitter) is becoming an emerging yet critical problem. An automatic method to identify accounts spreading sexually explicit content (i.e., adult account) is of significant values in protecting children and improving user experiences. Traditional adult content detection techniques are ill-suited for detecting adult accounts on Twitter due to the diversity and dynamics in Twitter content. In this paper, we formulate the adult account detection as a graph based classification problem and demonstrate our detection method on Twitter by using social links between Twitter accounts and entities in tweets. As adult Twitter accounts are mostly connected with normal accounts and post many normal entities, which makes the graph full of noisy links, existing graph based classification techniques cannot work well on such a graph. To address this problem, we propose an iterative social based classifier (ISC), a novel graph based classification technique resistant to the noisy links. Evaluations using large-scale real-world Twitter data show that, by labeling a small number of popular Twitter accounts, ISC can achieve satisfactory performance in adult account detection, significantly outperforming existing techniques.", "title": "" }, { "docid": "08ab7142ae035c3594d3f3ae339d3e27", "text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.", "title": "" }, { "docid": "8f9e3bb85b4a2fcff3374fd700ac3261", "text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.", "title": "" }, { "docid": "ff2b53e0cecb849d1cbb503300f1ab9a", "text": "Receiving rapid, accurate and comprehensive knowledge about the conditions of damaged buildings after earthquake strike and other natural hazards is the basis of many related activities such as rescue, relief and reconstruction. Recently, commercial high-resolution satellite imagery such as IKONOS and QuickBird is becoming more powerful data resource for disaster management. In this paper, a method for automatic detection and classification of damaged buildings using integration of high-resolution satellite imageries and vector map is proposed. In this method, after extracting buildings position from vector map, they are located in the pre-event and post-event satellite images. 
By measuring and comparing different textural features for extracted buildings in both images, buildings conditions are evaluated through a Fuzzy Inference System. Overall classification accuracy of 74% and kappa coefficient of 0.63 were acquired. Results of the proposed method, indicates the capability of this method for automatic determination of damaged buildings from high-resolution satellite imageries.", "title": "" }, { "docid": "1324ee90acbdfe27a14a0d86d785341a", "text": "Though autonomous vehicles are currently operating in several places, many important questions within the field of autonomous vehicle research remain to be addressed satisfactorily. In this paper, we examine the role of communication between pedestrians and autonomous vehicles at unsignalized intersections. The nature of interaction between pedestrians and autonomous vehicles remains mostly in the realm of speculation currently. Of course, pedestrian’s reactions towards autonomous vehicles will gradually change over time owing to habituation, but it is clear that this topic requires urgent and ongoing study, not least of all because engineers require some working model for pedestrian-autonomous-vehicle communication. Our paper proposes a decision-theoretic model that expresses the interaction between a pedestrian and a vehicle. The model considers the interaction between a pedestrian and a vehicle as expressed an MDP, based on prior work conducted by psychologists examining similar experimental conditions. We describe this model and our simulation study of behavior it exhibits. The preliminary results on evaluating the behavior of the autonomous vehicle are promising and we believe it can help reduce the data needed to develop fuller models.", "title": "" }, { "docid": "8b5bf8cf3832ac9355ed5bef7922fb5c", "text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time. A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.", "title": "" }, { "docid": "687414897eabd32ebbbca6ae792d7148", "text": "When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. 
Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.", "title": "" }, { "docid": "89105546031fd478f1a1f3dcb9e25cdf", "text": "Effective and accurate diagnosis of Alzheimer's disease (AD) or mild cognitive impairment (MCI) can be critical for early treatment and thus has attracted more and more attention nowadays. Since first introduced, machine learning methods have been gaining increasing popularity for AD related research. Among the various identified biomarkers, magnetic resonance imaging (MRI) are widely used for the prediction of AD or MCI. However, before a machine learning algorithm can be applied, image features need to be extracted to represent the MRI images. While good representations can be pivotal to the classification performance, almost all the previous studies typically rely on human labelling to find the regions of interest (ROI) which may be correlated to AD, such as hippocampus, amygdala, precuneus, etc. This procedure requires domain knowledge and is costly and tedious. Instead of relying on extraction of ROI features, it is more promising to remove manual ROI labelling from the pipeline and directly work on the raw MRI images. In other words, we can let the machine learning methods to figure out these informative and discriminative image structures for AD classification. In this work, we propose to learn deep convolutional image features using unsupervised and supervised learning. Deep learning has emerged as a powerful tool in the machine learning community and has been successfully applied to various tasks. We thus propose to exploit deep features of MRI images based on a pre-trained large convolutional neural network (CNN) for AD and MCI classification, which spares the effort of manual ROI annotation process.", "title": "" }, { "docid": "3228df5de3c7d4a4ae61da815afa2bba", "text": "Abstract: The proposed zero-current-switching switched-capacitor quasi-resonant DC–DC converter is a new type of bidirectional power flow control conversion scheme. It possesses the conventional features of resonant switched-capacitor converters: low weight, small volume, high efficiency, low EMI emission and current stress. A zero-current-switching switched-capacitor stepup/step-down bidirectional converter is presented that can improve the current stress problem during bidirectional power flow control processing. It can provide a high voltage conversion ratio using four power MOSFET main switches, a set of switched capacitors and a small resonant inductor. The converter operating principle of the proposed bidirectional power conversion scheme is described in detail with circuit model analysis. Simulation and experiment are carried out to verify the concept and performance of the proposed bidirectional DC–DC converter.", "title": "" }, { "docid": "c63dcdd615007dfddca77e7bdf52c0eb", "text": "Essential tremor (ET) is a common movement disorder but its pathogenesis remains poorly understood. This has limited the development of effective pharmacotherapy. 
The current therapeutic armamentaria for ET represent the product of careful clinical observation rather than targeted molecular modeling. Here we review their pharmacokinetics, metabolism, dosing, and adverse effect profiles and propose a treatment algorithm. We also discuss the concept of medically refractory tremor, as therapeutic trials should be limited unless invasive therapy is contraindicated or not desired by patients.", "title": "" }, { "docid": "56e406924a967700fba3fe554b9a8484", "text": "Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation.", "title": "" } ]
scidocsrr
91fc47e6263131bd21e7748f7a2b49fa
BitScope: Automatically Dissecting Malicious Binaries
[ { "docid": "f1f0c6518a34c0938e65e4de2b5ca7c0", "text": "Disassembly is the process of recovering a symbolic representation of a program’s machine code instructions from its binary representation. Recently, a number of techniques have been proposed that attempt to foil the disassembly process. These techniques are very effective against state-of-the-art disassemblers, preventing a substantial fraction of a binary program from being disassembled correctly. This could allow an attacker to hide malicious code from static analysis tools that depend on correct disassembler output (such as virus scanners). The paper presents novel binary analysis techniques that substantially improve the success of the disassembly process when confronted with obfuscated binaries. Based on control flow graph information and statistical methods, a large fraction of the program’s instructions can be correctly identified. An evaluation of the accuracy and the performance of our tool is provided, along with a comparison to several state-of-the-art disassemblers.", "title": "" } ]
[ { "docid": "bd7664e9ff585a48adca12c0a8d9bf95", "text": "Fueled by the widespread adoption of sensor-enabled smartphones, mobile crowdsourcing is an area of rapid innovation. Many crowd-powered sensor systems are now part of our daily life -- for example, providing highway congestion information. However, participation in these systems can easily expose users to a significant drain on already limited mobile battery resources. For instance, the energy burden of sampling certain sensors (such as WiFi or GPS) can quickly accumulate to levels users are unwilling to bear. Crowd system designers must minimize the negative energy side-effects of participation if they are to acquire and maintain large-scale user populations.\n To address this challenge, we propose Piggyback CrowdSensing (PCS), a system for collecting mobile sensor data from smartphones that lowers the energy overhead of user participation. Our approach is to collect sensor data by exploiting Smartphone App Opportunities -- that is, those times when smartphone users place phone calls or use applications. In these situations, the energy needed to sense is lowered because the phone need no longer be woken from an idle sleep state just to collect data. Similar savings are also possible when the phone either performs local sensor computation or uploads the data to the cloud. To efficiently use these sporadic opportunities, PCS builds a lightweight, user-specific prediction model of smartphone app usage. PCS uses this model to drive a decision engine that lets the smartphone locally decide which app opportunities to exploit based on expected energy/quality trade-offs.\n We evaluate PCS by analyzing a large-scale dataset (containing 1,320 smartphone users) and building an end-to-end crowdsourcing application that constructs an indoor WiFi localization database. Our findings show that PCS can effectively collect large-scale mobile sensor datasets (e.g., accelerometer, GPS, audio, image) from users while using less energy (up to 90% depending on the scenario) compared to a representative collection of existing approaches.", "title": "" }, { "docid": "398b72faa5922bd7af153f055c6344b5", "text": "As a key component of a plug-in hybrid electric vehicle (PHEV) charger system, the front-end ac-dc converter must achieve high efficiency and power density. This paper presents a topology survey evaluating topologies for use in front end ac-dc converters for PHEV battery chargers. The topology survey is focused on several boost power factor corrected converters, which offer high efficiency, high power factor, high density, and low cost. Experimental results are presented and interpreted for five prototype converters, converting universal ac input voltage to 400 V dc. The results demonstrate that the phase shifted semi-bridgeless PFC boost converter is ideally suited for automotive level I residential charging applications in North America, where the typical supply is limited to 120 V and 1.44 kVA or 1.92 kVA. For automotive level II residential charging applications in North America and Europe the bridgeless interleaved PFC boost converter is an ideal topology candidate for typical supplies of 240 V, with power levels of 3.3 kW, 5 kW, and 6.6 kW.", "title": "" }, { "docid": "cca664cf201c79508a266a34646dba01", "text": "Scholars have argued that online social networks and personalized web search increase ideological segregation. 
We investigate the impact of these potentially polarizing channels on news consumption by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that individuals indeed exhibit substantially higher segregation when reading articles shared on social networks or returned by search engines, a pattern driven by opinion pieces. However, these polarizing articles from social media and web search constitute only 2% of news consumption. Consequently, while recent technological changes do increase ideological segregation, the magnitude of the effect is limited. JEL: D83, L86, L82", "title": "" }, { "docid": "3df9bacf95281fc609ee7fd2d4724e91", "text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.", "title": "" }, { "docid": "99bed553411303f4800315ce5dff2139", "text": "In this work, we propose contextual language models that incorporate dialog level discourse information into language modeling. Previous works on contextual language model treat preceding utterances as a sequence of inputs, without considering dialog interactions. We design recurrent neural network (RNN) based contextual language models that specially track the interactions between speakers in a dialog. Experiment results on Switchboard Dialog Act Corpus show that the proposed model outperforms conventional single turn based RNN language model by 3.3% on perplexity. The proposed models also demonstrate advantageous performance over other competitive contextual language models.", "title": "" }, { "docid": "6ca7eafb36eebd1d14217b78660c40e0", "text": "The identification of the candidate genes for autism through linkage and association studies has proven to be a difficult enterprise. An alternative approach is the analysis of cytogenetic abnormalities associated with autism. We present a review of all studies to date that relate patients with cytogenetic abnormalities to the autism phenotype. A literature survey of the Medline and Pubmed databases was performed, using multiple keyword searches. Additional searches through cited references and abstracts from the major genetic conferences from 2000 onwards completed the search. The quality of the phenotype (i.e. of the autism spectrum diagnosis) was rated for each included case. Available specific probe and marker information was used to define optimally the boundaries of the cytogenetic abnormalities. In case of recurrent deletions or duplications on chromosome 15 and 22, the positions of the low copy repeats that are thought to mediate these rearrangements were used to define the most likely boundaries of the implicated ‘Cytogenetic Regions Of Interest’ (CROIs). 
If no molecular data were available, the sequence position of the relevant chromosome bands was used to obtain the approximate molecular boundaries of the CROI. The findings of the current review indicate: (1) several regions of overlap between CROIs and known loci of significant linkage and/or association findings, and (2) additional regions of overlap among multiple CROIs at the same locus. Whereas the first finding confirms previous linkage/association findings, the latter may represent novel, not previously identified regions containing genes that contribute to autism. This analysis not only has confirmed the presence of several known autism risk regions but has also revealed additional previously unidentified loci, including 2q37, 5p15, 11q25, 16q22.3, 17p11.2, 18q21.1, 18q23, 22q11.2, 22q13.3 and Xp22.2–p22.3.", "title": "" }, { "docid": "096b09f064643cbd2cd80f310981c5a6", "text": "A Ku-band 200-W pulsed solid-state power amplifier has been presented and designed by using a hybrid radial-/rectangular-waveguide spatially power-combining technique. The hybrid radial-/rectangular-waveguide power-dividing/power-combining circuit employed in this design provides not only a high power-combining efficiency over a wide bandwidth but also efficient heat sinking for the active power devices. A simple design approach of the presented power-dividing/power-combining structure has been developed. The measured small-signal gain of the pulsed power amplifier is about 51.3 dB over the operating frequency range, while the measured maximum output power at 1-dB compression is 209 W at 13.9 GHz, with an active power-combining efficiency of about 91%. Furthermore, the active power-combining efficiency is greater than 82% from 13.75 to 14.5 GHz.", "title": "" }, { "docid": "8abedc8a3f3ad84c940e38735b759745", "text": "Degeneration is a senescence process that occurs in all living organisms. Although tremendous efforts have been exerted to alleviate this degenerative tendency, minimal progress has been achieved to date. The nematode, Caenorhabditis elegans (C. elegans), which shares over 60% genetic similarities with humans, is a model animal that is commonly used in studies on genetics, neuroscience, and molecular gerontology. However, studying the effect of exercise on C. elegans is difficult because of its small size unlike larger animals. To this end, we fabricated a flow chamber, called \"worm treadmill,\" to drive worms to exercise through swimming. In the device, the worms were oriented by electrotaxis on demand. After the exercise treatment, the lifespan, lipofuscin, reproductive capacity, and locomotive power of the worms were analyzed. The wild-type and the Alzheimer's disease model strains were utilized in the assessment. Although degeneration remained irreversible, both exercise-treated strains indicated an improved tendency compared with their control counterparts. Furthermore, low oxidative stress and lipofuscin accumulation were also observed among the exercise-treated worms. We conjecture that escalated antioxidant enzymes imparted the worms with an extra capacity to scavenge excessive oxidative stress from their bodies, which alleviated the adverse effects of degeneration. Our study highlights the significance of exercise in degeneration from the perspective of the simple life form, C. 
elegans.", "title": "" }, { "docid": "99e47a88f0950c1928557857facb35d5", "text": "We present the NBA framework, which extends the architecture of the Click modular router to exploit modern hardware, adapts to different hardware configurations, and reaches close to their maximum performance without manual optimization. NBA takes advantages of existing performance-excavating solutions such as batch processing, NUMA-aware memory management, and receive-side scaling with multi-queue network cards. Its abstraction resembles Click but also hides the details of architecture-specific optimization, batch processing that handles the path diversity of individual packets, CPU/GPU load balancing, and complex hardware resource mappings due to multi-core CPUs and multi-queue network cards. We have implemented four sample applications: an IPv4 and an IPv6 router, an IPsec encryption gateway, and an intrusion detection system (IDS) with Aho-Corasik and regular expression matching. The IPv4/IPv6 router performance reaches the line rate on a commodity 80 Gbps machine, and the performances of the IPsec gateway and the IDS reaches above 30 Gbps. We also show that our adaptive CPU/GPU load balancer reaches near-optimal throughput in various combinations of sample applications and traffic conditions.", "title": "" }, { "docid": "063287a98a5a45bc8e38f8f8c193990e", "text": "This paper investigates the relationship between the contextual factors related to the firm’s decision-maker and the process of international strategic decision-making. The analysis has been conducted focusing on small and medium-sized enterprises (SME). Data for the research came from 111 usable responses to a survey on a sample of SME decision-makers in international field. The results of regression analysis indicate that the context variables, both internal and external, exerted more influence on international strategic decision making process than the decision-maker personality characteristics. DOI: 10.4018/ijabe.2013040101 2 International Journal of Applied Behavioral Economics, 2(2), 1-22, April-June 2013 Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. The purpose of this paper is to reverse this trend and to explore the different dimensions of SMEs’ strategic decision-making process in international decisions and, within these dimensions, we want to understand if are related to the decision-maker characteristics and also to broader contextual factors characteristics. The paper is organized as follows. In the second section the concepts of strategic decision-making process and factors influencing international SDMP are approached. Next, the research methodology, findings analysis and discussion will be presented. Finally, conclusions, limitations of the study and suggestions for future research are explored. THEORETICAL BACKGROUND Strategic Decision-Making Process The process of making strategic decisions has emerged as one of the most important themes of strategy research over the last two decades (Papadakis, 2006; Papadakis & Barwise, 2002). 
According to Harrison (1996), the SMDP can be defined as a combination of the concepts of strategic gap and management decision making process, with the former “determined by comparing the organization’s inherent capabilities with the opportunities and threats in its external environment”, while the latter is composed by a set of decision-making functions logically connected, that begins with the setting of managerial objective, followed by the search for information to develop a set of alternatives, that are consecutively compared and evaluated, and selected. Afterward, the selected alternative is implemented and, finally, it is subjected to follow-up and control. Other authors (Fredrickson, 1984; Mintzberg, Raisinghani, & Theoret, 1976) developed several models of strategic decision-making process since 1970, mainly based on the number of stages (Nooraie, 2008; Nutt, 2008). Although different researches investigated SDMP with specific reference to either small firms (Brouthers, et al., 1998; Gibcus, Vermeulen, & Jong, 2009; Huang, 2009; Jocumsen, 2004), or internationalization process (Aharoni, Tihanyi, & Connelly, 2011; Dimitratos, et al., 2011; Nielsen & Nielsen, 2011), there is a lack of studies that examine the SDMP in both perspectives. In this study we decided to mainly follow the SDMP defined by Harrison (1996) adapted to the international arena and particularly referred to market development decisions. Thus, for the definition of objectives (first phase) we refer to those in international field, for search for information, development and comparison of alternatives related to foreign markets (second phase) we refer to the systematic International Market Selection (IMS), and to the Entry Mode Selection (EMS) methodologies. For the implementation of the selected alternative (third phase) we mainly mean the entering in a particular foreign market with a specific entry mode, and finally, for follow-up and control (fourth phase) we refer to the control and evaluation of international activities. Dimensions of the Strategic Decision-Making Process Several authors attempted to implement a set of dimensions in approaching strategic process characteristics, and the most adopted are: • Rationality; • Formalization; • Hierarchical Decentralization and lateral communication; • Political Behavior.", "title": "" }, { "docid": "c841938f03a07fffc5150fbe18f8f740", "text": "Ensemble modeling is now a well-established means for improving prediction accuracy; it enables you to average out noise from diverse models and thereby enhance the generalizable signal. Basic stacked ensemble techniques combine predictions from multiple machine learning algorithms and use these predictions as inputs to second-level learning models. This paper shows how you can generate a diverse set of models by various methods such as forest, gradient boosted decision trees, factorization machines, and logistic regression and then combine them with stacked-ensemble techniques such as hill climbing, gradient boosting, and nonnegative least squares in SAS Visual Data Mining and Machine Learning. The application of these techniques to real-world big data problems demonstrates how using stacked ensembles produces greater prediction accuracy and robustness than do individual models. The approach is powerful and compelling enough to alter your initial data mining mindset from finding the single best model to finding a collection of really good complementary models. 
It does involve additional cost due both to training a large number of models and the proper use of cross validation to avoid overfitting. This paper shows how to efficiently handle this computational expense in a modern SAS environment and how to manage an ensemble workflow by using parallel computation in a distributed framework.", "title": "" }, { "docid": "ec1f585fbb97c8e6468dd992e1a933ff", "text": "Scientists continue to find challenges in the ever increasing amount of information that has been produced on a world wide scale, during the last decades. When writing a paper, an author searches for the most relevant citations that started or were the foundation of a particular topic, which would very likely explain the thinking or algorithms that are employed. The search is usually done using specific keywords submitted to literature search engines such as Google Scholar and CiteSeer. However, finding relevant citations is distinctive from producing articles that are only topically similar to an author's proposal. In this paper, we address the problem of citation recommendation using a singular value decomposition approach. The models are trained and evaluated on the Citeseer digital library. The results of our experiments show that the proposed approach achieves significant success when compared with collaborative filtering methods on the citation recommendation task.", "title": "" }, { "docid": "136fadcc21143fd356b48789de5fb2b0", "text": "Cost-effective and scalable wireless backhaul solutions are essential for realizing the 5G vision of providing gigabits per second anywhere. Not only is wireless backhaul essential to support network densification based on small cell deployments, but also for supporting very low latency inter-BS communication to deal with intercell interference. Multiplexing backhaul and access on the same frequency band (in-band wireless backhaul) has obvious cost benefits from the hardware and frequency reuse perspective, but poses significant technology challenges. We consider an in-band solution to meet the backhaul and inter-BS coordination challenges that accompany network densification. Here, we present an analysis to persuade the readers of the feasibility of in-band wireless backhaul, discuss realistic deployment and system assumptions, and present a scheduling scheme for inter- BS communications that can be used as a baseline for further improvement. We show that an inband wireless backhaul for data backhauling and inter-BS coordination is feasible without significantly hurting the cell access capacities.", "title": "" }, { "docid": "89725ed15bec80072198dbab9f6f75eb", "text": "OBJECTIVE\nTo present the clinical and roentgenographic features of caudal duplication syndrome.\n\n\nDESIGN\nRetrospective review of the medical records and all available imaging studies.\n\n\nSETTING\nTwo university-affiliated teaching hospitals.\n\n\nPARTICIPANTS\nSix children with multiple anomalies and duplications of distal organs derived from the hindgut, neural tube, and adjacent mesoderm.\n\n\nINTERVENTIONS\nNone.\n\n\nRESULTS\nSpinal anomalies (myelomeningocele in two patients, sacral duplication in three, diplomyelia in two, and hemivertebrae in one) were present in all our patients. Duplications or anomalies of the external genitalia and/or the lower urinary and reproductive structures were also seen in all our patients. 
Ventral herniation (in one patient), intestinal obstructions (in one patient), and bowel duplications (in two patients) were the most common gastrointestinal abnormalities.\n\n\nCONCLUSIONS\nWe believe that the above constellation of abnormalities resulted from an insult to the caudal cell mass and hindgut at approximately the 23rd through the 25th day of gestation. We propose the term caudal duplication syndrome to describe the association between gastrointestinal, genitourinary, and distal neural tube malformations.", "title": "" }, { "docid": "a5a86ecd39df5b032f4fa4f22362c914", "text": "Diet strongly affects human health, partly by modulating gut microbiome composition. We used diet inventories and 16S rDNA sequencing to characterize fecal samples from 98 individuals. Fecal communities clustered into enterotypes distinguished primarily by levels of Bacteroides and Prevotella. Enterotypes were strongly associated with long-term diets, particularly protein and animal fat (Bacteroides) versus carbohydrates (Prevotella). A controlled-feeding study of 10 subjects showed that microbiome composition changed detectably within 24 hours of initiating a high-fat/low-fiber or low-fat/high-fiber diet, but that enterotype identity remained stable during the 10-day study. Thus, alternative enterotype states are associated with long-term diet.", "title": "" }, { "docid": "2be35e0e63316137b3426fffd397111c", "text": "Face detection is essential to facial analysis tasks, such as facial reenactment and face recognition. Both cascade face detectors and anchor-based face detectors have translated shining demos into practice and received intensive attention from the community. However, cascade face detectors often suffer from a low detection accuracy, while anchor-based face detectors rely heavily on very large neural networks pre-trained on large-scale image classification datasets such as ImageNet, which is not computationally efficient for both training and deployment. In this paper, we devise an efficient anchor-based cascade framework called anchor cascade. To improve the detection accuracy by exploring contextual information, we further propose a context pyramid maxout mechanism for anchor cascade. As a result, anchor cascade can train very efficient face detection models with a high detection accuracy. Specifically, compared with a popular convolutional neural network (CNN)-based cascade face detector MTCNN, our anchor cascade face detector greatly improves the detection accuracy, e.g., from 0.9435 to 0.9704 at $1k$ false positives on FDDB, while it still runs in comparable speed. Experimental results on two widely used face detection benchmarks, FDDB and WIDER FACE, demonstrate the effectiveness of the proposed framework.", "title": "" }, { "docid": "ecd99c9f87e1c5e5f529cb5fcbb206f2", "text": "The concept of supply chain is about managing coordinated information and material flows, plant operations, and logistics. It provides flexibility and agility in responding to consumer demand shifts without cost overlays in resource utilization. The fundamental premise of this philosophy is; synchronization among multiple autonomous business entities represented in it. That is, improved coordination within and between various supply-chain members. Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member as well as the supply chain. 
Describes architecture to create the appropriate structure, install proper controls, and implement principles of optimization to synchronize the supply chain. A supply-chain model based on a collaborative system approach is illustrated utilizing the example of the textile industry. process flexibility and coordination of processes across many sites. More and more organizations are promoting employee empowerment and the need for rules-based, real-time decision support systems to attain organizational and process flexibility, as well as to respond to competitive pressure to introduce new products more quickly, cheaply and of improved quality. The underlying philosophy of managing supply chains has evolved to respond to these changing business trends. Supply-chain management phenomenon has received the attention of researchers and practitioners in various topics. In the earlier years, the emphasis was on materials planning utilizing materials requirements planning techniques, inventory logistics management with one warehouse multi-retailer distribution system, and push and pull operation techniques for production systems. In the last few years, however, there has been a renewed interest in designing and implementing integrated systems, such as enterprise resource planning, multi-echelon inventory, and synchronous-flow manufacturing, respectively. A number of factors have contributed to this shift. First, there has been a realization that better planning and management of complex interrelated systems, such as materials planning, inventory management, capacity planning, logistics, and production systems will lead to overall improvement in enterprise productivity. Second, advances in information and communication technologies complemented by sophisticated decision support systems enable the designing, implementing and controlling of the strategic and tactical strategies essential to delivery of integrated systems. In the next section, a framework that offers an unified approach to dealing with enterprise related problems is presented. A framework for analysis of enterprise integration issues As mentioned in the preceding section, the availability of advanced production and logistics management systems has the potential of fundamentally influencing enterprise integration issues. The motivation in pursuing research issues described in this paper is to propose a framework that enables dealing with these effectively. The approach suggested in this paper utilizing supply-chain philosophy for enterprise integration proposes domain independent problem solving and modeling, and domain dependent analysis and implementation. The purpose of the approach is to ascertain characteristics of the problem independent of the specific problem environment. Consequently, the approach delivers solution(s) or the solution method that are intrinsic to the problem and not its environment. Analysis methods help to understand characteristics of the solution methodology, as well as providing specific guarantees of effectiveness. Invariably, insights gained from these analyses can be used to develop effective problem solving tools and techniques for complex enterprise integration problems. The discussion of the framework is organized as follows. First, the key guiding principles of the proposed framework on which a supply chain ought to be built are outlined. Then, a cooperative supply-chain (CSC) system is described as a special class of a supply-chain network implementation. 
Next, discussion on a distributed problemsolving strategy that could be employed in integrating this type of system is presented. Following this, key components of a CSC system are described. Finally, insights on modeling a CSC system are offered. Key modeling principles are elaborated through two distinct modeling approaches in the management science discipline. Supply chain guiding principles Firms have increasingly been adopting enterprise/supply-chain management techniques in order to deal with integration issues. To focus on these integration efforts, the following guiding principles for the supply-chain framework are proposed. These principles encapsulate trends in production and logistics management that a supplychain arrangement may be designed to capture. . Supply chain is a cooperative system. The supply-chain arrangement exists on cooperation among its members. Cooperation occurs in many forms, such as sharing common objectives and goals for the group entity; utilizing joint policies, for instance in marketing and production; setting up common budgets, cost and price structures; and identifying commitments on capacity, production plans, etc. . Supply chain exists on the group dynamics of its members. The existence of a supply chain is dependent on the interaction among its members. This interaction occurs in the form of exchange of information with regard to input, output, functions and controls, such as objectives and goals, and policies. By analyzing this [ 291 ] Charu Chandra and Sameer Kumar Enterprise architectural framework for supply-chain integration Industrial Management & Data Systems 101/6 [2001] 290±303 information, members of a supply chain may choose to modify their behavior attuned with group expectations. . Negotiation and compromise are norms of operation in a supply chain. In order to realize goals and objectives of the group, members negotiate on commitments made to one another for price, capacity, production plans, etc. These negotiations often lead to compromises by one or many members on these issues, leading up to realization of sub-optimal goals and objectives by members. . Supply-chain system solutions are Paretooptimal (satisficing), not optimizing. Supply-chain problems similar to many real world applications involve several objective functions of its members simultaneously. In all such applications, it is extremely rare to have one feasible solution that simultaneously optimizes all of the objective functions. Typically, optimizing one of the objective functions has the effect of moving another objective function away from its most desirable value. These are the usual conflicts among the objective functions in the multiobjective models. As a multi-objective problem, the supply-chain model produces non-dominated or Pareto-optimal solutions. That is, solutions for a supplychain problem do not leave any member worse-off at the expense of another. . Integration in supply chain is achieved through synchronization. Integration across the supply chain is achieved through synchronization of activities at the member entity and aggregating its impact through process, function, business, and on to enterprise levels, either at the member entity or the group entity. Thus, by synchronization of supply-chain components, existing bottlenecks in the system are eliminated, while future ones are prevented from occurring. 
A cooperative supply-chain A supply-chain network depicted in Figure 1 can be a complex web of systems, sub-systems, operations, activities, and their relationships to one another, belonging to its various members namely, suppliers, carriers, manufacturing plants, distribution centers, retailers, and consumers. The design, modeling and implementation of such a system, therefore, can be difficult, unless various parts of it are cohesively tied to the whole. The concept of a supply-chain is about managing coordinated information and material flows, plant operations, and logistics through a common set of principles, strategies, policies, and performance metrics throughout its developmental life cycle (Lee and Billington, 1993). It provides flexibility and agility in responding to consumer demand shifts with minimum cost overlays in resource utilization. The fundamental premise of this philosophy is synchronization among multiple autonomous entities represented in it. That is, improved coordination within and between various supply-chain members. Coordination is achieved within the framework of commitments made by members to each other. Members negotiate and compromise in a spirit of cooperation in order to meet these commitments. Hence, the label(CSC). Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decisionmaking processes, and improvement in the overall performance of each member, as well as the supply-chain (group) (Chandra, 1997; Poirier, 1999; Tzafastas and Kapsiotis, 1994). A generic textile supply chain has for its primary raw material vendors, cotton growers and/or chemical suppliers, depending upon whether the end product is cotton, polyester or some combination of cotton and polyester garment. Secondary raw material vendors are suppliers of accessories such as, zippers, buttons, thread, garment tags, etc. Other tier suppliers in the complete pipeline are: fiber manufacturers for producing the polyester or cotton fiber yarn; textile manufacturers for weaving and dying yarn into colored textile fabric; an apparel maker for cutting, sewing and packing the garment; a distribution center for merchandising the garment; and a retailer selling the brand name garment to consumers at a shopping mall or center. Synchronization of the textile supply chain is achieved through coordination primarily of: . replenishment schedules that have be", "title": "" }, { "docid": "3fffd4317116d8ff0165916681ce1c46", "text": "The challenges of Machine Reading and Knowledge Extraction at a web scale require a system capable of extracting diverse information from large, heterogeneous corpora. The Open Information Extraction (OIE) paradigm aims at extracting assertions from large corpora without requiring a vocabulary or relation-specific training data. Most systems built on this paradigm extract binary relations from arbitrary sentences, ignoring the context under which the assertions are correct and complete. They lack the expressiveness needed to properly represent and extract complex assertions commonly found in the text. To address the lack of representation power, we propose NESTIE, which uses a nested representation to extract higher-order relations, and complex, interdependent assertions. Nesting the extracted propositions allows NESTIE to more accurately reflect the meaning of the original sentence. 
Our experimental study on real-world datasets suggests that NESTIE obtains comparable precision with better minimality and informativeness than existing approaches. NESTIE produces 1.7-1.8 times more minimal extractions and achieves 1.1-1.2 times higher informativeness than CLAUSIE.", "title": "" } ]
scidocsrr
32c8889bf4dae5b6fa371c0b6e172252
The Making of a 3D-Printed, Cable-Driven, Single-Model, Lightweight Humanoid Robotic Hand
[ { "docid": "d2c0bccf1ff6fd4ac9d76defe1632a85", "text": "Children with hand reductions, whether congenital or traumatic, have unique prosthetic needs. They present a challenge because of their continually changing size due to physical growth as well as changing needs due to psychosocial development. Conventional prosthetics are becoming more technologically advanced and increasingly complex. Although these are welcome advances for adults, the concomitant increases in weight, moving parts, and cost are not beneficial for children. Pediatric prosthetic needs may be better met with simpler solutions. Three-dimensional printing can be used to fabricate rugged, light-weight, easily replaceable, and very low cost assistive hands for children.", "title": "" }, { "docid": "3309e09d16e74f87a507181bd82cd7f0", "text": "The goal of this work is to overview and summarize the grasping taxonomies reported in the literature. Our long term goal is to understand how to reduce mechanical complexity of anthropomorphic hands and still preserve their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. They were then arranged in a hierarchical manner, resulting in 17 grasp types.", "title": "" }, { "docid": "aeba4012971d339a9a953a7b86f57eb8", "text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.", "title": "" } ]
[ { "docid": "563a630a752416668664246e8eb937b6", "text": "The Linux Driver Verification system is designed for static analysis of the source code of Linux kernel space device drivers. In this paper, we describe the architecture of the verification system, including the integration of third-party tools for static verification of C programs. We consider characteristics of the Linux drivers source code that are important from the viewpoint of verification algorithms and give examples of comparative analysis of different verification tools, as well as different versions and configurations of a given tool.", "title": "" }, { "docid": "bc17b54461a134809911ebfa2a57e560", "text": "We use data with complete information on both rejected and accepted bank loan applicants to estimate the value of sample bias correction using Heckman’s two-stage model with partial observability. In the credit scoring domain such correction is called reject inference. We validate the model performances with and without the correction of sample bias by various measurements. Results show that it is prohibitively costly not to control for sample selection bias due to the accept/reject decision. However, we also find that the Heckman procedure is unable to appropriately control for the selection bias. † Data contained in this study were produced on site at the Carnegie-Mellon Census Research Data Center. Research results and conclusions are those of the authors and do not necessarily indicate concurrence by the Bureau of the Census or the Carnegie-Mellon Census Research Data Center. Åstebro acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada and the Social Sciences and Humanities Research Council of Canada’s joint program in Management of Technological Change as well as support from the Canadian Imperial Bank of Commerce.", "title": "" }, { "docid": "2801a5a26d532fc33543744ea89743f1", "text": "Microalgae have received much interest as a biofuel feedstock in response to the uprising energy crisis, climate change and depletion of natural sources. Development of microalgal biofuels from microalgae does not satisfy the economic feasibility of overwhelming capital investments and operations. Hence, high-value co-products have been produced through the extraction of a fraction of algae to improve the economics of a microalgae biorefinery. Examples of these high-value products are pigments, proteins, lipids, carbohydrates, vitamins and anti-oxidants, with applications in cosmetics, nutritional and pharmaceuticals industries. To promote the sustainability of this process, an innovative microalgae biorefinery structure is implemented through the production of multiple products in the form of high value products and biofuel. This review presents the current challenges in the extraction of high value products from microalgae and its integration in the biorefinery. The economic potential assessment of microalgae biorefinery was evaluated to highlight the feasibility of the process.", "title": "" }, { "docid": "c6160b8ad36bc4f297bfb1f6b04c79e0", "text": "Despite their incentive structure flaws, mining pools account for more than 95% of Bitcoin’s computation power. This paper introduces an attack against mining pools in which a malicious party pays pool members to withhold their solutions from their pool operator. We show that an adversary with a tiny amount of computing power and capital can execute this attack. 
Smart contracts enforce the malicious party’s payments, and therefore miners need neither trust the attacker’s intentions nor his ability to pay. Assuming pool members are rational, an adversary with a single mining ASIC can, in theory, destroy all big mining pools without losing any money (and even make some profit).", "title": "" }, { "docid": "ea6392b6a49ed40cb5e3779e0d1f3ea2", "text": "We see the world in scenes, where visual objects occur in rich surroundings, often embedded in a typical context with other related objects. How does the human brain analyse and use these common associations? This article reviews the knowledge that is available, proposes specific mechanisms for the contextual facilitation of object recognition, and highlights important open questions. Although much has already been revealed about the cognitive and cortical mechanisms that subserve recognition of individual objects, surprisingly little is known about the neural underpinnings of contextual analysis and scene perception. Building on previous findings, we now have the means to address the question of how the brain integrates individual elements to construct the visual experience.", "title": "" }, { "docid": "1326be667e3ec3aa6bf0732ef97c230a", "text": "Recognizing human activities in a sequence is a challenging area of research in ubiquitous computing. Most approaches use a fixed size sliding window over consecutive samples to extract features— either handcrafted or learned features—and predict a single label for all samples in the window. Two key problems emanate from this approach: i) the samples in one window may not always share the same label. Consequently, using one label for all samples within a window inevitably lead to loss of information; ii) the testing phase is constrained by the window size selected during training while the best window size is difficult to tune in practice. We propose an efficient algorithm that can predict the label of each sample, which we call dense labeling, in a sequence of human activities of arbitrary length using a fully convolutional network. In particular, our approach overcomes the problems posed by the sliding window step. Additionally, our algorithm learns both the features and classifier automatically. We release a new daily activity dataset based on a wearable sensor with hospitalized patients. We conduct extensive experiments and demonstrate that our proposed approach is able to outperform the state-of-the-arts in terms of classification and label misalignment measures on three challenging datasets: Opportunity, Hand Gesture, and our new dataset.", "title": "" }, { "docid": "459b07b78f3cbdcbd673881fd000da14", "text": "The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as “goats” due to their relatively high probability for a false nonmatch. 
The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for “goats,” since the conditional probability for a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of “goats” during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches.", "title": "" }, { "docid": "7eec93450eb625bee264f37f1520603f", "text": "User Experience is a key differentiator in the era of Digital Disruption. Chatbots are increasingly considered as part of a delighting User Experience. But only a bright Conversation Modelling effort, which is part of holistic Enterprise Modelling, can make a Chatbot effective. In addition, best practices can be applied to achieve or even outperform user expectations. Thanks to Cognitive Systems and associated modelling tools, effective Chatbot dialogs can be developed in an agile manner, while respecting the enterprise", "title": "" }, { "docid": "34523c9ccd5d8c0bec2a84173205be99", "text": "Deep learning has achieved astonishing results onmany taskswith large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time.", "title": "" }, { "docid": "d68147bf8637543adf3053689de740c3", "text": "In this paper, we do a research on the keyword extraction method of news articles. We build a candidate keywords graph model based on the basic idea of TextRank, use Word2Vec to calculate the similarity between words as transition probability of nodes' weight, calculate the word score by iterative method and pick the top N of the candidate keywords as the final results. Experimental results show that the weighted TextRank algorithm with correlation of words can improve performance of keyword extraction generally.", "title": "" }, { "docid": "41261cf72d8ee3bca4b05978b07c1c4f", "text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. 
In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.", "title": "" }, { "docid": "96aa1f19a00226af7b5bbe0bb080582e", "text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.", "title": "" }, { "docid": "a85803f14639bef7f4539bad631d088c", "text": "5.", "title": "" }, { "docid": "71757d1cee002bb235a591cf0d5aafd5", "text": "There is an old Wall Street adage goes, ‘‘It takes volume to make price move”. The contemporaneous relation between trading volume and stock returns has been studied since stock markets were first opened. Recent researchers such as Wang and Chin [Wang, C. Y., & Chin S. T. (2004). Profitability of return and volume-based investment strategies in China’s stock market. Pacific-Basin Finace Journal, 12, 541–564], Hodgson et al. [Hodgson, A., Masih, A. M. 
M., & Masih, R. (2006). Futures trading volume as a determinant of prices in different momentum phases. International Review of Financial Analysis, 15, 68–85], and Ting [Ting, J. J. L. (2003). Causalities of the Taiwan stock market. Physica A, 324, 285–295] have found the correlation between stock volume and price in stock markets. To verify this saying, in this paper, we propose a dual-factor modified fuzzy time-series model, which take stock index and trading volume as forecasting factors to predict stock index. In empirical analysis, we employ the TAIEX (Taiwan stock exchange capitalization weighted stock index) and NASDAQ (National Association of Securities Dealers Automated Quotations) as experimental datasets and two multiplefactor models, Chen’s [Chen, S. M. (2000). Temperature prediction using fuzzy time-series. IEEE Transactions on Cybernetics, 30 (2), 263–275] and Huarng and Yu’s [Huarng, K. H., & Yu, H. K. (2005). A type 2 fuzzy time-series model for stock index forecasting. Physica A, 353, 445–462], as comparison models. The experimental results indicate that the proposed model outperforms the listing models and the employed factors, stock index and the volume technical indicator, VR(t), are effective in stock index forecasting. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a21f04b6c8af0b38b3b41f79f2661fa6", "text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.", "title": "" }, { "docid": "ea5697d417fe154be77d941c19d8a86e", "text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.", "title": "" }, { "docid": "f16fd498b692875c3bd95460feaf06ec", "text": "Raman and Fourier Transform Infrared (FT-IR) spectroscopy was used for assessment of structural differences of celluloses of various origins. Investigated celluloses were: bacterial celluloses cultured in presence of pectin and/or xyloglucan, as well as commercial celluloses and cellulose extracted from apple parenchyma. 
FT-IR spectra were used to estimate of the I(β) content, whereas Raman spectra were used to evaluate the degree of crystallinity of the cellulose. The crystallinity index (X(C)(RAMAN)%) varied from -25% for apple cellulose to 53% for microcrystalline commercial cellulose. Considering bacterial cellulose, addition of xyloglucan has an impact on the percentage content of cellulose I(β). However, addition of only xyloglucan or only pectins to pure bacterial cellulose both resulted in a slight decrease of crystallinity. However, culturing bacterial cellulose in the presence of mixtures of xyloglucan and pectins results in an increase of crystallinity. The results confirmed that the higher degree of crystallinity, the broader the peak around 913 cm(-1). Among all bacterial celluloses the bacterial cellulose cultured in presence of xyloglucan and pectin (BCPX) has the most similar structure to those observed in natural primary cell walls.", "title": "" }, { "docid": "041ca42d50e4cac92cf81c989a8527fb", "text": "Helix antenna consists of a single conductor or multi-conductor open helix-shaped. Helix antenna has a three-dimensional shape. The shape of the helix antenna resembles a spring and the diameter and the distance between the windings of a certain size. This study aimed to design a signal amplifier wifi on 2.4 GHz. Materials used in the form of the pipe, copper wire, various connectors and wireless adapters and various other components. Mmmanagal describing simulation result on helix antenna. Further tested with wirelesmon software to test the wifi signal strength. The results are based Mmanagal, radiation patterns emitted achieve Ganin: 4.5 dBi horizontal polarization, F / B: −0,41dB; rear azimuth 1200 elevation 600, 2400 MHz, R27.9 and jX impedance −430.9, Elev: 64.40 real GND: 0.50 m height, and wifi signal strength increased from 47% to 55%.", "title": "" }, { "docid": "86095b1b9900abb5a16cc7bfef8e1c39", "text": "We consider the problem of estimating the spatial layout of an indoor scene from a monocular RGB image, modeled as the projection of a 3D cuboid. Existing solutions to this problem often rely strongly on hand-engineered features and vanishing point detection, which are prone to failure in the presence of clutter. In this paper, we present a method that uses a fully convolutional neural network (FCNN) in conjunction with a novel optimization framework for generating layout estimates. We demonstrate that our method is robust in the presence of clutter and handles a wide range of highly challenging scenes. We evaluate our method on two standard benchmarks and show that it achieves state of the art results, outperforming previous methods by a wide margin.", "title": "" } ]
scidocsrr
062ddecd140d369c267dfc5ecf0f4727
Multilevel SOT-MRAM cell with a novel sensing scheme for high-density memory applications
[ { "docid": "476bb80edf6c54f0b6415d19f027ee19", "text": "Spin-transfer torque (STT) switching demonstrated in submicron sized magnetic tunnel junctions (MTJs) has stimulated considerable interest for developments of STT switched magnetic random access memory (STT-MRAM). Remarkable progress in STT switching with MgO MTJs and increasing interest in STTMRAM in semiconductor industry have been witnessed in recent years. This paper will present a review on the progress in the intrinsic switching current density reduction and STT-MRAM prototype chip demonstration. Challenges to overcome in order for STT-MRAM to be a mainstream memory technology in future technology nodes will be discussed. Finally, potential applications of STT-MRAM in embedded and standalone memory markets will be outlined.", "title": "" }, { "docid": "e91e1a2bdd90cec352cb566f8c556c68", "text": "This paper deals with a new MRAM technology whose writing scheme relies on the Spin Orbit Torque (SOT). Compared to Spin Transfer Torque (STT) MRAM, it offers a very fast switching, a quasi-infinite endurance and improves the reliability by solving the issue of “read disturb”, thanks to separate reading and writing paths. These properties allow introducing SOT at all-levels of the memory hierarchy of systems and adressing applications which could not be easily implemented by STT-MRAM. We present this emerging technology and a full design framework, allowing to design and simulate hybrid CMOS/SOT complex circuits at any level of abstraction, from device to system. The results obtained are very promising and show that this technology leads to a reduced power consumption of circuits without notable penalty in terms of performance.", "title": "" }, { "docid": "71215e59838861228f316da921b7f6b7", "text": "In this paper, we present two multilevel spin-orbit torque magnetic random access memories (SOT-MRAMs). A single-level SOT-MRAM employs a three-terminal SOT device as a storage element with enhanced endurance, close-to-zero read disturbance, and low write energy. However, the three-terminal device requires the use of two access transistors per cell. To improve the integration density, we propose two multilevel cells (MLCs): 1) series SOT MLC and 2) parallel SOT MLC, both of which store two bits per memory cell. A detailed analysis of the bit-cell suggests that the S-MLC is promising for applications requiring both high density and low write-error rate, and P-MLC is particularly suitable for high-density and low-write-energy applications. We also performed iso-bit-cell area comparison of our MLC designs with previously proposed MLCs that are based on spin-transfer torque MRAM and show 3-16× improvement in write energy.", "title": "" } ]
[ { "docid": "76049ed267e9327412d709014e8e9ed4", "text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.", "title": "" }, { "docid": "c1d5df0e2058e3f191a8227fca51a2fb", "text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. 
We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "title": "" }, { "docid": "98f1e9888b9b6f17dd91153b906c0569", "text": "Irumban puli (Averrhoa bilimbi) is commonly used as a traditional remedy in the state of Kerala. Freshly made concentrated juice has a very high oxalic acid content and consumption carries a high risk of developing acute renal failure (ARF) by deposition of calcium oxalate crystals in renal tubules. Acute oxalate nephropathy (AON) due to secondary oxalosis after consumption of Irumban puli juice is uncommon. AON due to A. bilimbi has not been reported before. We present a series of ten patients from five hospitals in the State of Kerala who developed ARF after intake of I. puli fruit juice. Seven patients needed hemodialysis whereas the other three improved with conservative management.", "title": "" }, { "docid": "408db96baaf513c65c66ced61e4d50a8", "text": "This review highlights the use of bromelain in various applications with up-to-date literature on the purification of bromelain from pineapple fruit and waste such as peel, core, crown, and leaves. Bromelain, a cysteine protease, has been exploited commercially in many applications in the food, beverage, tenderization, cosmetic, pharmaceutical, and textile industries. Researchers worldwide have been directing their interest to purification strategies by applying conventional and modern approaches, such as manipulating the pH, affinity, hydrophobicity, and temperature conditions in accord with the unique properties of bromelain. The amount of downstream processing will depend on its intended application in industries. The breakthrough of recombinant DNA technology has facilitated the large-scale production and purification of recombinant bromelain for novel applications in the future.", "title": "" }, { "docid": "074fb5576ea24d6ffb44924fd2b50cff", "text": "I treat three related subjects: virtual-worlds research—the construction of real-time 3-D illusions by computer graphics; some observations about interfaces to virtual worlds; and the coming application of virtual-worlds techniques to the enhancement of scientific computing.\nWe need to design generalized interfaces for visualizing, exploring, and steering scientific computations. Our interfaces must be direct-manipulation, not command-string; interactive, not batch; 3-D, not 2-D; multisensory, not just visual.\nWe need generalized research results for 3-D interactive interfaces. More is known than gets reported, because of a reluctance to share “unproven” results. I propose a shells-of-certainty model for such knowledge.", "title": "" }, { "docid": "d22200ec6b700ce30c45b89c54f435f8", "text": "Evolutionary hypotheses to explain the greater numbers of species in the tropics than the temperate zone include greater age and area, higher temperature and metabolic rates, and greater ecological opportunity. These ideas make contrasting predictions about the relationship between speciation processes and latitude, which I elaborate and evaluate. 
Available data suggest that per capita speciation rates are currently highest in the temperate zone and that diversification rates (speciation minus extinction) are similar between latitudes. In contrast, clades whose oldest analyzed dates precede the Eocene thermal maximum, when the extent of the tropics was much greater than today, tend to show highest speciation and diversification rates in the tropics. These findings are consistent with age and area, which is alone among hypotheses in predicting a time trend. Higher recent speciation rates in the temperate zone than the tropics suggest an additional response to high ecological opportunity associated with low species diversity. These broad patterns are compelling but provide limited insights into underlying mechanisms, arguing that studies of speciation processes along the latitudinal gradient will be vital. Using threespine stickleback in depauperate northern lakes as an example, I show how high ecological opportunity can lead to rapid speciation. The results support a role for ecological opportunity in speciation, but its importance in the evolution of the latitudinal gradient remains uncertain. I conclude that per capita evolutionary rates are no longer higher in the tropics than the temperate zone. Nevertheless, the vast numbers of species that have already accumulated in the tropics ensure that total rate of species production remains highest there. Thus, tropical evolutionary momentum helps to perpetuate the steep latitudinal biodiversity gradient.", "title": "" }, { "docid": "706fb7e2635403662a6b75c410c9fa5b", "text": "Emphasizing the importance of cross-border effectiveness in the contemporary globalized world, we propose that cultural intelligence—the leadership capability to manage effectively in culturally diverse settings—is a critical leadership competency for those with cross-border responsibilities. We tested this hypothesis with multisource data, including multiple intelligences, in a sample of 126 Swiss military officers with both domestic and cross-border leadership responsibilities. Results supported our predictions: (1) general intelligence predicted both domestic and cross-border leadership effectiveness; (2) emotional intelligence was a stronger predictor of domestic leadership effectiveness, and (3) cultural intelligence was a stronger predictor of cross-border leadership effectiveness. Overall,", "title": "" }, { "docid": "26764eb192d4404bda7ebf8c37ba5c4a", "text": "Two novel gallium nitride-based vertical junction FETs (VJFETs), one with a vertical channel and the other with a lateral channel, are proposed, designed, and modeled to achieve a 1.2 kV normally OFF power switch with very low ON resistance (R<sub>ON</sub>). The 2-D drift diffusion model of the proposed devices was implemented using Silvaco ATLAS. A comprehensive design space was generated for the vertical channel VJFET (VC-VJFET). For a well-designed VC-VJFET, the breakdown voltage (V<sub>BR</sub>) obtained was 1260 V, which is defined in this study as the drain-to-source voltage at an OFF-state current of 1 μA · cm<sup>-2</sup> and a peak electric field not exceeding 2.4 MV/cm. The corresponding R<sub>ON</sub> was 5.2 mΩ · cm<sup>2</sup>. To further improve the switching device figure of merit, a merged lateral-vertical geometry was proposed and modeled in the form of a lateral channel VJFET (LC-VJFET). 
For the LC-VJFET, a breakdown voltage of 1310 V with a corresponding R<sub>ON</sub> of 1.7 mΩ · cm<sup>2</sup> was achieved for similar thicknesses of the drift region. This paper studies the design space in detail and discusses the associated tradeoffs in the R<sub>ON</sub> and V<sub>BR</sub> in conjunction with the threshold voltage (V<sub>T</sub>) desired for the normally OFF operation.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "412951e42529d7862cb0bcbaf5bd9f97", "text": "Wireless Sensor Network is an emerging field which is accomplishing much importance because of its vast contribution in varieties of applications. Wireless Sensor Networks are used to monitor a given field of interest for changes in the environment. Coverage is one of the main active research interests in WSN. In this paper we aim to review the coverage problem in WSN and the strategies that are used in solving coverage problem in WSN. These strategies studied are used during deployment phase of the network. Besides this we also outlined some basic design considerations in coverage of WSN. We also provide a brief summary of various coverage issues and the various approaches for coverage in Sensor network. Keywords— Coverage; Wireless sensor networks: energy efficiency; sensor; area coverage; target Coverage.", "title": "" }, { "docid": "3b0019cffc71b5675994699d993aeb41", "text": "This paper begins by reviewing the motivation for an informatics of indoor space. It then discusses application domains and considers why current geospatial technology, with its focus on outdoor space, needs to be extended. We review existing formal models of indoor space, along with their applications, and introduce a new model that is the subject of the author's current research. We conclude with some observations about the development of a unified model of both indoor and outdoor space.", "title": "" }, { "docid": "64f70c1214d148c43ceed537c69ad5dd", "text": "Relation classification is an important semantic processing task in the field of natural language processing (NLP). In this paper, we present a novel model BRCNN to classify the relation of two entities in a sentence. Some state-of-the-art systems concentrate on modeling the shortest dependency path (SDP) between two entities leveraging convolutional or recurrent neural networks. We further explore how to make full use of the dependency relations information in the SDP, by combining convolutional neural networks and two-channel recurrent neural networks with long short term memory (LSTM) units. We propose a bidirectional architecture to learn relation representations with directional information along the SDP forwards and backwards at the same time, which benefits classifying the direction of relations. 
Experimental results show that our method outperforms the state-of-theart approaches on the SemEval-2010 Task 8 dataset.", "title": "" }, { "docid": "6eeadd78b5cf225bafb9d1b01a5ac28c", "text": "Latent topics derived by topic models such as Latent Dirichlet Allocation (LDA) are the result of hidden thematic structures which provide further insights into the data. The automatic labelling of such topics derived from social media poses however new challenges since topics may characterise novel events happening in the real world. Existing automatic topic labelling approaches which depend on external knowledge sources become less applicable here since relevant articles/concepts of the extracted topics may not exist in external sources. In this paper we propose to address the problem of automatic labelling of latent topics learned from Twitter as a summarisation problem. We introduce a framework which apply summarisation algorithms to generate topic labels. These algorithms are independent of external sources and only rely on the identification of dominant terms in documents related to the latent topic. We compare the efficiency of existing state of the art summarisation algorithms. Our results suggest that summarisation algorithms generate better topic labels which capture event-related context compared to the top-n terms returned by LDA.", "title": "" }, { "docid": "674ed4bd5128403fe97a16867917c6fd", "text": "Raising a child with an autism spectrum disorder (ASD) can be exhausting, which has the potential to impact on parental health and wellbeing. The current study investigated the influence of maternal fatigue and coping on the relationship between children's problematic behaviours and maternal stress for 65 mothers of young children (aged 2-5 years) with ASDs. Results showed that maternal fatigue but not maladaptive coping mediated the relationship between problematic child behaviours and maternal stress. These findings suggest child behaviour difficulties may contribute to parental fatigue, which in turn may influence use of ineffective coping strategies and increased stress. The significance of fatigue on maternal wellbeing was highlighted as an important area for consideration in families of children with an ASD.", "title": "" }, { "docid": "9eeb3ce9d963bc3bab6c32e651c34772", "text": "In bioequivalence assessment, the consumer risk of erroneously accepting bioequivalence is of primary concern. In order to control the consumer risk, the decision problem is formulated with bioinequivalence as hypothesis and bioequivalence as alternative. In the parametric approach, a split into two one-sided test problems and application of two-sample t-tests have been suggested. Rejection of both hypotheses at nominal alpha-level is equivalent to the inclusion of the classical (shortest) (1-2 alpha) 100%-confidence interval in the bioequivalence range. This paper demonstrates that the rejection of the two one-sided hypotheses at nominal alpha-level by means of nonparametric Mann-Whitney-Wilcoxon tests is equivalent to the inclusion of the corresponding distribution-free (1-2 alpha) 100%-confidence interval in the bioequivalence range. This distribution-free (nonparametric) approach needs weaker model assumptions and hence presents an alternative to the parametric approach.", "title": "" }, { "docid": "ba0481ae973970f96f7bf7b1a5461f16", "text": "WEP is a protocol for securing wireless networks. In the past years, many attacks on WEP have been published, totally breaking WEP’s security. 
This thesis summarizes all major attacks on WEP. Additionally a new attack, the PTW attack, is introduced, which was partially developed by the author of this document. Some advanced versions of the PTW attack which are more suiteable in certain environments are described as well. Currently, the PTW attack is fastest publicly known key recovery attack against WEP protected networks.", "title": "" }, { "docid": "f2e10c5118cc736a942f201ddfbdf524", "text": "Numerical sediment quality guidelines (SQGs) for freshwater ecosystems have previously been developed using a variety of approaches. Each approach has certain advantages and limitations which influence their application in the sediment quality assessment process. In an effort to focus on the agreement among these various published SQGs, consensus-based SQGs were developed for 28 chemicals of concern in freshwater sediments (i.e., metals, polycyclic aromatic hydrocarbons, polychlorinated biphenyls, and pesticides). For each contaminant of concern, two SQGs were developed from the published SQGs, including a threshold effect concentration (TEC) and a probable effect concentration (PEC). The resultant SQGs for each chemical were evaluated for reliability using matching sediment chemistry and toxicity data from field studies conducted throughout the United States. The results of this evaluation indicated that most of the TECs (i.e., 21 of 28) provide an accurate basis for predicting the absence of sediment toxicity. Similarly, most of the PECs (i.e., 16 of 28) provide an accurate basis for predicting sediment toxicity. Mean PEC quotients were calculated to evaluate the combined effects of multiple contaminants in sediment. Results of the evaluation indicate that the incidence of toxicity is highly correlated to the mean PEC quotient (R(2) = 0.98 for 347 samples). It was concluded that the consensus-based SQGs provide a reliable basis for assessing sediment quality conditions in freshwater ecosystems.", "title": "" }, { "docid": "cef1270ff3e263d2becf551288b08efe", "text": "Sentiment Analysis has become a significant research matter for its probable in tapping into the vast amount of opinions generated by the people. Sentiment analysis deals with the computational conduct of opinion, sentiment within the text. People sometimes uses sarcastic text to express their opinion within the text. Sarcasm is a type of communication act in which the people write the contradictory of what they mean in reality. The intrinsically vague nature of sarcasm sometimes makes it hard to understand. Recognizing sarcasm can promote many sentiment analysis applications. Automatic detecting sarcasm is an approach for predicting sarcasm in text. In this paper we have tried to talk of the past work that has been done for detecting sarcasm in the text. This paper talk of approaches, features, datasets, and issues associated with sarcasm detection. Performance values associated with the past work also has been discussed. Various tables that present different dimension of past work like dataset used, features, approaches, performance values has also been discussed.", "title": "" }, { "docid": "d2ac8e090922336d433884b85b297b6b", "text": "BACKGROUND\nTwitter provides various types of location data, including exact Global Positioning System (GPS) coordinates, which could be used for infoveillance and infodemiology (ie, the study and monitoring of online health information), health communication, and interventions. 
Despite its potential, Twitter location information is not well understood or well documented, limiting its public health utility.\n\n\nOBJECTIVE\nThe objective of this study was to document and describe the various types of location information available in Twitter. The different types of location data that can be ascertained from Twitter users are described. This information is key to informing future research on the availability, usability, and limitations of such location data.\n\n\nMETHODS\nLocation data was gathered directly from Twitter using its application programming interface (API). The maximum tweets allowed by Twitter were gathered (1% of the total tweets) over 2 separate weeks in October and November 2011. The final dataset consisted of 23.8 million tweets from 9.5 million unique users. Frequencies for each of the location options were calculated to determine the prevalence of the various location data options by region of the world, time zone, and state within the United States. Data from the US Census Bureau were also compiled to determine population proportions in each state, and Pearson correlation coefficients were used to compare each state's population with the number of Twitter users who enable the GPS location option.\n\n\nRESULTS\nThe GPS location data could be ascertained for 2.02% of tweets and 2.70% of unique users. Using a simple text-matching approach, 17.13% of user profiles in the 4 continental US time zones were able to be used to determine the user's city and state. Agreement between GPS data and data from the text-matching approach was high (87.69%). Furthermore, there was a significant correlation between the number of Twitter users per state and the 2010 US Census state populations (r ≥ 0.97, P < .001).\n\n\nCONCLUSIONS\nHealth researchers exploring ways to use Twitter data for disease surveillance should be aware that the majority of tweets are not currently associated with an identifiable geographic location. Location can be identified for approximately 4 times the number of tweets using a straightforward text-matching process compared to using the GPS location information available in Twitter. Given the strong correlation between both data gathering methods, future research may consider using more qualitative approaches with higher yields, such as text mining, to acquire information about Twitter users' geographical location.", "title": "" }, { "docid": "e9ed26434ac4e17548a08a40ace99a0c", "text": "An analytical study on air flow effects and resulting dynamics on the PACE Formula 1 race car is presented. The study incorporates Computational Fluid Dynamic analysis and simulation to maximize down force and minimize drag during high speed maneuvers of the race car. Using Star CCM+ software and mentoring provided by CD – Adapco, the simulation employs efficient meshing techniques and realistic loading conditions to understand down force on front and rear wing portions of the car as well as drag created by all exterior surfaces. Wing and external surface loading under high velocity runs of the car are illustrated. Optimization of wing orientations (direct angle of attack) and geometry modifications on outer surfaces of the car are performed to enhance down force and lessen drag for maximum stability and control during operation. The use of Surface Wrapper saved months of time in preparing the CAD model. 
The Transform tool and Contact Prevention tool in Star CCM+ proved to be an efficient means of correcting and modifying geometry instead of going back to the CAD model. The CFD simulations point out that the current front and rear wings do not generate the desired downforce and that the rear wing should be redesigned.", "title": "" } ]
scidocsrr
6cf67385029aad5dc778f19fa55c8287
Integration and evaluation of intrusion detection for CoAP in smart city applications
[ { "docid": "2f41ff2d68fa75ef5e91695d19684fbb", "text": "Wireless Sensor Networking is one of the most promising technologies that have applications ranging from health care to tactical military. Although Wireless Sensor Networks (WSNs) have appealing features (e.g., low installation cost, unattended network operation), due to the lack of a physical line of defense (i.e., there are no gateways or switches to monitor the information flow), the security of such networks is a big concern, especially for the applications where confidentiality has prime importance. Therefore, in order to operate WSNs in a secure way, any kind of intrusions should be detected before attackers can harm the network (i.e., sensor nodes) and/or information destination (i.e., data sink or base station). In this article, a survey of the state-of-the-art in Intrusion Detection Systems (IDSs) that are proposed for WSNs is presented. Firstly, detailed information about IDSs is provided. Secondly, a brief survey of IDSs proposed for Mobile Ad-Hoc Networks (MANETs) is presented and applicability of those systems to WSNs are discussed. Thirdly, IDSs proposed for WSNs are presented. This is followed by the analysis and comparison of each scheme along with their advantages and disadvantages. Finally, guidelines on IDSs that are potentially applicable to WSNs are provided. Our survey is concluded by highlighting open research issues in the field.", "title": "" }, { "docid": "c724fdcf7f58121ff6ad886df68e2725", "text": "The Internet of Things (IoT) is an emerging paradigm where smart objects are seamlessly connected to the overall Internet and can potentially cooperate to achieve common objectives such as supporting innovative home automation services. With reference to such a scenario, this paper presents an Intrusion Detection System (IDS) framework for IoT empowered by IPv6 over low-power personal area network (6LoWPAN) devices. In fact, 6LoWPAN is an interesting protocol supporting the realization of IoT in a resource constrained environment. 6LoWPAN devices are vulnerable to attacks inherited from both the wireless sensor networks and the Internet protocols. The proposed IDS framework which includes a monitoring system and a detection engine has been integrated into the network framework developed within the EU FP7 project `ebbits'. A penetration testing (PenTest) system had been used to evaluate the performance of the implemented IDS framework. Preliminary tests revealed that the proposed framework represents a promising solution for ensuring better security in 6LoWPANs.", "title": "" }, { "docid": "a43646db20923d9058df5544a5753da0", "text": "Smart objects connected to the Internet, constituting the so called Internet of Things (IoT), are revolutionizing human beings' interaction with the world. As technology reaches everywhere, anyone can misuse it, and it is always essential to secure it. In this work we present a denial-of-service (DoS) detection architecture for 6LoWPAN, the standard protocol designed by IETF as an adaptation layer for low-power lossy networks enabling low-power devices to communicate with the Internet. The proposed architecture integrates an intrusion detection system (IDS) into the network framework developed within the EU FP7 project ebbits. The aim is to detect DoS attacks based on 6LoWPAN. In order to evaluate the performance of the proposed architecture, preliminary implementation was completed and tested against a real DoS attack using a penetration testing system. 
The paper concludes with the related results proving to be successful in detecting DoS attacks on 6LoWPAN. Further, extending the IDS could lead to detect more complex attacks on 6LoWPAN.", "title": "" } ]
[ { "docid": "6c1a1e47ce91b2d9ae60a0cfc972b7e4", "text": "We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research.", "title": "" }, { "docid": "0343f1a0be08ff53e148ef2eb22aaf14", "text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.", "title": "" }, { "docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06", "text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.", "title": "" }, { "docid": "91dcf0f281724bd6a5cc8c6479f5d632", "text": "In this paper, a cable-driven planar parallel haptic interface is presented. First, the velocity equations are derived and the forces in the cables are obtained by the principle of virtual work. Then, an analysis of the wrench-closure workspace is performed and a geometric arrangement of the cables is proposed. Control issues are then discussed and a control scheme is presented. The calibration of the attachment points is also discussed. Finally, the prototype is described and experimental results are provided.", "title": "" }, { "docid": "55f95c7b59f17fb210ebae97dbd96d72", "text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. 
We will also discuss a number of recent advances in the area in the context of social network and linked data.", "title": "" }, { "docid": "f2a2f1e8548cc6fcff6f1d565dfa26c9", "text": "Cabbage contains the glucosinolate sinigrin, which is hydrolyzed by myrosinase to allyl isothiocyanate. Isothiocyanates are thought to inhibit the development of cancer cells by a number of mechanisms. The effect of cooking cabbage on isothiocyanate production from glucosinolates during and after their ingestion was examined in human subjects. Each of 12 healthy human volunteers consumed three meals, at 48-h intervals, containing either raw cabbage, cooked cabbage, or mustard according to a cross-over design. At each meal, watercress juice, which is rich in phenethyl isothiocyanate, was also consumed to allow individual and temporal variation in postabsorptive isothiocyanate recovery to be measured. Volunteers recorded the time and volume of each urination for 24 h after each meal. Samples of each urination were analyzed for N-acetyl cysteine conjugates of isothiocyanates as a measure of entry of isothiocyanates into the peripheral circulation. Excretion of isothiocyanates was rapid and substantial after ingestion of mustard, a source of preformed allyl isothiocyanate. After raw cabbage consumption, allyl isothiocyanate was again rapidly excreted, although to a lesser extent than when mustard was consumed. On the cooked cabbage treatment, excretion of allyl isothiocyanate was considerably less than for raw cabbage, and the excretion was delayed. The results indicate that isothiocyanate production is more extensive after consumption of raw vegetables but that isothiocyanates still arise, albeit to a lesser degree, when cooked vegetables are consumed. The lag in excretion on the cooked cabbage treatment suggests that the colon microflora catalyze glucosinolate hydrolysis in this case.", "title": "" }, { "docid": "3e409a01cfc02c0b89bae310c3f693fe", "text": "The last ten years have seen an increasing interest, within cognitive science, in issues concerning the physical body, the local environment, and the complex interplay between neural systems and the wider world in which they function. Yet many unanswered questions remain, and the shape of a genuinely physically embodied, environmentally embedded science of the mind is still unclear. In this article I will raise a number of critical questions concerning the nature and scope of this approach, drawing a distinction between two kinds of appeal to embodiment: (1) 'Simple' cases, in which bodily and environmental properties merely constrain accounts that retain the focus on inner organization and processing; and (2) More radical appeals, in which attention to bodily and environmental features is meant to transform both the subject matter and the theoretical framework of cognitive science.", "title": "" }, { "docid": "45cbfbe0a0bcf70910a6d6486fb858f0", "text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. 
Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.", "title": "" }, { "docid": "0a63a875b57b963372640f8fb527bd5c", "text": "KEMI-TORNIO UNIVERSITY OF APPLIED SCIENCES Degree programme: Business Information Technology Writer: Guo, Shuhang Thesis title: Analysis and evaluation of similarity metrics in collaborative filtering recommender system Pages (of which appendix): 62 (1) Date: May 15, 2014 Thesis instructor: Ryabov, Vladimir This research is focused on the field of recommender systems. The general aims of this thesis are to summary the state-of-the-art in recommendation systems, evaluate the efficiency of the traditional similarity metrics with varies of data sets, and propose an ideology to model new similarity metrics. The literatures on recommender systems were studied for summarizing the current development in this filed. The implementation of the recommendation and evaluation was achieved by Apache Mahout which provides an open source platform of recommender engine. By importing data information into the project, a customized recommender engine was built. Since the recommending results of collaborative filtering recommender significantly rely on the choice of similarity metrics and the types of the data, several traditional similarity metrics provided in Apache Mahout were examined by the evaluator offered in the project with five data sets collected by some academy groups. From the evaluation, I found out that the best performance of each similarity metric was achieved by optimizing the adjustable parameters. The features of each similarity metric were obtained and analyzed with practical data sets. In addition, an ideology by combining two traditional metrics was proposed in the thesis and it was proven applicable and efficient by the metrics combination of Pearson correlation and Euclidean distance. The observation and evaluation of traditional similarity metrics with practical data is helpful to understand their features and suitability, from which new models can be created. Besides, the ideology proposed for modeling new similarity metrics can be found useful both theoretically and practically.", "title": "" }, { "docid": "453e4343653f2d84bc4b5077d9556de1", "text": "Device-to-Device (D2D) communication is the technology enabling user equipments (UEs) to directly communicate with each other without help of evolved nodeB (eNB). Due to this characteristic, D2D communication can reduce end-to-end delay and traffic load offered to eNB. 
However, by applying D2D communication into cellular systems, interference between D2D and eNB relaying UEs can occur if D2D UEs reuse frequency band for eNB relaying UEs. In cellular systems, fractional frequency reuse (FFR) is used to reduce inter-cell interference of cell outer UEs. In this paper, we propose a radio resource allocation scheme for D2D communication underlaying cellular networks using FFR. In the proposed scheme, D2D and cellular UEs use the different frequency bands chosen as users' locations. The proposed radio resource allocation scheme can alleviate interference between D2D and cellular UEs if D2D device is located in cell inner region. If D2D UEs is located in cell outer region, D2D and cellular UEs experience tolerable interference. By simulations, we show that the proposed scheme improves the performance of D2D and cellular UEs by reducing interference between them.", "title": "" }, { "docid": "21f56bb6edbef3448275a0925bd54b3a", "text": "Dr. Stephanie L. Cincotta (Psychiatry): A 35-year-old woman was seen in the emergency department of this hospital because of a pruritic rash. The patient had a history of hepatitis C virus (HCV) infection, acne, depression, and drug dependency. She had been in her usual health until 2 weeks before this presentation, when insomnia developed, which she attributed to her loss of a prescription for zolpidem. During the 10 days before this presentation, she reported seeing white “granular balls,” which she thought were mites or larvae, emerging from and crawling on her skin, sheets, and clothing and in her feces, apartment, and car, as well as having an associated pruritic rash. She was seen by her physician, who referred her to a dermatologist for consideration of other possible causes of the persistent rash, such as porphyria cutanea tarda, which is associated with HCV infection. Three days before this presentation, the patient ran out of clonazepam (after an undefined period during which she reportedly took more than the prescribed dose) and had increasing anxiety and insomnia. The same day, she reported seeing “bugs” on her 15-month-old son that were emerging from his scalp and were present on his skin and in his diaper and sputum. The patient scratched her skin and her child’s skin to remove the offending agents. The day before this presentation, she called emergency medical services and she and her child were transported by ambulance to the emergency department of another hospital. A diagnosis of possible cheyletiellosis was made. She was advised to use selenium sulfide shampoo and to follow up with her physician; the patient returned home with her child. On the morning of admission, while bathing her child, she noted that his scalp was turning red and he was crying. She came with her son to the emergency department of this hospital. The patient reported the presence of bugs on her skin, which she attempted to point out to examiners. She acknowledged a habit of picking at her skin since adolescence, which she said had a calming effect. Fourteen months earlier, shortly after the birth of her son, worsening acne developed that did not respond to treatment with topical antimicrobial agents and tretinoin. Four months later, a facial abscess due From the Departments of Psychiatry (S.R.B., N.K.) and Dermatology (D.K.), Massachusetts General Hospital, and the Departments of Psychiatry (S.R.B., N.K.) 
and Dermatology (D.K.), Harvard Medi‐ cal School — both in Boston.", "title": "" }, { "docid": "ec300259d5bcdcf3373d05ddcd8f99ae", "text": "This research focuses on the flapping wing mechanism design for the micro air vehicle model. The paper starts with analysis the topological structure characteristics of Single-Crank Double-Rocker mechanism. Following the design procedure, all of the possible combinations of flapping mechanism which contains not more than 6 components were generated. The design procedure is based on Hong-Sen Yan's creative design theory for mechanical devices. This research designed 31 different types of mechanisms, which provide more directions for the design and fabrication of the micro air vehicle model.", "title": "" }, { "docid": "03dc23b2556e21af9424500e267612bb", "text": "File fragment classification is an important and difficult problem in digital forensics. Previous works in this area mainly relied on specific byte sequences in file headers and footers, or statistical analysis and machine learning algorithms on data from the middle of the file. This paper introduces a new approach to classify file fragment based on grayscale image. The proposed method treats a file fragment as a grayscale image, and uses image classification method to classify file fragment. Furthermore, two models based on file-unbiased and type-unbiased are proposed to verify the validity of the proposed method. Compared with previous works, the experimental results are promising. An average classification accuracy of 39.7% in file-unbiased model and 54.7% in type-unbiased model are achieved on 29 file types.", "title": "" }, { "docid": "a44e95fe672a4468b42fe881cd1697fd", "text": "In this paper, we present a maximum power point tracker and estimator for a PV system to estimate the point of maximum power, to track this point and force it to reach this point in finite time and to stay there for all future time in order to provide the maximum power available to the load. The load will be composed of a battery bank. This is obtained by controlling the duty cycle of a DC-DC converter using sliding mode control. The sliding mode controller is given the estimated maximum power point as a reference for it to track that point and force the PV system to operate in this point. This method has the advantage that it will guarantee the maximum output power possible by the array configuration while considering the dynamic parameters temperature and solar irradiance and delivering more power to charge the battery. The procedure of designing, simulating and results are presented in this paper.", "title": "" }, { "docid": "4c7624e4d1674a753fb54d2a826c3666", "text": "We tackle the question: how much supervision is needed to achieve state-of-the-art performance in part-of-speech (POS) tagging, if we leverage lexical representations given by the model of Brown et al. (1992)? It has become a standard practice to use automatically induced “Brown clusters” in place of POS tags. We claim that the underlying sequence model for these clusters is particularly well-suited for capturing POS tags. We empirically demonstrate this claim by drastically reducing supervision in POS tagging with these representations. Using either the bit-string form given by the algorithm of Brown et al. (1992) or the (less well-known) embedding form given by the canonical correlation analysis algorithm of Stratos et al. 
(2014), we can obtain 93% tagging accuracy with just 400 labeled words and achieve state-of-the-art accuracy (> 97%) with less than 1 percent of the original training data.", "title": "" }, { "docid": "a0b9017cfcdbcfd94c08ddb5e6526af4", "text": "Search and recommendation systems must include contextual information to effectively model users' interests. In this paper, we present a systematic study of the effectiveness of five variant sources of contextual information for user interest modeling. Post-query navigation and general browsing behaviors far outweigh direct search engine interaction as an information-gathering activity. Therefore we conducted this study with a focus on Website recommendations rather than search results. The five contextual information sources used are: social, historic, task, collection, and user interaction. We evaluate the utility of these sources, and overlaps between them, based on how effectively they predict users' future interests. Our findings demonstrate that the sources perform differently depending on the duration of the time window used for future prediction, and that context overlap outperforms any isolated source. Designers of Website suggestion systems can use our findings to provide improved support for post-query navigation and general browsing behaviors.", "title": "" }, { "docid": "cb1308814af219072bdcb66629149317", "text": "Automatic detection of persuasion is essential for machine interaction on the social web. To facilitate automated persuasion detection, we present a novel microtext corpus derived from hostage negotiation transcripts as well as a detailed manual (codebook) for persuasion annotation. Our corpus, called the NPS Persuasion Corpus, consists of 37 transcripts from four sets of hostage negotiation transcriptions. Each utterance in the corpus is hand annotated for one of nine categories of persuasion based on Cialdini’s model: reciprocity, commitment, consistency, liking, authority, social proof, scarcity, other, and not persuasive. Initial results using three supervised learning algorithms (Naı̈ve Bayes, Maximum Entropy, and Support Vector Machines) combined with gappy and orthogonal sparse bigram feature expansion techniques show that the annotation process did capture machine learnable features of persuasion with F-scores better than baseline.", "title": "" }, { "docid": "6afdf8c4f509de6481bf4cf8d28c77a4", "text": "We propose a Learning from Demonstration (LfD) algorithm which leverages expert data, even if they are very few or inaccurate. We achieve this by using both expert data, as well as reinforcement signals gathered through trial-and-error interactions with the environment. The key idea of our approach, Approximate Policy Iteration with Demonstration (APID), is that expert’s suggestions are used to define linear constraints which guide the optimization performed by Approximate Policy Iteration. We prove an upper bound on the Bellman error of the estimate computed by APID at each iteration. Moreover, we show empirically that APID outperforms pure Approximate Policy Iteration, a state-of-the-art LfD algorithm, and supervised learning in a variety of scenarios, including when very few and/or suboptimal demonstrations are available. 
Our experiments include simulations as well as a real robot path-finding task.", "title": "" }, { "docid": "568a8c1d8c494c1eee807f0ea30b8531", "text": "Patent data represent a significant source of information on innovation, knowledge production, and the evolution of technology through networks of citations, co-invention and co-assignment. A major obstacle to extracting useful information from this data is the problem of name disambiguation: linking alternate spellings of individuals or institutions to a single identifier to uniquely determine the parties involved in knowledge production and diffusion. In this paper, we describe a new algorithm that uses high-resolution geolocation to disambiguate both inventors and assignees on about 8.5 million patents found in the European Patent Office (EPO), under the Patent Cooperation Treaty (PCT), and in the US Patent and Trademark Office (USPTO). We show this disambiguation is consistent with a number of ground-truth benchmarks of both assignees and inventors, significantly outperforming the use of undisambiguated names to identify unique entities. A significant benefit of this work is the high quality assignee disambiguation with coverage across the world coupled with an inventor disambiguation (that is competitive with other state of the art approaches) in multiple patent offices.", "title": "" }, { "docid": "d4896aa12be18aea9a6639422ee12d92", "text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.", "title": "" } ]
scidocsrr
e9857839940c2aa22f47d6b14c76193c
An innovative neural network approach for stock market prediction
[ { "docid": "4d0bfb1eead0886e4196d61cf698aac5", "text": "We use machine learning for designing a medium frequency trading strategy for a portfolio of 5 year and 10 year US Treasury note futures. We formulate this as a classification problem where we predict the weekly direction of movement of the portfolio using features extracted from a deep belief network trained on technical indicators of the portfolio constituents. The experimentation shows that the resulting pipeline is effective in making a profitable trade.", "title": "" } ]
[ { "docid": "88bdaa1ee78dd24f562e632cdb5ed396", "text": "We present a novel paraphrase fragment pair extraction method that uses a monolingual comparable corpus containing different articles about the same topics or events. The procedure consists of document pair extraction, sentence pair extraction, and fragment pair extraction. At each stage, we evaluate the intermediate results manually, and tune the later stages accordingly. With this minimally supervised approach, we achieve 62% of accuracy on the paraphrase fragment pairs we collected and 67% extracted from the MSR corpus. The results look promising, given the minimal supervision of the approach, which can be further scaled up.", "title": "" }, { "docid": "0c81db10ea2268b640073e3aaa49cb35", "text": "A data structure called a PQ-tree is introduced. PQ-trees can be used to represent the permutations of a set U in which various subsets of U occur consecutively. Efficient algorithms are presented for manipulating PQ-trees. Algorithms using PQ-trecs are then given which test for the consecutive ones property in matrices and for graph planarity. The consecutive ones test is extended to a test for interval graphs using a recently discovered fast recognition algorithm for chordal graphs. All of these algorithms require a number of steps linear in the size of their input.", "title": "" }, { "docid": "b0a1a782ce2cbf5f152a52537a1db63d", "text": "In piezoelectric energy harvesting (PEH), with the use of the nonlinear technique named synchronized switching harvesting on inductor (SSHI), the harvesting efficiency can be greatly enhanced. Furthermore, the introduction of its self-powered feature makes this technique more applicable for standalone systems. In this article, a modified circuitry and an improved analysis for self-powered SSHI are proposed. With the modified circuitry, direct peak detection and better isolation among different units within the circuit can be achieved, both of which result in further removal on dissipative components. In the improved analysis, details in open circuit voltage, switching phase lag, and voltage inversion factor are discussed, all of which lead to a better understanding to the working principle of the self-powered SSHI. Both analyses and experiments show that, in terms of harvesting power, the higher the excitation level, the closer between self-powered and ideal SSHI; at the same time, the more beneficial the adoption of self-powered SSHI treatment in piezoelectric energy harvesting, compared to the standard energy harvesting (SEH) technique.", "title": "" }, { "docid": "86f82b9006c4e34192b79a03e71dde87", "text": "Erectile dysfunction (ED) is defined as the consistent inability to obtain or maintain an erection for satisfactory sexual relations. An estimated 20-30 million men suffer from some degree of sexual dysfunction. The past 20 years of research on erectile physiology have increased our understanding of the biochemical factors and intracellular mechanisms responsible for corpus cavernosal smooth muscle contraction and relaxation, and revealed that ED is predominantly a disease of vascular origin. Since the advent of sildenafil (Viagra), there has been a resurgence of interest in ED, and an increase in patients presenting with this disease. 
A thorough knowledge of the physiology of erection is essential for future pharmacological innovations in the field of male ED.", "title": "" }, { "docid": "fa0883f4adf79c65a6c13c992ae08b3f", "text": "Being able to keep the graph scale small while capturing the properties of the original social graph, graph sampling provides an efficient, yet inexpensive solution for social network analysis. The challenge is how to create a small, but representative sample out of the massive social graph with millions or even billions of nodes. Several sampling algorithms have been proposed in previous studies, but there lacks fair evaluation and comparison among them. In this paper, we analyze the state-of art graph sampling algorithms and evaluate their performance on some widely recognized graph properties on directed graphs using large-scale social network datasets. We evaluate not only the commonly used node degree distribution, but also clustering coefficient, which quantifies how well connected are the neighbors of a node in a graph. Through the comparison we have found that none of the algorithms is able to obtain satisfied sampling results in both of these properties, and the performance of each algorithm differs much in different kinds of datasets.", "title": "" }, { "docid": "3875e92e378dc416d53ca71cc2263437", "text": "One of the major challenges in imaging neuroscience is the integration of cognitive science with the empiricism of neurophysiology. The cognitive architectures and principles offered by cognitive science have been essential in shaping experimental design and image analysis strategies from the outset. Now some of the cognitive models and their assumptions (for example, cognitive subtraction) are being re-evaluated in the light of how the brain actually implements putative components and processes. In this review we will consider experimental designs that go beyond cognitive subtraction and also consider how functional imaging can be used to assess the context-sensitivity of cognitive processing (using conjunction analyses), and the integration of different processes (in terms of interactions, using factorial designs) and how both these themes can be developed in the context of parametric designs. These new approaches reflect an ongoing discourse between cognitive science and the emerging principles of functional anatomy.", "title": "" }, { "docid": "c625221e79bdc508c7c772f5be0458a1", "text": "Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning contextbased word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. 
Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).", "title": "" }, { "docid": "37ecc24d1bfd5109f511d184028f5061", "text": "Legged robots have the potential to serve as versatile and useful autonomous robotic platforms for use in unstructured environments such as disaster sites. They need to be both capable of fast dynamic locomotion and precise movements. However, there is a lack of platforms with suitable mechanical properties and adequate controllers to advance the research in this direction. In this paper we are presenting results on the novel research platform HyQ, a torque controlled hydraulic quadruped robot. We identify the requirements for versatile robotic legged locomotion and show that HyQ is fulfilling most of these specifications. We show that HyQ is able to do both static and dynamic movements and is able to cope with the mechanical requirements of dynamic movements and locomotion, such as jumping and trotting. The required control, both on hydraulic level (force/torque control) and whole body level (rigid model based control) is discussed.", "title": "" }, { "docid": "75f8f0d89bdb5067910a92553275b0d7", "text": "It is well known that recognition performance degrades significantly when moving from a speaker-dependent to a speaker-independent system. Traditional hidden Markov model (HMM) systems have successfully applied speaker-adaptation approaches to reduce this degradation. In this paper we present and evaluate some techniques for speaker-adaptation of a hybrid HMM-artificial neural network (ANN) continuous speech recognition system. These techniques are applied to a well trained, speaker-independent, hybrid HMM-ANN system and the recognizer parameters are adapted to a new speaker through off-line procedures. The techniques are evaluated on the DARPA RM corpus using varying amounts of adaptation material and different ANN architectures. The results show that speaker-adaptation within the hybrid framework can substantially improve system performance.", "title": "" }, { "docid": "67e89b0b436df121a3b037a4c2fcbd47", "text": "This paper reviews the theoretical and empirical literature on the channels through which blockholders (large shareholders) engage in corporate governance. In classical models, blockholders exert governance through direct intervention in a firm’s operations, otherwise known as “voice.” These theories have motivated empirical research on the determinants and consequences of activism. More recent models show that blockholders can govern through an alternative mechanism known as “exit”—selling their shares if the manager underperforms. These theories give rise to new empirical studies on the two-way relationship between blockholders and financial markets, linking corporate finance with asset pricing. Blockholders may also worsen governance by extracting private benefits of control or pursuing objectives other than firm value maximization. I highlight the empirical challenges in identifying causal effects of and on blockholders as well as the typical strategies attempted to achieve identification. I close with directions for future research.
", "title": "" }, { "docid": "9c5d3f89d5207b42d7e2c8803b29994c", "text": "With the advent of data mining, machine learning has come of age and is now a critical technology in many businesses. However, machine learning evolved in a different research context to that in which it now finds itself employed. A particularly important problem in the data mining world is working effectively with large data sets. However, most machine learning research has been conducted in the context of learning from very small data sets. To date most approaches to scaling up machine learning to large data sets have attempted to modify existing algorithms to deal with large data sets in a more computationally efficient and effective manner. But is this necessarily the best method? This paper explores the possibility of designing algorithms specifically for large data sets. Specifically, the paper looks at how increasing data set size affects bias and variance error decompositions for classification algorithms. Preliminary results of experiments to determine these effects are presented, showing that, as hypothesised variance can be expected to decrease as training set size increases. No clear effect of training set size on bias was observed. These results have profound implications for data mining from large data sets, indicating that developing effective learning algorithms for large data sets is not simply a matter of finding computationally efficient variants of existing learning algorithms.", "title": "" }, { "docid": "acf514a4aa34487121cc853e55ceaed4", "text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not “in the air.” In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.", "title": "" }, { "docid": "a2101b56ecc738dc43c853fedbfe1af5", "text": "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation.
TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "title": "" }, { "docid": "71c562dfa8cc3967e9d7ca347225c631", "text": "The nasal valve has long been described as the anatomical boundary most likely to inhibit nasal airflow and lead to subsequent nasal obstruction. Although many procedures can address this area to improve the nasal airway, for over 20 years, suture lateralization of the external nasal valve has been described as a minimally invasive technique that can improve nasal breathing. We report our modification of the standard technique in which we lateralize the placement of the bone-anchored suture and incorporate Gore-Tex within the nasal vestibular incision to prevent tissue migration.", "title": "" }, { "docid": "f9d2ccdbbc2dd5a0ea5635c53a6b1e50", "text": "OBJECTIVES\nThe article provides an overview of current trends in personal sensor, signal and imaging informatics, that are based on emerging mobile computing and communications technologies enclosed in a smartphone and enabling the provision of personal, pervasive health informatics services.\n\n\nMETHODS\nThe article reviews examples of these trends from the PubMed and Google scholar literature search engines, which, by no means claim to be complete, as the field is evolving and some recent advances may not be documented yet.\n\n\nRESULTS\nThere exist critical technological advances in the surveyed smartphone technologies, employed in provision and improvement of diagnosis, acute and chronic treatment and rehabilitation health services, as well as in education and training of healthcare practitioners. However, the most emerging trend relates to a routine application of these technologies in a prevention/wellness sector, helping its users in self-care to stay healthy.\n\n\nCONCLUSIONS\nSmartphone-based personal health informatics services exist, but still have a long way to go to become an everyday, personalized healthcare-provisioning tool in the medical field and in a clinical practice. 
Key main challenge for their widespread adoption involve lack of user acceptance striving from variable credibility and reliability of applications and solutions as they a) lack evidence- based approach; b) have low levels of medical professional involvement in their design and content; c) are provided in an unreliable way, influencing negatively its usability; and, in some cases, d) being industry-driven, hence exposing bias in information provided, for example towards particular types of treatment or intervention procedures.", "title": "" }, { "docid": "3fb85f6f093b4a47dafd830c4b99f4e3", "text": "New applications of evolutionary biology are transforming our understanding of cancer. The articles in this special issue provide many specific examples, such as microorganisms inducing cancers, the significance of within-tumor heterogeneity, and the possibility that lower dose chemotherapy may sometimes promote longer survival. Underlying these specific advances is a large-scale transformation, as cancer research incorporates evolutionary methods into its toolkit, and asks new evolutionary questions about why we are vulnerable to cancer. Evolution explains why cancer exists at all, how neoplasms grow, why cancer is remarkably rare, and why it occurs despite powerful cancer suppression mechanisms. Cancer exists because of somatic selection; mutations in somatic cells result in some dividing faster than others, in some cases generating neoplasms. Neoplasms grow, or do not, in complex cellular ecosystems. Cancer is relatively rare because of natural selection; our genomes were derived disproportionally from individuals with effective mechanisms for suppressing cancer. Cancer occurs nonetheless for the same six evolutionary reasons that explain why we remain vulnerable to other diseases. These four principles-cancers evolve by somatic selection, neoplasms grow in complex ecosystems, natural selection has shaped powerful cancer defenses, and the limitations of those defenses have evolutionary explanations-provide a foundation for understanding, preventing, and treating cancer.", "title": "" }, { "docid": "38a7f57900474553f6979131e7f39e5d", "text": "A cascade switched-capacitor ΔΣ analog-to-digital converter, suitable for WLANs, is presented. It uses a double-sampling scheme with single set of DAC capacitors, and an improved low-distortion architecture with an embedded-adder integrator. The proposed architecture eliminates one active stage, and reduces the output swings in the loop-filter and hence the non-linearity. It was fabricated with a 0.18um CMOS process. The prototype chip achieves 75.5 dB DR, 74 dB SNR, 73.8 dB SNDR, −88.1 dB THD, and 90.2 dB SFDR over a 10 MHz signal band with an FoM of 0.27 pJ/conv-step.", "title": "" }, { "docid": "4b156066e72d0e8bf220c3e13738d91c", "text": "We present an unsupervised approach for abnormal event detection in videos. We propose, given a dictionary of features learned from local spatiotemporal cuboids using the sparse coding objective, the abnormality of an event depends jointly on two factors: the frequency of each feature in reconstructing all events (or, rarity of a feature) and the strength by which it is used in reconstructing the current event (or, the absolute coefficient). The Incremental Coding Length (ICL) of a feature is a measure of its entropy gain. Given a dictionary, the ICL computation does not involve any parameter, is computationally efficient and has been used for saliency detection in images with impressive results. 
In this paper, the rarity of a dictionary feature is learned online as its average energy, a function of its ICL. The proposed approach is applicable to real world streaming videos. Experiments on three benchmark datasets and evaluations in comparison with a number of mainstream algorithms show that the approach is comparable to the state-of-the-art.", "title": "" }, { "docid": "6f0fc401c11d7ee3faf2f265eb4b2baf", "text": "The inverted peno-scrotal flap is considered the standard technique for vaginoplasty in male-to-female transsexuals. Nowadays, great importance is also given by patients to the reconstruction of the clitoro-labial complex; this is also reconstructed with tissue coming from glans penis, penile skin envelop and scrotal skin. Since the first sex reassignment surgery for biological males performed in Thailand in 1975, Dr Preecha and his team developed the surgical technique for vaginoplasty; many refinements have been introduced during the past 40 years, with nearly 3000 patients operated on. The scope of this paper is to present the surgical technique currently in use for vaginoplasty and clitoro-labioplasty and the refinements introduced at the Chulalongkorn University and at the Preecha Aesthetic Institute, Bangkok, Thailand. These refinements consist of cavity dissection with blunt technique, the use of skin graft in addition to the penile flap, shaping of the clitoris complex from penis glans and clitoral hood, and the use of the urethral mucosa to line the anterior fourchette of the neo-vagina. With the refinements introduced, it has been possible to achieve a result that is very close to the biological female genitalia.", "title": "" } ]
scidocsrr
89dd999ef389f743daff69910cfa9bbd
PeriodShare: A Bloody Design Fiction
[ { "docid": "ff572d9c74252a70a48d4ba377f941ae", "text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.", "title": "" } ]
[ { "docid": "21b04c71f6c87b18f544f6b3f6570dd7", "text": "Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience in developing expert systems.<<ETX>>", "title": "" }, { "docid": "45c04c80a5e4c852c4e84ba66bd420dd", "text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.", "title": "" }, { "docid": "59ee62f5e0fc37156c5c1a5febc046ba", "text": "The paper presents a method to estimate the detailed 3D body shape of a person even if heavy or loose clothing is worn. The approach is based on a space of human shapes, learned from a large database of registered body scans. Together with this database we use as input a 3D scan or model of the person wearing clothes and apply a fitting method, based on ICP (iterated closest point) registration and Laplacian mesh deformation. The statistical model of human body shapes enforces that the model stays within the space of human shapes. The method therefore allows us to compute the most likely shape and pose of the subject, even if it is heavily occluded or body parts are not visible. Several experiments demonstrate the applicability and accuracy of our approach to recover occluded or missing body parts from 3D laser scans.", "title": "" }, { "docid": "9a68b128c88d6d64ccb46861bdc999d5", "text": "This paper investigates how far a very deep neural network is from attaining close to saturating performance on existing 2D and 3D face alignment datasets. 
To this end, we make the following 5 contributions: (a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets. (b) We create a guided by 2D landmarks network which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date (~230,000 images). (c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W. (d) We further look into the effect of all “traditional” factors affecting face alignment performance like large pose, initialization and resolution, and introduce a “new” one, namely the size of the network. (e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used. Training and testing code as well as the dataset can be downloaded from https://www.adrianbulat.com/face-alignment/", "title": "" }, { "docid": "c7808ecbca4c5bf8e8093dce4d8f1ea7", "text": "This project deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a Robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, a ZigBee module. The robot module will be designed with the help of CAD tool. The control system consists of Atmega16 micro controller and Atmel studio IDE. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to have grip of the pipe walls. Unique features of this robot are the caterpillar wheel, the four-bar mechanism supports the well grip of wall, a simple and easy user interface.", "title": "" }, { "docid": "f4cb0eb6d39c57779cf9aa7b13abef14", "text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.", "title": "" }, { "docid": "380dc2289f621b06f0085a1d8e178638", "text": "Feature modeling is an important approach to capture the commonalities and variabilities in system families and product lines.
Cardinality-based feature modeling integrates a number of existing extensions of the original feature-modeling notation from Feature-Oriented Domain Analysis. Staged configuration is a process that allows the incremental configuration of cardinality-based feature models. It can be achieved by performing a step-wise specialization of the feature model. In this paper, we argue that cardinality-based feature models can be interpreted as a special class of context-free grammars. We make this precise by specifying a translation from a feature model into a context-free grammar. Consequently, we provide a semantic interpretation for cardinality-based feature models by assigning an appropriate semantics to the language recognized by the corresponding grammar. Finally, we give an account on how feature model specialization can be formalized as transformations on the grammar equivalent of feature models.", "title": "" }, { "docid": "e2d2fe124fbef2138d2c67a02da220c6", "text": "This paper addresses robust fault diagnosis of the chaser’s thrusters used for the rendezvous phase of the Mars Sample Return (MSR) mission. The MSR mission is a future exploration mission undertaken jointly by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The goal is to return tangible samples from Mars atmosphere and ground to Earth for analysis. A residual-based scheme is proposed that is robust against the presence of unknown time-varying delays induced by the thruster modulator unit. The proposed fault diagnosis design is based on Eigenstructure Assignment (EA) and first-order Padé approximation. The resulting method is able to detect quickly any kind of thruster faults and to isolate them using a cross-correlation based test. Simulation results from the MSR “high-fidelity” industrial simulator, provided by Thales Alenia Space, demonstrate that the proposed method is able to detect and isolate some thruster faults in a reasonable time, despite delays in the thruster modulator unit, inaccurate navigation unit, and spatial disturbances (i.e. J2 gravitational perturbation, atmospheric drag, and solar radiation pressure).", "title": "" }, { "docid": "5f0e1c63d60a4bdd8af5994b25b6654d", "text": "The machine representation of floating point values has limited precision such that errors may be introduced during execution. These errors may get propagated and magnified by the following operations, leading to instability problems, e.g., control flow path may be undesirably altered and faulty output may be emitted. In this paper, we develop an on-the-fly efficient monitoring technique that can predict if an execution is stable. The technique does not explicitly compute errors as doing so incurs high overhead.
Instead, it detects possible places where an error becomes substantially inflated regarding the corresponding value, and then tags the value with one bit to denote that it has an inflated error. It then tracks inflation bit propagation, taking care of operations that may cut off such propagation. It reports instability if any inflation bit reaches a critical execution point, such as a predicate, where the inflated error may induce substantial execution difference, such as different execution paths. Our experiment shows that with appropriate thresholds, the technique can correctly detect that over 99.999996% of the inputs of all the programs we studied are stable while a traditional technique relying solely on inflation detection mistakenly classifies majority of the inputs as unstable for some of the programs. Compared to the state of the art technique that is based on high precision computation and causes several hundred times slowdown, our technique only causes 7.91 times slowdown on average and can report all the true unstable executions with the appropriate thresholds.", "title": "" }, { "docid": "f0a5d33084588ed4b7fc4905995f91e2", "text": "A new microstrip dual-band polarization reconfigurable antenna is presented for wireless local area network (WLAN) systems operating at 2.4 and 5.8 GHz. The antenna consists of a square microstrip patch that is aperture coupled to a microstrip line located along the diagonal line of the patch. The dual-band operation is realized by employing the TM10 and TM30 modes of the patch antenna. Four shorting posts are inserted into the patch to adjust the frequency ratio of the two modes. The center of each edge of the patch is connected to ground via a PIN diode for polarization switching. By switching between the different states of PIN diodes, the proposed antenna can radiate either horizontal, vertical, or 45° linear polarization in the two frequency bands. Measured results on reflection coefficients and radiation patterns agree well with numerical simulations.", "title": "" }, { "docid": "9fa8ba9da6f6303278d479666916bd13", "text": "UART (Universal Asynchronous Receiver Transmitter) is used for serial communication. It is used for long distance and low cost process for transfer of data between pc and its devices. In general a UART operated with specific baud rate. To meet the complex communication demands it is not sufficient. To overcome this difficulty a multi channel UART is proposed in this paper. And the whole design is simulated with modelsim and synthesized with Xilinx software", "title": "" }, { "docid": "6f942f8ead4684f4943d1c82ea140b9a", "text": "This paper considers the problem of approximate nearest neighbor search in the compressed domain. We introduce polysemous codes, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance. Their design is inspired by algorithms introduced in the 90’s to construct channel-optimized vector quantizers. At search time, this dual interpretation accelerates the search. Most of the indexed vectors are filtered out with Hamming distance, letting only a fraction of the vectors to be ranked with an asymmetric distance estimator. The method is complementary with a coarse partitioning of the feature space such as the inverted multi-index. 
This is shown by our experiments performed on several public benchmarks such as the BIGANN dataset comprising one billion vectors, for which we report state-of-the-art results for query times below 0.3 millisecond per core. Last but not least, our approach allows the approximate computation of the k-NN graph associated with the Yahoo Flickr Creative Commons 100M, described by CNN image descriptors, in less than 8 hours on a single machine.", "title": "" }, { "docid": "faa70d7d0bb9097abae6e93f23c42efe", "text": "Abstract: Digital filters with a linear phase characteristic are required in many signal processing applications. This paper describes the design of Chebyshev-type IIR filters with an approximately linear phase characteristic in the passband. First, we show that a flat stopband is easily realized by placing multiple zeros at specified frequency points in the stopband. Next, the complex Remez algorithm is applied to the passband and the filter design problem is formulated as an eigenvalue problem, so that the filter coefficients are easily obtained by solving the eigenvalue problem. Furthermore, an iterative computation yields an equiripple characteristic of the error function in the passband. Finally, we show that by connecting the proposed Chebyshev-type filter and a delay element in parallel, an inverse-Chebyshev-type IIR filter with an approximately linear phase characteristic is obtained at the same time. Keywords: IIR digital filter, Chebyshev-type filter, approximately linear phase characteristic, eigenvalue problem, complex Remez algorithm", "title": "" }, { "docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1", "text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in PHP applications. STRANGER uses symbolic forward and backward reachability analyses to compute the possible values that the string expressions can take during program execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that characterize all malicious inputs that can be used to generate attacks.", "title": "" }, { "docid": "6e22b591075d1344ae34716854d96272", "text": "This paper demonstrates a new structure of dual band microstrip bandpass filter (BPF) by cascading an interdigital structure (IDS) and a hairpin line structure. The use of IDS improves the quality factor of the proposed filter. The size of the filter is very small and it is very compact and simple to design. To reduce size of the proposed filter there is no use of via or defected ground structure which makes its fabrication easier and cost effective. The first band of filter covers 2.4GHz, 2.5GHz and 3.5GHz and second band covers 5.8GHz of WLAN/WiMAX standards with good insertion loss. The proposed filter is designed on FR4 with dielectric constant of 4.4 and of thickness 1.6mm. Performance of proposed filter is compared with previously reported filters and found better with reduced size.", "title": "" }, { "docid": "a5880537982831e8028ff611daf71ff4", "text": "An adaptive medium access control (MAC) retransmission limit selection scheme is proposed to improve the performance of IEEE 802.11p standard MAC protocol for video streaming applications over vehicular ad-hoc networks (VANETs). A multi-objective optimization framework, which jointly minimizes the probability of playback freezes and start-up delay of the streamed video at the destination vehicle by tuning the MAC retransmission limit with respect to channel statistics as well as packet transmission rate, is applied at road side unit (RSU). Periodic channel state estimation is performed at the RSU using the information derived from the received signal strength (RSS) and Doppler shift effect. Estimates of access probability between the RSU and the destination vehicle is incorporated in the design of the adaptive MAC scheme. The adaptation parameters are embedded in the user datagram protocol (UDP) packet header. Two-hop transmission is applied in zones in which the destination vehicle is not within the transmission range of any RSU.
For multi-hop scenario, we discuss two-hop joint MAC retransmission adaptation and path selection. Compared with the non-adaptive IEEE 802.11p standard MAC, numerical results show that the proposed adaptive MAC protocol exhibits significantly fewer playback freezes while introduces only a slight increase in start-up delay.", "title": "" }, { "docid": "db98068f4c69b2389c9ff1bc0ade4e6f", "text": "We infiltrate the ASIC development chain by inserting a small denial-of-service (DoS) hardware Trojan at the fabrication design phase into an existing VLSI circuit, thereby simulating an adversary at a semiconductor foundry. Both the genuine and the altered ASICs have been fabricated using a 180 nm CMOS process. The Trojan circuit adds an overhead of only 0.5% to the original design. In order to detect the hardware Trojan, we perform side-channel analyses and apply IC-fingerprinting techniques using templates, principal component analysis (PCA), and support vector machines (SVMs). As a result, we were able to successfully identify and classify all infected ASICs from non-infected ones. To the best of our knowledge, this is the first hardware Trojan manufactured as an ASIC and has successfully been analyzed using side channels.", "title": "" }, { "docid": "0433b6406358479e45dfece9ca6633b7", "text": "The gamification is growing in e-business and the banks are looking for new ways to get more customers on their websites. Therefore, it is important to study what are the most appreciated features of the website that could influence the behaviour of the customer to use an electronic banking system with game features. The gamified e-banking suggests that rich elements/features associated with the games could influence other variables and therefore increasing the client loyalty, to spend more time and increasing the transactions on the website. The aim of this study is to look into the influence of gamification in the e-banking system. Based on the research of 180 publications and 210 variables that could influence the intention to use a certain technology this study develops a theoretical model representing the gamification influence on ease of use, information, web pages characteristics, web design and on the intention to use an e-banking with game features. The results from an online survey of 219 e-banking customers show that the gamification had a positive impact on all variables; special has a medium positive influence in web design and information and a large positive influence on customer intentions to use. Further analysis shows that the website ease of use plays has also a medium positive influence on the intention to use an e-banking gamified. Our findings also show that the clients give more importance to an attractive graphical and architecture website design, and less to web pages with so much information or having pleasure in using an e-banking system.", "title": "" }, { "docid": "58d66911afe35370309ae0bd6ee71045", "text": "The face inversion effect (FIE) is defined as the larger decrease in recognition performance for faces than for other mono-oriented objects when they are presented upside down. Behavioral studies suggest the FIE takes place at the perceptual encoding stage and is mainly due to the decrease in ability to extract relational information when discriminating individual faces. 
Recently, functional magnetic resonance imaging and scalp event-related potentials studies found that turning faces upside down slightly but significantly decreases the response of face-selective brain regions, including the so-called fusiform face area (FFA), and increases activity of other areas selective for nonface objects. Face inversion leads to a significantly delayed (sometimes larger) N170 component, an occipito-temporal scalp potential associated with the perceptual encoding of faces and objects. These modulations are in agreement with the perceptual locus of the FIE and reinforce the view that the FFA and N170 are sensitive to individual face discrimination.", "title": "" }, { "docid": "aed97de827b675d3ddb3e04274f73428", "text": "In paid search advertising on Internet search engines, advertisers bid for specific keywords, e.g. “Rental Cars LAX,” to display a text ad in the sponsored section of the search results page. The advertiser is charged when a user clicks on the ad. Many of the keywords in paid search campaigns generate few, if any, sales conversions – even over several months. This sparseness makes it difficult to assess the profit performance of individual keywords and has led to the practice of managing large groups of keywords together or relying on easy-to-calculate heuristics such as click-through rate (CTR). The authors develop a model of individual keyword conversion that addresses the sparseness problem. Conversion rates are estimated using a hierarchical Bayes binary choice model. This enables conversion to be based on both word-level covariates and shrinkage across keywords. The model is applied to keyword-level paid search data containing daily information on impressions, clicks and reservations for a major lodging chain. The results show that including keyword-level covariates and heterogeneity significantly improves conversion estimates. A holdout comparison suggests that campaign management based on the model, i.e., estimated costper-sale on a keyword level, would outperform existing managerial strategies.", "title": "" } ]
scidocsrr
e65759666ce045041fc1c7b73feddcb4
The Experts in the Crowd : The Role of Reputable Investors in a Crowdfunding Market
[ { "docid": "c02d207ed8606165e078de53a03bf608", "text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: anand.bodapati@anderson.ucla.edu), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: rbucklin@anderson.ucla.edu), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*", "title": "" } ]
[ { "docid": "0148370d6185069fd3daac4b235ee415", "text": "In this paper, a method is proposed for feature extraction of offline signature recognition system. The proposed method is based on global features to identify forgeries and also median filter is introduces for noise reduction. The Proposed feature extraction method is compared with Discrete Radon Transform (DRT). Both the feature extraction method extracts one dimensional global features and the alignment between features is performed by Dynamic Time Warping (DTW). When being trained using 6 genuine signatures of each person and 250 forgeries taken from our database, the proposed method obtained an equal error rate (EER) of 8. 40%. The false acceptance rate (FAR) for proposed method was also kept as low as 8. 80%.", "title": "" }, { "docid": "4855ade8c90d3f562716ccb5b12c4051", "text": "Two adult samples were surveyed to investigate the relation between individuals' levels of self-monitoring and age. A negative relation was predicted as older individuals were seen as most likely to exhibit the low self-monitoring tendency of behaving in accordance with one's own attitudes and feelings, whereas younger individuals appeared most likely to exhibit the high self-monitoring tendency of behaving according to social cues. A significant negative correlation between age and self-monitoring was found in both samples. The self-monitoring construct is discussed in relation to other social-cognitive life-span differences and to the idea of critical periods throughout the life span.", "title": "" }, { "docid": "21e59ae0ca769d73ff6a6da176b717db", "text": "OBJECTIVE\nTo compile and critique research on the diagnostic accuracy of individual orthopaedic physical examination tests in a manner that would allow clinicians to judge whether these tests are valuable to their practice.\n\n\nMETHODS\nA computer-assisted literature search of MEDLINE, CINAHL, and SPORTDiscus databases (1966 to October 2006) using keywords related to diagnostic accuracy of physical examination tests of the shoulder. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool was used to critique the quality of each paper. Meta-analysis through meta-regression of the diagnostic odds ratio (DOR) was performed on the Neer test for impingement, the Hawkins-Kennedy test for impingement, and the Speed test for superior labral pathology.\n\n\nRESULTS\nForty-five studies were critiqued with only half demonstrating acceptable high quality and only two having adequate sample size. For impingement, the meta-analysis revealed that the pooled sensitivity and specificity for the Neer test was 79% and 53%, respectively, and for the Hawkins-Kennedy test was 79% and 59%, respectively. For superior labral (SLAP) tears, the summary sensitivity and specificity of the Speed test was 32% and 61%, respectively. Regarding orthopaedic special tests (OSTs) where meta-analysis was not possible either due to lack of sufficient studies or heterogeneity between studies, the list that demonstrates both high sensitivity and high specificity is short: hornblowers's sign and the external rotation lag sign for tears of the rotator cuff, biceps load II for superior labral anterior to posterior (SLAP) lesions, and apprehension, relocation and anterior release for anterior instability. Even these tests have been under-studied or are from lower quality studies or both. 
No tests for impingement or acromioclavicular (AC) joint pathology demonstrated significant diagnostic accuracy.\n\n\nCONCLUSION\nBased on pooled data, the diagnostic accuracy of the Neer test for impingement, the Hawkins-Kennedy test for impingement and the Speed test for labral pathology is limited. There is a great need for large, prospective, well-designed studies that examine the diagnostic accuracy of the numerous physical examination tests of the shoulder. Currently, almost without exception, there is a lack of clarity with regard to whether common OSTs used in clinical examination are useful in differentially diagnosing pathologies of the shoulder.", "title": "" }, { "docid": "c1d5f28d264756303fded5faa65587a2", "text": "English vocabulary learning and ubiquitous learning have separately received considerable attention in recent years. However, research on English vocabulary learning in ubiquitous learning contexts has been less studied. In this study, we develop a ubiquitous English vocabulary learning (UEVL) system to assist students in experiencing a systematic vocabulary learning process in which ubiquitous technology is used to develop the system, and video clips are used as the material. Afterward, the technology acceptance model and partial least squares approach are used to explore students’ perspectives on the UEVL system. The results indicate that (1) both the system characteristics and the material characteristics of the UEVL system positively and significantly influence the perspectives of all students on the system; (2) the active students are interested in perceived usefulness; (3) the passive students are interested in perceived ease of use. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "16afaad8bfdc64f9d97e9829f2029bc6", "text": "The combination of limited individual information and costly information acquisition in markets for experience goods leads us to believe that significant peer effects drive demand in these markets. In this paper we model the effects of peers on the demand patterns of products in the market experience goods microfunding. By analyzing data from an online crowdfunding platform from 2006 to 2010 we are able to ascertain that peer effects, and not network externalities, influence consumption.", "title": "" }, { "docid": "b811fcd9bf9a1728dc0c8eb112d01e99", "text": "-Bluetooth Low Energy (BLE) is an emerging low-power wireless technology developed for short-range control and monitoring applications that is expected to be incorporated into billions of devices in the next few years. This paper describes the main features of BLE, explores its potential applications, and investigates the impact of various critical parameters on its performance. BLE represents a trade-off between energy consumption, latency, piconet size, and throughput that mainly depends on parameters such as connInterval and connSlaveLatency. According to theoretical results, the lifetime of a BLE device powered by a coin cell battery ranges between 2.0 days and 14.1 years. The number of simultaneous slaves per master ranges between 2 and 5,917. The minimum latency for a master to obtain a sensor reading is 676 μs, although simulation results show that, under high bit error rate, average latency increases by up to three orders of magnitude. 
The paper provides experimental results that complement the theoretical and simulation findings, and indicates implementation constraints that may reduce BLE performance.", "title": "" }, { "docid": "62e445cabbb5c79375f35d7b93f9a30d", "text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.", "title": "" }, { "docid": "8a0e4a8640afb055c03c7335a7f2fff6", "text": "The brain and spinal cord form the central nervous system. The brain is the part of the central nervous system that is housed in the cranium/skull. It consists of the brain stem, diencephalon, cerebellum, and cerebrum. At the foramen magnum, the highest cervical segment of the spinal cord is continuous with the lowest level of the medulla of the brain stem. The spinal nerves from the sacral, lumbar, thoracic, and cervical levels of the spinal cord form the lower part of the peripheral nervous system and record general sensations of pain, temperature touch, and pressure. The 12 cranial nerves attached to the brain form the upper part of the peripheral nervous system and record general sensations of pain, temperature touch, and pressure, but in addition we now find the presence of the special senses of smell, vision, hearing, balance, and taste. The blood supply to the brain originates from the first major arterial branches from the heart insuring that over 20% of the entire supply of oxygenated blood flows directly into the brain.", "title": "" }, { "docid": "56998c03c373dfae07460a7b731ef03e", "text": "52 This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/ by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis", "title": "" }, { "docid": "691f07bc4f1339d0915e98b76a6b6da1", "text": "Malicious Web pages that launch drive-by-download attacks on Web browsers have increasingly become a problem in recent years. High-interaction client honeypots are security devices that can detect these malicious Web pages on a network. 
However, high-interaction client honeypots are both resource-intensive and unable to handle the increasing array of vulnerable clients. This paper presents a novel classification method for detecting malicious Web pages that involves inspecting the underlying server relationships. Because of the unique structure of malicious front-end Web pages and centralized exploit servers, merely counting the number of domain name extensions and Domain Name System (DNS) servers used to resolve the host names of all Web servers involved in rendering a page is sufficient to determine whether a Web page is malicious or benign, independent of the vulnerable Web browser targeted by these pages. Combining high-interaction client honeypots and this new classification method into a hybrid system leads to performance improvements.", "title": "" }, { "docid": "e388d63d917358d6c3733c0b2e598511", "text": "This paper integrates theory, ethnography, and collaborative artwork to explore improvisational activity as both topic and tool of multidisciplinary HCI inquiry. Building on theories of improvisation drawn from art, music, HCI and social science, and two ethnographic studies based on interviews, participant observation and collaborative art practice, we seek to elucidate the improvisational nature of practice in both art and ordinary action, including human-computer interaction. We identify five key features of improvisational action -- reflexivity, transgression, tension, listening, and interdependence -- and show how these can deepen and extend both linear and open-ended methodologies in HCI and design. We conclude by highlighting collaborative engagement based on 'intermodulation' as a tool of multidisciplinary inquiry for HCI research and design.", "title": "" }, { "docid": "9223e4401954d39169eac1f0935c5bb3", "text": "Then, we can start by answering the questions: what exactly is chaos?, how is it used in cryptography?. First of all, let us say that there is not an universally mathematical accepted definition of the term “chaos”. In general sense, it refers to some dynamical phenomena considered to be complex (lack of time and spatial order) and unpredictable (erratic). Although it was precluded by Poincaré at the end of the XIX century (Poincaré, 1890), chaos theory begins to take form in the second half of the XX century (Lorenz, 1963; Mandelbrot, 1977) after observations of the evolution of different physical systems. These systems revealed that despite of the knowledge of their evolution rules and initial conditions, their future seemed to be arbitrary and unpredictable. That opened quite a revolution in modern physics, terminating with Laplace’s ideas of casual determinism (Laplace, 1825).", "title": "" }, { "docid": "0fd147227c10a243f4209ffc1295d279", "text": "Increases in server power dissipation time placed significant pressure on traditional data center thermal management systems. Traditional systems utilize computer room air conditioning (CRAC) units to pressurize a raised floor plenum with cool air that is passed to equipment racks via ventilation tiles distributed throughout the raised floor. Temperature is typically controlled at the hot air return of the CRAC units away from the equipment racks. Due primarily to a lack of distributed environmental sensing, these CRAC systems are often operated conservatively resulting in reduced computational density and added operational expense. 
This paper introduces a data center environmental control system that utilizes a distributed sensor network to manipulate conventional CRAC units within an air-cooled environment. The sensor network is attached to standard racks and provides a direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense. The combined controller and sensor network has been deployed in a production data center environment. Results from the algorithm will be presented that demonstrate the performance of the system and evaluate the energy savings compared with conventional data center environmental control architecture", "title": "" }, { "docid": "f249386bda71809bd506064e970955f5", "text": "In this paper, a capacitive voltage divider with a high division ratio of >1∶1000 based on a high voltage coaxial cable and discrete foil capacitors mounted around the cable is investigated. The divider is designed for high voltage pulses up to 200 kV and has a relatively simple and robust design. For the presented voltage divider a circuit model is presented, which is also validated by measurements using a impedance analyzer and by comparisons with HV standard voltage probes.", "title": "" }, { "docid": "72c917a9f42d04cae9e03a31e0728555", "text": "We extend Fano’s inequality, which controls the average probability of events in terms of the average of some f–divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary [0, 1]–valued random variables, possibly in continuously infinite number. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs for existing lower bounds, on Bayesian posterior concentration (minimax or distribution-dependent) rates and on the regret in non-stochastic sequential learning. MSC 2000 subject classifications. Primary-62B10; secondary-62F15, 68T05.", "title": "" }, { "docid": "17d7cc23f3c12d93717ab0027ae80258", "text": "Recently, deep neural networks have demonstrated excellent performances in recognizing the age and gender on human face images. However, these models were applied in a black-box manner with no information provided about which facial features are actually used for prediction and how these features depend on image preprocessing, model initialization and architecture choice. We present a study investigating these different effects. In detail, our work compares four popular neural network architectures, studies the effect of pretraining, evaluates the robustness of the considered alignment preprocessings via cross-method test set swapping and intuitively visualizes the model's prediction strategies in given preprocessing conditions using the recent Layer-wise Relevance Propagation (LRP) algorithm. Our evaluations on the challenging Adience benchmark show that suitable parameter initialization leads to a holistic perception of the input, compensating artefactual data representations. 
With a combination of simple preprocessing steps, we reach state of the art performance in gender recognition.", "title": "" }, { "docid": "54ea2e0435e1a6a3554d420dab3b2f54", "text": "A lack of information security awareness within some parts of society as well as some organisations continues to exist today. Whilst we have emerged from the threats of late 1990s of viruses such as Code Red and Melissa, through to the phishing emails of the mid 2000’s and the financial damage some such as the Nigerian scam caused, we continue to react poorly to new threats such as demanding money via SMS with a promise of death to those who won’t pay. So is this lack of awareness translating into problems within the workforce? There is often a lack of knowledge as to what is an appropriate level of awareness for information security controls across an organisation. This paper presents the development of a theoretical framework and model that combines aspects of information security best practice standards as presented in ISO/IEC 27002 with theories of Situation Awareness. The resultant model is an information security awareness capability model (ISACM). A preliminary survey is being used to develop the Awareness Importance element of the model and will leverage the opinions of information security professionals. A subsequent survey is also being developed to measure the Awareness Capability element of the model. This will present scenarios that test Level 1 situation awareness (perception), Level 2 situation awareness (comprehension) and finally Level 3 situation awareness (projection). Is it time for awareness of information security to now hit the mainstream of society, governments and organisations?", "title": "" }, { "docid": "581e704216bedb1564340fe3b9780d99", "text": "Wide variation in programmer performance has been frequently reported in the literature [1, 2, 3]. In the absence of other explanation, most managers have come to accept that the variation is due to individual characteristics. The presumption that there are order-of-magnitude differences in individual performance makes accurate cost projection seem nearly impossible.\nIn an extensive study, 166 programmers from 35 different organizations, participated in a one-day implementation benchmarking exercise. While there were wide variations across the sample, we found evidence that characteristics of the workplace and of the organization seemed to explain a significant part of the difference.", "title": "" }, { "docid": "fba48672e859a7606707406267dd0957", "text": "We suggest a spectral histogram, defined as the marginal distribution of filter responses, as a quantitative definition for a texton pattern. By matching spectral histograms, an arbitrary image can be transformed to an image with similar textons to the observed. We use the chi(2)-statistic to measure the difference between two spectral histograms, which leads to a texture discrimination model. The performance of the model well matches psychophysical results on a systematic set of texture discrimination data and it exhibits the nonlinearity and asymmetry phenomena in human texture discrimination. A quantitative comparison with the Malik-Perona model is given, and a number of issues regarding the model are discussed.", "title": "" }, { "docid": "5c18830610621c61ce910a97d5878e34", "text": "We report here on a quantitative technique called COBRA to determine DNA methylation levels at specific gene loci in small amounts of genomic DNA. 
Restriction enzyme digestion is used to reveal methylation-dependent sequence differences in PCR products of sodium bisulfite-treated DNA as described previously. We show that methylation levels in the original DNA sample are represented by the relative amounts of digested and undigested PCR product in a linearly quantitative fashion across a wide spectrum of DNA methylation levels. In addition, we show that this technique can be reliably applied to DNA obtained from microdissected paraffin-embedded tissue samples. COBRA thus combines the powerful features of ease of use, quantitative accuracy, and compatibility with paraffin sections.", "title": "" } ]
scidocsrr
0a98c6bfb77cb129736675d1fd61e749
Toward integrating feature selection algorithms for classification and clustering
[ { "docid": "6a3dc4c6bcf2a4133532c37dfa685f3b", "text": "Feature selection can be de ned as a problem of nding a minimum set of M relevant at tributes that describes the dataset as well as the original N attributes do where M N After examining the problems with both the exhaustive and the heuristic approach to fea ture selection this paper proposes a proba bilistic approach The theoretic analysis and the experimental study show that the pro posed approach is simple to implement and guaranteed to nd the optimal if resources permit It is also fast in obtaining results and e ective in selecting features that im prove the performance of a learning algo rithm An on site application involving huge datasets has been conducted independently It proves the e ectiveness and scalability of the proposed algorithm Discussed also are various aspects and applications of this fea ture selection algorithm", "title": "" } ]
[ { "docid": "2adf5e06cfc7e6d8cf580bdada485a23", "text": "This paper describes the comprehensive Terrorism Knowledge Base TM (TKB TM) which will ultimately contain all relevant knowledge about terrorist groups, their members, leaders, affiliations , etc., and full descriptions of specific terrorist events. Led by world-class experts in terrorism , knowledge enterers have, with simple tools, been building the TKB at the rate of up to 100 assertions per person-hour. The knowledge is stored in a manner suitable for computer understanding and reasoning. The TKB also utilizes its reasoning modules to integrate data and correlate observations, generate scenarios, answer questions and compose explanations.", "title": "" }, { "docid": "7ec33dfb4321acbada95b6a6ac38f1ea", "text": "A chatterbot or chatbot aims to make a conversation between both human and machine. The machine has been embedded knowledge to identify the sentences and making a decision itself as response to answer a question. The response principle is matching the input sentence from user. From input sentence, it will be scored to get the similarity of sentences, the higher score obtained the more similar of reference sentences. The sentence similarity calculation in this paper using bigram which divides input sentence as two letters of input sentence. The knowledge of chatbot are stored in the database. The chatbot consists of core and interface that is accessing that core in relational database management systems (RDBMS). The database has been employed as knowledge storage and interpreter has been employed as stored programs of function and procedure sets for pattern-matching requirement. The interface is standalone which has been built using programing language of Pascal and Java.", "title": "" }, { "docid": "ce0649675da17105e3142ad50835fac8", "text": "Multi-agent cooperation is an important feature of the natural world. Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome their differences and collaborate. Therefore, the emergence of cooperative behavior amongst self-interested individuals is an important question for the fields of multi-agent reinforcement learning (MARL) and evolutionary theory. Here, we study a particular class of multiagent problems called intertemporal social dilemmas (ISDs), where the conflict between the individual and the group is particularly sharp. By combining MARL with appropriately structured natural selection, we demonstrate that individual inductive biases for cooperation can be learned in a model-free way. To achieve this, we introduce an innovative modular architecture for deep reinforcement learning agents which supports multi-level selection. We present results in two challenging environments, and interpret these in the context of cultural and ecological evolution.", "title": "" }, { "docid": "8c867af4a6dd4125e90ba7642e9e7852", "text": "Parallel corpora are the necessary resources in many multilingual natural language processing applications, including machine translation and cross-lingual information retrieval. Manual preparation of a large scale parallel corpus is a very time consuming and costly procedure. In this paper, the work towards building a sentence-level aligned EnglishPersian corpus in a semi-automated manner is presented. The design of the corpus, collection, and alignment process of the sentences is described. 
Two statistical similarity measures were used to find the similarities of sentence pairs. To verify the alignment process automatically, Google Translator was used. The corpus is based on news resources available online and consists of about 30,000 formal sentence pairs.", "title": "" }, { "docid": "71e275e9bb796bda3279820bfdd1dafb", "text": "Alex M. Brooks Doctor of Philosophy The University of Sydney January 2007 Parametric POMDPs for Planning in Continuous State Spaces This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.", "title": "" }, { "docid": "84ddad0cac479f5b91d5f378e76854fa", "text": "Digital forensics is the science of identifying, extracting, analyzing and presenting the digital evidence that has been stored in the digital devices. 
Various digital tools and techniques are being used to achieve this. Our paper explains forensic analysis steps in the storage media, hidden data analysis in the file system, network forensic methods and cyber crime data mining. This paper proposes a new tool which is the combination of digital forensic investigation and crime data mining. The proposed system is designed for finding motive, pattern of cyber attacks and counts of attacks types happened during a period. Hence the proposed tool enables the system administrators to minimize the system vulnerability.", "title": "" }, { "docid": "a13a302e7e2fd5e09a054f1bf23f1702", "text": "A number of machine learning (ML) techniques have recently been proposed to solve color constancy problem in computer vision. Neural networks (NNs) and support vector regression (SVR) in particular, have been shown to outperform many traditional color constancy algorithms. However, neither neural networks nor SVR were compared to simpler regression tools in those studies. In this article, we present results obtained with a linear technique known as ridge regression (RR) and show that it performs better than NNs, SVR, and gray world (GW) algorithm on the same dataset. We also perform uncertainty analysis for NNs, SVR, and RR using bootstrapping and show that ridge regression and SVR are more consistent than neural networks. The shorter training time and single parameter optimization of the proposed approach provides a potential scope for real time video tracking application.", "title": "" }, { "docid": "b540fb20a265d315503543a5d752f486", "text": "Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as welldefined quantifiers of a deep network’s expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep convolutional arithmetic circuit in terms of a Tensor Network is made available. This description enables us to carry a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in the graph which characterizes it. Thus, we demonstrate a direct control over the inductive bias of the designed deep convolutional network via its channel numbers, which we show to be related to this min-cut in the underlying graph. This result is relevant to any practitioner designing a convolutional network for a specific task. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on more common convolutional networks which involve ReLU activations and max pooling. 
Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.", "title": "" }, { "docid": "4bb98ac4501d3c481aa760c61417730f", "text": "Among different recommendation techniques, collaborative filtering usually suffer from limited performance due to the sparsity of user-item interactions. To address the issues, auxiliary information is usually used to boost the performance. Due to the rapid collection of information on the web, the knowledge base provides heterogeneous information including both structured and unstructured data with different semantics, which can be consumed by various applications. In this paper, we investigate how to leverage the heterogeneous information in a knowledge base to improve the quality of recommender systems. First, by exploiting the knowledge base, we design three components to extract items' semantic representations from structural content, textual content and visual content, respectively. To be specific, we adopt a heterogeneous network embedding method, termed as TransR, to extract items' structural representations by considering the heterogeneity of both nodes and relationships. We apply stacked denoising auto-encoders and stacked convolutional auto-encoders, which are two types of deep learning based embedding techniques, to extract items' textual representations and visual representations, respectively. Finally, we propose our final integrated framework, which is termed as Collaborative Knowledge Base Embedding (CKE), to jointly learn the latent representations in collaborative filtering as well as items' semantic representations from the knowledge base. To evaluate the performance of each embedding component as well as the whole system, we conduct extensive experiments with two real-world datasets from different scenarios. The results reveal that our approaches outperform several widely adopted state-of-the-art recommendation methods.", "title": "" }, { "docid": "3a7657130cb165682cc2e688a7e7195b", "text": "The functional simulator Simics provides a co-simulation integration path with a SystemC simulation environment to create Virtual Platforms. With increasing complexity of the SystemC models, this platform suffers from performance degradation due to the single threaded nature of the integrated Virtual Platform. In this paper, we present a multi-threaded Simics SystemC platform solution that significantly improves performance over the existing single threaded solution. The two schedulers run independently, only communicating in a thread safe manner through a message interface. Simics based logging and checkpointing are preserved within SystemC and tied to the corresponding Simics' APIs for a seamless experience. The solution also scales to multiple SystemC models within the platform, each running its own thread with an instantiation of the SystemC kernel. A second multi-cell solution is proposed providing comparable performance with the multi-thread solution, but reducing the burden of integration on the SystemC model. Empirical data is presented showing performance gains over the legacy single threaded solution.", "title": "" }, { "docid": "2ebe6832af61085200d4aef27f2be3a5", "text": "This paper deals with the development and the parameter identification of an anaerobic digestion process model. 
A two-step (acidogenesis-methanization) mass-balance model has been considered. The model incorporates electrochemical equilibria in order to include the alkalinity, which has to play a central role in the related monitoring and control strategy of a treatment plant. The identification is based on a set of dynamical experiments designed to cover a wide spectrum of operating conditions that are likely to take place in the practical operation of the plant. A step by step identification procedure to estimate the model parameters is presented. The results of 70 days of experiments in a 1-m(3) fermenter are then used to validate the model.", "title": "" }, { "docid": "725022666e6f02ec791586b437fb4466", "text": "Planning under uncertainty in multiagent settings is highly intractable because of history and plan space complexities. Probabilistic graphical models exploit the structure of the problem domain to mitigate the computational burden. In this paper, we introduce the first parallelization of planning in multiagent settings on a CPU-GPU heterogeneous system. In particular, we focus on the algorithm for exactly solving interactive dynamic influence diagrams , which is a recognized graphical models for multiagent planning. Beyond parallelizing the standard Bayesian inference, the computation of decisions’ expected utilities are parallelized. The GPU-based approach provides significant speedup on two benchmark problems.", "title": "" }, { "docid": "10b71d66348ec6af4627839b76147e88", "text": "Identifying the relations that connect words is an important step towards understanding human languages and is useful for various NLP tasks such as knowledge base completion and analogical reasoning. Simple unsupervised operators such as vector offset between two-word embeddings have shown to recover some specific relationships between those words, if any. Despite this, how to accurately learn generic relation representations from word representations remains unclear. We model relation representation as a supervised learning problem and learn parametrised operators that map pre-trained word embeddings to relation representations. We propose a method for learning relation representations using a feed-forward neural network that performs relation prediction. Our evaluations on two benchmark datasets reveal that the penultimate layer of the trained neural network-based relational predictor acts as a good representation for the relations between words.", "title": "" }, { "docid": "c0b30475f78acefae1c15f9f5d6dc57b", "text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. 
When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.", "title": "" }, { "docid": "9a66f3a0c7c5e625e26909f04f43f5f4", "text": "The purpose of this study was to examine the relative impact of different types of leadership on students' academic and non-academic outcomes. The methodology consisted of analyzing the results of 27 published studies on the relationship between leadership and student outcomes. The first meta-analysis, which included 22 of the 27 studies, involved a comparison of the effects of transformational and instructional leadership on student outcomes. The second meta-analysis compared the effects of five inductively derived sets of leadership practices on student outcomes; twelve of the studies contributed to this second analysis. The first meta-analysis indicated that the average effect of instructional leadership on student outcomes was three to four times that of transformational leadership. Inspection of the survey items used to measure school leadership revealed five sets of leadership practices or dimensions: establishing goals and expectations; resourcing strategically; planning, coordinating and evaluating teaching and the curriculum; promoting and participating in teacher learning and development; and ensuring an orderly and supportive environment. The second meta-analysis revealed strong average effects for the leadership dimension that involves promoting and participating in teacher learning and development, and moderate effects for the dimensions related to goal setting and to planning, coordinating and evaluating teaching and the curriculum. The comparisons between transformational and instructional leadership, and among the five leadership dimensions, suggested that leaders who focus their relationships, their work and their learning on the core business of teaching and learning will have a greater influence on student outcomes. The article concludes with a discussion of the need for leadership research and practice to be more closely linked to the evidence on effective teaching and effective teacher learning; such alignment could further increase the impact of school leadership on student outcomes.", "title": "" }, { "docid": "578696bf921cc5d4e831786c67845346", "text": "Identifying and monitoring multiple disease biomarkers and other clinically important factors affecting the course of a disease, behavior or health status is of great clinical relevance. Yet conventional statistical practice generally falls far short of taking full advantage of the information available in multivariate longitudinal data for tracking the course of the outcome of interest.
We demonstrate a method called multi-trajectory modeling that is designed to overcome this limitation. The method is a generalization of group-based trajectory modeling. Group-based trajectory modeling is designed to identify clusters of individuals who are following similar trajectories of a single indicator of interest such as post-operative fever or body mass index. Multi-trajectory modeling identifies latent clusters of individuals following similar trajectories across multiple indicators of an outcome of interest (e.g., the health status of chronic kidney disease patients as measured by their eGFR, hemoglobin, blood CO2 levels). Multi-trajectory modeling is an application of finite mixture modeling. We lay out the underlying likelihood function of the multi-trajectory model and demonstrate its use with two examples.", "title": "" }, { "docid": "8a072bb125569fa1a52c1e86dacc0500", "text": "Accurate prediction of lake-level variations is important for planning, design, construction, and operation of lakeshore structures and also in the management of freshwater lakes for water supply purposes. In the present paper, three artificial intelligence approaches, namely artificial neural networks (ANNs), adaptive-neuro-fuzzy inference system (ANFIS), and gene expression programming (GEP), were applied to forecast daily lake-level variations up to 3-day ahead time intervals. The measurements at the Lake Iznik in Western Turkey, for the period of January 1961–December 1982, were used for training, testing, and validating the employed models. The results obtained by the GEP approach indicated that it performs better than ANFIS and ANNs in predicting lake-level variations. A comparison was also made between these artificial intelligence approaches and convenient autoregressive moving average (ARMA) models, which demonstrated the superiority of GEP, ANFIS, and ANN models over ARMA models. & 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "35e4a1519cbeaa46fe63f0f6aec8c28a", "text": "Decision trees and Random Forest are most popular methods of machine learning techniques. C4.5 which is an extension version of ID.3 algorithm and CART are one of these most commonly use algorithms to generate decision trees. Random Forest which constructs a lot of number of trees is one of another useful technique for solving both classification and regression problems. This study compares classification performances of different decision trees (C4.5, CART) and Random Forest which was generated using 50 trees. Data came from OECD countries health expenditures for the year 2011. AUC and ROC curve graph was used for performance comparison. Experimental results show that Random Forest outperformed in classification accuracy [AUC=0.98] in comparison with CART (0.95) and C4.5 (0.90) respectively. Future studies more focus on performance comparisons of different machine learning techniques using several datasets and different hyperparameter optimization techniques.", "title": "" }, { "docid": "1da9ea0ec4c33454ad9217bcf7118c1c", "text": "We use quantitative media (blogs, and news as a comparison) data generated by a large-scale natural language processing (NLP) text analysis system to perform a comprehensive and comparative study on how a company’s reported media frequency, sentiment polarity and subjectivity anticipates or reflects its stock trading volumes and financial returns. 
Our analysis provides concrete evidence that media data is highly informative, as previously suggested in the literature – but never studied on our scale of several large collections of blogs and news for over five years. Building on our findings, we give a sentiment-based market-neutral trading strategy which gives consistently favorable returns with low volatility over a five year period (2005-2009). Our results are significant in confirming the performance of general blog and news sentiment analysis methods over broad domains and sources. Moreover, several remarkable differences between news and blogs are also identified in this paper.", "title": "" }, { "docid": "9bc1d596de6471e23bd678febe7d962d", "text": "Identifying paraphrase in Malayalam language is difficult task because it is a highly agglutinative language and the linguistic structure in Malayalam language is complex compared to other languages. Here we use individual words synonyms to find the similarity between two sentences. In this paper, cosine similarity method is used to find the paraphrases in Malayalam language. In this paper we present the observations on sentence similarity between two Malayalam sentences using cosine similarity method, we used test data of 900 and 1400 sentence pairs of FIRE 2016 Malayalam corpus that used in two iterations to present and obtained an accuracy of 0.8 and 0.59.", "title": "" } ]
scidocsrr
d997d64efff658f4a7c5c1270af694c6
Plasma: Scalable Autonomous Smart Contracts
[ { "docid": "d1872279a26fefe34d7a5bc0582c134f", "text": "Bitcoin and Ethereum, whose miners arguably collectively comprise the most powerful computational resource in the history of mankind, offer no more power for processing and verifying transactions than a typical smart phone. The system described herein bypasses this bottleneck and brings scalable computation to Ethereum. Our new system consists of a financial incentive layer atop a dispute resolution layer where the latter takes form of a versatile “verification game.” In addition to secure outsourced computation, immediate applications include decentralized mining pools whose operator is an Ethereum smart contract, a cryptocurrency with scalable transaction throughput, and a trustless means for transferring currency between disjoint cryptocurrency systems.", "title": "" } ]
[ { "docid": "85e227c86077c728ae7bdbd78f781186", "text": "What is the state of the neuroscience of language – and cognitive neuroscience more broadly – in light of the linguistic research, the arguments, and the theories advanced in the context of the program developed over the past 60 years by Noam Chomsky? There are, presumably, three possible outcomes: neuroscience of language is better off, worse off, or untouched by this intellectual tradition. In some sense, all three outcomes are true. The field has made remarkable progress, in no small part because the questions were so carefully and provocatively defined by the generative research program. But insights into neuroscience and language have also been stymied because of many parochial battles that have led to little light beyond rhetorical fireworks. Finally, a disturbing amount of neuroscience research has progressed as if the significant advances beginning in the 1950s and 1960s had not been made. This work remains puzzling because it builds on ideas known to be dodgy or outright false. In sum, when it comes to the neurobiology of language, the past sixty years have been fabulous, terrible, and puzzling. Chomsky has not helped matters by being so relentlessly undidactic in his exposition of ideas germane to the neurobiological enterprise. The present moment is a good one to assess the current state, because there are energetic thrusts of research that pursue an overtly anti-Chomskyan stance. I have in mind here current research that focuses on big (brain) data, relying on no more than the principle of association, often with implicit anti-mentalist sentiments, typically skeptical of the tenets of the computational theory of mind, associated with relentless enthusiasm for embodied cognition, the ubiquitous role of context, and so on. A large proportion of current research on the neuroscience of language has embraced these ideas, and it is fair to ask why – and whether – this approach is more likely to yield substantive progress. It is also fair to say that the traditional four (and now five) leading questions that have always formed the basis for the generative research program as", "title": "" }, { "docid": "5adb5e056b099c5ec2f8e91006d96615", "text": "BACKGROUND\nEmbodied conversational agents (ECAs) are computer-generated characters that simulate key properties of human face-to-face conversation, such as verbal and nonverbal behavior. In Internet-based eHealth interventions, ECAs may be used for the delivery of automated human support factors.\n\n\nOBJECTIVE\nWe aim to provide an overview of the technological and clinical possibilities, as well as the evidence base for ECA applications in clinical psychology, to inform health professionals about the activity in this field of research.\n\n\nMETHODS\nGiven the large variety of applied methodologies, types of applications, and scientific disciplines involved in ECA research, we conducted a systematic scoping review. Scoping reviews aim to map key concepts and types of evidence underlying an area of research, and answer less-specific questions than traditional systematic reviews. Systematic searches for ECA applications in the treatment of mood, anxiety, psychotic, autism spectrum, and substance use disorders were conducted in databases in the fields of psychology and computer science, as well as in interdisciplinary databases. Studies were included if they conveyed primary research findings on an ECA application that targeted one of the disorders. 
We mapped each study's background information, how the different disorders were addressed, how ECAs and users could interact with one another, methodological aspects, and the study's aims and outcomes.\n\n\nRESULTS\nThis study included N=54 publications (N=49 studies). More than half of the studies (n=26) focused on autism treatment, and ECAs were used most often for social skills training (n=23). Applications ranged from simple reinforcement of social behaviors through emotional expressions to sophisticated multimodal conversational systems. Most applications (n=43) were still in the development and piloting phase, that is, not yet ready for routine practice evaluation or application. Few studies conducted controlled research into clinical effects of ECAs, such as a reduction in symptom severity.\n\n\nCONCLUSIONS\nECAs for mental disorders are emerging. State-of-the-art techniques, involving, for example, communication through natural language or nonverbal behavior, are increasingly being considered and adopted for psychotherapeutic interventions in ECA research with promising results. However, evidence on their clinical application remains scarce. At present, their value to clinical practice lies mostly in the experimental determination of critical human support factors. In the context of using ECAs as an adjunct to existing interventions with the aim of supporting users, important questions remain with regard to the personalization of ECAs' interaction with users, and the optimal timing and manner of providing support. To increase the evidence base with regard to Internet interventions, we propose an additional focus on low-tech ECA solutions that can be rapidly developed, tested, and applied in routine practice.", "title": "" }, { "docid": "777d4e55f3f0bbb0544130931006b237", "text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.", "title": "" }, { "docid": "4913c98cefb759e79106031315b414ad", "text": "BACKGROUND\nTranscranial direct current stimulation (tDCS) induces long-lasting NMDA receptor-dependent cortical plasticity via persistent subthreshold polarization of neuronal membranes. 
Conventional bipolar tDCS is applied with two large (35 cm(2)) rectangular electrodes, resulting in directional modulation of neuronal excitability. Recently a newly designed 4 × 1 high-definition (HD) tDCS protocol was proposed for more focal stimulation according to the results of computational modeling. HD tDCS utilizes small disc electrodes deployed in 4 × 1 ring configuration whereby the physiological effects of the induced electric field are thought to be grossly constrained to the cortical area circumscribed by the ring.\n\n\nOBJECTIVE\nWe aim to compare the physiological effects of both tDCS electrode arrangements on motor cortex excitability.\n\n\nMETHODS\ntDCS was applied with 2 mA for 10 min. Fourteen healthy subjects participated, and motor cortex excitability was monitored by transcranial magnetic stimulation (TMS) before and after tDCS.\n\n\nRESULTS\nExcitability enhancement following anodal and a respective reduction after cathodal stimulation occurred in both, conventional and HD tDCS. However, the plastic changes showed a more delayed peak at 30 min and longer lasting after-effects for more than 2 h after HD tDCS for both polarities, as compared to conventional tDCS.\n\n\nCONCLUSION\nThe results show that this new electrode arrangement is efficient for the induction of neuroplasticity in the primary motor cortex. The pattern of aftereffects might be compatible with the concept of GABA-mediated surround inhibition, which should be explored in future studies directly.", "title": "" }, { "docid": "960f5bd8b673236d3b44a77e876e10c4", "text": "This paper describes an approach to harvesting electrical energy from a mechanically excited piezoelectric element. A vibrating piezoelectric device differs from a typical electrical power source in that it has a capacitive rather than inductive source impedance, and may be driven by mechanical vibrations of varying amplitude. An analytical expression for the optimal power flow from a rectified piezoelectric device is derived, and an “energy harvesting” circuit is proposed which can achieve this optimal power flow. The harvesting circuit consists of an ac–dc rectifier with an output capacitor, an electrochemical battery, and a switch-mode dc–dc converter that controls the energy flow into the battery. An adaptive control technique for the dc–dc converter is used to continuously implement the optimal power transfer theory and maximize the power stored by the battery. Experimental results reveal that use of the adaptive dc–dc converter increases power transfer by over 400% as compared to when the dc–dc converter is not used.", "title": "" }, { "docid": "771339711243897c18d565769e758a74", "text": "This paper presents Memory Augmented Policy Optimization (MAPO): a novel policy optimization formulation that incorporates a memory buffer of promising trajectories to reduce the variance of policy gradient estimates for deterministic environments with discrete actions. The formulation expresses the expected return objective as a weighted sum of two terms: an expectation over a memory of trajectories with high rewards, and a separate expectation over the trajectories outside the memory. We propose 3 techniques to make an efficient training algorithm for MAPO: (1) distributed sampling from inside and outside memory with an actor-learner architecture; (2) a marginal likelihood constraint over the memory to accelerate training; (3) systematic exploration to discover high reward trajectories. 
MAPO improves the sample efficiency and robustness of policy gradient, especially on tasks with a sparse reward. We evaluate MAPO on weakly supervised program synthesis from natural language with an emphasis on generalization. On the WIKITABLEQUESTIONS benchmark we improve the state-of-the-art by 2.5%, achieving an accuracy of 46.2%, and on the WIKISQL benchmark, MAPO achieves an accuracy of 74.9% with only weak supervision, outperforming several strong baselines with full supervision. Our code is open sourced at https://github.com/crazydonkey200/neural-symbolic-machines.", "title": "" }, { "docid": "f9ca69c3a63403ff7a9e676847868dcd", "text": "BACKGROUND\nVegetarian nutrition is gaining increasing public attention worldwide. While some studies have examined differences in motivations and personality traits between vegetarians and omnivores, only few studies have considered differences in motivations and personality traits between the 2 largest vegetarian subgroups: lacto-ovo-vegetarians and vegans.\n\n\nOBJECTIVES\nTo examine differences between lacto-ovo-vegetarians and vegans in the distribution patterns of motives, values, empathy, and personality profiles.\n\n\nMETHODS\nAn anonymous online survey was performed in January 2014. Group differences between vegetarians and vegans in their initial motives for the choice of nutritional approaches, health-related quality of life (World Health Organization Quality of Life-BREF (WHOQOL-BREF)), personality traits (Big Five Inventory-SOEP (BFI-S)), values (Portraits Value Questionnaire (PVQ)), and empathy (Empathizing Scale) were analyzed by univariate analyses of covariance; P values were adjusted for multiple testing.\n\n\nRESULTS\n10,184 individuals completed the survey; 4,427 (43.5%) were vegetarians and 4,822 (47.3%) were vegans. Regarding the initial motives for the choice of nutritional approaches, vegans rated food taste, love of animals, and global/humanitarian reasons as more important, and the influence of their social environment as less important than did vegetarians. Compared to vegetarians, vegans had higher values on physical, psychological, and social quality of life on the WHOQOL-BREF, and scored lower on neuroticism and higher on openness on the BFI-S. In the PVQ, vegans scored lower than vegetarians on power/might, achievement, safety, conformity, and tradition and higher on self-determination and universalism. Vegans had higher empathy than vegetarians (all p < 0.001).\n\n\nDISCUSSION\nThis survey suggests that vegans have more open and compatible personality traits, are more universalistic, empathic, and ethically oriented, and have a slightly higher quality of life when compared to vegetarians. Given the small absolute size of these differences, further research is needed to evaluate whether these group differences are relevant in everyday life and can be confirmed in other populations.", "title": "" }, { "docid": "eff45b92173acbc2f6462c3802d19c39", "text": "There are shortcomings in traditional theorizing about effective ways of coping with bereavement, most notably, with respect to the so-called \"grief work hypothesis.\" Criticisms include imprecise definition, failure to represent dynamic processing that is characteristic of grieving, lack of empirical evidence and validation across cultures and historical periods, and a limited focus on intrapersonal processes and on health outcomes. Therefore, a revised model of coping with bereavement, the dual process model, is proposed. 
This model identifies two types of stressors, loss- and restoration-oriented, and a dynamic, regulatory coping process of oscillation, whereby the grieving individual at times confronts, at other times avoids, the different tasks of grieving. This model proposes that adaptive coping is composed of confrontation--avoidance of loss and restoration stressors. It also argues the need for dosage of grieving, that is, the need to take respite from dealing with either of these stressors, as an integral part of adaptive coping. Empirical research to support this conceptualization is discussed, and the model's relevance to the examination of complicated grief, analysis of subgroup phenomena, as well as interpersonal coping processes, is described.", "title": "" }, { "docid": "1ff8d3270f4884ca9a9c3d875bdf1227", "text": "This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posterior (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded/ invisible surfaces.", "title": "" }, { "docid": "eaead3c8ac22ff5088222bb723d8b758", "text": "Discrete-Time Markov Chains (DTMCs) are a widely-used formalism to model probabilistic systems. On the one hand, available tools like PRISM or MRMC offer efficient model checking algorithms and thus support the verification of DTMCs. However, these algorithms do not provide any diagnostic information in the form of counterexamples, which are highly important for the correction of erroneous systems. On the other hand, there exist several approaches to generate counterexamples for DTMCs, but all these approaches require the model checking result for completeness. In this paper we introduce a model checking algorithm for DTMCs that also supports the generation of counterexamples. Our algorithm, based on the detection and abstraction of strongly connected components, offers abstract counterexamples, which can be interactively refined by the user.", "title": "" }, { "docid": "c6725a67f1fa2b091e0bbf980e6260be", "text": "This paper examines job satisfaction and employees’ turnover intentions in Total Nigeria PLC in Lagos State. The paper highlights and defines basic concepts of job satisfaction and employees’ turnover intention. It specifically considered satisfaction with pay, nature of work and supervision as the three facets of job satisfaction that affect employee turnover intention. To achieve this objective, authors adopted a survey method by administration of questionnaires, conducting interview and by reviewing archival documents as well as review of relevant journals and textbooks in this field of learning as means of data collection. 
Four (4) major hypotheses were derived from literature and respective null hypotheses tested at .05 level of significance It was found that specifically job satisfaction reduces employees’ turnover intention and that Total Nigeria PLC adopts standard pay structure, conducive nature of work and efficient supervision not only as strategies to reduce employees’ turnover but also as the company retention strategy.", "title": "" }, { "docid": "366f31829bb1ac55d195acef880c488e", "text": "Intense competition among a vast number of group-buying websites leads to higher product homogeneity, which allows customers to switch to alternative websites easily and reduce their website stickiness and loyalty. This study explores the antecedents of user stickiness and loyalty and their effects on consumers’ group-buying repurchase intention. Results indicate that systems quality, information quality, service quality, and alternative system quality each has a positive relationship with user loyalty through user stickiness. Meanwhile, information quality directly impacts user loyalty. Thereafter, user stickiness and loyalty each has a positive relationship with consumers’ repurchase intention. Theoretical and managerial implications are also discussed.", "title": "" }, { "docid": "458e9d3c8dc84c8726fa15559413c81a", "text": "DNA microarrays can be used to identify gene expression changes characteristic of human disease. This is challenging, however, when relevant differences are subtle at the level of individual genes. We introduce an analytical strategy, Gene Set Enrichment Analysis, designed to detect modest but coordinate changes in the expression of groups of functionally related genes. Using this approach, we identify a set of genes involved in oxidative phosphorylation whose expression is coordinately decreased in human diabetic muscle. Expression of these genes is high at sites of insulin-mediated glucose disposal, activated by PGC-1α and correlated with total-body aerobic capacity. Our results associate this gene set with clinically important variation in human metabolism and illustrate the value of pathway relationships in the analysis of genomic profiling experiments.", "title": "" }, { "docid": "2848635e59cf2a41871d79748822c176", "text": "The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito–temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito–temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito–temporal cortex may constitute a multimodal object-related network.", "title": "" }, { "docid": "20fd36e287a631c82aa8527e6a36931f", "text": "Creating a mesh is the first step in a wide range of applications, including scientific computing and computer graphics. An unstructured simplex mesh requires a choice of meshpoints (vertex nodes) and a triangulation. 
We want to offer a short and simple MATLAB code, described in more detail than usual, so the reader can experiment (and add to the code) knowing the underlying principles. We find the node locations by solving for equilibrium in a truss structure (using piecewise linear force-displacement relations) and we reset the topology by the Delaunay algorithm. The geometry is described implicitly by its distance function. In addition to being much shorter and simpler than other meshing techniques, our algorithm typically produces meshes of very high quality. We discuss ways to improve the robustness and the performance, but our aim here is simplicity. Readers can download (and edit) the codes from http://math.mit.edu/~persson/mesh.", "title": "" }, { "docid": "b151d236ce17b4d03b384a29dbb91330", "text": "To investigate the blood supply to the nipple areola complex (NAC) on thoracic CT angiograms (CTA) to improve breast pedicle design in reduction mammoplasty. In a single centre, CT scans of the thorax were retrospectively reviewed for suitability by a cardiothoracic radiologist. Suitable scans had one or both breasts visible in extended fields, with contrast enhancement of breast vasculature in a female patient. The arterial sources, intercostal space perforated, glandular/subcutaneous course, vessel entry point, and the presence of periareolar anastomoses were recorded for the NAC of each breast. From 69 patients, 132 breasts were suitable for inclusion. The most reproducible arterial contribution to the NAC was perforating branches arising from the internal thoracic artery (ITA) (n = 108, 81.8%), followed by the long thoracic artery (LTA) (n = 31, 23.5%) and anterior intercostal arteries (AI) (n = 21, 15.9%). Blood supply was superficial versus deep in (n = 86, 79.6%) of ITA sources, (n = 28, 90.3%) of LTA sources, and 10 (47.6%) of AI sources. The most vascularly reliable breast pedicle would be asymmetrical in 7.9% as a conservative estimate. We suggest that breast CT angiography can provide valuable information about NAC blood supply to aid customised pedicle design, especially in high-risk, large-volume breast reductions where the risk of vascular-dependent complications is the greatest and asymmetrical dominant vasculature may be present. Superficial ITA perforator supplies are predominant in a majority of women, followed by LTA- and AIA-based sources, respectively.", "title": "" }, { "docid": "daecaa40531dad2622d83aca90ff7185", "text": "Advances in tourism economics have enabled us to collect massive amounts of travel tour data. If properly analyzed, this data could be a source of rich intelligence for providing real-time decision making and for the provision of travel tour recommendations. However, tour recommendation is quite different from traditional recommendations, because the tourist’s choice is affected directly by the travel costs, which includes both financial and time costs. To that end, in this article, we provide a focused study of cost-aware tour recommendation. Along this line, we first propose two ways to represent user cost preference. One way is to represent user cost preference by a two-dimensional vector. Another way is to consider the uncertainty about the cost that a user can afford and introduce a Gaussian prior to model user cost preference. 
With these two ways of representing user cost preference, we develop different cost-aware latent factor models by incorporating the cost information into the probabilistic matrix factorization (PMF) model, the logistic probabilistic matrix factorization (LPMF) model, and the maximum margin matrix factorization (MMMF) model, respectively. When applied to real-world travel tour data, all the cost-aware recommendation models consistently outperform existing latent factor models with a significant margin.", "title": "" }, { "docid": "5e2088d00fe28159d81eee0ecefdbb12", "text": "We present a new approach to harvesting a large-scale, high quality image-caption corpus that makes a better use of already existing web data with no additional human efforts. The key idea is to focus on Déjà Image-Captions: naturally existing image descriptions that are repeated almost verbatim – by more than one individual for different images. The resulting corpus provides association structure between 4 million images with 180K unique captions, capturing a rich spectrum of everyday narratives including figurative and pragmatic language. Exploring the use of the new corpus, we also present new conceptual tasks of visually situated paraphrasing, creative image captioning, and creative visual paraphrasing.", "title": "" }, { "docid": "f74a0c176352b8378d9f27fdf93763c9", "text": "The future of user interfaces will be dominated by hand gestures. In this paper, we explore an intuitive hand gesture based interaction for smartphones having a limited computational capability. To this end, we present an efficient algorithm for gesture recognition with First Person View (FPV), which focuses on recognizing a four swipe model (Left, Right, Up and Down) for smartphones through single monocular camera vision. This can be used with frugal AR/VR devices such as Google Cardboard1 andWearality2 in building AR/VR based automation systems for large scale deployments, by providing a touch-less interface and real-time performance. We take into account multiple cues including palm color, hand contour segmentation, and motion tracking, which effectively deals with FPV constraints put forward by a wearable. We also provide comparisons of swipe detection with the existing methods under the same limitations. We demonstrate that our method outperforms both in terms of gesture recognition accuracy and computational time.", "title": "" }, { "docid": "68cff1020543f97e5e8c2710bc85c823", "text": "This paper describes modelling and testing of a digital distance relay for transmission line protection using MATLAB/SIMULINK. SIMULINK’s Power System Blockset (PSB) is used for detailed modelling of a power system network and fault simulation. MATLAB is used to implement programs of digital distance relaying algorithms and to serve as main software environment. The technique is an interactive simulation environment for relaying algorithm design and evaluation. The basic principles of a digital distance relay and some related filtering techniques are also described in this paper. A 345 kV, 100 km transmission line and a MHO type distance relay are selected as examples for fault simulation and relay testing. Some simulation results are given.", "title": "" } ]
scidocsrr
0159b2b096dc6b46d0342ac8bf4c7715
Generating the Future with Adversarial Transformers
[ { "docid": "ee46ee9e45a87c111eb14397c99cd653", "text": "This is a review of unsupervised learning applied to videos with the aim of learning visual representations. We look at different realizations of the notion of temporal coherence across various models. We try to understand the challenges being faced, the strengths and weaknesses of different approaches and identify directions for future work.", "title": "" } ]
[ { "docid": "3c79c23036ed7c9a5542670264310141", "text": "This paper investigates possible improvements in grid voltage stability and transient stability with wind energy converter units using modified P/Q control. The voltage source converter (VSC) in modern variable speed wind turbines is utilized to achieve this enhancement. The findings show that using only available hardware for variable-speed turbines improvements could be obtained in all cases. Moreover, it was found that power system stability improvement is often larger when the control is modified for a given variable speed wind turbine rather than when standard variable speed turbines are used instead of fixed speed turbines. To demonstrate that the suggested modifications can be incorporated in real installations, a real situation is presented where short-term voltage stability is improved as an additional feature of an existing VSC high voltage direct current (HVDC) installation", "title": "" }, { "docid": "761be34401cc6ef1d8eea56465effca9", "text": "Résumé: Dans cet article, nous proposons une nouvelle approche pour le résumé automatique de textes utilisant un algorithme d'apprentissage numérique spécifique à la tâche d'ordonnancement. L'objectif est d'extraire les phrases d'un document qui sont les plus représentatives de son contenu. Pour se faire, chaque phrase d'un document est représentée par un vecteur de scores de pertinence, où chaque score est un score de similarité entre une requête particulière et la phrase considérée. L'algorithme d'ordonnancement effectue alors une combinaison linéaire de ces scores, avec pour but d'affecter aux phrases pertinentes d'un document des scores supérieurs à ceux des phrases non pertinentes du même document. Les algorithmes d'ordonnancement ont montré leur efficacité en particulier dans le domaine de la méta-recherche, et leur utilisation pour le résumé est motivée par une analogie peut être faite entre la méta-recherche et le résumé automatique qui consiste, dans notre cas, à considérer les similarités des phrases avec les différentes requêtes comme étant des sorties de différents moteurs de recherche. Nous montrons empiriquement que l'algorithme d'ordonnancement a de meilleures performances qu'une approche utilisant un algorithme de classification sur deux corpus distincts.", "title": "" }, { "docid": "2b985f234933a34b150ef3819305b282", "text": "The constraint of difference is known to the constraint programming community since Lauriere introduced Alice in 1978. Since then, several strategies have been designed to solve the alldifferent constraint. This paper surveys the most important developments over the years regarding the alldifferent constraint. First we summarize the underlying concepts and results from graph theory and integer programming. Then we give an overview and an abstract comparison of different solution strategies. In addition, the symmetric alldifferent constraint is treated. Finally, we show how to apply cost-based filtering to the alldifferent constraint. A preliminary version of this paper appeared as [14].", "title": "" }, { "docid": "b64b2a82cec34a76a84b96c42a09fa0f", "text": "Control of compliant mechanical systems is increasingly being researched for several applications including flexible link robots and ultra-precision positioning systems. 
The control problem in these systems is challenging, especially with gravity coupling and large deformations, because of inherent underactuation and the combination of lumped and distributed parameters of a nonlinear system. In this paper we consider an ultra-flexible inverted pendulum on a cart and propose a new nonlinear energy shaping controller to keep the pendulum at the upward position with the cart stopped at a desired location. The design is based on a model, obtained via the constrained Lagrange formulation, which previously has been validated experimentally. The controller design consists of a partial feedback linearization step followed by a standard PID controller acting on two passive outputs. Boundedness of all signals and (local) asymptotic stability of the desired equilibrium is theoretically established. Simulations and experimental evidence assess the performance of the proposed controller.", "title": "" }, { "docid": "9b130e155ca93228ed176e5d405fd50a", "text": "For years educators have attempted to identify the effective predictors of scholastic achievement and several personality variables were described as significantly correlated with grade performance. Since one of the crucial practical implications of identifying the factors involved in academic achievement is to facilitate the teaching-learning process, the main variables that have been associated with achievement should be investigated simultaneously in order to provide information as to their relative merit in the population examined. In contrast with this premise, limited research has been conducted on the importance of personality traits and self-esteem on scholastic achievement. To this aim in a sample of 439 subjects (225 males) with an average age of 12.36 years (SD= .99) from three first level secondary school classes of Southern Italy, personality traits, as defined by the Five Factor Model, self-esteem and socioeconomic status were evaluated. The academic results correlated significantly both with personality traits and with some dimensions of self-esteem. Moreover, hierarchical regression analyses brought to light, in particular, the predictive value of openness to experience on academic marks. The results, stressing the multidimensional nature of academic performance, indicate a need to adopt complex approaches for undertaking action addressing students’ difficulties in attaining good academic achievement.", "title": "" }, { "docid": "96b859678e19c177ce6a7ef8baa51f97", "text": "A definition of a typed language is said to be “intrinsic” if it assigns meanings to typings rather than arbitrary phrases, so that ill-typed phrases are meaningless. In contrast, a definition is said to be “extrinsic” if all phrases have meanings that are independent of their typings, while typings represent properties of these meanings. For a simply typed lambda calculus, extended with recursion, subtypes, and named products, we give an intrinsic denotational semantics and a denotational semantics of the underlying untyped language. We then establish a logical relations theorem between these two semantics, and show that the logical relations can be “bracketed” by retractions between the domains of the two semantics. From these results, we derive an extrinsic semantics that uses partial equivalence relations. There are two very different ways of giving denotational semantics to a programming language (or other formal language) with a nontrivial type system. 
In an intrinsic semantics, only phrases that satisfy typing judgements have meanings. Indeed, meanings are assigned to the typing judgements, rather than to the phrases themselves, so that a phrase that satisfies several judgements will have several meanings. For example, consider λx. x (in a simply typed functional language). Corresponding to the typing judgement ` λx. x : int → int, its intrinsic meaning is the identity function on the integers, while corresponding to ∗This research was supported in part by National Science Foundation Grant CCR9804014. Much of the research was carried out during two delightful and productive visits to BRICS (Basic Research in Computer Science, http://www.brics.dk/, Centre of the Danish National Research Foundation) in Aarhus, Denmark, September to November 1999 and May to June 2000. †A shorter and simpler version of this report, in which products and subtyping are omitted and there is only a single primitive type, will appear in “Essays on Programming Methodology”, edited by Annabelle McIver and Carroll Morgan (copyright 2001 SpringerVerlag, all rights reserved).", "title": "" }, { "docid": "22fbc0863111520ff7df54733e4d9ec7", "text": "The brain is capable of massively parallel information processing while consuming only ∼1-100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 103 μm2 devices), displays >500 distinct, non-volatile conductance states within a ∼1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.", "title": "" }, { "docid": "773d02f9ba577948cde5bb837e4cffe6", "text": "A ring oscillator physical unclonable function (RO PUF) is an application-constrained hardware security primitive that can be used for authentication and key generation. PUFs depend on variability during the fabrication process to produce random outputs that are nevertheless stable across multiple measurements. Unfortunately, RO PUFs are known to be unstable especially when implemented on an Field Programmable Gate Array (FPGA). In this work, we comprehensively evaluate the RO PUF's stability on FPGAs, and we propose a phase calibration process to improve the stability of RO PUFs. The results show that the bit errors in our PUFs are reduced to less than 1%.", "title": "" }, { "docid": "a5c1f075b42c20f3743c3ac8b72169f0", "text": "Infections induce pathogen-specific T cell differentiation into diverse effectors (Teff) that give rise to memory (Tmem) subsets. The cell-fate decisions and lineage relationships that underlie these transitions are poorly understood. 
Here, we found that the chemokine receptor CX3CR1 identifies three distinct CD8+ Teff and Tmem subsets. Classical central (Tcm) and effector memory (Tem) cells and their corresponding Teff precursors were CX3CR1- and CX3CR1high, respectively. Viral infection also induced a numerically stable CX3CR1int subset that represented ∼15% of blood-borne Tmem cells. CX3CR1int Tmem cells underwent more frequent homeostatic divisions than other Tmem subsets and not only self-renewed, but also contributed to the expanding CX3CR1- Tcm pool. Both Tcm and CX3CR1int cells homed to lymph nodes, but CX3CR1int cells, and not Tem cells, predominantly surveyed peripheral tissues. As CX3CR1int Tmem cells present unique phenotypic, homeostatic, and migratory properties, we designate this subset peripheral memory (tpm) cells and propose that tpm cells are chiefly responsible for the global surveillance of non-lymphoid tissues.", "title": "" }, { "docid": "a3f6781adeca64763156ac41dff32c82", "text": "A multilayer bandpass filter (BPF) with harmonic suppression using meander line inductor and interdigital capacitor (MLI-IDC) resonant structure is presented in this letter. The BPF is fabricated with three unit cells and its measured passband center frequency is 2.56 GHz with a bandwidth of 0.38 GHz and an insertion loss of 1.5 dB. The harmonics are suppressed up to 11 GHz. A diplexer using the proposed BPF is also presented. The proposed diplexer consists of 4.32 mm sized unit cells to couple 2.5 GHz signal into port 2, and 3.65 mm sized unit cells to couple 3.7 GHz signal into port 3. The notch circuit is placed on the output lines of the diplexer to improve isolation. The proposed diplexer has demonstrated insertion loss of 1.35 dB with 0.45 GHz bandwidth in port 2 and 1.73 dB insertion loss with 0.44 GHz bandwidth in port 3. The isolation is better than 18 dB in the first passband with 38 dB maximum isolation at 2.5 GHz. The isolation in the second passband is better than 26 dB with 45 dB maximum isolation at 3.7 GHz.", "title": "" }, { "docid": "daf311fdbd3d19e09c9eca3ec04702b6", "text": "1 Since the 1970s, investigative profilers at the FBI's Behavioral Science Unit (now part of the National Center for the Analysis of Violent Crime) have been assisting local, state, and federal agencies in narrowing investigations by providing criminal personality profiles. An attempt is now being made to describe this criminal-profile-generating process. A series of five overlapping stages lead to the sixth stage, or the goal of apprehension of the offender: (1) profiling inputs, (2) decision-process models, (3) crime assessment, (4) the criminal profile, (5) investigation, and (6) apprehension. Two key feedback filters in the process are: (a) achieving congruence with the evidence, with decision models, and with investigation recommendations, and (6) the addition of new evidence. \"You wanted to mock yourself at me!. .. You did not know your Hercule Poirot.\" He thrust out his chest and twirled his moustache. I looked at him and grinned. .. \"All right then,\" I said. \"Give us the answer to the problems-if you know it.\" \"But of course I know it.\" Hardcastle stared at him incredulously…\"Excuse me. Monsieur Poirot, you claim that you know who killed three people. And why?...All you mean is that you have a hunch\" I will not quarrel with you over a word…Come now. Inspector. I know – really know…I perceive you are still sceptic. But first let me say this. 
To be sure means that when the right solution is reached, everything falls into place. You perceive that in no other way could things have happened. \" The ability of Hercule Poirot to solve a crime by describing the perpetrator is a skill shared by the expert investigative profiler. Evidence speaks its own language of patterns and sequences that can reveal the offender's behavioral characteristics. Like Poirot, the profiler can say. \"I know who he must be.\" This article focuses on the developing technique of criminal profiling. Special agents at the FBI Academy have demonstrated expertise in crime scene analysis of various violent crimes, particularly those involving sexual homicide. This article discusses the history of profiling and the criminal-profile-generating process and provides a case example to illustrate the technique. Criminal profiling has been used successfully by law enforcement in several areas and is a valued means by which to narrow the field of investigation. Profiling does not provide the specific identity of the offender. Rather, it indicates the kind of person most likely to have committed a crime …", "title": "" }, { "docid": "b990e62cb73c0f6c9dd9d945f72bb047", "text": "Admissible heuristics are an important class of heuristics worth discovering: they guarantee shortest path solutions in search algorithms such asA* and they guarantee less expensively produced, but boundedly longer solutions in search algorithms such as dynamic weighting. Unfortunately, effective (accurate and cheap to compute) admissible heuristics can take years for people to discover. Several researchers have suggested that certain transformations of a problem can be used to generate admissible heuristics. This article defines a more general class of transformations, calledabstractions, that are guaranteed to generate only admissible heuristics. It also describes and evaluates an implemented program (Absolver II) that uses a means-ends analysis search control strategy to discover abstracted problems that result in effective admissible heuristics. Absolver II discovered several well-known and a few novel admissible heuristics, including the first known effective one for Rubik's Cube, thus concretely demonstrating that effective admissible heuristics can be tractably discovered by a machine.", "title": "" }, { "docid": "4621856b479672433f9f9dff86d4f4da", "text": "Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. 
Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.", "title": "" }, { "docid": "07425e53be0f6314d52e3b4de4d1b601", "text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore opioid-dependent participants discounted delayed heroin significantly more than delayed money.", "title": "" }, { "docid": "1de10e40580ba019045baaa485f8e729", "text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. 
Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.", "title": "" }, { "docid": "e72be9cc69cbcbc67dd4389f2179d7e7", "text": "We present a first sparse modular algorithm for computing a greatest common divisor of two polynomials <i>f</i><sub>1</sub>, <i>f</i><sub>2</sub> ε <i>L</i>[<i>x</i>] where <i>L</i> is an algebraic function field in <i>k</i> ≥ <i>0</i> parameters with <i>r</i> ≥ <i>0</i> field extensions. Our algorithm extends the dense algorithm of Monagan and van Hoeij from 2004 to support multiple field extensions and to be efficient when the gcd is sparse. Our algorithm is an output sensitive Las Vegas algorithm.\n We have implemented our algorithm in Maple. We provide timings demonstrating the efficiency of our algorithm compared to that of Monagan and van Hoeij and with a primitive fraction-free Euclidean algorithm for both dense and sparse gcd problems.", "title": "" }, { "docid": "50795998e83dafe3431c3509b9b31235", "text": "In this study, the daily movement directions of three frequently traded stocks (GARAN, THYAO and ISCTR) in Borsa Istanbul were predicted using deep neural networks. Technical indicators obtained from individual stock prices and dollar-gold prices were used as features in the prediction. Class labels indicating the movement direction were found using daily close prices of the stocks and they were aligned with the feature vectors. In order to perform the prediction process, the type of deep neural network, Convolutional Neural Network, was trained and the performance of the classification was evaluated by the accuracy and F-measure metrics. In the experiments performed, using both price and dollar-gold features, the movement directions in GARAN, THYAO and ISCTR stocks were predicted with the accuracy rates of 0.61, 0.578 and 0.574 respectively. Compared to using the price based features only, the use of dollar-gold features improved the classification performance.", "title": "" }, { "docid": "85bfa5d711d845175759a8e3973d37cb", "text": "Human motion and behaviour in crowded spaces is influenced by several factors, such as the dynamics of other moving agents in the scene, as well as the static elements that might be perceived as points of attraction or obstacles. In this work, we present a new model for human trajectory prediction which is able to take advantage of both human-human and human-space interactions. The future trajectory of humans, are generated by observing their past positions and interactions with the surroundings. To this end, we propose a “context-aware” recurrent neural network LSTM model, which can learn and predict human motion in crowded spaces such as a sidewalk, a museum or a shopping mall. We evaluate our model on a public pedestrian datasets, and we contribute a new challenging dataset that collects videos of humans that navigate in a (real) crowded space such as a big museum. Results show that our approach can predict human trajectories better when compared to previous state-of-the-art forecasting models.", "title": "" }, { "docid": "09bfe483e80464d0116bda5ec57c7d66", "text": "The problem of distance-based outlier detection is difficult to solve efficiently in very large datasets because of potential quadratic time complexity. 
We address this problem and develop sequential and distributed algorithms that are significantly more efficient than state-of-the-art methods while still guaranteeing the same outliers. By combining simple but effective indexing and disk block accessing techniques, we have developed a sequential algorithm iOrca that is up to an order-of-magnitude faster than the state-of-the-art. The indexing scheme is based on sorting the data points in order of increasing distance from a fixed reference point and then accessing those points based on this sorted order. To speed up the basic outlier detection technique, we develop two distributed algorithms (DOoR and iDOoR) for modern distributed multi-core clusters of machines, connected on a ring topology. The first algorithm passes data blocks from each machine around the ring, incrementally updating the nearest neighbors of the points passed. By maintaining a cutoff threshold, it is able to prune a large number of points in a distributed fashion. The second distributed algorithm extends this basic idea with the indexing scheme discussed earlier. In our experiments, both distributed algorithms exhibit significant improvements compared to the state-of-the-art distributed method [13].", "title": "" }, { "docid": "344778eee5b0d1479f8627ed1cd894bc", "text": "We provide a correspondence between the subjects of duality and density in classes of finite relational structures. The purpose of duality is to characterise the structures C that do not admit a homomorphism into a given target B by the existence of a homomorphism from a structure A into C. Density is the order-theoretic property of containing no covers (or ‘gaps’). We show that the covers in the skeleton of a category of finite relational models correspond naturally to certain instances of duality statements, and we characterise these covers.", "title": "" } ]
scidocsrr
a3041d0fadc6fba5a081fd6f04a804bf
Jump to better conclusions: SCAN both left and right
[ { "docid": "346349308d49ac2d3bb1cfa5cc1b429c", "text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 English-German and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "title": "" } ]
[ { "docid": "d63a81df4117f2b615f6e7208a2bdb6b", "text": "Recently, Location-based Services (LBS) became proactive by supporting smart notifications in case the user enters or leaves a specific geographical area, well-known as Geofencing. However, different geofences cannot be temporally related to each other. Therefore, we introduce a novel method to formalize sophisticated Geofencing scenarios as state and transition-based geofence models. Such a model considers temporal relations between geofences as well as duration constraints for the time being within a geofence or in transition between geofences. These are two highly important aspects in order to cover sophisticated scenarios in which a notification should be triggered only in case the user crosses multiple geofences in a defined temporal order or leaves a geofence after a certain amount of time. As a proof of concept, we introduce a prototype of a suitable user interface for designing complex geofence models in conjunction with the corresponding proactive LBS.", "title": "" }, { "docid": "3508a963a4f99d02d9c41dab6801d8fd", "text": "The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students’ comprehension and learning. This comprehensive meta-analysis of empirical studies was conducted to examine evidence of the effects of classroom discussion on measures of teacher and student talk and on individual student comprehension and critical-thinking and reasoning outcomes. Results revealed that several discussion approaches produced strong increases in the amount of student talk and concomitant reductions in teacher talk, as well as substantial improvements in text comprehension. Few approaches to discussion were effective at increasing students’ literal or inferential comprehension and critical thinking and reasoning. Effects were moderated by study design, the nature of the outcome measure, and student academic ability. While the range of ages of participants in the reviewed studies was large, a majority of studies were conducted with students in 4th through 6th grades. Implications for research and practice are discussed.", "title": "" }, { "docid": "6c1a21055e21198c2102f2601b835104", "text": "Stroke is a leading cause of adult motor disability. Despite recent progress, recovery of motor function after stroke is usually incomplete. This double blind, Sham-controlled, crossover study was designed to test the hypothesis that non-invasive stimulation of the motor cortex could improve motor function in the paretic hand of patients with chronic stroke. Hand function was measured using the Jebsen-Taylor Hand Function Test (JTT), a widely used, well validated test for functional motor assessment that reflects activities of daily living. JTT measured in the paretic hand improved significantly with non-invasive transcranial direct current stimulation (tDCS), but not with Sham, an effect that outlasted the stimulation period, was present in every single patient tested and that correlated with an increment in motor cortical excitability within the affected hemisphere, expressed as increased recruitment curves (RC) and reduced short-interval intracortical inhibition. 
These results document a beneficial effect of non-invasive cortical stimulation on a set of hand functions that mimic activities of daily living in the paretic hand of patients with chronic stroke, and suggest that this interventional strategy in combination with customary rehabilitative treatments may play an adjuvant role in neurorehabilitation.", "title": "" }, { "docid": "fab33f2e32f4113c87e956e31674be58", "text": "We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, unique and synergistic contributions. We focus on the relationship between “redundant information” and the more familiar information theoretic notions of “common information.” Our main contribution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decomposition of the total mutual information. Interestingly, this entails that any reasonable measure of redundant information cannot be derived by optimization over a single random variable. Keywords—common and private information, synergy, redundancy, information lattice, sufficient statistic, partial information decomposition", "title": "" }, { "docid": "842202ed67b71c91630fcb63c4445e38", "text": "Yaumatei Dermatology Clinic, 12/F Yaumatei Specialist Clinic (New Extension), 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 46-year-old Chinese man presented with one year history of itchy verrucous lesions over penis and scrotum. Skin biopsy confirmed epidermolytic acanthoma. Epidermolytic acanthoma is a rare benign tumour. Before making such a diagnosis, exclusion of other diseases, especially genital warts and bowenoid papulosis is necessary. Treatment of multiple epidermolytic acanthoma remains unsatisfactory.", "title": "" }, { "docid": "052a83669b39822eda51f2e7222074b4", "text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.", "title": "" }, { "docid": "ea9f43aaab4383369680c85a040cedcf", "text": "Efforts toward automated detection and identification of multistep cyber attack scenarios would benefit significantly from a methodology and language for modeling such scenarios. The Correlated Attack Modeling Language (CAML) uses a modular approach, where a module represents an inference step and modules can be linked together to detect multistep scenarios. CAML is accompanied by a library of predicates, which functions as a vocabulary to describe the properties of system states and events. The concept of attack patterns is introduced to facilitate reuse of generic modules in the attack modeling process. 
CAML is used in a prototype implementation of a scenario recognition engine that consumes first-level security alerts in real time and produces reports that identify multistep attack scenarios discovered in the alert stream.", "title": "" }, { "docid": "dfb16d97d293776e255397f1dc49bbbf", "text": "Self-service automatic teller machines (ATMs) have dramatically altered the ways in which customers interact with banks. ATMs provide the convenience of completing some banking transactions remotely and at any time. AT&T Global Information Solutions (GIS) is the world's leading provider of ATMs. These machines support such familiar services as cash withdrawals and balance inquiries. Further technological development has extended the utility and convenience of ATMs produced by GIS by facilitating check cashing and depositing, as well as direct bill payment, using an on-line system. These enhanced services, discussed in this paper, are made possible primarily through sophisticated optical character recognition (OCR) technology. Developed by an AT&T team that included GIS, AT&T Bell Laboratories Quality, Engineering, Software, and Technologies (QUEST), and AT&T Bell Laboratories Research, OCR technology was crucial to the development of these advanced ATMs.", "title": "" }, { "docid": "3bb4666a27f6bc961aa820d3f9301560", "text": "The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.", "title": "" }, { "docid": "30e93cb20194b989b26a8689f06b8343", "text": "We present a robust method for solving the map matching problem exploiting massive GPS trace data. Map matching is the problem of determining the path of a user on a map from a sequence of GPS positions of that user --- what we call a trajectory. Commonly obtained from GPS devices, such trajectory data is often sparse and noisy. As a result, the accuracy of map matching is limited due to ambiguities in the possible routes consistent with trajectory samples. Our approach is based on the observation that many regularity patterns exist among common trajectories of human beings or vehicles as they normally move around. Among all possible connected k-segments on the road network (i.e., consecutive edges along the network whose total length is approximately k units), a typical trajectory collection only utilizes a small fraction. This motivates our data-driven map matching method, which optimizes the projected paths of the input trajectories so that the number of the k-segments being used is minimized. We present a formulation that admits efficient computation via alternating optimization. Furthermore, we have created a benchmark for evaluating the performance of our algorithm and others alike. Experimental results demonstrate that the proposed approach is superior to state-of-art single trajectory map matching techniques. Moreover, we also show that the extracted popular k-segments can be used to process trajectories that are not present in the original trajectory set. 
This leads to a map matching algorithm that is as efficient as existing single trajectory map matching algorithms, but with much improved map matching accuracy.", "title": "" }, { "docid": "76a99c83dfbe966839dd0bcfbd32fad6", "text": "Virtually all domains of cognitive function require the integration of distributed neural activity. Network analysis of human brain connectivity has consistently identified sets of regions that are critically important for enabling efficient neuronal signaling and communication. The central embedding of these candidate 'brain hubs' in anatomical networks supports their diverse functional roles across a broad range of cognitive tasks and widespread dynamic coupling within and across functional networks. The high level of centrality of brain hubs also renders them points of vulnerability that are susceptible to disconnection and dysfunction in brain disorders. Combining data from numerous empirical and computational studies, network approaches strongly suggest that brain hubs play important roles in information integration underpinning numerous aspects of complex cognitive function.", "title": "" }, { "docid": "1e8acf321f7ff3a1a496e4820364e2a8", "text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.", "title": "" }, { "docid": "c896c4c81a3b8d18ad9f8073562f5514", "text": "A fully integrated passive UHF RFID tag with embedded temperature sensor, compatible with the ISO/IEC 18000 type 6C protocol, is developed in a standard 0.18µm CMOS process, which is designed to measure the axle temperature of a running train. The consumption of RF/analog front-end circuits is 1.556µA@1.0V, and power dissipation of digital part is 5µA@1.0V. The CMOS temperature sensor exhibits a conversion time under 2 ms, less than 7 µW power dissipation, resolution of 0.31°C/LSB and error of +2.3/−1.1°C with a 1.8 V power supply for range from −35°C to 105°C. Measured sensitivity of tag is −5dBm at room temperature.", "title": "" }, { "docid": "8c04758d9f1c44e007abf6d2727d4a4f", "text": "The automatic identification and diagnosis of rice diseases are highly desired in the field of agricultural information. Deep learning is a hot research topic in pattern recognition and machine learning at present, it can effectively solve these problems in vegetable pathology. In this study, we propose a novel rice diseases identification method based on deep convolutional neural networks (CNNs) techniques. Using a dataset of 500 natural images of diseased and healthy rice leaves and stems captured from rice experimental field, CNNs are trained to identify 10 common rice diseases. Under the 10-fold cross-validation strategy, the proposed CNNs-based model achieves an accuracy of 95.48%. 
This accuracy is much higher than conventional machine learning model. The simulation results for the identification of rice diseases show the feasibility and effectiveness of the proposed method. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "23c1bd79e91f2e07b883c5cdbd97a780", "text": "BACKGROUND\nPostprandial hypertriglyceridemia and hyperglycemia are considered risk factors for cardiovascular disease. Evidence suggests that postprandial hypertriglyceridemia and hyperglycemia induce endothelial dysfunction and inflammation through oxidative stress. Statins and angiotensin type 1 receptor blockers have been shown to reduce oxidative stress and inflammation, improving endothelial function.\n\n\nMETHODS AND RESULTS\nTwenty type 2 diabetic patients ate 3 different test meals: a high-fat meal, 75 g glucose alone, and a high-fat meal plus glucose. Glycemia, triglyceridemia, endothelial function, nitrotyrosine, C-reactive protein, intercellular adhesion molecule-1, and interleukin-6 were assayed during the tests. Subsequently, diabetics took atorvastatin 40 mg/d, irbesartan 300 mg/d, both, or placebo for 1 week. The 3 tests were performed again between 5 and 7 days after the start of each treatment. High-fat load and glucose alone produced a decrease in endothelial function and increases in nitrotyrosine, C-reactive protein, intercellular adhesion molecule-1, and interleukin-6. These effects were more pronounced when high-fat load and glucose were combined. Short-term atorvastatin and irbesartan treatments significantly counterbalanced these phenomena, and their combination was more effective than either therapy alone.\n\n\nCONCLUSIONS\nThis study confirms an independent and cumulative effect of postprandial hypertriglyceridemia and hyperglycemia on endothelial function and inflammation, suggesting oxidative stress as a common mediator of such an effect. Short-term treatment with atorvastatin and irbesartan may counterbalance this phenomenon; the combination of the 2 compounds is most effective.", "title": "" }, { "docid": "2e9a0bce883548288de0a5d380b1ddf6", "text": "Three-level neutral point clamped (NPC) inverter is a widely used topology of multilevel inverters. However, the neutral point fluctuates for certain switching states. At low modulation index, the fluctuations can be compensated using redundant switching states. But, at higher modulation index and in overmodulation region, the neutral point fluctuation deteriorates the performance of the inverter. This paper proposes a simple space vector pulsewidth modulation scheme for operating a three-level NPC inverter at higher modulation indexes, including overmodulation region, with neutral point balancing. Experimental results are provided", "title": "" }, { "docid": "4fc64e24e9b080ffcc45cae168c2e339", "text": "During real time control of a dynamic system, one needs to design control systems with advanced control strategies to handle inherent nonlinearities and disturbances. This paper deals with the designing of a model reference adaptive control system with the use of MIT rule for real time control of a ball and beam system. This paper uses the gradient theory to develop MIT rule in which one or more parameters of adaptive controller needs to be adjusted so that the plant could track the reference model. A linearized model of ball and beam system is used in this paper to design the controller on MATLAB and the designed controller is then applied for real time control of ball and beam system. 
Simulations carried out on SIMULINK and MATLAB show good performance of the designed adaptive controller in real time.", "title": "" }, { "docid": "25e7e22d19d786ff953c8cfa47988aa2", "text": "The world of human-object interactions is rich. While generally we sit on chairs and sofas, if need be we can even sit on TVs or top of shelves. In recent years, there has been progress in modeling actions and human-object interactions. However, most of these approaches require lots of data. It is not clear if the learned representations of actions are generalizable to new categories. In this paper, we explore the problem of zero-shot learning of human-object interactions. Given limited verb-noun interactions in training data, we want to learn a model that can work even on unseen combinations. To deal with this problem, we propose a novel method using an external knowledge graph and graph convolutional networks which learns how to compose classifiers for verb-noun pairs. We also provide benchmarks on several datasets for zero-shot learning including both image and video. We hope our method, dataset and baselines will facilitate future research in this direction.", "title": "" }, { "docid": "e6633bf0c5f2fd18f739a7f3a1751854", "text": "Image inpainting in wavelet domain refers to the recovery of an image from incomplete and/or inaccurate wavelet coefficients. To reconstruct the image, total variation (TV) models have been widely used in the literature and they produce high-quality reconstructed images. In this paper, we consider an unconstrained TV-regularized, l2-data-fitting model to recover the image. The model is solved by the alternating direction method (ADM). At each iteration, ADM needs to solve three subproblems, all of which have closed-form solutions. The per-iteration computational cost of ADM is dominated by two Fourier transforms and two wavelet transforms, all of which admit fast computation. Convergence of the ADM iterative scheme is readily obtained. We also discuss extensions of this ADM scheme to solving two closely related constrained models. We present numerical results to show the efficiency and stability of ADM for solving wavelet domain image inpainting problems. Numerical comparison results of ADM with some recent algorithms are also reported.", "title": "" }, { "docid": "2910fe6ac9958d9cbf9014c5d3140030", "text": "We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.", "title": "" } ]
scidocsrr
68edb23f9c819f3ac1b17eedd5d034da
Predictive Mechanisms in Idiom Comprehension
[ { "docid": "04f10a35e3eb25f734cc8f2da492ef67", "text": "Reviewed are studies using event-related potentials to examine when and how sentence context information is used during language comprehension. Results suggest that, when it can, the brain uses context to predict features of likely upcoming items. However, although prediction seems important for comprehension, it also appears susceptible to age-related deterioration and can be associated with processing costs. The brain may address this trade-off by employing multiple processing strategies, distributed across the two cerebral hemispheres. In particular, left hemisphere language processing seems to be oriented toward prediction and the use of top-down cues, whereas right hemisphere comprehension is more bottom-up, biased toward the veridical maintenance of information. Such asymmetries may arise, in turn, because language comprehension mechanisms are integrated with language production mechanisms only in the left hemisphere (the PARLO framework).", "title": "" }, { "docid": "908716e7683bdc78283600f63bd3a1b0", "text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.", "title": "" } ]
[ { "docid": "654f1eb2b4a3612a5050247d996ff59d", "text": "Cyclic Redundancy Check (CRC) codes provide a simple yet powerful method of error detection during digital data transmission. Use of a table look-up in computing the CRC bits will efficiently implement these codes in software.", "title": "" }, { "docid": "ffdd14d8d74a996971284a8e5e950996", "text": "Ten years on from a review in the twentieth issue of this journal, this contribution assess the direction research in the field of glucose sensing for diabetes is headed and various technologies to be seen in the future. The emphasis of this review was placed on the home blood glucose testing market. After an introduction to diabetes and glucose sensing, this review analyses state of the art and pipeline devices; in particular their user friendliness and technological advancement. This review complements conventional reviews based on scholarly published papers in journals.", "title": "" }, { "docid": "fa19d51396156e0ede5a02eb243a9fc8", "text": "Non-negative data is generated by a broad selection of applications today, e.g in gene expression analysis or imaging. Many factorization techniques have been extended to account for this natural constraint and have become very popular due to their decomposition into interpretable latent factors. Generally relational data like protein interaction networks or social network data can also be seen as naturally non-negative. In this work, we extend the RESCAL tensor factorization, which has shown state-of-the-art results for multi-relational learning, to account for non-negativity by employing multiplicative update rules. We study the performance via these approaches on various benchmark datasets and show that a non-negativity constraint can be introduced by losing only little in terms of predictive quality in most of the cases but simultaneously increasing the sparsity of the factors significantly compared to the original RESCAL algorithm.", "title": "" }, { "docid": "65e297211555a88647eb23a65698531c", "text": "Game theoretical techniques have recently become prevalen t in many engineering applications, notably in communications. With the emergence of cooperation as a new communicat ion paradigm, and the need for self-organizing, decentrali zed, and autonomic networks, it has become imperative to seek sui table game theoretical tools that allow to analyze and study the behavior and interactions of the nodes in future communi cation networks. In this context, this tutorial introduces the concepts of cooperative game theory, namely coalitiona l games, and their potential applications in communication and wireless networks. For this purpose, we classify coalit i nal games into three categories: Canonical coalitional g ames, coalition formation games, and coalitional graph games. Th is new classification represents an application-oriented a pproach for understanding and analyzing coalitional games. For eac h class of coalitional games, we present the fundamental components, introduce the key properties, mathematical te hniques, and solution concepts, and describe the methodol ogies for applying these games in several applications drawn from the state-of-the-art research in communications. In a nuts hell, this article constitutes a unified treatment of coalitional g me theory tailored to the demands of communications and", "title": "" }, { "docid": "60cbe9d8e1cbc5dd87c8f438cc766a0b", "text": "Drosophila mounts a potent host defence when challenged by various microorganisms. 
Analysis of this defence by molecular genetics has now provided a global picture of the mechanisms by which this insect senses infection, discriminates between various classes of microorganisms and induces the production of effector molecules, among which antimicrobial peptides are prominent. An unexpected result of these studies was the discovery that most of the genes involved in the Drosophila host defence are homologous or very similar to genes implicated in mammalian innate immune defences. Recent progress in research on Drosophila immune defence provides evidence for similarities and differences between Drosophila immune responses and mammalian innate immunity.", "title": "" }, { "docid": "3c0a3b26c062056dd5e47774de7b8272", "text": "The emergence of the field of data mining in the last decade has sparked an increasing interest in clustering of time series. Although there has been much research on clustering in general, most classic machine learning and data mining algorithms do not work well for time series due to their unique structure. In particular, the high dimensionality, very high feature correlation, and the (typically) large amount of noise that characterize time series data present a difficult challenge. In this work we address these challenges by introducing a novel anytime version of k-Means clustering algorithm for time series. The algorithm works by leveraging off the multi-resolution property of wavelets. In particular, an initial clustering is performed with a very coarse resolution representation of the data. The results obtained from this “quick and dirty” clustering are used to initialize a clustering at a slightly finer level of approximation. This process is repeated until the clustering results stabilize or until the “approximation” is the raw data. In addition to casting k-Means as an anytime algorithm, our approach has two other very unintuitive properties. The quality of the clustering is often better than the batch algorithm, and even if the algorithm is run to completion, the time taken is typically much less than the time taken by the original algorithm. We explain, and empirically demonstrate these surprising and desirable properties with comprehensive experiments on several publicly available real data sets.", "title": "" }, { "docid": "e5f30c0d2c25b6b90c136d1c84ba8a75", "text": "Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose a the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. 
We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.", "title": "" }, { "docid": "764b20159244eac0b503d86636f5d62e", "text": "Most modern Information Extraction (IE) systems are implemented as sequential taggers and focus on modelling local dependencies. Non-local and non-sequential context is, however, a valuable source of information to improve predictions. In this paper, we introduce GraphIE, a framework that operates over a graph representing both local and nonlocal dependencies between textual units (i.e. words or sentences). The algorithm propagates information between connected nodes through graph convolutions and exploits the richer representation to improve word-level predictions. The framework is evaluated on three different tasks, namely social media, textual and visual information extraction. Results show that GraphIE outperforms a competitive baseline (BiLSTM+CRF) in all tasks by a significant margin.", "title": "" }, { "docid": "1c5de60e122c601cb1c58083694cf599", "text": "Existing complexity bounds for point-based POMDP value iteration algorithms focus either on the curse of dimensionality or the curse of history. We derive a new bound that relies on both and uses the concept of discounted reachability; our conclusions may help guide future algorithm design. We also discuss recent improvements to our (point-based) heuristic search value iteration algorithm. Our new implementation calculates tighter initial bounds, avoids solving linear programs, and makes more effective use of sparsity. Empirical results show speedups of more than two orders of magnitude.", "title": "" }, { "docid": "40e9a5fcc3eaf85840a45dff8a09aec1", "text": "Web data extractors are used to extract data from web documents in order to feed automated processes. In this article, we propose a technique that works on two or more web documents generated by the same server-side template and learns a regular expression that models it and can later be used to extract data from similar documents. The technique builds on the hypothesis that the template introduces some shared patterns that do not provide any relevant data and can thus be ignored. We have evaluated and compared our technique to others in the literature on a large collection of web documents; our results demonstrate that our proposal performs better than the others and that input errors do not have a negative impact on its effectiveness; furthermore, its efficiency can be easily boosted by means of a couple of parameters, without sacrificing its effectiveness.", "title": "" }, { "docid": "1be4284ecc83855ecb2fee27dd8b12ac", "text": "This paper describes a new strategy for real-time cooperative localization of autonomous vehicles. The strategy aims to improve the vehicles localization accuracy and reduce the impact of computing time of multi-sensor data fusion algorithms and vehicle-to-vehicle communication on parallel architectures. The method aims to solve localization issues in a cluster of autonomous vehicles, equipped with low-cost navigation systems in an unknown environment. It stands on multiple forms of the Kalman filter derivatives to estimate the vehicles' nonlinear model vector state, named local fusion node. 
The vehicles exchange their local state estimate and Covariance Intersection algorithm for merging the local vehicles' state estimate in the second node (named global data fusion node). This strategy simultaneously exploits the proprioceptive and sensors -a Global Positioning System, and a vehicle-to-vehicle transmitter and receiver- and an exteroceptive sensor, range finder, to sense their surroundings for more accurate and reliable collaborative localization.", "title": "" }, { "docid": "b9b45847ebccf152ac2093c300dc769d", "text": "Honey clasps several medicinal and health effects as a natural food supplement. It has been established as a potential therapeutic antioxidant agent for various biodiverse ailments. Data report that it exhibits strong wound healing, antibacterial, anti-inflammatory, antifungal, antiviral, and antidiabetic effects. It also retains immunomodulatory, estrogenic regulatory, antimutagenic, anticancer, and numerous other vigor effects. Data also show that honey, as a conventional therapy, might be a novel antioxidant to abate many of the diseases directly or indirectly associated with oxidative stress. In this review, these wholesome effects have been thoroughly reviewed to underscore the mode of action of honey exploring various possible mechanisms. Evidence-based research intends that honey acts through a modulatory road of multiple signaling pathways and molecular targets. This road contemplates through various pathways such as induction of caspases in apoptosis; stimulation of TNF-α, IL-1β, IFN-γ, IFNGR1, and p53; inhibition of cell proliferation and cell cycle arrest; inhibition of lipoprotein oxidation, IL-1, IL-10, COX-2, and LOXs; and modulation of other diverse targets. The review highlights the research done as well as the apertures to be investigated. The literature suggests that honey administered alone or as adjuvant therapy might be a potential natural antioxidant medicinal agent warranting further experimental and clinical research.", "title": "" }, { "docid": "4d0b04f546ab5c0d79bb066b1431ff51", "text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ce13d49ba27d33db28fd5aaf991b2214", "text": "The performance of a standard model predictive controller (MPC) is directly related to its predictive model. If there are unmodeled periodic disturbances in the actual system, MPC will be difficult to suppress the disturbances, thus causing fluctuations of system output. To solve this problem, this paper proposes an improved MPC named predictive-integral-resonant control (PIRC). 
Compared with the standard MPC, the proposed PIRC could enhance the suppression ability for disturbances by embedding the internal model composing of the integral and resonant loop. Furthermore, this paper applies the proposed PIRC to PMSM drives, and proposes the PMSM control strategy based on the cascaded PIRC, which could suppress periodic disturbances caused by the dead time effects, current sampling errors, and so on. The experimental results show that the PIRC can suppress periodic disturbances in the drive system, thus ensuring good current and speed performance. Meanwhile, the PIRC could maintain the excellent dynamic performance as the standard MPC.", "title": "" }, { "docid": "10c6b59c20f5745104e74eeaa0dfed13", "text": "In this paper, we evaluate various onset detection algorithms in terms of their online capabilities. Most methods use some kind of normalization over time, which renders them unusable for online tasks. We modified existing methods to enable online application and evaluated their performance on a large dataset consisting of 27,774 annotated onsets. We focus particularly on the incorporated preprocessing and peak detection methods. We show that, with the right choice of parameters, the maximum achievable performance is in the same range as that of offline algorithms, and that preprocessing can improve the results considerably. Furthermore, we propose a new onset detection method based on the common spectral flux and a new peak-picking method which outperforms traditional methods both online and offline and works with audio signals of various volume levels.", "title": "" }, { "docid": "24e0fb7247644ba6324de9c86fdfeb12", "text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.", "title": "" }, { "docid": "f0e9869034a0f1d15ac3665b7970eea0", "text": "The United States is experiencing an epidemic of drug overdose (poisoning) deaths. Since 2000, the rate of deaths from drug overdoses has increased 137%, including a 200% increase in the rate of overdose deaths involving opioids (opioid pain relievers and heroin). CDC analyzed recent multiple cause-of-death mortality data to examine current trends and characteristics of drug overdose deaths, including the types of opioids associated with drug overdose deaths. During 2014, a total of 47,055 drug overdose deaths occurred in the United States, representing a 1-year increase of 6.5%, from 13.8 per 100,000 persons in 2013 to 14.7 per 100,000 persons in 2014. 
The rate of drug overdose deaths increased significantly for both sexes, persons aged 25-44 years and ≥55 years, non-Hispanic whites and non-Hispanic blacks, and in the Northeastern, Midwestern, and Southern regions of the United States. Rates of opioid overdose deaths also increased significantly, from 7.9 per 100,000 in 2013 to 9.0 per 100,000 in 2014, a 14% increase. Historically, CDC has programmatically characterized all opioid pain reliever deaths (natural and semisynthetic opioids, methadone, and other synthetic opioids) as \"prescription\" opioid overdoses (1). Between 2013 and 2014, the age-adjusted rate of death involving methadone remained unchanged; however, the age-adjusted rate of death involving natural and semisynthetic opioid pain relievers, heroin, and synthetic opioids, other than methadone (e.g., fentanyl) increased 9%, 26%, and 80%, respectively. The sharp increase in deaths involving synthetic opioids, other than methadone, in 2014 coincided with law enforcement reports of increased availability of illicitly manufactured fentanyl, a synthetic opioid; however, illicitly manufactured fentanyl cannot be distinguished from prescription fentanyl in death certificate data. These findings indicate that the opioid overdose epidemic is worsening. There is a need for continued action to prevent opioid abuse, dependence, and death, improve treatment capacity for opioid use disorders, and reduce the supply of illicit opioids, particularly heroin and illicit fentanyl.", "title": "" }, { "docid": "7f6ad990d5cdaf8cc3b38685be407529", "text": "Sandra R. Waxman & Erin M. Leddon Northwestern University Synopsis Perhaps more than any other developmental achievement, word-learning stands at the very intersection of language and cognition. Early word-learning represents infants’ entrance into a truly symbolic system and brings with it a means to establish reference. To succeed, infants must identify the relevant linguistic units, identify their corresponding concepts, and establish a mapping between the two. But how do infants begin to map words to concepts, and thus establish their meaning? How do they discover that different types of words (e.g., “dog” (noun), “fluffy” (adjective), “begging” (verb) refer to different aspects of the same scene (e.g, a standard poodle, seated on its hind legs and holding its front paws in the air)? We have proposed that infants begin the task of word-learning with a broad, universal expectation linking novel words to a broad range of commonalities, and that this initial expectation is subsequently fine-tuned on the basis of their experience with the objects and events they encounter and the native language under acquisition. In this chapter, we examine this proposal, in light of recent evidence with infants and young children.", "title": "" }, { "docid": "f178c362aac13afaf0229b83a8f5ace0", "text": "Around the world, Rotating Savings and Credit Associations (ROSCAs) are a prevalent saving mechanism in markets with low financial inclusion ratios. ROSCAs, which rely on social networks, facilitate credit and financing needs for individuals and small businesses. Despite their benefits, informality in ROSCAs leads to problems driven by disagreements and frauds. This further necessitates ROSCA participants’ dependency on social capital. 
To overcome these problems, to build on ROSCA participants’ financial proclivities, and to enhance access and efficiency of ROSCAs, we explore opportunities to digitize ROSCAs in Pakistan by building a digital platform for collection and distribution of ROSCA funds. Digital ROSCAs have the potential to mitigate issues with safety and privacy of ROSCA money, frauds and defaults in ROSCAs, and record keeping, including payment history. In this context, we illustrate features of a digital ROSCA and examine aspects of gender, social capital, literacy, and religion as they relate to digital ROSCAs.", "title": "" }, { "docid": "55eb5594f05319c157d71361880f1983", "text": "Following the growing share of wind energy in electric power systems, several wind power forecasting techniques have been reported in the literature in recent years. In this paper, a wind power forecasting strategy composed of a feature selection component and a forecasting engine is proposed. The feature selection component applies an irrelevancy filter and a redundancy filter to the set of candidate inputs. The forecasting engine includes a new enhanced particle swarm optimization component and a hybrid neural network. The proposed wind power forecasting strategy is applied to real-life data from wind power producers in Alberta, Canada and Oklahoma, U.S. The presented numerical results demonstrate the efficiency of the proposed strategy, compared to some other existing wind power forecasting methods.", "title": "" } ]
scidocsrr
317be970c364844cf561ea35c2be9166
Analysis and Design of Voltage-Controlled Oscillator Based Analog-to-Digital Converter
[ { "docid": "81b6059f24c827c271247b07f38f86d5", "text": "We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter, respectively. The transmitter architecture takes advantage of the wideband frequency modulation capability of the all-digital phase-locked loop with built-in automatic compensation to ensure modulation accuracy. The receiver employs a discrete-time architecture in which the RF signal is directly sampled and processed using analog and digital signal processing techniques. The complete chip also integrates power management functions and a digital baseband processor. Application of the presented ideas has resulted in significant area and power savings while producing structures that are amenable to migration to more advanced deep-submicron processes, as they become available. The entire IC occupies 10 mm/sup 2/ and consumes 28 mA during transmit and 41 mA during receive at 1.5-V supply.", "title": "" } ]
[ { "docid": "8de0a71dd4d0e8b6874e80ffd5e45dd4", "text": "Predictive state representations (PSRs) have recently been proposed as an alternative to partially observable Markov decision processes (POMDPs) for representing the state of a dynamical system (Littman et al., 2001). We present a learning algorithm that learns a PSR from observational data. Our algorithm produces a variant of PSRs called transformed predictive state representations (TPSRs). We provide an efficient principal-components-based algorithm for learning a TPSR, and show that TPSRs can perform well in comparison to Hidden Markov Models learned with Baum-Welch in a real world robot tracking task for low dimensional representations and long prediction horizons.", "title": "" }, { "docid": "719b4c5352d94d5ae52172b3c8a2512d", "text": "Acts of violence account for an estimated 1.43 million deaths worldwide annually. While violence can occur in many contexts, individual acts of aggression account for the majority of instances. In some individuals, repetitive acts of aggression are grounded in an underlying neurobiological susceptibility that is just beginning to be understood. The failure of \"top-down\" control systems in the prefrontal cortex to modulate aggressive acts that are triggered by anger provoking stimuli appears to play an important role. An imbalance between prefrontal regulatory influences and hyper-responsivity of the amygdala and other limbic regions involved in affective evaluation are implicated. Insufficient serotonergic facilitation of \"top-down\" control, excessive catecholaminergic stimulation, and subcortical imbalances of glutamatergic/gabaminergic systems as well as pathology in neuropeptide systems involved in the regulation of affiliative behavior may contribute to abnormalities in this circuitry. Thus, pharmacological interventions such as mood stabilizers, which dampen limbic irritability, or selective serotonin reuptake inhibitors (SSRIs), which may enhance \"top-down\" control, as well as psychosocial interventions to develop alternative coping skills and reinforce reflective delays may be therapeutic.", "title": "" }, { "docid": "a7f4a57534ee0a02b675e3b7acdf53d3", "text": "Semantic-oriented service matching is one of the challenges in automatic Web service discovery. Service users may search for Web services using keywords and receive the matching services in terms of their functional profiles. A number of approaches to computing the semantic similarity between words have been developed to enhance the precision of matchmaking, which can be classified into ontology-based and corpus-based approaches. The ontology-based approaches commonly use the differentiated concept information provided by a large ontology for measuring lexical similarity with word sense disambiguation. Nevertheless, most of the ontologies are domain-special and limited to lexical coverage, which have a limited applicability. On the other hand, corpus-based approaches rely on the distributional statistics of context to represent per word as a vector and measure the distance of word vectors. However, the polysemous problem may lead to a low computational accuracy. In this paper, in order to augment the semantic information content in word vectors, we propose a multiple semantic fusion (MSF) model to generate sense-specific vector per word. 
In this model, various semantic properties of the general-purpose ontology WordNet are integrated to fine-tune the distributed word representations learned from corpus, in terms of vector combination strategies. The retrofitted word vectors are modeled as semantic vectors for estimating semantic similarity. The MSF model-based similarity measure is validated against other similarity measures on multiple benchmark datasets. Experimental results of word similarity evaluation indicate that our computational method can obtain higher correlation coefficient with human judgment in most cases. Moreover, the proposed similarity measure is demonstrated to improve the performance of Web service matchmaking based on a single semantic resource. Accordingly, our findings provide a new method and perspective to understand and represent lexical semantics.", "title": "" }, { "docid": "c02fb121399e1ed82458fb62179d2560", "text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.", "title": "" }, { "docid": "803a5dbedf309cec97d130438e687002", "text": "Affective computing is a newly trend the main goal is exploring the human emotion things. The human emotion is leaded into a key position of behavior clue, and hence it should be included within the sensible model when an intelligent system aims to simulate or forecast human responses. This research utilizes decision tree one of data mining model to classify the emotion. This research integrates and manipulates the Thayer's emotion mode and color theory into the decision tree model, C4.5 for an innovative emotion detecting system. This paper uses 320 data in four emotion groups to train and build the decision tree for verifying the accuracy in this system. The result reveals that C4.5 decision tree model can be effective classified the emotion by feedback color from human. For the further research, colors will not the only human behavior clues, even more than all the factors from human interaction.", "title": "" }, { "docid": "6d657b6445bbd60f779624104b2dc0b0", "text": "High-quality urban reconstruction requires more than multi-view reconstruction and local optimization. The structure of facades depends on the general layout, which has to be optimized globally. Shape grammars are an established method to express hierarchical spatial relationships, and are therefore suited as representing constraints for semantic facade interpretation. 
Usually inference uses numerical approximations, or hard-coded grammar schemes. Existing methods inspired by classical grammar parsing are not applicable on real-world images due to their prohibitively high complexity. This work provides feasible generic facade reconstruction by combining low-level classifiers with mid-level object detectors to infer an irregular lattice. The irregular lattice preserves the logical structure of the facade while reducing the search space to a manageable size. We introduce a novel method for handling symmetry and repetition within the generic grammar. We show competitive results on two datasets, namely the Paris 2010 and the Graz 50. The former includes only Hausmannian, while the latter includes Classicism, Biedermeier, Historicism, Art Nouveau and post-modern architectural styles.", "title": "" }, { "docid": "1523534d398b4900c90d94e3f1bee422", "text": "PURPOSE\nThe purpose of this pilot study was to examine the effectiveness of hippotherapy as an intervention for the treatment of postural instability in individuals with multiple sclerosis (MS).\n\n\nSUBJECTS\nA sample of convenience of 15 individuals with MS (24-72 years) were recruited from support groups and assessed for balance deficits.\n\n\nMETHODS\nThis study was a nonequivalent pretest-posttest comparison group design. Nine individuals (4 males, 5 females) received weekly hippotherapy intervention for 14 weeks. The other 6 individuals (2 males, 4 females) served as a comparison group. All participants were assessed with the Berg Balance Scale (BBS) and Tinetti Performance Oriented Mobility Assessment (POMA) at 0, 7, and 14 weeks.\n\n\nRESULTS\nThe group receiving hippotherapy showed statistically significant improvement from pretest (0 week) to posttest (14 week) on the BBS (mean increase 9.15 points (x (2) = 8.82, p = 0.012)) and POMA scores (mean increase 5.13 (x (2) = 10.38, p = 0.006)). The comparison group had no significant changes on the BBS (mean increase 0.73 (x (2) = 0.40, p = 0.819)) or POMA (mean decrease 0.13 (x (2) = 1.41, p = 0.494)). A statistically significant difference was also found between the groups' final BBS scores (treatment group median = 55.0, comparison group median 41.0), U = 7, r = -0.49.\n\n\nDISCUSSION\nHippotherapy shows promise for the treatment of balance disorders in persons with MS. Further research is needed to refine protocols and selection criteria.", "title": "" }, { "docid": "504fcb97010d71fd07aca8bc9543af8b", "text": "The presence of raindrop induced distortion can have a significant negative impact on computer vision applications. Here we address the problem of visual raindrop distortion in standard colour video imagery for use in non-static, automotive computer vision applications where the scene can be observed to be changing over subsequent consecutive frames. We utilise current state of the art research conducted into the investigation of salience mapping as means of initial detection of potential raindrop candidates. We further expand on this prior state of the art work to construct a combined feature rich descriptor of shape information (Hu moments), isolation of raindrops pixel information from context, and texture (saliency derived) within an improved visual bag of words verification framework. Support Vector Machine and Random Forest classification were utilised for verification of potential candidates, and the effects of increasing discrete cluster centre counts on detection rates were studied. 
This novel approach of utilising extended shape information, isolation of context, and texture, along with increasing cluster counts, achieves a notable 13% increase in precision (92%) and 10% increase in recall (86%) against prior state of the art. False positive rates were also observed to decrease with a minimal false positive rate of 14% observed.", "title": "" }, { "docid": "5919da9a35b5731a4e9360def8990479", "text": "The reacTable* is a novel multi-user electro-acoustic musical instrument with a tabletop tangible user interface. In this paper we focus on the various collaborative aspects of this new instrument as well as on some of the related technical details such as the networking infrastructure. The instrument can be played both in local and remote collaborative scenarios and was designed from the very beginning to serve as a musical instrument for several simultaneous players", "title": "" }, { "docid": "35a298d5ec169832c3faf2e30d95e1a4", "text": "© 2001 Massachusetts Institute of Technology, Cambridge, MA 02139 USA — www.ai.mit.edu", "title": "" }, { "docid": "b6f9cc5eece3da40bf17f0c0b3d0bc55", "text": "In-silico interaction studies on forty two tetranortriterpenoids, which include four classes of compounds azadiratchins, salannins, nimbins and intact limonoids, with actin have been carried out using Autodock Vina and Surflex Dock. The docking scores and predicted hydrogen bonds along with spatial confirmation of the molecules indicate that actin could be a possible target for insect antifeedant studies, and a good correlation has been obtained between the percentage feeding index (PFI) and the binding energy of these molecules. The enhancement of the activity in the photo products and its reduction in microwave products observed in in-vivo studies are well brought out by this study. The study reveals Arg 183 in actin to be the most favoured residue for binding in most compounds whereas Tyr 69 is favoured additionally for salannin and nimbin type of compounds. In the case of limonoids Gln 59 seems to have hydrogen bonding interactions with most of the compounds. The present study reveals that the fit for PFI vs. binding energy is better for individual classes of compounds and can be attributed to the binding of ligand with different residues. This comprehensive in-silico analysis of interaction between actin as a receptor and tetranortriterpenoids may help in the understanding of the mode of action of bioinsecticides, and designing better lead molecules.", "title": "" }, { "docid": "e6ca7a2a94c7006b0f2839bb31aa28f8", "text": "While the services-based model of cloud computing makes more and more IT resources available to a wider range of customers, the massive amount of data in cloud platforms is becoming a target for malicious users. 
Previous studies show that attackers can co-locate their virtual machines (VMs) with target VMs on the same server, and obtain sensitive information from the victims using side channels. This paper investigates VM allocation policies and practical countermeasures against this novel kind of co-resident attack by developing a set of security metrics and a quantitative model. A security analysis of three VM allocation policies commonly used in existing cloud computing platforms reveals that the server's configuration, oversubscription and background traffic have a large impact on the ability to prevent attackers from co-locating with the targets. If the servers are properly configured, and oversubscription is enabled, the best policy is to allocate new VMs to the server with the most VMs. Based on these results, a new strategy is introduced that effectively decreases the probability of attackers achieving co-residence. The proposed solution only requires minor changes to current allocation policies, and hence can be easily integrated into existing cloud platforms to mitigate the threat of co-resident attacks.", "title": "" }, { "docid": "43121c7d44b3ad134a2a8ad42b1d43ef", "text": "Web services are emerging technologies to reuse software as services over the Internet by wrapping underlying computing models with XML. Web services are rapidly evolving and are expected to change the paradigms of both software development and use. This panel will discuss the current status and challenges of Web services technologies.", "title": "" }, { "docid": "fb25cc35adc0d5f1d7f7592b1d2f9bf4", "text": "In an information retrieval system (IRS) the query plays a very important role, so the user of an IRS must write his query well to have the expected result. In this paper, we have developed a new genetic algorithm-based query optimization method on relevance feedback for information retrieval. By using this technique, we have designed a fitness function respecting the order in which the relevant documents are retrieved, the terms of the relevant documents, and the terms of the irrelevant documents. Based on three benchmark test collections Cranfield, Medline and CACM, experiments have been carried out to compare our method with three well-known query optimization methods on relevance feedback. The experiments show that our method can achieve better results.", "title": "" }, { "docid": "9f60376e3371ac489b4af90026041fa7", "text": "There is a substantive body of research focusing on women's experiences of intimate partner violence (IPV), but a lack of qualitative studies focusing on men's experiences as victims of IPV. This article addresses this gap in the literature by paying particular attention to hegemonic masculinities and men's perceptions of IPV. Men ( N = 9) participated in in-depth interviews. Interview data were rigorously subjected to thematic analysis, which revealed five key themes in the men's narratives: fear of IPV, maintaining power and control, victimization as a forbidden narrative, critical understanding of IPV, and breaking the silence. Although the men share similar stories of victimization as women, the way this is influenced by their gendered histories is different. While some men reveal a willingness to disclose their victimization and share similar fear to women victims, others reframe their victim status in a way that sustains their own power and control. 
The men also draw attention to the contextual realities that frame abuse, including histories of violence against the women who used violence and the realities of communities suffering intergenerational affects of colonized histories. The findings reinforce the importance of in-depth qualitative work toward revealing the context of violence, understanding the impact of fear, victimization, and power/control on men's mental health as well as the outcome of legal and support services and lack thereof. A critical discussion regarding the gendered context of violence, power within relationships, and addressing men's need for support without redefining victimization or taking away from policies and support for women's ongoing victimization concludes the work.", "title": "" }, { "docid": "385aacadafef9cfd1a5b00bbe8f871c0", "text": "We present a 6.5mm3, 10mg, wireless peripheral nerve stimulator. The stimulator is powered and controlled through ultrasound from an external transducer and utilizes a single 750×750×750μm3 piezocrystal for downlink communication, powering, and readout, reducing implant volume and mass. An IC with 0.06mm2 active circuit area, designed in TSMC 65nm LPCMOS process, converts harvested ultrasound to stimulation charge with a peak efficiency of 82%. A custom wireless protocol that does not require a clock or memory circuits reduces on-chip power to 4μW when not stimulating. The encapsulated stimulator was cuffed to the sciatic nerve of an anesthetized rodent and demonstrated full-scale nerve activation in vivo. We achieve a highly efficient and temporally precise wireless peripheral nerve stimulator that is the smallest and lightest to our knowledge.", "title": "" }, { "docid": "7057f72a1ce2e92ae01785d5b6e4a1d5", "text": "Social transmission is everywhere. Friends talk about restaurants , policy wonks rant about legislation, analysts trade stock tips, neighbors gossip, and teens chitchat. Further, such interpersonal communication affects everything from decision making and well-But although it is clear that social transmission is both frequent and important, what drives people to share, and why are some stories and information shared more than others? Traditionally, researchers have argued that rumors spread in the \" 3 Cs \" —times of conflict, crisis, and catastrophe (e.g., wars or natural disasters; Koenig, 1985)―and the major explanation for this phenomenon has been generalized anxiety (i.e., apprehension about negative outcomes). Such theories can explain why rumors flourish in times of panic, but they are less useful in explaining the prevalence of rumors in positive situations, such as the Cannes Film Festival or the dot-com boom. Further, although recent work on the social sharing of emotion suggests that positive emotion may also increase transmission, why emotions drive sharing and why some emotions boost sharing more than others remains unclear. I suggest that transmission is driven in part by arousal. Physiological arousal is characterized by activation of the autonomic nervous system (Heilman, 1997), and the mobilization provided by this excitatory state may boost sharing. This hypothesis not only suggests why content that evokes more of certain emotions (e.g., disgust) may be shared more than other a review), but also suggests a more precise prediction , namely, that emotions characterized by high arousal, such as anxiety or amusement (Gross & Levenson, 1995), will boost sharing more than emotions characterized by low arousal, such as sadness or contentment. 
This idea was tested in two experiments. They examined how manipulations that increase general arousal (i.e., watching emotional videos or jogging in place) affect the social transmission of unrelated content (e.g., a neutral news article). If arousal increases transmission, even incidental arousal (i.e., outside the focal content being shared) should spill over and boost sharing. In the first experiment, 93 students completed what they were told were two unrelated studies. The first evoked specific emotions by using film clips validated in prior research (Christie & Friedman, 2004; Gross & Levenson, 1995). Participants in the control condition watched a neutral clip; those in the experimental conditions watched an emotional clip. Emotional arousal and valence were manipulated independently so that high-and low-arousal emotions of both a positive (amusement vs. contentment) and a negative (anxiety vs. …", "title": "" }, { "docid": "0aca949889a67f3dd21efe372a7f706d", "text": "Existing research on the formation of employee ethical climate perceptions focuses mainly on organization characteristics as antecedents, and although other constructs have been considered, these constructs have typically been studied in isolation. Thus, our understanding of the context in which ethical climate perceptions develop is incomplete. To address this limitation, we build upon the work of Rupp (Organ Psychol Rev 1:72–94, 2011) to develop and test a multi-experience model of ethical climate which links aspects of the corporate social responsibility (CSR), ethics, justice, and trust literatures and helps to explain how employees’ ethical climate perceptions form. We argue that in forming ethical climate perceptions, employees consider the actions or characteristics of a complex web of actors. Specifically, we propose that employees look (1) outward at how communities are impacted by their organization’s actions (e.g., CSR), (2) upward to make inferences about the ethicality of leaders in their organizations (e.g., ethical leadership), and (3) inward at their own propensity to trust others as they form their perceptions. Using a multiple-wave field study (N = 201) conducted at a privately held US corporation, we find substantial evidence in support of our model.", "title": "" }, { "docid": "3c3bf9455bd5fef1b5649f50f020f564", "text": "There is a need for reliable lighting design applications because available tools are limited and inappropriate for interactive or creative use. Architects and lighting designers need those applications to define, predict, test and validate lighting solutions for their problems. We present a new approach to the lighting design problem based on a methodology that includes the geometry of the scene, the properties of materials and the design goals. It is possible to obtain luminaire characteristics or other kind of results that maximise the attainment of the design goals, which may include different types of constraints or objectives (lighting, geometrical or others). The main goal, in our approach, is to improve the lighting design cycle. In this work we discuss the use of optimisation in lighting design, describe the implementation of the methodology, present real-world based examples and analyse in detail some of the complex technical problems associated and speculate on how to overcome them.", "title": "" }, { "docid": "742c7ccfc1bc0f5150b47683fbfd455e", "text": "Detailed facial performance geometry can be reconstructed using dense camera and light setups in controlled studios. 
However, a wide range of important applications cannot employ these approaches, including all movie productions shot from a single principal camera. For post-production, these require dynamic monocular face capture for appearance modification. We present a new method for capturing face geometry from monocular video. Our approach captures detailed, dynamic, spatio-temporally coherent 3D face geometry without the need for markers. It works under uncontrolled lighting, and it successfully reconstructs expressive motion including high-frequency face detail such as folds and laugh lines. After simple manual initialization, the capturing process is fully automatic, which makes it versatile, lightweight and easy-to-deploy. Our approach tracks accurate sparse 2D features between automatically selected key frames to animate a parametric blend shape model, which is further refined in pose, expression and shape by temporally coherent optical flow and photometric stereo. We demonstrate performance capture results for long and complex face sequences captured indoors and outdoors, and we exemplify the relevance of our approach as an enabling technology for model-based face editing in movies and video, such as adding new facial textures, as well as a step towards enabling everyone to do facial performance capture with a single affordable camera.", "title": "" } ]
scidocsrr
cda452166c593771fa7a3a44579e138c
Minimal Gated Unit for Recurrent Neural Networks
[ { "docid": "6a4cd21704bfbdf6fb3707db10f221a8", "text": "Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that use recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to a standard implementation of LSTMs on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.", "title": "" }, { "docid": "20cb30a452bf20c9283314decfb7eb6e", "text": "In this paper, we apply bidirectional training to a long short term memory (LSTM) network for the first time. We also present a modified, full gradient version of the LSTM learning algorithm. We discuss the significance of framewise phoneme classification to continuous speech recognition, and the validity of using bidirectional networks for online causal tasks. On the TIMIT speech database, we measure the framewise phoneme classification scores of bidirectional and unidirectional variants of both LSTM and conventional recurrent neural networks (RNNs). We find that bidirectional LSTM outperforms both RNNs and unidirectional LSTM.", "title": "" } ]
[ { "docid": "0837ca7bd6e28bb732cfdd300ccecbca", "text": "In our previous research we have made literature analysis and discovered possible mind map application areas. We have pointed out why currently developed software and methods are not adequate and why we are developing a new one. We have defined system architecture and functionality that our software would have. After that, we proceeded with text-mining algorithm development and testing after which we have concluded with our plans for further research. In this paper we will give basic notions about previously published article and present our custom developed software for automatic mind map generation. This software will be tested. Generated mind maps will be critically analyzed. The paper will be concluded with research summary and possible further research and software improvement.", "title": "" }, { "docid": "9af2a00a9a059a87a188d351f7de4904", "text": "The cities of Paris, London, Chicago, and New York (among others) have recently launched large-scale bike-share systems to facilitate the use of bicycles for urban commuting. This paper estimates the relationship between aspects of bike-share system design and ridership. Specifically, we estimate the effects on ridership of station accessibility (how far the commuter must walk to reach a station) and of bike-availability (the likelihood of finding a bike at the station). Our analysis is based on a structural demand model that considers the random-utility maximizing choices of spatially distributed commuters, and it is estimated using highfrequency system-use data from the bike-share system in Paris. The role of station accessibility is identified using cross-sectional variation in station location and high -frequency changes in commuter choice sets; bike-availability effects are identified using longitudinal variation. Because the scale of our data, (in particular the high-frequency changes in choice sets) render traditional numerical estimation techniques infeasible, we develop a novel transformation of our estimation problem: from the time domain to the “station stockout state” domain. We find that a 10% reduction in distance traveled to access bike-share stations (about 13 meters) can increase system-use by 6.7% and that a 10% increase in bikeavailability can increase system-use by nearly 12%. Finally, we use our estimates to develop a calibrated counterfactual simulation demonstrating that the bike-share system in central Paris would have 29.41% more ridership if its station network design had incorporated our estimates of commuter preferences—with no additional spending on bikes or docking points.", "title": "" }, { "docid": "a19f84fec74cae5573397c155e6d5789", "text": "The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. 
We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.", "title": "" }, { "docid": "0778eff54b2f48c9ed4554c617b2dcab", "text": "The diagnosis of heart disease is a significant and tedious task in medicine. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not “mined” to determine concealed information for effective decision making by healthcare practitioners. The term Heart disease encompasses the diverse diseases that affect the heart. Cardiomyopathy and Cardiovascular disease are some categories of heart diseases. The reduction of blood and oxygen supply to the heart leads to heart disease. In this paper the data classification is based on supervised machine learning algorithms which result in accuracy, time taken to build the algorithm. Tanagra tool is used to classify the data and the data is evaluated using 10-fold cross validation and the results are compared.", "title": "" }, { "docid": "1bd9cedbbbd26d670dd718fe47c952e7", "text": "Recent advances in conversational systems have changed the search paradigm. Traditionally, a user poses a query to a search engine that returns an answer based on its index, possibly leveraging external knowledge bases and conditioning the response on earlier interactions in the search session. In a natural conversation, there is an additional source of information to take into account: utterances produced earlier in a conversation can also be referred to and a conversational IR system has to keep track of information conveyed by the user during the conversation, even if it is implicit. We argue that the process of building a representation of the conversation can be framed as a machine reading task, where an automated system is presented with a number of statements about which it should answer questions. The questions should be answered solely by referring to the statements provided, without consulting external knowledge. The time is right for the information retrieval community to embrace this task, both as a stand-alone task and integrated in a broader conversational search setting. In this paper, we focus on machine reading as a stand-alone task and present the Attentive Memory Network (AMN), an end-to-end trainable machine reading algorithm. Its key contribution is in efficiency, achieved by having an hierarchical input encoder, iterating over the input only once. Speed is an important requirement in the setting of conversational search, as gaps between conversational turns have a detrimental effect on naturalness. On 20 datasets commonly used for evaluating machine reading algorithms we show that the AMN achieves performance comparable to the state-of-theart models, while using considerably fewer computations.", "title": "" }, { "docid": "5853bccf3dfd3c861cd29afec0cfce7e", "text": "The performance of each datanode in a heterogeneous Hadoop cluster differs, and the number of slots that can be numbered to simultaneously execute tasks differs. For this reason, Hadoop is susceptible to replica placement problems and data replication problems. Because of this, replication problems and allocation problems occur. These problems can deteriorate the performance of Hadoop. 
In this paper, we summarize existing research to improve data locality, and design a data replication method to solve replication and allocation problems.", "title": "" }, { "docid": "8645ee169b229c074e0a4a8556dc2f7d", "text": "In this paper, we propose a new fake iris detection method based on the changes in the reflectance ratio between the iris and the sclera. The proposed method has four advantages over previous works. First, it is possible to detect fake iris images with high accuracy. Second, our method does not cause inconvenience to users since it can detect fake iris images at a very fast speed. Third, it is possible to show the theoretical background of using the variation of the reflectance ratio between the iris and the sclera. To compare fake iris images with live ones, three types of fake iris images were produced: a printed iris, an artificial eye, and a fake contact lens. In the experiments, we prove that the proposed fake iris detection method achieves high performance when distinguishing between live and fake iris.", "title": "" }, { "docid": "ecf16ddb27cb5bebe59ce0cb26d5b861", "text": "Shoham, Y., Agent-oriented programming, Artificial Intelligence 60 (1993) 51-92. A new computational framework is presented, called agent-oriented programming (AOP), which can be viewed as a specialization of object-oriented programming. The state of an agent consists of components such as beliefs, decisions, capabilities, and obligations; for this reason the state of an agent is called its mental state. The mental state of agents is described formally in an extension of standard epistemic logics: beside temporalizing the knowledge and belief operators, AOP introduces operators for obligation, decision, and capability. Agents are controlled by agent programs, which include primitives for communicating with other agents. In the spirit of speech act theory, each communication primitive is of a certain type: informing, requesting, offering, and so on. This article presents the concept of AOP, discusses the concept of mental state and its formal underpinning, defines a class of agent interpreters, and then describes in detail a specific interpreter that has been implemented.", "title": "" }, { "docid": "d64a0520a0cb49b1906d1d343ca935ec", "text": "A 3D LTCC (low temperature co-fired ceramic) millimeter wave balun using asymmetric structure was investigated in this paper. The proposed balun consists of embedded multilayer microstrip and CPS (coplanar strip) lines. It was designed at 40GHz. The measured insertion loss of the back-to-back balanced transition is -1.14dB, thus the estimated insertion loss of each device is -0.57dB including the CPS line loss. The 10dB return loss bandwidth of the unbalanced back-to-back transition covers the frequency range of 17.3/spl sim/46.6GHz (91.7%). The area occupied by this balun is 0.42 /spl times/ 0.066/spl lambda//sub 0/ (2.1 /spl times/ 0.33mm/sup 2/). The high performances have been achieved using the low loss and relatively high dielectric constant of LTCC (/spl epsiv//sub r/=5.4, tan/spl delta/=0.0015 at 35GHz) and a 3D stacked configuration. This balun can be used as a transition of microstrip-to-CPS and vice-versa and insures also an impedance transformation from 50 to 110 Ohm for an easy integration with a high input impedance antenna. 
This is the first reported 40 GHz wideband 3D LTCC balun using asymmetric structure to balance the output amplitude and phase difference.", "title": "" }, { "docid": "0264a3c21559a1b9c78c42d7c9848783", "text": "This paper presents the first linear bulk CMOS power amplifier (PA) targeting low-power fifth-generation (5G) mobile user equipment integrated phased array transceivers. The output stage of the PA is first optimized for power-added efficiency (PAE) at a desired error vector magnitude (EVM) and range given a challenging 5G uplink use case scenario. Then, inductive source degeneration in the optimized output stage is shown to enable its embedding into a two-stage transformer-coupled PA; by broadening interstage impedance matching bandwidth and helping to reduce distortion. Designed and fabricated in 1P7M 28 nm bulk CMOS and using a 1 V supply, the PA achieves +4.2 dBm/9% measured Pout/PAE at -25 dBc EVM for a 250 MHz-wide 64-quadrature amplitude modulation orthogonal frequency division multiplexing signal with 9.6 dB peak-to-average power ratio. The PA also achieves 35.5%/10% PAE for continuous wave signals at saturation/9.6 dB back-off from saturation. To the best of the authors' knowledge, these are the highest measured PAE values among published K-and Ka-band CMOS PAs.", "title": "" }, { "docid": "56667d286f69f8429be951ccf5d61c24", "text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.", "title": "" }, { "docid": "64a634a76a39fbc1930a7ca66e21e125", "text": "This paper presents a broadband cascode SiGe power amplifier (PA) in the polar transmitter (TX) system using the envelope-tacking (ET) technique. The cascode PA achieves the power-added efficiency (PAE) of >30% across the frequency range of 0.6∼2.4 GHz in continuous wave (CW) mode. The ET-based polar TX system using this cascode PA is evaluated and compared with the conventional stand-alone cascode PA. The experimental data shows that the cascode PA is successfully linearized by the ET scheme, passing the stringent WiMAX spectral mask and the required error vector magnitude (EVM). The entire polar TX system reaches the PAE of 30%/36% at the average output power of 18/17 dBm at 2.3/0.7 GHz for WiMAX 16QAM 3.5 MHz signals. These measurement results suggest that our saturated cascode SiGe PA can be attractive for dual-mode WiMAX applications.", "title": "" }, { "docid": "caf333abcf4e22b973532bb3bc48cc90", "text": "This paper presents a multi-layer secure IoT network model based on blockchain technology. 
The model reduces the difficulty of actual deployment of the blockchain technology by dividing the Internet of Things into a multi-level de-centric network and adopting the technology of block chain technology at all levels of the network, with the high security and credibility assurance of the blockchain technology retaining. It provides a wide-area networking solution of Internet of Things.", "title": "" }, { "docid": "8106487f98bcc94c1310799e74e7a173", "text": "We present a method to predict long-term motion of pedestrians, modeling their behavior as jump-Markov processes with their goal a hidden variable. Assuming approximately rational behavior, and incorporating environmental constraints and biases, including time-varying ones imposed by traffic lights, we model intent as a policy in a Markov decision process framework. We infer pedestrian state using a Rao-Blackwellized filter, and intent by planning according to a stochastic policy, reflecting individual preferences in aiming at the same goal.", "title": "" }, { "docid": "0f87fefbe2cfc9893b6fc490dd3d40b7", "text": "With the tremendous amount of textual data available in the Internet, techniques for abstractive text summarization become increasingly appreciated. In this paper, we present work in progress that tackles the problem of multilingual text summarization using semantic representations. Our system is based on abstract linguistic structures obtained from an analysis pipeline of disambiguation, syntactic and semantic parsing tools. The resulting structures are stored in a semantic repository, from which a text planning component produces content plans that go through a multilingual generation pipeline that produces texts in English, Spanish, French, or German. In this paper we focus on the lingusitic components of the summarizer, both analysis and generation.", "title": "" }, { "docid": "b397d82e24f527148cb46fbabda2b323", "text": "This paper describes Illinois corn yield estimation using deep learning and another machine learning, SVR. Deep learning is a technique that has been attracting attention in recent years of machine learning, it is possible to implement using the Caffe. High accuracy estimation of crop yield is very important from the viewpoint of food security. However, since every country prepare data inhomogeneously, the implementation of the crop model in all regions is difficult. Deep learning is possible to extract important features for estimating the object from the input data, so it can be expected to reduce dependency of input data. The network model of two InnerProductLayer was the best algorithm in this study, achieving RMSE of 6.298 (standard value). This study highlights the advantages of deep learning for agricultural yield estimating.", "title": "" }, { "docid": "b853f492667d4275295c0228566f4479", "text": "This study reports spore germination, early gametophyte development and change in the reproductive phase of Drynaria fortunei, a medicinal fern, in response to changes in pH and light spectra. Germination of D. fortunei spores occurred on a wide range of pH from 3.7 to 9.7. The highest germination (63.3%) occurred on ½ strength Murashige and Skoog basal medium supplemented with 2% sucrose at pH 7.7 under white light condition. Among the different light spectra tested, red, far-red, blue, and white light resulted in 71.3, 42.3, 52.7, and 71.0% spore germination, respectively. There were no morphological differences among gametophytes grown under white and blue light. 
Elongated or filamentous but multiseriate gametophytes developed under red light, whereas under far-red light gametophytes grew as uniseriate filaments consisting of mostly elongated cells. Different light spectra influenced development of antheridia and archegonia in the gametophytes. Gametophytes gave rise to new gametophytes and developed antheridia and archegonia after they were transferred to culture flasks. After these gametophytes were transferred to plastic tray cells with potting mix of tree fern trunk fiber mix (TFTF mix) and peatmoss the highest number of sporophytes was found. Sporophytes grown in pots developed rhizomes.", "title": "" }, { "docid": "92e19d897d527ecf506b328f8f629044", "text": "AutoTutor is a tutoring system that helps students construct answers to deep-reasoning questions by holding a conversation in natural language. AutoTutor delivers its dialog moves with an animated conversational agent whereas students type in their answers via keyboard. We conducted an experiment on 81 college students who learned topics on computer literacy (hardware, operating systems, internet) with AutoTutor or control conditions, and were assessed on learning gains. There was an experimental design that allowed us to assess the impact of learning condition (AutoTutor, read-text control, versus nothing) and the medium of presenting AutoTutor’s dialog moves (print only, speech only, talking head, versus talking head + print). All versions of AutoTutor improved performance in assessments of deep learning, but not shallow learning. Effects of the medium were more subtle, which suggests that it is the message (the dialog moves of AutoTutor) that is more important", "title": "" }, { "docid": "f427dc8838618d0904cfe27200ac032d", "text": "Sequential pattern mining has been studied extensively in data mining community. Most previous studies require the specification of a minimum support threshold to perform the mining. However, it is difficult for users to provide an appropriate threshold in practice. To overcome this difficulty, we propose an alternative task: mining topfrequent closed sequential patterns of length no less than , where is the desired number of closed sequential patterns to be mined, and is the minimum length of each pattern. We mine closed patterns since they are compact representations of frequent patterns. We developed an efficient algorithm, called TSP, which makes use of the length constraint and the properties of topclosed sequential patterns to perform dynamic supportraising and projected database-pruning. Our extensive performance study shows that TSP outperforms the closed sequential pattern mining algorithm even when the latter is running with the best tuned minimum support threshold.", "title": "" }, { "docid": "49f0371f84d7874a6ccc6f9dd0779d3b", "text": "Managing customer satisfaction has become a crucial issue in fast-food industry. This study aims at identifying determinant factor related to customer satisfaction in fast-food restaurant. Customer data are analyzed by using data mining method with two classification techniques such as decision tree and neural network. Classification models are developed using decision tree and neural network to determine underlying attributes of customer satisfaction. Generated rules are beneficial for managerial and practical implementation in fast-food industry. Decision tree and neural network yield more than 80% of predictive accuracy.", "title": "" } ]
scidocsrr
23d58d950cfd90afd020349abb1c5c11
Android based heart rate monitoring and automatic notification system
[ { "docid": "5eccbb19af4a1b19551ce4c93c177c07", "text": "This paper presents the design and development of a microcontroller based heart rate monitor using fingertip sensor. The device uses the optical technology to detect the flow of blood through the finger and offers the advantage of portability over tape-based recording systems. The important feature of this research is the use of Discrete Fourier Transforms to analyse the ECG signal in order to measure the heart rate. Evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense physical activity. The performance of HRM device was compared with ECG signal represented on an oscilloscope and manual pulse measurement of heartbeat, giving excellent results. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly.", "title": "" }, { "docid": "db6a91e0216440a4573aee6c78c78cbf", "text": "ObjectiveHeart rate monitoring using wrist type Photoplethysmographic (PPG) signals is getting popularity because of construction simplicity and low cost of wearable devices. The task becomes very difficult due to the presence of various motion artifacts. The objective is to develop algorithms to reduce the effect of motion artifacts and thus obtain accurate heart rate estimation. MethodsProposed heart rate estimation scheme utilizes both time and frequency domain analyses. Unlike conventional single stage adaptive filter, multi-stage cascaded adaptive filtering is introduced by using three channel accelerometer data to reduce the effect of motion artifacts. Both recursive least squares (RLS) and least mean squares (LMS) adaptive filters are tested. Moreover, singular spectrum analysis (SSA) is employed to obtain improved spectral peak tracking. The outputs from the filter block and SSA operation are logically combined and used for spectral domain heart rate estimation. Finally, a tracking algorithm is incorporated considering neighbouring estimates. ResultsThe proposed method provides an average absolute error of 1.16 beat per minute (BPM) with a standard deviation of 1.74 BPM while tested on publicly available database consisting of recordings from 12 subjects during physical activities. ConclusionIt is found that the proposed method provides consistently better heart rate estimation performance in comparison to that recently reported by TROIKA, JOSS and SPECTRAP methods. SignificanceThe proposed method offers very low estimation error and a smooth heart rate tracking with simple algorithmic approach and thus feasible for implementing in wearable devices to monitor heart rate for fitness and clinical purpose.", "title": "" } ]
[ { "docid": "135ceae69b9953cf8fe989dcf8d3d0da", "text": "Recent advances in development of Wireless Communication in Vehicular Adhoc Network (VANET) has provided emerging platform for industrialists and researchers. Vehicular adhoc networks are multihop networks with no fixed infrastructure. It comprises of moving vehicles communicating with each other. One of the main challenge in VANET is to route the data efficiently from source to destination. Designing an efficient routing protocol for VANET is tedious task. Also because of wireless medium it is vulnerable to several attacks. Since attacks mislead the network operations, security is mandatory for successful deployment of such technology. This survey paper gives brief overview of different routing protocols. Also attempt has been made to identify major security issues and challenges associated with different routing protocols. .", "title": "" }, { "docid": "abcb9b8feb996917df2dcbd85dbeaff4", "text": "Nearly all aspects of modern life are in some way being changed by big data and machine learning. Netflix knows what movies people like to watch and Google knows what people want to know based on their search histories. Indeed, Google has recently begun to replace much of its existing non–machine learning technology with machine learning algorithms, and there is great optimism that these techniques can provide similar improvements across many sectors. It isnosurprisethenthatmedicineisawashwithclaims of revolution from the application of machine learning to big health care data. Recent examples have demonstrated that big data and machine learning can create algorithms that perform on par with human physicians.1 Though machine learning and big data may seem mysterious at first, they are in fact deeply related to traditional statistical models that are recognizable to most clinicians. It is our hope that elucidating these connections will demystify these techniques and provide a set of reasonable expectations for the role of machine learning and big data in health care. Machine learning was originally described as a program that learns to perform a task or make a decision automatically from data, rather than having the behavior explicitlyprogrammed.However,thisdefinitionisverybroad and could cover nearly any form of data-driven approach. For instance, consider the Framingham cardiovascular risk score,whichassignspointstovariousfactorsandproduces a number that predicts 10-year cardiovascular risk. Should this be considered an example of machine learning? The answer might obviously seem to be no. Closer inspection oftheFraminghamriskscorerevealsthattheanswermight not be as obvious as it first seems. The score was originally created2 by fitting a proportional hazards model to data frommorethan5300patients,andsothe“rule”wasinfact learnedentirelyfromdata.Designatingariskscoreasamachine learning algorithm might seem a strange notion, but this example reveals the uncertain nature of the original definition of machine learning. It is perhaps more useful to imagine an algorithm as existing along a continuum between fully human-guided vs fully machine-guided data analysis. To understand the degree to which a predictive or diagnostic algorithm can said to be an instance of machine learning requires understanding how much of its structure or parameters were predetermined by humans. The trade-off between human specificationofapredictivealgorithm’spropertiesvslearning those properties from data is what is known as the machine learning spectrum. 
Returning to the Framingham study, to create the original risk score statisticians and clinical experts worked together to make many important decisions, such as which variables to include in the model, the relationship between the dependent and independent variables, and variable transformations and interactions. Since considerable human effort was used to define these properties, it would place low on the machine learning spectrum (#19 in the Figure and Supplement). Many evidence-based clinical practices are based on a statistical model of this sort, and so many clinical decisions in fact exist on the machine learning spectrum (middle left of Figure). On the extreme low end of the machine learning spectrum would be heuristics and rules of thumb that do not directly involve the use of any rules or models explicitly derived from data (bottom left of Figure). Suppose a new cardiovascular risk score is created that includes possible extensions to the original model. For example, it could be that risk factors should not be added but instead should be multiplied or divided, or perhaps a particularly important risk factor should square the entire score if it is present. Moreover, if it is not known in advance which variables will be important, but thousands of individual measurements have been collected, how should a good model be identified from among the infinite possibilities? This is precisely what a machine learning algorithm attempts to do. As humans impose fewer assumptions on the algorithm, it moves further up the machine learning spectrum. However, there is never a specific threshold wherein a model suddenly becomes “machine learning”; rather, all of these approaches exist along a continuum, determined by how many human assumptions are placed onto the algorithm. An example of an approach high on the machine learning spectrum has recently emerged in the form of so-called deep learning models. Deep learning models are stunningly complex networks of artificial neurons that were designed expressly to create accurate models directly from raw data. Researchers recently demonstrated a deep learning algorithm capable of detecting diabetic retinopathy (#4 in the Figure, top center) from retinal photographs at a sensitivity equal to or greater than that of ophthalmologists.1 This model learned the diagnosis procedure directly from the raw pixels of the images with no human intervention outside of a team of ophthalmologists who annotated each image with the correct diagnosis. Because they are able to learn the task with little human instruction or prior assumptions, these deep learning algorithms rank very high on the machine learning spectrum (Figure, light blue circles). Though they require less human guidance, deep learning algorithms for image recognition require enormous amounts of data to capture the full complexity, variety, and nuance inherent to real-world images. Consequently, these algorithms often require hundreds of thousands of examples to extract the salient image features that are correlated with the outcome of interest. Higher placement on the machine learning spectrum does not imply superiority, because different tasks require different levels of human involvement.
While algorithms high on the spectrum are often very flexible and can learn many tasks, they are often uninterpretable.", "title": "" }, { "docid": "b9a32c7b3e56174016d920c9ec4c1456", "text": "When an individual has been inoculated with a plasmodium parasite, a variety of clinical effects may follow, within the sequence: Infection → asymptomatic parasitaemia → uncomplicated illness → severe malaria → death. Many factors influence the disease manifestations of the infection and the likelihood of progression to the last two categories. These factors include the species of the infecting parasite, the levels of innate and acquired immunity of the host, and the timing and efficacy of treatment, if any.", "title": "" }, { "docid": "8bcc223389b7cc2ce2ef4e872a029489", "text": "Issues concerning agriculture, countryside and farmers have been always hindering China's development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.", "title": "" }, { "docid": "68e714e5a3e92924c63167781149e628", "text": "This paper presents a millimeter wave wideband differential line to waveguide transition using a short ended slot line. The slot line connected in parallel to the rectangular waveguide can effectively compensate the frequency dependence of the susceptance in the waveguide. Thus it is suitable to achieve a wideband characteristic together with a simpler structure. It is experimentally demonstrated that the proposed transitions have the relative bandwidth of 20.2 % with respect to -10 dB reflection, which is a significant wideband characteristic compared with the conventional transition's bandwidth of 11%.", "title": "" }, { "docid": "71b0dbd905c2a9f4111dfc097bfa6c67", "text": "In this paper, the authors undertake a study of cyber warfare reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider United States' views including the perspective of its military on the law of war and its general inapplicability to cyber conflict. Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who as of July 2010 have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios.
Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can be only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when do cyber attacks rise to the level of an act of war by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One", "title": "" }, { "docid": "0cd863fc634b75f1b93137698d42080d", "text": "Prior research has established that peer tutors can benefit academically from their tutoring experiences. However, although tutor learning has been observed across diverse settings, the magnitude of these gains is often underwhelming. In this review, the authors consider how analyses of tutors’ actual behaviors may help to account for variation in learning outcomes and how typical tutoring behaviors may create or undermine opportunities for learning. The authors examine two tutoring activities that are commonly hypothesized to support tutor learning: explaining and questioning. These activities are hypothesized to support peer tutors’ learning via reflective knowledge-building, which includes self-monitoring of comprehension, integration of new and prior knowledge, and elaboration and construction of knowledge. The review supports these hypotheses but also finds that peer tutors tend to exhibit a pervasive knowledge-telling bias. Peer tutors, even when trained, focus more on delivering knowledge rather than developing it. As a result, the true potential for tutor learning may rarely be achieved. The review concludes by offering recommendations for how future research can utilize tutoring process data to understand how tutors learn and perhaps develop new training methods.", "title": "" }, { "docid": "690659887c8261e2984802e2cdb71b5f", "text": "The Discrete Hodge Helmholtz Decomposition (DHHD) is able to locate critical points in a vector field. We explore two novel applications of this technique to image processing problems, viz., hurricane tracking and fingerprint analysis. The eye of the hurricane represents a rotational center, which is shown to be robustly detected using DHHD. This is followed by an automatic segmentation and tracking of the hurricane eye, which does not require manual initializations. DHHD is also used for identification of reference points in fingerprints. The new technique for reference point detection is relatively insensitive to noise in the orientation field. 
The DHHD based method is shown to detect reference points correctly for 96.25% of the images in the database used.", "title": "" }, { "docid": "6757bde927be1bf081ffd95908ebbbf3", "text": "Human action recognition has been studied in many fields including computer vision and sensor networks using inertial sensors. However, there are limitations such as spatial constraints, occlusions in images, sensor unreliability, and the inconvenience of users. In order to solve these problems we suggest a sensor fusion method for human action recognition exploiting RGB images from a single fixed camera and a single wrist mounted inertial sensor. These two different domain information can complement each other to fill the deficiencies that exist in both image based and inertial sensor based human action recognition methods. We propose two convolutional neural network (CNN) based feature extraction networks for image and inertial sensor data and a recurrent neural network (RNN) based classification network with long short term memory (LSTM) units. Training of deep neural networks and testing are done with synchronized images and sensor data collected from five individuals. The proposed method results in better performance compared to single sensor-based methods with an accuracy of 86.9% in cross-validation. We also verify that the proposed algorithm robustly classifies the target action when there are failures in detecting body joints from images.", "title": "" }, { "docid": "d8d86da66ebeaae73e9aaa2a30f18bb5", "text": "In this paper, a novel approach to the characterization of structural damage in civil structures is presented. Structural damage often results in subtle changes to structural stiffness and damping properties that are manifested by changes in the location of transfer function characteristic equation roots (poles) upon the complex plane. Using structural response time-history data collected from an instrumented structure, transfer function poles can be estimated using traditional system identification methods. Comparing the location of poles corresponding to the structure in an unknown structural state to those of the undamaged structure, damage can be accurately identified. The IASC-ASCE structural health monitoring benchmark structure is used in this study to illustrate the merits of the transfer function pole migration approach to damage detection in civil structures.", "title": "" }, { "docid": "83df427d2921434aa8becd711134f2e2", "text": "The native periodontium includes cementum, a functionally oriented periodontal ligament, alveolar bone and gingiva. Pathologic and/or traumatic events may lead to the loss or damage of this anatomical structure. Since the 1970s, a number of procedures have been investigated in an attempt to restore such lost tissues. Numerous clinical trials have shown positive outcomes for various reconstructive surgical protocols. Reduced probing depths, clinical attachment gain, and radiographic bone fill have been reported extensively for intrabony and furcation defects following scaling and root planing, open flap debridement, autogenous bone grafting, implantation of biomaterials including bone derivatives and bone substitutes, guided-tissue regeneration (GTR) procedures, and implantation of biologic factors, including enamel matrix proteins. Histological studies have shown that various surgical periodontal procedures can lead to different patterns of healing. 
Healing by formation of a long junctional epithelium (epithelial attachment) is characterized by a thin epithelium extending apically interposed between the root surface and the gingival connective tissue (4, 23). Connective tissue repair (new attachment) is represented by collagen fibers oriented parallel or perpendicular to a root surface previously exposed to periodontal disease or otherwise deprived of its periodontal attachment. In contrast, periodontal regeneration is characterized by de novo formation of cementum, a functionally oriented periodontal ligament, alveolar bone, and gingiva (restitutio ad integrum). Nevertheless, it would be naive to expect these to occur as distinctly separate biologic outcomes following reconstruction of the periodontal attachment. For example, periodontal regeneration should be expected to include elements of a new, as well as an epithelial, attachment. Predictability of outcomes following surgical procedures is of fundamental importance in medicine. As periodontal-regenerative procedures are time consuming and financially demanding, there is increasing interest by clinicians to learn of factors that may influence the clinical outcome following periodontal reconstructive surgery in order to provide the best possible service to patients. This goal can only be achieved if biological aspects of wound healing and regeneration are taken into consideration. The objectives of the present article are to provide an overview of wound healing following periodontal surgical procedures, to discuss the basic principles of periodontal regeneration, and to illustrate the factors that influence this process.", "title": "" }, { "docid": "d16369b68d7730a7d34f8200150b3248", "text": "3-D motion estimation is a fundamental problem that has far-reaching implications in robotics. A scene flow formulation is attractive as it makes no assumptions about scene complexity, object rigidity, or camera motion. RGB-D cameras provide new information useful for computing dense 3-D flow in challenging scenes. In this work we show how to generalize two-frame variational 2-D flow algorithms to 3-D. We show that scene flow can be reliably computed using RGB-D data, overcoming depth noise and outperforming previous results on a variety of scenes. We apply dense 3-D flow to rigid motion segmentation.", "title": "" }, { "docid": "33d98005d696cc5cee6a23f5c1e7c538", "text": "Design activity has recently attempted to embrace designing the user experience. Designers need to demystify how we design for user experience and how the products we design achieve specific user experience goals. This paper proposes an initial framework for understanding experience as it relates to user-product interactions. We propose a system for talking about experience, and look at what influences experience and qualities of experience. The framework is presented as a tool to understand what kinds of experiences products evoke.", "title": "" }, { "docid": "d6697ddaaf5e31ff2a6367115d7467c6", "text": "A feature-rich second-generation 60-GHz transceiver chipset is introduced. It integrates dual-conversion superheterodyne receiver and transmitter chains, a sub-integer frequency synthesizer, full programmability from a digital interface, modulator and demodulator circuits to support analog modulations (e.g. MSK, BPSK), as well as a universal I&Q interface for digital modulation formats (e.g. OFDM). Achieved performance includes 6-dB receiver noise figure and 12 dBm transmitter output ldB compression point. 
Wireless link experiments with different modulation formats for 2-Gb/s real-time uncompressed HDTV transmission are discussed. Additionally, recent millimeter-wave package and antenna developments are summarized and a 60GHz silicon micromachined antenna is presented.", "title": "" }, { "docid": "15fd626d5a6eb1258b8846137c62f97d", "text": "Since leadership plays a vital role in democratic movements, understanding the nature of democratic leadership is essential. However, the definition of democratic leadership is unclear (Gastil, 1994). Also, little research has defined democratic leadership in the context of democratic movements. The leadership literature has paid no attention to democratic leadership in such movements, focusing on democratic leadership within small groups and organizations. This study proposes a framework of democratic leadership in democratic movements. The framework includes contexts, motivations, characteristics, and outcomes of democratic leadership. The study considers sacrifice, courage, symbolism, citizen participation, and vision as major characteristics in the display of democratic leadership in various political, social, and cultural contexts. Applying the framework to Nelson Mandela, Lech Walesa, and Dae Jung Kim; the study considers them as exemplary models of democratic leadership in democratic movements for achieving democracy. They have showed crucial characteristics of democratic leadership, offering lessons for democratic governance.", "title": "" }, { "docid": "2be043b09e6dd631b5fe6f9eed44e2ec", "text": "This article aims to contribute to a critical research agenda for investigating the democratic implications of citizen journalism and social news. The article calls for a broad conception of ‘citizen journalism’ which is (1) not an exclusively online phenomenon, (2) not confined to explicitly ‘alternative’ news sources, and (3) includes ‘metajournalism’ as well as the practices of journalism itself. A case is made for seeing democratic implications not simply in the horizontal or ‘peer-to-peer’ public sphere of citizen journalism networks, but also in the possibility of a more ‘reflexive’ culture of news consumption through citizen participation. The article calls for a research agenda that investigates new forms of gatekeeping and agendasetting power within social news and citizen journalism networks and, drawing on the example of three sites, highlights the importance of both formal and informal status differentials and of the software ‘code’ structuring these new modes of news", "title": "" }, { "docid": "314fba798c73569f6c8fa266821bac8e", "text": "Core to integrated navigation systems is the concept of fusing noisy observations from GPS, Inertial Measurement Units (IMU), and other available sensors. The current industry standard and most widely used algorithm for this purpose is the extended Kalman filter (EKF) [6]. The EKF combines the sensor measurements with predictions coming from a model of vehicle motion (either dynamic or kinematic), in order to generate an estimate of the current navigational state (position, velocity, and attitude). This paper points out the inherent shortcomings in using the EKF and presents, as an alternative, a family of improved derivativeless nonlinear Kalman filters called sigma-point Kalman filters (SPKF). We demonstrate the improved state estimation performance of the SPKF by applying it to the problem of loosely coupled GPS/INS integration. 
A novel method to account for latency in the GPS updates is also developed for the SPKF (such latency compensation is typically inaccurate or not practical with the EKF). A UAV (rotor-craft) test platform is used to demonstrate the results. Performance metrics indicate an approximate 30% error reduction in both attitude and position estimates relative to the baseline EKF implementation.", "title": "" }, { "docid": "b425265606966c9490519ab1d49f8141", "text": "Any books that you read, no matter how you got the sentences that have been read from the books, surely they will give you goodness. But, we will show you one of recommendation of the book that you need to read. This web usability a user centered design approach is what we surely mean. We will show you the reasonable reasons why you need to read this book. This book is a kind of precious book written by an experienced author.", "title": "" }, { "docid": "a4b1c27b2f0b96ebf39dda4498a6aa2a", "text": "Computers and users process information in distinct ways -so do individual users. Although it's relatively easy to get a computer to understand input, what with fixed standards and universal APIs, usability with human users is not absolute. User-interface usability is relative to the experience level of individual users. UI designer Mike Padilla provides an overview of UI design for Web-based productivity software with a focus on the broadest range of users, examining what makes an application UI-usable and detailing concepts that can facilitate an efficient, broad-based UI design.", "title": "" }, { "docid": "5e31d7ff393d69faa25cb6dea5917a0e", "text": "In this paper we aim to formally explain the phenomenon of fast convergence of Stochastic Gradient Descent (SGD) observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in number of iterations to full gradient descent. For convex loss functions we obtain an exponential convergence bound for mini-batch SGD parallel to that for full gradient descent. We show that there is a critical batch size m∗ such that: (a) SGD iteration with mini-batch sizem ≤ m∗ is nearly equivalent to m iterations of mini-batch size 1 (linear scaling regime). (b) SGD iteration with mini-batch m > m∗ is nearly equivalent to a full gradient descent iteration (saturation regime). Moreover, for the quadratic loss, we derive explicit expressions for the optimal mini-batch and step size and explicitly characterize the two regimes above. The critical mini-batch size can be viewed as the limit for effective mini-batch parallelization. It is also nearly independent of the data size, implying O(n) acceleration over GD per unit of computation. We give experimental evidence on real data which closely follows our theoretical analyses. Finally, we show how our results fit in the recent developments in training deep neural networks and discuss connections to adaptive rates for SGD and variance reduction. † See full version of this paper at arxiv.org/abs/1712.06559. Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA. Correspondence to: Siyuan Ma <masi@cse.ohio-state.edu>, Raef Bassily <bassily.1@osu.edu>, Mikhail Belkin <mbelkin@cse.ohio-state.edu>. 
Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).", "title": "" } ]
scidocsrr
82b8426dadae4a9a7892acf8e6715b34
Keep Your Friends Close and Your Facebook Friends Closer: A Multiplex Network Approach to the Analysis of Offline and Online Social Ties
[ { "docid": "4da99c6895dcde2889c6d5b41c673f41", "text": "Social media have attracted considerable attention because their open-ended nature allows users to create lightweight semantic scaffolding to organize and share content. To date, the interplay of the social and topical components of social media has been only partially explored. Here, we study the presence of homophily in three systems that combine tagging social media with online social networks. We find a substantial level of topical similarity among users who are close to each other in the social network. We introduce a null model that preserves user activity while removing local correlations, allowing us to disentangle the actual local similarity between users from statistical effects due to the assortative mixing of user activity and centrality in the social network. This analysis suggests that users with similar interests are more likely to be friends, and therefore topical similarity measures among users based solely on their annotation metadata should be predictive of social links. We test this hypothesis on several datasets, confirming that social networks constructed from topical similarity capture actual friendship accurately. When combined with topological features, topical similarity achieves a link prediction accuracy of about 92%.", "title": "" }, { "docid": "3130e666076d119983ac77c5d77d0aed", "text": "of Ph.D. dissertation, University of Haifa, Israel.", "title": "" }, { "docid": "1d6ecfc4451d065b77d20e9062ddfab6", "text": "Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed.", "title": "" } ]
[ { "docid": "7f553d57ec54b210e86e4d7abba160d7", "text": "SUMMARY\nBioIE is a rule-based system that extracts informative sentences relating to protein families, their structures, functions and diseases from the biomedical literaturE. Based on manual definition of templates and rules, it aims at precise sentence extraction rather than wide recall. After uploading source text or retrieving abstracts from MEDLINE, users can extract sentences based on predefined or user-defined template categories. BioIE also provides a brief insight into the syntactic and semantic context of the source-text by looking at word, N-gram and MeSH-term distributions. Important Applications of BioIE are in, for example, annotation of microarray data and of protein databases.\n\n\nAVAILABILITY\nhttp://umber.sbs.man.ac.uk/dbbrowser/bioie/", "title": "" }, { "docid": "8f704e4c4c2a0c696864116559a0f22c", "text": "Friendships with competitors can improve the performance of organizations through the mechanisms of enhanced collaboration, mitigated competition, and better information exchange. Moreover, these benefits are best achieved when competing managers are embedded in a cohesive network of friendships (i.e., one with many friendships among competitors), since cohesion facilitates the verification of information culled from the network, eliminates the structural holes faced by customers, and facilitates the normative control of competitors. The first part of this analysis examines the performance implications of the friendship-network structure within the Sydney hotel industry, with performance being the yield (i.e., revenue per available room) of a given hotel. This shows that friendships with competitors lead to dramatic improvements in hotel yields. Performance is further improved if a manager’s competitors are themselves friends, evidencing the benefit of cohesive friendship networks. The second part of the analysis examines the structure of friendship ties among hotel managers and shows that friendships are more likely between managers who are competitors.", "title": "" }, { "docid": "a6f6525af5a1d9306d6b62ebd821f4ba", "text": "In this report, we introduce the outline of our system in Task 3: Disease Classification of ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection. We fine-tuned multiple pre-trained neural network models based on Squeeze-and-Excitation Networks (SENet) which achieved state-of-the-art results in the field of image recognition. In addition, we used the mean teachers as a semi-supervised learning framework and introduced some specially designed data augmentation strategies for skin lesion analysis. We confirmed our data augmentation strategy improved classification performance and demonstrated 87.2% in balanced accuracy on the official ISIC2018 validation dataset.", "title": "" }, { "docid": "50964057831f482d806bf1c9d46621c0", "text": "We propose a unified framework for deep density models by formally defining density destructors. A density destructor is an invertible function that transforms a given density to the uniform density—essentially destroying any structure in the original density. This destructive transformation generalizes Gaussianization via ICA and more recent autoregressive models such as MAF and Real NVP. Informally, this transformation can be seen as a generalized whitening procedure or a multivariate generalization of the univariate CDF function. 
Unlike Gaussianization, our destructive transformation has the elegant property that the density function is equal to the absolute value of the Jacobian determinant. Thus, each layer of a deep density can be seen as a shallow density—uncovering a fundamental connection between shallow and deep densities. In addition, our framework provides a common interface for all previous methods enabling them to be systematically combined, evaluated and improved. Leveraging the connection to shallow densities, we also propose a novel tree destructor based on tree densities and an image-specific destructor based on pixel locality. We illustrate our framework on a 2D dataset, MNIST, and CIFAR-10. Code is available on first author’s website.", "title": "" }, { "docid": "e6c1747e859f64517e7dddb6c1fd900e", "text": "More and more mobile objects are now equipped with sensors allowing real time monitoring of their movements. Nowadays, the data produced by these sensors can be stored in spatio-temporal databases. The main goal of this article is to perform a data mining on a huge quantity of mobile object’s positions moving in an open space in order to deduce its behaviour. New tools must be defined to ease the detection of outliers. First of all, a zone graph is set up in order to define itineraries. Then, trajectories of mobile objects following the same itinerary are extracted from the spatio-temporal database and clustered. A statistical analysis on this set of trajectories lead to spatio-temporal patterns such as the main route and spatio-temporal channel followed by most of trajectories of the set. Using these patterns, unusual situations can be detected. Furthermore, a mobile object’s behaviour can be defined by comparing its positions with these spatio-temporal patterns. In this article, this technique is applied to ships’ movements in an open maritime area. Unusual behaviours such as being ahead of schedule or delayed or veering to the left or to the right of the main route are detected. A case study illustrates these processes based on ships’ positions recorded during two years around the Brest area. This method can be extended to almost all kinds of mobile objects (pedestrians, aircrafts, hurricanes, ...) moving in an open area.", "title": "" }, { "docid": "9e4044150b05752693e11627e7f8cd2b", "text": "Snarr RL, Esco MR, Witte EV, Jenkins CT, Brannan RM. Electromyographic Activity of Rectus Abdominis During a Suspension Push-up Compared to Traditional Exercises. JEPonline 2013;16(3):1-8. The purpose of this study was to compare the electromyographic (EMG) activity of the rectus abdominis (RA) across three different exercises [i.e., suspension pushup (SPU), standard pushup (PU) and abdominal supine crunch (C)]. Fifteen apparently healthy men (n = 12, age = 25.75 ± 3.91 yrs) and women (n = 3, age = 22.33 ± 1.15) volunteered to participate in this study. The subjects performed four repetitions of SPU, PU, and C. The order of the exercises was randomized. Mean peak EMG activity of the RA was recorded across the 4 repetitions of each exercise. Raw (mV) and normalized (%MVC) values were analyzed. The results of this study showed that SPU and C elicited a significantly greater (P<0.05) activation of the RA reported as raw (2.2063 ± 1.00198 mV and 1.9796 ± 1.36190 mV, respectively) and normalized values (68.0 ± 16.5% and 52 ± 28.7%, respectively) compared to PU (i.e., 0.8448 ± 0.76548 mV and 21 ± 16.6%). The SPU and C were not significantly different (P>0.05). 
This investigation indicated that SPU and C provided similar activation levels of the RA that were significantly greater than PU.", "title": "" }, { "docid": "1963b3b1326fa4ed99ef39c9aaab0719", "text": "We take an ecological approach to studying social media use and its relation to mood among college students. We conducted a mixed-methods study of computer and phone logging with daily surveys and interviews to track college students' use of social media during all waking hours over seven days. Continual and infrequent checkers show different preferences of social media sites. Age differences also were found. Lower classmen tend to be heavier users and to primarily use Facebook, while upper classmen use social media less frequently and utilize sites other than Facebook more often. Factor analysis reveals that social media use clusters into patterns of content-sharing, text-based entertainment/discussion, relationships, and video consumption. The more constantly one checks social media daily, the less positive is one's mood. Our results suggest that students construct their own patterns of social media usage to meet their changing needs in their environment. The findings can inform further investigation into social media use as a benefit and/or distraction for students.", "title": "" }, { "docid": "82e1fa35686183ebd9ad4592d6ba599e", "text": "We propose a method for model-based control of building air conditioning systems that minimizes energy costs while maintaining occupant comfort. The method uses a building thermal model in the form of a thermal circuit identified from collected sensor data, and reduces the building thermal dynamics to a Markov decision process (MDP) whose decision variables are the sequence of temperature set-points over a suitable horizon, for example one day. The main advantage of the resulting MDP model is that it is completely discrete, which allows for a very fast computation of the optimal sequence of temperature set-points. Experiments on thermal models demonstrate savings that can exceed 50% with respect to usual control strategies in buildings such as night setup.", "title": "" },
{ "docid": "e6c0aa517c857ed217fc96aad58d7158", "text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.", "title": "" }, { "docid": "9698bfe078a32244169cbe50a04ebb00", "text": "Maximum power point tracking (MPPT) controllers play an important role in photovoltaic systems. They maximize the output power of a PV array for a given set of conditions. This paper presents an overview of the different MPPT techniques. Each technique is evaluated on its ability to detect multiple maxima, convergence speed, ease of implementation, efficiency over a wide output power range, and cost of implementation. The perturbation and observation (P & O), and incremental conductance (IC) algorithms are widely used techniques, with many variants and optimization techniques reported. For this reason, this paper evaluates the performance of these two common approaches from a dynamic and steady state perspective.", "title": "" }, { "docid": "c741ce32db0d5d36ac373a52d95040f5", "text": "From the market microstructure perspective, technical analysis can be profitable when informed traders make systematic mistakes or when uninformed traders have predictable impacts on price. However, chartists face a considerable degree of trading uncertainty because technical indicators such as moving averages are essentially imperfect filters with a nonzero phase shift. Consequently, technical trading may result in erroneous trading recommendations and substantial losses. This paper presents an uncertainty reduction approach based on fuzzy logic that addresses two problems related to the uncertainty embedded in technical trading strategies: market timing and order size. 
The results of our high-frequency exercises show that ‘fuzzy technical indicators’ dominate standard moving average technical indicators and filter rules for the Euro-US dollar (EUR-USD) exchange rates, especially on high-volatility days. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "88a84edc60da43a32feb5c4786bc19b9", "text": "Low drop-out (LDO) linear regulators have become a key building block in portable communication systems for power management ICs. The LDO usually comes after a switching DC-DC converter to reduce the output ripples and provide a regulated voltage source for noise-sensitive blocks. For a higher level of integration, there is a need to increase the operating frequency of the switching converters [1]. This necessitates a subsequent LDO regulator with high ripple rejection at frequencies up to several MHz. These LDO regulators should also provide a low drop-out voltage to cope with the low supply voltage of the state-of-the-art CMOS technologies. In addition, due to the feedback nature of the system, the LDO should be stable for a wide range of supply currents while consuming a very low quiescent current.", "title": "" }, { "docid": "359da4efff872d1fd762c0aef1aa590c", "text": "One of the most efficient ways for a learning-based robotic arm to learn to process complex tasks as human, is to directly learn from observing how human complete those tasks, and then imitate. Our idea is based on success of Deep Q-Learning (DQN) algorithm according to reinforcement learning, and then extend to Deep Deterministic Policy Gradient (DDPG) algorithm. We developed a learning-based method, combining modified DDPG and visual imitation network. Our approach acquires frames only from a monocular camera, and no need to either construct a 3D environment or generate actual points. The result we expected during training, was that robot would be able to move as almost the same as how human hands did.", "title": "" }, { "docid": "ed9e53f132eada9ceb1f943cce00f20a", "text": "With the proliferation of e-commerce websites and the ubiquitousness of smart phones, cross-domain image retrieval using images taken by smart phones as queries to search products on e-commerce websites is emerging as a popular application. One challenge of this task is to locate the attention of both the query and database images. In particular, database images, e.g. of fashion products, on e-commerce websites are typically displayed with other accessories, and the images taken by users contain noisy background and large variations in orientation and lighting. Consequently, their attention is difficult to locate. In this paper, we exploit the rich tag information available on the e-commerce websites to locate the attention of database images. For query images, we use each candidate image in the database as the context to locate the query attention. Novel deep convolutional neural network architectures, namely TagYNet and CtxYNet, are proposed to learn the attention weights and then extract effective representations of the images. Experimental results on public datasets confirm that our approaches have significant improvement over the existing methods in terms of the retrieval accuracy and efficiency.", "title": "" }, { "docid": "5affa179dd8b6742ac14fa5992c82575", "text": "It is commonly believed that good security improves trust, and that the perceptions of good security and trust will ultimately increase the use of electronic commerce. 
In fact, customers’ perceptions of the security of e-payment systems have become a major factor in the evolution of electronic commerce in markets. In this paper, we examine issues related to e-payment security from the viewpoint of customers. This study proposes a conceptual model that delineates the determinants of consumers’ perceived security and perceived trust, as well as the effects of perceived security and perceived trust on the use of epayment systems. To test the model, structural equation modeling is employed to analyze data collected from 219 respondents in Korea. This research provides a theoretical foundation for academics and also practical guidelines for service providers in dealing with the security aspects of e-payment systems. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a527ba87019dc1adeb2bc62ab98b5cb8", "text": "Word embedding, specially with its recent developments, promises a quantification of the similarity between terms. However, it is not clear to which extent this similarity value can be genuinely meaningful and useful for subsequent tasks. We explore how the similarity score obtained from the models is really indicative of term relatedness. We first observe and quantify the uncertainty factor of the word embedding models regarding to the similarity value. Based on this factor, we introduce a general threshold on various dimensions which effectively filters the highly related terms. Our evaluation on four information retrieval collections supports the effectiveness of our approach as the results of the introduced threshold are significantly better than the baseline while being equal to or statistically indistinguishable from the optimal results.", "title": "" }, { "docid": "e777bb21d57393a4848fcb04c6d5b913", "text": "A 2.5 GHz fully integrated voltage controlled oscillator (VCO) for wireless application has been designed in a 0.35μm CMOS technology. A method for compensating the effect of temperature on the carrier oscillation frequency has been presented in this work. We compare also different VCOs topologies in order to select one with low phase noise, low supply sensitivity and large tuning frequency. Good results are obtained with a simple NMOS –Gm VCO. This proposed VCO has a wide operating range from 300 MHz with a good linearity between the output frequency and the control input voltage, with a temperature coefficient of -5 ppm/°C from 20°C to 120°C range. The phase noise is about -135.2dBc/Hz at 1MHz from the carrier with a power consumption of 5mW.", "title": "" }, { "docid": "9cf9145a802c2093f7c6f5986aabb352", "text": "Although researchers have long studied using statistical modeling techniques to detect anomaly intrusion and profile user behavior, the feasibility of applying multinomial logistic regression modeling to predict multi-attack types has not been addressed, and the risk factors associated with individual major attacks remain unclear. To address the gaps, this study used the KDD-cup 1999 data and bootstrap simulation method to fit 3000 multinomial logistic regression models with the most frequent attack types (probe, DoS, U2R, and R2L) as an unordered independent variable, and identified 13 risk factors that are statistically significantly associated with these attacks. These risk factors were then used to construct a final multinomial model that had an ROC area of 0.99 for detecting abnormal events. 
Compared with the top KDD-cup 1999 winning results that were based on a rule-based decision tree algorithm, the multinomial logistic model-based classification results had similar sensitivity values in detecting normal and a significantly lower overall misclassification rate (18.9% vs. 35.7%). The study emphasizes that the multinomial logistic regression modeling technique with the 13 risk factors provides a robust approach to detect anomaly intrusion.", "title": "" }, { "docid": "eeee6fceaec33b4b1ef5aed9f8b0dcf5", "text": "This paper presents a novel orthomode transducer (OMT) with the dimension of WR-10 waveguide. The internal structure of the OMT is in the shape of Y so we named it a Y-junction OMT, it contain one square waveguide port with the dimension 2.54mm × 2.54mm and two WR-10 rectangular waveguide ports with the dimension of 1.27mm × 2.54mm. The operating frequency band of OMT is 70-95GHz (more than 30% bandwidth) with simulated insertion loss <;-0.3dB and cross polarization better than -40dB throughout the band for both TE10 and TE01 modes.", "title": "" }, { "docid": "310b8159894bc88b74a907c924277de6", "text": "We present a set of clustering algorithms that identify cluster boundaries by searching for a hyperplanar gap in unlabeled data sets. It turns out that the Normalized Cuts algorithm of Shi and Malik [1], originally presented as a graph-theoretic algorithm, can be interpreted as such an algorithm. Viewing Normalized Cuts under this light reveals that it pays more attention to points away from the center of the data set than those near the center of the data set. As a result, it can sometimes split long clusters and display sensitivity to outliers. We derive a variant of Normalized Cuts that assigns uniform weight to all points, eliminating the sensitivity to outliers.", "title": "" } ]
scidocsrr
430da088371b542eac4a0c1e111ec82a
Concept of fuzzy planar graphs
[ { "docid": "fd48614d255b7c7bc7054b4d5de69a15", "text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009", "title": "" } ]
[ { "docid": "486e15d89ea8d0f6da3b5133c9811ee1", "text": "Frequency-modulated continuous wave radar systems suffer from permanent leakage of the transmit signal into the receive path. Besides leakage within the radar device itself, an unwanted object placed in front of the antennas causes so-called short-range (SR) leakage. In an automotive application, for instance, it originates from signal reflections of the car’s own bumper. Particularly the residual phase noise of the downconverted SR leakage signal causes a severe degradation of the achievable sensitivity. In an earlier work, we proposed an SR leakage cancellation concept that is feasible for integration in a monolithic microwave integrated circuit. In this brief, we present a hardware prototype that holistically proves our concept with discrete components. The fundamental theory and properties of the concept are proven with measurements. Further, we propose a digital design for real-time operation of the cancellation algorithm on a field programmable gate array. Ultimately, by employing measurements with a bumper mounted in front of the antennas, we show that the leakage canceller significantly improves the sensitivity of the radar.", "title": "" }, { "docid": "00828c9f8d8e0ef17505973d84f92dbf", "text": "A new modeling approach for the design of planar multilayered meander-line polarizers is presented. For the first time a multielement equivalent circuit is adopted to characterize the meander-line unit cell. This equivalent circuit significantly improves the bandwidth performance with respect to the state-of-the-art. In addition to this, a polynomial interpolation matrix approach is employed to take into account the dependence on the meander-line geometrical parameters. This leads to an accuracy comparable to that of a full-wave analysis. At the same time, the computational cost is minimized so as to make this model suitable for real-time tuning and fast optimizations. A four-layer polarizer is designed to validate the presented modeling procedure. Comparison with full-wave simulations confirms its high accuracy over a wide frequency range.", "title": "" }, { "docid": "d90d40a59f91b59bd63a3c52a8d715a4", "text": "The paradigm shift from planar (two dimensional (2D)) to vertical (three-dimensional (3D)) models has placed the NAND flash technology on the verge of a design evolution that can handle the demands of next-generation storage applications. However, it also introduces challenges that may obstruct the realization of such 3D NAND flash. Specifically, we observed that the fast threshold drift (fast-drift) in a charge-trap flash-based 3D NAND cell can make it lose a critical fraction of the stored charge relatively soon after programming and generate errors.\n In this work, we first present an elastic read reference (VRef) scheme (ERR) for reducing such errors in ReveNAND—our fast-drift aware 3D NAND design. To address the inherent limitation of the adaptive VRef, we introduce a new intra-block page organization (hitch-hike) that can enable stronger error correction for the error-prone pages. In addition, we propose a novel reinforcement-learning-based smart data refill scheme (iRefill) to counter the impact of fast-drift with minimum performance and hardware overhead. Finally, we present the first analytic model to characterize fast-drift and evaluate its system-level impact. 
Our results show that, compared to conventional 3D NAND design, our ReveNAND can reduce fast-drift errors by 87%, on average, and can lower the ECC latency and energy overheads by 13× and 10×, respectively.", "title": "" }, { "docid": "4979f47593bc96e3275e33bdf49e5de5", "text": "Teager Energy Operator (TEO) proposed by Kaiser and Teager is based on a definition of energy required to generate the signal. TEO gives us the running estimate of energy as a function of amplitude and instantaneous frequency content of the signal. However, it considers three consecutive samples to calculate the energy estimate. In this paper, we suggests an alternative and generalized approach to TEO to calculate the instantaneous estimate of the energy where not only consecutive but other distant samples can also be incorporated in the calculation of running estimate of the energy and the number of samples taken to calculate energy can also be increased depending on our signal properties to better capture the energy content variations in the speech signal.", "title": "" }, { "docid": "3bbbdf4d6572e548106fc1d24b50cbc6", "text": "Predicting the a↵ective valence of unknown multiword expressions is key for concept-level sentiment analysis. A↵ectiveSpace 2 is a vector space model, built by means of random projection, that allows for reasoning by analogy on natural language concepts. By reducing the dimensionality of a↵ective common-sense knowledge, the model allows semantic features associated with concepts to be generalized and, hence, allows concepts to be intuitively clustered according to their semantic and a↵ective relatedness. Such an a↵ective intuition (so called because it does not rely on explicit features, but rather on implicit analogies) enables the inference of emotions and polarity conveyed by multi-word expressions, thus achieving e cient concept-level sentiment analysis.", "title": "" }, { "docid": "73325aa0f4253294e7f116f7e0706766", "text": "To protect SDN-enabled networks under large-scale, unexpected link failures, we propose ResilientFlow that deploys distributed modules called Control Channel Maintenance Module (CCMM) for every switch and controllers. The CCMMs makes switches able to maintain their own control channels, which are core and fundamental part of SDN. In this paper, we design, implement, and evaluate the ResilientFlow.", "title": "" }, { "docid": "072351b995d3f3ae76ecc666e84b3323", "text": "An internal planar tablet computer antenna having a small size of 12 × 35 mm2 printed on a 0.8-mm thick FR4 substrate for the WWAN operation in the 824-960 and 1710-2170 MHz bands is presented. The antenna comprises a driven strip, a parasitic shorted strip and a ground pad, all printed on the small-size FR4 substrate. For bandwidth enhancement of the antenna's lower band, the antenna applies a parallel-resonant spiral slit embedded in the ground pad, which generates a parallel resonance at about 1.2 GHz and in turn results in a new resonance occurred nearby the quarter-wavelength mode of the parasitic shorted strip. This feature leads to a dual-resonance characteristic obtained for the antenna's lower band, making it capable of wideband operation to cover the desired 824-960 MHz with a small antenna size. The antenna's upper band is formed by the higher-order resonant mode contributed by the parasitic shorted strip and the quarter-wavelength resonant mode of the driven strip and can cover the desired 1710-2170 MHz band. 
Details of the proposed antenna and the operating principle of the parallel-resonant spiral slit are presented.", "title": "" }, { "docid": "81fa6a7931b8d5f15d55316a6ed1d854", "text": "The objective of the study is to compare skeletal and dental changes in class II patients treated with fixed functional appliances (FFA) that pursue different biomechanical concepts: (1) FMA (Functional Mandibular Advancer) from first maxillary molar to first mandibular molar through inclined planes and (2) Herbst appliance from first maxillary molar to lower first bicuspid through a rod-and-tube mechanism. Forty-two equally distributed patients were treated with FMA (21) and Herbst appliance (21), following a single-step advancement protocol. Lateral cephalograms were available before treatment and immediately after removal of the FFA. The lateral cephalograms were analyzed with customized linear measurements. The actual therapeutic effect was then calculated through comparison with data from a growth survey. Additionally, the ratio of skeletal and dental contributions to molar and overjet correction for both FFA was calculated. Data was analyzed by means of one-sample Student’s t tests and independent Student’s t tests. Statistical significance was set at p < 0.05. Although differences between FMA and Herbst appliance were found, intergroup comparisons showed no statistically significant differences. Almost all measurements resulted in comparable changes for both appliances. Statistically significant dental changes occurred with both appliances. Dentoalveolar contribution to the treatment effect was ≥70%, thus always resulting in ≤30% for skeletal alterations. FMA and Herbst appliance usage results in comparable skeletal and dental treatment effects despite different biomechanical approaches. Treatment leads to overjet and molar relationship correction that is mainly caused by significant dentoalveolar changes.", "title": "" }, { "docid": "e834a1a349cc4f0da70c6eaedc32f5e3", "text": "The ability to create stable, encompassing grasps with subsets of fingers is greatly increased by using soft fingertips that deform during contact and apply a larger space of frictional forces and moments than their rigid counterparts. This is true not only for human grasping, but also for robotic hands using fingertips made of soft materials. The superiority of deformable human fingertips as compared to hard robot gripper fingers for grasping and manipulation has led to a number of investigations with robot hands employing elastomers or materials such as fluids or powders beneath a membrane at the fingertips. When the fingers are soft, during holding and for manipulation of the object through precise dimensions, their property of softness maintains the area contact between, the fingertips and the manipulating object, which restraints the object and provides stability. In human finger there is a natural softness which is a combination of elasticity and damping. This combination of elasticity and damping is produced by nature due to flesh and blood beneath the skin. This keeps the contact firm and helps in holding the object firmly and stably.", "title": "" }, { "docid": "71b97f2571379716711cfcb7a5acea2f", "text": "In this paper we present a large-scale approach for the extraction of verbs in reference contexts. We analyze citation contexts in relation with the IMRaD structure of scientific articles and use rank correlation analysis to characterize the distances between the section types. 
The results show strong differences in the verb frequencies around citations between the sections in the IMRaD structure. This study is a ”one-more-step” towards the lexical and semantic analysis of citation contexts.", "title": "" }, { "docid": "4e4650dc1e9d9d11bbb6a403e8cf0914", "text": "The classification of different tumor types is of great importance in cancer diagnosis and drug discovery. However, most previous cancer classification studies are clinical-based and have limited diagnostic ability. Cancer classification using gene expression data is known to contain the keys for addressing the fundamental problems relating to cancer diagnosis and drug discovery. The recent advent of DNA microarray technique has made simultaneous monitoring of thousands of gene expressions possible. With this abundance of gene expression data, researchers have started to explore the possibilities of cancer classification using gene expression data. Quite a number of methods have been proposed in recent years with promising results. But there are still a lot of issues which need to be addressed and understood. In order to gain deep insight into the cancer classification problem, it is necessary to take a closer look at the problem, the proposed solutions and the related issues all together. In this survey paper, we present a comprehensive overview of various proposed cancer classification methods and evaluate them based on their computation time, classification accuracy and ability to reveal biologically meaningful gene information. We also introduce and evaluate various proposed gene selection methods which we believe should be an integral preprocessing step for cancer classification. In order to obtain a full picture of cancer classification, we also discuss several issues related to cancer classification, including the biological significance vs. statistical significance of a cancer classifier, the asymmetrical classification errors for cancer classifiers, and the gene contamination problem.", "title": "" }, { "docid": "92b26cb86ba44eb63e3e9baba2e90acb", "text": "A compound or collision tumor is a rare occurrence in dermatological findings [1]. The coincidence of malignant melanoma (MM) and basal cell carcinoma (BCC) within the same lesion have only been described in few cases in the literature [2–5]. However, until now the pathogenesis of collision tumors existing of MM and BCC remains unclear [2]. To our knowledge it has not been yet established whether there is a concordant genetic background or independent origin as a possible cause for the development of such a compound tumor. We, therefore, present the extremely rare case of a collision tumor of MM and BCC and the results of a genome-wide analysis by single nucleotide polymorphism array (SNP-Array) for detection of identical genomic aberrations.", "title": "" }, { "docid": "1464f9d7a60a59bfdd6399ea6cd9fd99", "text": "Table of", "title": "" }, { "docid": "fa5c27d91feb3b392e2dba2b2121e184", "text": "Planned experiments are the gold standard in reliably comparing the causal effect of switching from a baseline policy to a new policy. One critical shortcoming of classical experimental methods, however, is that they typically do not take into account the dynamic nature of response to policy changes. For instance, in an experiment where we seek to understand the effects of a new ad pricing policy on auction revenue, agents may adapt their bidding in response to the experimental pricing changes. 
Thus, causal effects of the new pricing policy after such adaptation period, the long-term causal effects, are not captured by the classical methodology even though they clearly are more indicative of the value of the new policy. Here, we formalize a framework to define and estimate long-term causal effects of policy changes in multiagent economies. Central to our approach is behavioral game theory, which we leverage to formulate the ignorability assumptions that are necessary for causal inference. Under such assumptions we estimate long-term causal effects through a latent space approach, where a behavioral model of how agents act conditional on their latent behaviors is combined with a temporal model of how behaviors evolve over time.", "title": "" }, { "docid": "129efeb93aad31aca7be77ef499398e2", "text": "Using a Neonatal Intensive Care Unit (NICU) case study, this work investigates the current CRoss Industry Standard Process for Data Mining (CRISP-DM) approach for modeling Intelligent Data Analysis (IDA)-based systems that perform temporal data mining (TDM). The case study highlights the need for an extended CRISP-DM approach when modeling clinical systems applying Data Mining (DM) and Temporal Abstraction (TA). As the number of such integrated TA/DM systems continues to grow, this limitation becomes significant and motivated our proposal of an extended CRISP-DM methodology to support TDM, known as CRISP-TDM. This approach supports clinical investigations on multi-dimensional time series data. This research paper has three key objectives: 1) Present a summary of the extended CRISP-TDM methodology; 2) Demonstrate the applicability of the proposed model to the NICU data, focusing on the challenges associated with multi-dimensional time series data; and 3) Describe the proposed IDA architecture for applying integrated TDM.", "title": "" }, { "docid": "5c83df8ba41b37d86f46de7963798b2f", "text": "Experiments show a primary role of extracellular potassium concentrations in neuronal hyperexcitability and in the generation of epileptiform bursting and depolarization blocks without synaptic mechanisms. We adopt a physiologically relevant hippocampal CA1 neuron model in a zero-calcium condition to better understand the function of extracellular potassium in neuronal seizurelike activities. The model neuron is surrounded by interstitial space in which potassium ions are able to accumulate. Potassium currents, Na{+}-K{+} pumps, glial buffering, and ion diffusion are regulatory mechanisms of extracellular potassium. We also consider a reduced model with a fixed potassium concentration. The bifurcation structure and spiking frequency of the two models are studied. We show that, besides hyperexcitability and bursting pattern modulation, the potassium dynamics can induce not only bistability but also tristability of different firing patterns. Our results reveal the emergence of the complex behavior of multistability due to the dynamical [K{+}]{o} modulation on neuronal activities.", "title": "" }, { "docid": "08cf1e6353fa3c9969188d946874c305", "text": "In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. 
In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.", "title": "" }, { "docid": "9b5f61fae632cac44709903ac64de3d2", "text": "Tagging has been widely used and studied in various domains. Recently, people-tagging has emerged as a means to categorize contacts, and is also used in some social access control mechanisms. In this paper, we investigate whether there are differences between people-tagging and bookmark-tagging. We show that the way we tag documents about people, who we do not know personally, is similar to the way we tag online documents (i.e., bookmarks) about other categories (i.e., city, country, event). However, we show that the tags assigned to a document related to a friend, differ from the tags assigned to someone we do not know personally. We also analyze whether the age and gender of a taggee a person, who is tagged by others have influences on social people-tags (i.e., people-tags assigned in social Web 2.0 platforms).", "title": "" }, { "docid": "22fc1e303a4c2e7d1e5c913dca73bd9e", "text": "The artificial potential field (APF) approach provides a simple and effective motion planning method for practical purpose. However, artificial potential field approach has a major problem, which is that the robot is easy to be trapped at a local minimum before reaching its goal. The avoidance of local minimum has been an active research topic in path planning by potential field. In this paper, we introduce several methods to solve this problem, emphatically, introduce and evaluate the artificial potential field approach with simulated annealing (SA). As one of the powerful techniques for escaping local minimum, simulated annealing has been applied to local and global path planning", "title": "" }, { "docid": "9951ef687bdf5f01f8d4a38b1120c459", "text": "Urban ecosystems evolve over time and space as the outcome of dynamic interactions between socio-economic and biophysical processes operating over multiple scales. The ecological resilience of urban ecosystems—the degree to which they tolerate alteration before reorganizing around a new set of structures and processes—is influenced by these interactions. In cities and urbanizing areas fragmentation of natural habitats, simplification and homogenization of species composition, disruption of hydrological systems, and alteration of energy flow and nutrient cycling reduce cross-scale resilience, leaving systems increasingly vulnerable to shifts in system control and structure. Because varied urban development patterns affect the amount and interspersion of built and natural land cover, as well as the human demands on ecosystems differently, we argue that alternative urban patterns (i.e., urban form, land use distribution, and connectivity) generate varied effects on ecosystem dynamics and their ecological resilience. We build on urban economics, landscape ecology, population dynamics, and complex system science to propose a conceptual model and a set of hypotheses that explicitly link urban pattern to human and ecosystem functions in urban ecosystems. 
Drawing on preliminary results from an empirical study of the relationships between urban pattern and bird and aquatic macroinvertebrate diversity in the Puget Sound region, we propose that resilience in urban ecosystems is a function of the patterns of human activities and natural habitats that control and are controlled by both socio-economic and biophysical processes operating at various scales. We discuss the implications of this conceptual model for urban planning and design.", "title": "" } ]
scidocsrr
8dd35b3a53a2e6d1c0ed26f76af8ca6b
Mining the search trails of surfing crowds: identifying relevant websites from user activity
[ { "docid": "5fd55cd22aa9fd4df56b212d3d578134", "text": "Relevance feedback has a history in information retrieval that dates back well over thirty years (c.f. [SL96]). Relevance feedback is typically used for query expansion during short-term modeling of a user’s immediate information need and for user profiling during long-term modeling of a user’s persistent interests and preferences. Traditional relevance feedback methods require that users explicitly give feedback by, for example, specifying keywords, selecting and marking documents, or answering questions about their interests. Such relevance feedback methods force users to engage in additional activities beyond their normal searching behavior. Since the cost to the user is high and the benefits are not always apparent, it can be difficult to collect the necessary data and the effectiveness of explicit techniques can be limited.", "title": "" } ]
[ { "docid": "963d6b615ffd025723c82c1aabdbb9c6", "text": "A single high-directivity microstrip patch antenna (MPA) having a rectangular profile, which can substitute a linear array is proposed. It is designed by using genetic algorithms with the advantage of not requiring a feeding network. The patch fits inside an area of 2.54 x 0.25, resulting in a broadside pattern with a directivity of 12 dBi and a fractional impedance bandwidth of 4 %. The antenna is fabricated and the measurements are in good agreement with the simulated results. The genetic MPA provides a similar directivity as linear arrays using a corporate or series feeding, with the advantage that the genetic MPA results in more bandwidth.", "title": "" }, { "docid": "e4d1053a64a09a02f4890af66b28bbba", "text": "Branchio-oculo-facial syndrome (BOFS) is a rare autosomal dominant condition with variable expressivity, caused by mutations in the TFAP2A gene. We report a three generational family with four affected individuals. The consultand has typical features of BOFS including infra-auricular skin nodules, coloboma, lacrimal duct atresia, cleft lip, conductive hearing loss and typical facial appearance. She also exhibited a rare feature of preaxial polydactyly. Her brother had a lethal phenotype with multiorgan failure. We also report a novel variant in TFAP2A gene. This family highlights the variable severity of BOFS and, therefore, the importance of informed genetic counselling in families with BOFS.", "title": "" }, { "docid": "33b1c3b2a999c62fe4f1da5d3cc7f534", "text": "Individuals often appear with multiple names when considering large bibliographic datasets, giving rise to the synonym ambiguity problem. Although most related works focus on resolving name ambiguities, this work focus on classifying and characterizing multiple name usage patterns—the root cause for such ambiguity. By considering real examples bibliographic datasets, we identify and classify patterns of multiple name usage by individuals, which can be interpreted as name change, rare name usage, and name co-appearance. In particular, we propose a methodology to classify name usage patterns through a supervised classification task and show that different classes are robust (across datasets) and exhibit significantly different properties. We show that the collaboration network structure emerging around nodes corresponding to ambiguous names from different name usage patterns have strikingly different characteristics, such as their common neighborhood and degree evolution. We believe such differences in network structure and in name usage patterns can be leveraged to design more efficient name disambiguation algorithms that target the synonym problem.", "title": "" }, { "docid": "de016ffaace938c937722f8a47cc0275", "text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.", "title": "" }, { "docid": "e50a77b38d81d094c678dadf5c408c20", "text": "The calibration method of the soft iron and hard iron distortion based on attitude and heading reference system (AHRS) can boil down to the estimation of 12 parameters of magnetic deviation, normally using 12-state Kalman filter (KF) algorithm. 
The performance of compensation is limited by the accuracy of local inclination angle of magnetic field and initial heading. A 14-state extended Kalman filter (EKF) algorithm is developed to calibrate magnetic deviation, local magnetic inclination angle error and initial heading error all together. The calibration procedure is to change the attitude of AHRS and rotate it two cycles. As the strapdown matrix can hold high precision after initial alignment of AHRS in short time for the gyro's short-term precision, the magnetic field vector can be projected onto the body frame of AHRS. The experiment results demonstrate that 14-state EKF outperforms 12-state KF, with measurement errors exist in the initial heading and local inclination angle. The heading accuracy (variance) after compensation is 0.4 degree for tilt angle ranging between 0 and 60 degree.", "title": "" }, { "docid": "5e6990d8f1f81799e2e7fdfe29d14e4d", "text": "Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.", "title": "" }, { "docid": "6f930c154d8a45b34fd1c934482abd19", "text": "In this chapter we review the possible biological bases for developmental dyscalculia, which is a disorder in mathematical abilities presumed to be due to impaired brain function. By reviewing what is known about the localization of numerical cognition functions in the adult brain, the causes of acquired dyscalculia, and the normal development of numerical cognition, we propose several hypotheses for causes of developmental dyscalculia, including that of a core deficit of “number sense” related to an impairment in the horizontal intraparietal sulcus (HIPS) area. We then discuss research on dyscalculia, including the contribution of recent imaging results in special populations, and evaluate to what extent this research supports our hypotheses. We conclude that there is promising preliminary evidence for a core deficit of number sense in dyscalculia, but we also emphasize that more research is needed to test the hypothesis of multiple types of dyscalculia, particularly in the area of dyscalculia subtyping. We complete the chapter with a discussion of future directions to be taken, the implications for education, and the construction of number sense remediation software in our laboratory. 
Number Sense and Dyscalculia 3", "title": "" }, { "docid": "86ee2f9f92c6da2a21cd91c446b30ab3", "text": "Face detection (FD) is widely used in interactive user interfaces, in advertising industry, entertainment services, video coding, is necessary first stage for all face recognition systems, etc. However, the last practical and independent comparisons of FD algorithms were made by Hjelmas et al. and by Yang et al. in 2001. The aim of this work is to propose parameters of FD algorithms quality evaluation and methodology of their objective comparison, and to show the current state of the art in face detection. The main idea is routine test of the FD algorithm in the labeled image datasets. Faces are represented by coordinates of the centers of the eyes in these datasets. For algorithms, representing detected faces by rectangles, the statistical model of eyes’ coordinates estimation was proposed. In this work the seven face detection algorithms were tested; article contains the results of their comparison.", "title": "" }, { "docid": "a5447f6bf7dbbab55d93794b47d46d12", "text": "The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the text base and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.", "title": "" }, { "docid": "f5d58660137891111a009bc841950ad2", "text": "Lateral brow ptosis is a common aging phenomenon, contributing to the lateral upper eyelid hooding, in addition to dermatochalasis. Lateral brow lift complements upper blepharoplasty in achieving a youthful periorbital appearance. In this study, the author reports his experience in utilizing a temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia. A retrospective analysis of all patients undergoing the proposed technique by one surgeon from 2009 to 2016 was conducted. Additional procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were used for analysis. Operation was performed under local anesthesia. The surgical technique included a temporal (pretrichial) incision with subcutaneous dissection toward the lateral brow, with superolateral lift and closure. 
Total of 45 patients (44 females, 1 male; mean age: 58 years) underwent the temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia in office setting. The procedure was unilateral in 4 cases. Additional procedures included upper blepharoplasty (38), ptosis surgery (16), and lower blepharoplasty (24). Average follow-up time was 1 year (range, 6 months to 5 years). All patients were satisfied with the eyebrow contour and scar appearance. One patient required additional brow lift on one side for asymmetry. There were no cases of frontal nerve paralysis. In conclusion, the temporal (pretrichial) subcutaneous approach is an effective, safe technique for lateral brow lift/contouring, which can be performed under local anesthesia. It is ideal for women. Additional advantages include ease of operation, cost, and shortening the hairline (if necessary).", "title": "" }, { "docid": "b103e091df051f4958317b3b7806fa71", "text": "We present a static, precise, and scalable technique for finding CVEs (Common Vulnerabilities and Exposures) in stripped firmware images. Our technique is able to efficiently find vulnerabilities in real-world firmware with high accuracy. Given a vulnerable procedure in an executable binary and a firmware image containing multiple stripped binaries, our goal is to detect possible occurrences of the vulnerable procedure in the firmware image. Due to the variety of architectures and unique tool chains used by vendors, as well as the highly customized nature of firmware, identifying procedures in stripped firmware is extremely challenging. Vulnerability detection requires not only pairwise similarity between procedures but also information about the relationships between procedures in the surrounding executable. This observation serves as the foundation for a novel technique that establishes a partial correspondence between procedures in the two binaries. We implemented our technique in a tool called FirmUp and performed an extensive evaluation over 40 million procedures, over 4 different prevalent architectures, crawled from public vendor firmware images. We discovered 373 vulnerabilities affecting publicly available firmware, 147 of them in the latest available firmware version for the device. A thorough comparison of FirmUp to previous methods shows that it accurately and effectively finds vulnerabilities in firmware, while outperforming the detection rate of the state of the art by 45% on average.", "title": "" }, { "docid": "07f0996fe2dcd3b52931b0aa09ac6f45", "text": "We are interested in the situation where we have two or more re presentations of an underlying phenomenon. In particular we ar e interested in the scenario where the representation are complementary. This implies that a single individual representation is not sufficient to fully dis criminate a specific instance of the underlying phenomenon, it also means that each r presentation is an ambiguous representation of the other complementary spa ce . In this paper we present a latent variable model capable of consolidating multiple complementary representations. Our method extends canonical cor relation analysis by introducing additional latent spaces that are specific to th e different representations, thereby explaining the full variance of the observat ions. These additional spaces, explaining representation specific variance, sepa rat ly model the variance in a representation ambiguous to the other. 
We develop a spectral algorithm for fast computation of the embeddings and a probabilistic model (based on Gaussian processes) for validation and inference. The proposed model has several potential application areas; we demonstrate its use for multi-modal regression on a benchmark human pose estimation data set.", "title": "" },

{ "docid": "5efa00e0b5973515dff10d8267ac025f", "text": "Porokeratosis, a disorder of keratinisation, is clinically characterized by the presence of annular plaques with a surrounding keratotic ridge. Clinical variants include linear, disseminated superficial actinic, verrucous/hypertrophic, disseminated eruptive, palmoplantar and porokeratosis of Mibelli (one or two typical plaques with an atrophic centre and guttered keratotic rim). All of these subtypes share the histological feature of a cornoid lamella, characterized by a column of 'stacked' parakeratosis with focal absence of the granular layer, and dysmaturation (prematurely keratinised cells in the upper spinous layer). In recent years, a proposed new subtype, follicular porokeratosis (FP), has been described, in which the cornoid lamellae are exclusively located in the follicular ostia. We present four new cases that showed typical histological features of FP.", "title": "" },

{ "docid": "5eb5dcf91534f88fc34badee5da2f24e", "text": "This paper describes three driver options for an integrated half-bridge power stage using a depletion-mode GaN-on-SiC 0.15 μm RF process: an active pull-up driver, a bootstrapped driver, and a modified active pull-up driver. The approaches are evaluated and compared in 5 W, 20 V synchronous Buck converter prototypes operating at 100 MHz switching frequency over a wide range of operating points. Measured efficiency peaks above 91% for the designs using the bootstrap and the modified active pull-up integrated drivers.", "title": "" },

{ "docid": "9eb0976833a48b7667a459d967b566eb", "text": "A comprehensive scheme is described to construct rational trivariate solid T-splines from boundary triangulations with arbitrary topology. To extract the topology of the input geometry, we first compute a smooth harmonic scalar field defined over the mesh, and saddle points are extracted to determine the topology. By dealing with the saddle points, a polycube whose topology is equivalent to the input geometry is built, and it serves as the parametric domain for the trivariate T-spline. A polycube mapping is then used to build a one-to-one correspondence between the input triangulation and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement and applying templates to handle extraordinary nodes and partial extraordinary nodes. The T-spline that is obtained is C2-continuous everywhere over the boundary surface except for the local region surrounding polycube corner nodes. The efficiency and robustness of the presented technique are demonstrated with several applications in isogeometric analysis.", "title": "" },

{ "docid": "3a45d8c53eeb228551d5c8c0981066f5", "text": "Pyro is a Python robotics toolkit for exploring topics in AI and robotics. We present key abstractions that allow Pyro controllers to run unchanged on a variety of real and simulated robots. We demonstrate Pyro’s use in a set of curricular modules.
We then describe how Pyro can provide a smooth transition for the student from symbolic agents to real-world robots, which significantly reduces the cost of learning to use robots. Finally we show how Pyro has been successfully integrated into existing AI and robotics courses.", "title": "" }, { "docid": "055faaaa14959a204ca19a4962f6e822", "text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help user focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. Data cleaningThe removal of noise and inconsistent data. 2. Data integration The combination of multiple sources of data. 3. Data selection The data relevant for analysis is retrieved from the database. 4. Data transformation The consolidation and transformation of data into forms appropriate for mining. 5. Data mining The use of intelligent methods to extract patterns from data. 6. Pattern evaluation Identification of patterns that are interesting. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 36 7. Knowledge presentation Visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. 
For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform III. A comprehensive collection of data preprocessing and modeling techniques IV. Ease of use due to its graphical user interfaces Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10]. All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 37 processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data such as census (2001), socio-economic data, and few basic information of Latur district are collected from National Informatics Centre (NIC), Latur, which is mainly required to design and develop the database for Latur district of Maharashtra state of India. The database is designed in MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. 
Further, the data is converted to ARFF (Attribute Relation File Format) format to process in WEKA. An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document descibes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig.1 Processed ARFF file in WEKA. In the above shown file, there are 729 villages data is processed with different attributes (25) like population, health, literacy, village locations etc. Among all these, few of them are preprocessed attributes generated by census data like, percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio etc. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 38 The processed data in Weka can be analyzed using different data mining techniques like, Classification, Clustering, Association rule mining, Visualization etc. algorithms. The Figure 2 shows the few processed attributes which are visualized into a 2 dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. The information can be extracted with respect to two or more associative relation of data set. In this process, we have made an attempt to visualize the impact of male and female literacy on the gender inequality. The literacy related and population data is processed and computed the percent wise male and female literacy. Accordingly we have computed the sex ratio attribute from the given male and female population data. The new attributes like, male_percent_literacy, female_percent_literacy and sex_ratio are compared each other to extract the impact of literacy on gender inequality. The Figure 3 and Figure 4 are the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and Sex ratio values. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 39 Fig. 4 Male literacy and Sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. By considering both the results, the female percent literacy is poor than the male percent literacy in the district. The sex ratio values are higher in male percent literacy than the female percent literacy. The results are purely showing that the literacy is very much important to manage the gender inequality of any region. ACKNOWLEDGEMENT: Authors are grateful to the department of NIC, Latur for providing all the basic data and WEKA for providing such a strong tool to extract and analyze knowledge from database. CONCLUSION Knowledge extraction from database is becom", "title": "" }, { "docid": "62230a6faba0c5e70558a3ac3d9d50ae", "text": "Bluetooth devices are widely employed in the home network systems. It is important to secure the home members’ Bluetooth devices, because they always store and transmit personal sensitive information. 
In the Bluetooth standard, Secure Simple Pairing (SSP) is an essential security mechanism for Bluetooth devices. We examine the security of SSP in the recent Bluetooth standard V5.0. The passkey entry association model in SSP is analyzed under the man-in-the-middle (MITM) attacks. Our contribution is twofold. (1) We demonstrate that the passkey entry association model is vulnerable to the MITM attack, once the host reuses the passkey. (2) An improved passkey entry protocol is therefore designed to fix the reusing passkey defect in the passkey entry association model. The improved passkey entry protocol can be easily adapted to the Bluetooth standard, because it only uses the basic cryptographic components existed in the Bluetooth standard. Our research results are beneficial to the security enhancement of Bluetooth devices in the home network systems.", "title": "" }, { "docid": "adc587c3400cdf927c433e9d0f929894", "text": "With continuous increase in urban population, the need to plan and implement smart cities based solutions for better urban governance is becoming more evident. These solutions are driven, on the one hand, by innovations in ICT and, on the other hand, to increase the capability and capacity of cities to mitigate environmental, social inclusion, economic growth and sustainable development challenges. In this respect, citizens' science or public participation provides a key input for informed and intelligent planning decision and policy making. However, the challenge here is to facilitate public in acquiring the right contextual information in order to be more productive, innovative and be able to make appropriate decisions which impact on their well being, in particular, and economic and environmental sustainability in general. Such a challenge requires contemporary ICT solutions, such as using Cloud computing, capable of storing and processing significant amount of data and produce intelligent contextual information. However, processing and visualising contextual information in a Cloud environment is not straightforward due to user profiling and contextual segregation of data that could be used in different applications of a smart city. In this regard, we present a Cloud-based architecture for context-aware citizen services for smart cities and walkthrough it using a hypothetical case study.", "title": "" } ]
scidocsrr
ca49c4822ca6be0e7b5e384d103e1dee
Deep Comparison: Relation Columns for Few-Shot Learning
[ { "docid": "b5347e195b44d5ae6d4674c685398fa3", "text": "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N £ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position an$ image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory.", "title": "" }, { "docid": "418a5ef9f06f8ba38e63536671d605c1", "text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.", "title": "" } ]
[ { "docid": "05a543846b5275f46be63e1b472b295e", "text": "Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) were compared for monitoring live fuel moisture in a shrubland ecosystem. Both indices were calculated from 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data covering a 33 month period from 2000 to 2002. Both NDVI and NDWI were positively correlated with live fuel moisture measured by the Los Angeles County Fire Department (LACFD). NDVI had R values ranging between 0.25 to 0.60, while NDWI had significantly higher R values, varying between 0.39 and 0.80. Water absorption measures, such as NDWI, may prove more appropriate for monitoring live fuel moisture than measures of chlorophyll absorption such as NDV", "title": "" }, { "docid": "588fcb80381f75efca073438c3eda7fb", "text": "Nowadays, parents are perturbed about school going children because of the increasing number of cases of missing students. On occasion, students need to wait a much longer time for arrival of their school bus. There exist some communication technologies that are used to ensure the safety of students. But these are incapable of providing efficient services to parents. This paper presents the development of a school bus monitoring system, capable of providing productive services through emerging technologies like Internet of Things (Iota). The proposed IoT based system tracks students in a school bus using a combination of RFID/GPS/GSM/GPRS technologies. In addition to the tracking, a prediction algorithm is implemented for computation of the arrival time of a school-bus. Through an Android application, parents can continuously monitor the bus route and forecast arrival time of the bus.", "title": "" }, { "docid": "5b3ca1cc607d2e8f0394371f30d9e83a", "text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.", "title": "" }, { "docid": "a1f2d91de4ba7899c03bfbe7a7a8f422", "text": "Pervasive gaming is a genre of gaming systematically blurring and breaking the traditional boundaries of game. The limits of the magic circle are explored in spatial, temporal and social dimensions. These ways of expanding the game are not new, since many intentional and unintentional examples of similar expansions can be found from earlier games, but the recently emerged fashion of pervasive gaming is differentiated with the use of these expansions in new, efficient ways to produce new kinds of gameplay experiences. 
These new game genres include alternate reality games, reality games, trans-reality games and crossmedia games.", "title": "" }, { "docid": "577adebed5563610314770f057619fee", "text": "Assisting teams of knowledge workers achieve common strategic and tactical goals is becoming an increasing priority as information analysis tasks become more complex. Tools to monitor and support individual workers, such as TaskTracer, demonstrated the potential for assisting individuals but there is a lack of tools for analyzing workflows and information needs at a collaborative level within enterprises. Providing assistance for collaboration is a current priority for the new generation of ‘smart digital assistants’ and presents unique challenges, in terms of associating collective goals with user activities in way that minimizes disruption to the user's workflow, and in generating useful summaries. To address these challenges, we have developed ‘Journaling’ interfaces to: capture a user or team's tasks and goals, associate them with information artifacts, assist in information recall, and display aggregate visualizations. These interfaces are supported by a passive instrumentation platform that aims to monitor user's consented activities in a minimally intrusive manner, including interactions with various applications, documents and URLs as part of collaborative workflows. We have deployed these interfaces in a production environment using heterogeneous workflows performed by groups of students and intelligence analysts. Evaluations based on user interviews and engagement metrics suggest that our approach is useful for understanding and supporting user collaboration, as well as being easy to work with and suitable for continuous use. In addition, the data gathered provides situational awareness for individuals, teams, educators and managers. Through our platform, we are enabling continuous collection of labelled user activity logs and document corpora that are enabling further research in this nascent field.", "title": "" }, { "docid": "683bad69cfb2c8980020dd1f8bd8cea4", "text": "BRUTUS is a program that tells stories. The stories are intriguing, they hold a hint of mystery, and—not least impressive—they are written in correct English prose. An example (p. 124) is shown in Figure 1. This remarkable feat is grounded in a complex architecture making use of a number of levels, each of which is parameterized so as to become a locus of possible variation. The specific BRUTUS1 implementation that illustrates the program’s prowess exploits the theme of betrayal, which receives an elaborate analysis, culminating in a set", "title": "" }, { "docid": "2d14dae606e7f4a6364b4c0e56911a58", "text": "The convergence of Services Computing and Web 2.0 gains a large space of opportunities to compose “situational” web applications from web-delivered services. However, the large number of services and the complexity of composition constraints make manual composition difficult to application developers, who might be non-professional programmers or even end-users. This paper presents a systematic data-driven approach to assisting situational application development. We first propose a technique to extract useful information from multiple sources to abstract service capabilities with a set tags. This supports intuitive expression of user's desired composition goals by simple queries, without having to know underlying technical details. 
A planning technique then exploits composition solutions which can constitute the desired goals, even with some potential new interesting composition opportunities. A browser-based tool facilitates visual and iterative refinement of composition solutions, to finally come up with the satisfying outputs. A series of experiments demonstrate the efficiency and effectiveness of our approach.", "title": "" }, { "docid": "a0d2ea9b5653d6ca54983bb3d679326e", "text": "A dynamic reasoning system (DRS) is an adaptation of a conventional formal logical system that explicitly portrays reasoning as a temporal activity, with each extralogical input to the system and each inference rule application being viewed as occurring at a distinct timestep. Every DRS incorporates some well-defined logic together with a controller that serves to guide the reasoning process in response to user inputs. Logics are generic, whereas controllers are application specific. Every controller does, nonetheless, provide an algorithm for nonmonotonic belief revision. The general notion of a DRS comprises a framework within which one can formulate the logic and algorithms for a given application and prove that the algorithms are correct, that is, that they serve to (1) derive all salient information and (2) preserve the consistency of the belief set. This article illustrates the idea with ordinary first-order predicate calculus, suitably modified for the present purpose, and two examples. The latter example revisits some classic nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be resolved in the context of a DRS, using an expanded version of first-order logic that incorporates typed predicate symbols. All concepts are rigorously defined and effectively computable, thereby providing the foundation for a future software implementation.", "title": "" }, { "docid": "849ffc68aa0e14c2cfdf53c9d99d5079", "text": "To encourage repeatable research, fund repeatability engineering and reward commitments to sharing research artifacts.", "title": "" }, { "docid": "ecfb3bc4b89b7ec955180745620dc84e", "text": "q Adversarial Attack: a small perturbation to an input that fools ML model into predicting incorrect output (STOP→YIELD) q Attack-to-Protect: designing better algorithms for generating adversarial perturbations helps train robust ML models q ML model ⎯ Binarized Neural Network (BNN): • Weights: +1 / -1 • Activation: sign & ∈ {−1,+1} q Why care about BNNs? • Fast inference / Small size • Great for low-power devices, smartphones", "title": "" }, { "docid": "7fd138b84e262a1f520ccdb4baa8bbc4", "text": "Dynamic complex networks are used to model the evolving relationships between entities in widely varying fields of research such as epidemiology, ecology, sociology, and economics. In the study of complex networks, a network is said to have community structure if it divides naturally into groups of vertices with dense connections within groups and sparser connections between groups. Detecting the evolution of communities within dynamically changing networks is crucial to understanding complex systems. In this paper, we develop a fast community detection algorithm for real-time dynamic network data. Our method takes advantage of community information from previous time steps and thereby improves efficiency while maintaining the quality of community detection. 
Our experiments on citation-based networks show that the execution time improves as much as 30% (average 13%) over static methods.", "title": "" }, { "docid": "7babd48cd74c959c6630a7bc8d1150d7", "text": "This paper discusses a novel hybrid approach for text categorization that combines a machine learning algorithm, which provides a base model trained with a labeled corpus, with a rule-based expert system, which is used to improve the results provided by the previous classifier, by filtering false positives and dealing with false negatives. The main advantage is that the system can be easily fine-tuned by adding specific rules for those noisy or conflicting categories that have not been successfully trained. We also describe an implementation based on k-Nearest Neighbor and a simple rule language to express lists of positive, negative and relevant (multiword) terms appearing in the input text. The system is evaluated in several scenarios, including the popular Reuters-21578 news corpus for comparison to other approaches, and categorization using IPTC metadata, EUROVOC thesaurus and others. Results show that this approach achieves a precision that is comparable to top ranked methods, with the added value that it does not require a demanding human expert workload to train.", "title": "" }, { "docid": "6a6191695c948200658ad6020f21f203", "text": "Given a random pair of images, an arbitrary style transfer method extracts the feel from the reference image to synthesize an output based on the look of the other content image. Recent arbitrary style transfer methods transfer second order statistics from reference image onto content image via a multiplication between content image features and a transformation matrix, which is computed from features with a pre-determined algorithm. These algorithms either require computationally expensive operations, or fail to model the feature covariance and produce artifacts in synthesized images. Generalized from these methods, in this work, we derive the form of transformation matrix theoretically and present an arbitrary style transfer approach that learns the transformation matrix with a feed-forward network. Our algorithm is highly efficient yet allows a flexible combination of multi-level styles while preserving content affinity during style transfer process. We demonstrate the effectiveness of our approach on four tasks: artistic style transfer, video and photo-realistic style transfer as well as domain adaptation, including comparisons with the stateof-the-art methods.", "title": "" }, { "docid": "b9e7fedbc42f815b35351ec9a0c31b33", "text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. 
Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. 
Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom, On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing 200 British Journal of Educational Technology Vol 41 No 2 2010 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computermediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. 
While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass e-Learning myths 201 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ", "title": "" }, { "docid": "88a21d973ec80ee676695c95f6b20545", "text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. 
The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "title": "" }, { "docid": "f79def9a56be8d91c81385abfc6dbee7", "text": "Computational Creativity is the AI subfield in which we study how to build computational models of creative thought in science and the arts. From an engineering perspective, it is des irable to have concrete measures for assessing the progress made from one version of a program to another, or for comparing and contras ting different software systems for the same creative task. We de scribe the Turing Test and versions of it which have been used in orde r to measure progress in Computational Creativity. We show th at the versions proposed thus far lack the important aspect of inte rac ion, without which much of the power of the Turing Test is lost. We a rgue that the Turing Test is largely inappropriate for the purpos es of evaluation in Computational Creativity, since it attempts to ho mogenise creativity into a single (human) style, does not take into ac count the importance of background and contextual information for a c eative act, encourages superficial, uninteresting advances in fro nt-ends, and rewards creativity which adheres to a certain style over tha t which creates something which is genuinely novel. We further argu e that although there may be some place for Turing-style tests for C omputational Creativity at some point in the future, it is curren tly untenable to apply any defensible version of the Turing Test. As an alternative to Turing-style tests, we introduce two de scriptive models for evaluating creative software, the FACE mode l which describes creative acts performed by software in terms of tu ples of generative acts, and the IDEA model which describes how such creative acts can have an impact upon an ideal audience, given id eal information about background knowledge and the software de v lopment process. While these models require further study and e l boration, we believe that they can be usefully applied to current sys ems as well as guiding further development of creative systems. 1 The Turing Test and Computational Creativity The Turing Test (TT), in which a computer and human are interr ogated, with the computer considered intelligent if the huma n interrogator is unable to distinguish between them, is principal ly a philosophical construct proposed by Alan Turing as a way of determ ining whether AI has achieved its goal of simulating intelligence [1]. The TT has provoked much discussion, both historical and contem porary, however this has principally been within the philosophy of A I: most AI researchers see it as a distraction from their goals, enco uraging a mere trickery of intelligence and ever more sophisticated n atural language front ends, as opposed to focussing on real problems. D espite the appeal of the (as yet unawarded) Loebner Prize, most subfi elds of AI have developed and follow their own evaluation criteri a and methodologies, which have little to do with the TT. 1 School of Informatics, University of Edinburgh, UK 2 Department of Computing, Imperial College, London, UK Computational Creativity (CC) is a subfield of AI, in which re searchers aim to model creative thought by building program s which can produce ideas and artefacts which are novel, surprising and valuable, either autonomously or in conjunction with humans. 
Th ere are three main motivations for the study of Computational Creat ivity: • to provide a computational perspective on human creativity , in order to help us to understand it (cognitive science); • to enable machines to be creative, in order to enhance our liv es in some way (engineering); and • to produce tools which enhance human creativity (aids for cr eative individuals). Creativity can be subdivided into everyday problem-solvin g, and the sort of creativity reserved for the truly great, in which a problem is solved or an object created that has a major impact on other people. These are respectively known as “little-c” (mundane) a nd “bigC” (eminent) creativity [2]. Boden [3] draws a similar disti nction in her view of creativity as search within a conceptual space, w h re “exploratory creativity” searches within the space, and “tran sformational creativity” involves expanding the space by breaking one or m e of the defining characteristics and creating a new conceptua l space. Boden sees transformational creativity as more surprising , i ce, according to the defining rules of the conceptual space, ideas w ithin this space could not have been found before. There are two notions of evaluation in CC: ( i) judgements which determine whether an idea or artefact is valuable or not (an e ssential criterion for creativity) – these judgements may be made int rnally by whoever produced the idea, or externally, by someone else and (ii ) judgements to determine whether a system is acting creativ ely or not. In the following discussion, by evaluation, we mean the latter judgement. Finding measures of evaluation of CC is an active area of research, both influenced by, and influencing, practical a nd theoretical aspects of CC. It is a particularly important area, s ince such measures suggest ways of defining progress in the field, 3 as well as strongly guiding program design. While tests of creativity in humans are important for our understanding of creativity, they do n ot usually causehumans to be creative (creativity training programs, which train people to do well at such tests, notwithstanding). Way s in which CC is evaluated, on the other hand, will have a deep influence o future development of potentially creative programs. Clearl y, different modes of evaluation will be appropriate for the different mo tivations listed above. 3 The necessity for good measures of evaluation in CC is somewh at paralleled in the psychology of creativity: “Creativity is becoming a p opular topic in educational, economic and political circles throughout th e world – whether this popularity is just a passing fad or a lasting change in in terest in creativity and innovation will probably depend, in large part, on wh ether creativity assessment keeps pace with the rest of the field.” [4, p. 64] The Turing Test is of particular interest to CC for two reason s. Firstly, unlike the general situation in AI, the TT, or varia tions of it, arecurrently being used to evaluate candidate programs in CC. T hus, the TT is having a major influence on the development of CC. Thi s influence is usually neither noted nor questioned. Secondly , there are huge philosophical problems with using a test based on imita tion to evaluate competence in an area of thought which is based on or iginality. While there are varying definitions of creativity, t he majority consider some interpretation of novelty and utility to be es sential criteria. 
For instance, one of the commonalities found by Rothe nberg in a collection of international perspectives on creativit y is that “creativity involves thinking that is aimed at producing ideas o r products that are relatively novel” [5, p.2], and in CC the combin ation of novelty and usefulness is accepted as key (for instance, s ee [6] or [3]). In [4], Plucker and Makel list “similar, overlapping a nd possibly synonymous terms for creativity: imagination, ingenuity, innovation, inspiration, inventiveness, muse, novelty, originality, serendipity, talent and unique”. The term ‘imitation’ is simply antipodal to many of these terms. In the following sections, we firstly describe and discuss so me attempts to evaluate Computational Creativity using the Turi ng Test or versions of it ( §2), concluding that these attempts all omit the important aspect of interaction, and suggest the sort of directio n that a TT for a creative computer art system might follow. We then pres ent a series of arguments that the TT is inappropriate for measuring creativity in computers (or humans) in §3, and suggest that although there may be some place for Turing-style tests for Computational C reativity at some point in the future, it is currently untenable and impractical. As an alternative to Turing-style tests, in §4, we introduce two descriptive models for evaluating creative software, the F ACE model which describes creative acts performed by software in term s of tuples of generative acts, and the IDEA model which describes h ow such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the softw are development process. We conclude our discussion in §5. 2 Attempts to evaluate Computational Creativity using the Turing Test or versions of it There have been several attempts to evaluate Computational Cre tivity using the Turing Test or versions of it. While these are us f l in terms of advancing our understanding of CC, they do not go f ar enough. In this section we discuss two such advances ( §2.1 and§2.2), and two further suggestions on using human creative behavio ur as a guide for evaluating Computational Creativity ( §2.3). We highlight the importance of interaction in §2.4. 2.1 Discrimination tests Pearce and Wiggins [7] assert for the need for objective, fal sifi ble measures of evaluation in cognitive musicology. They propo se the ‘discrimination test’, which is analogous to the TT, in whic subjects are played segments of both machine and human-generated mus ic and asked to distinguish between them. This might be in a part icular style, such as Bach’s music, or might be more general. The y also present one of the most considered analyses of whether Turin g-style tests such as the framework they propose might be appropriat e for evaluating Computational Creativity [7, §7]. While they do not directly refer to Boden’s exploratory creativity [3], instea d referring to Boden’s distinction between psychological (P-creativity , concerning ideas which are novel with resepct to a particular mind) and h istorical creativity (H-creativity, concerning ideas which are novel with respect to the whole of human history ), they do argue that much creative work is carried out within a particular style. They cite Garnham’s response ", "title": "" }, { "docid": "a3628ca53dfbe7b3e10593cc361cdaac", "text": "In order to ensure the safe supply of the drinking water the quality needs to be monitor in real time. 
In this paper we present the design and development of a low cost system for real time monitoring of water quality based on the Internet of Things (IoT). The system consists of several sensors used to measure physical and chemical parameters of the water. Parameters such as temperature, pH, turbidity, conductivity and dissolved oxygen of the water can be measured. The measured values from the sensors can be processed by the core controller. The Raspberry Pi B+ model can be used as the core controller. Finally, the sensor data can be viewed on the internet using cloud computing.", "title": "" }, { "docid": "9d161e7e124a7ce971a40a53d7ba6fba", "text": "Emulsions are commonly used in foods, pharmaceuticals and home-personal-care products. For emulsion based products, it is highly desirable to control the droplet size distribution to improve storage stability, appearance and in-use property. We report preparation of uniform-sized silicone oil microemulsions with different droplets diameters (1.4-40.0 μm) using SPG membrane emulsification technique. These microemulsions were then added into model shampoos and conditioners to investigate the effects of size, uniformity, and storage stability on silicone oil deposition on hair surface. We observed much improved storage stability of uniform-sized microemulsions when the droplets diameter was ≤22.7 μm. The uniform-sized microemulsion of 40.0 μm was less stable but still more stable than non-uniform sized microemulsions prepared by conventional homogenizer. The results clearly indicated that uniform-sized droplets enhanced the deposition of silicone oil on hair and deposition increased with decreasing droplet size. Hair switches washed with small uniform-sized droplets had lower values of coefficient of friction compared with those washed with larger uniform and non-uniform droplets. Moreover the addition of alginate thickener in the shampoos and conditioners further enhanced the deposition of silicone oil on hair. The good correlation between silicone oil droplets stability, deposition on hair and resultant friction of hair support that droplet size and uniformity are important factors for controlling the stability and deposition property of emulsion based products such as shampoo and conditioner.", "title": "" }, { "docid": "596d8b2b90f18e13f2e4751727cfbb21", "text": "Due to proliferation of Web 2.0, there is an exponential growth in user generated contents in the form of customer reviews on the Web, containing precious information useful for both customers and manufacturers. However, most of the contents are stored in either unstructured or semi-structured format due to which distillation of knowledge from this huge repository is a challenging task. In this paper, we propose a text mining approach to mine product features, opinions and their reliability scores from Web opinion sources. A rule-based system is implemented, which applies linguistic and semantic analysis of texts to mine feature-opinion pairs that have sentence-level co-occurrence in review documents. The extracted feature-opinion pairs and source documents are modeled using a bipartite graph structure. Considering feature-opinion pairs as hubs and source documents as authorities, Hyperlink-Induced Topic Search (HITS) algorithm is applied to generate reliability score for each feature-opinion pair with respect to the underlying corpus. 
The efficacy of the proposed system is established through experimentation over customer reviews on different models of electronic products.", "title": "" }, { "docid": "e16d89d3a6b3d38b5823fae977087156", "text": "The payoff of a barrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp(βσ√Δt), where β ≈ 0.5826, σ is the underlying volatility, and Δt is the time between monitoring instants. The correction is justified both theoretically and experimentally.", "title": "" } ]
scidocsrr
cb18bc82af0aad94818daf2fe4e79cbe
A Probabilistic Approach for Optimizing Spectral Clustering
[ { "docid": "fea6d5cffd6b2943fac155231e7e9d89", "text": "We propose a principled account on multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigendecomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported. Spectral graph partitioning methods have been successfully applied to circuit layout [3, 1], load balancing [4] and image segmentation [10, 6]. As a discriminative approach, they do not make assumptions about the global structure of data. Instead, local evidence on how likely two data points belong to the same class is first collected and a global decision is then made to divide all data points into disjunct sets according to some criterion. Often, such a criterion can be interpreted in an embedding framework, where the grouping relationships among data points are preserved as much as possible in a lower-dimensional representation. What makes spectral methods appealing is that their global-optima in the relaxed continuous domain are obtained by eigendecomposition. However, to get a discrete solution from eigenvectors often requires solving another clustering problem, albeit in a lower-dimensional space. That is, eigenvectors are treated as geometrical coordinates of a point set. Various clustering heuristics such as Kmeans [10, 9], transportation [2], dynamic programming [1], greedy pruning or exhaustive search [3, 10] are subsequently employed on the new point set to retrieve partitions. We show that there is a principled way to recover a discrete optimum. This is based on a fact that the continuous optima consist not only of the eigenvectors, but of a whole family spanned by the eigenvectors through orthonormal transforms. The goal is to find the right orthonormal transform that leads to a discretization.", "title": "" } ]
[ { "docid": "3c22c94c9ab99727840c2ca00c66c0f3", "text": "The impact of numerous distributed generators (DGs) coupled with the implementation of virtual inertia on the transient stability of power systems has been studied extensively. Time-domain simulation is the most accurate and reliable approach to evaluate the dynamic behavior of power systems. However, the computational efficiency is restricted by their multi-time-scale property due to the combination of various DGs and synchronous generators. This paper presents a novel projective integration method (PIM) for the efficient transient stability simulation of power systems with high DG penetration. One procedure of the proposed PIM is decomposed into two stages, which adopt mixed explicit-implicit integration methods to achieve both efficiency and numerical stability. Moreover, the stability of the PIM is not affected by its parameter, which is related to the step size. Based on this property, an adaptive parameter scheme is developed based on error estimation to fit the time constants of the system dynamics and further increase the simulation speed. The presented approach is several times faster than the conventional integration methods with a similar level of accuracy. The proposed method is demonstrated using test systems with DGs and virtual synchronous generators, and the performance is verified against MATLAB/Simulink and DIgSILENT PowerFactory.", "title": "" }, { "docid": "179299ec6ebad6bc0a778b002e36b8ee", "text": "A steady plant monitoring is necessary to control the spread of a disease but its cost may be high and as a result, the producers often skip critical preventive procedures to keep the production cost low. Although, official disease recognition is a responsibility of professional agriculturists, low cost observation and computational assisted diagnosis can effectively help in the recognition of a plant disease in its early stages. The most important symptoms of a disease such as lesions in the leaves, fruits, stems, etc, are visible. The features (color, area, number of spots) of these lesions can form significant decision criteria supplemented by other more expensive molecular analyses and tests that can follow. An image processing technique capable of recognizing the plant lesion features is described in this paper. The low complexity of this technique can allow its implementation on mobile phones. The achieved accuracy is higher than 90% according to the experimental results.", "title": "" }, { "docid": "aae7c62819cb70e21914486ade94a762", "text": "From failure experience on power transformers very often it was suspected that inrush currents, occurring when energizing unloaded transformers, were the reason for damage. In this paper it was investigated how mechanical forces within the transformer coils build up under inrush compared to those occurring at short circuit. 2D and 3D computer modeling for a real 268 MVA, 525/17.75 kV three-legged step up transformer were employed. The results show that inrush current peaks of 70% of the rated short circuit current cause local forces in the same order of magnitude as those at short circuit. The resulting force summed up over the high voltage coil is even three times higher. Although inrush currents are normally smaller, the forces can have similar amplitudes as those at short circuit, with longer exposure time, however. Therefore, care has to be taken to avoid such high inrush currents. 
Today controlled switching offers an elegant and practical solution.", "title": "" }, { "docid": "e7a0b70d02875fc8ae5861fb5a6f6865", "text": "Spatial and temporal database systems, both in theory and in practice, have developed dramatically over the past two decades to the point where usable commercial systems, underpinned by a robust theoretical foundation, are now starting to appear. While much remains to be done, topics for research must be chosen carefully to avoid embarking on impractical or unprofitable areas. This is particularly true for doctoral research where the candidate must build a tangible contribution in a relatively short time.The panel session at the Eighth International Symposium on Spatial and Temporal Databases (SSTD 2003) held on Santorini Island, Greece [7] in July 2003 thus took as its focus the question What to focus on (and what to avoid) in Spatial and Temporal Databases: recommendations for doctoral research. This short paper, authored by the panel members, summarizes these discussions.", "title": "" }, { "docid": "5a61a6249b389a26d439f9a66efcc5f5", "text": "The vast majority of current robot mapping and navigation systems require specific, well-characterized sensors that may require human-supervised calibration and are applicable only in one type of environment. Furthermore, if a sensor degrades in performance, either through damage to itself or changes in environmental conditions, the effect on the mapping system is usually catastrophic. In contrast, the natural world presents robust, reasonably well-characterized solutions to these problems. Using simple movement behaviors and neural learning mechanisms, rats calibrate their sensors for mapping and navigation in an incredibly diverse range of environments and then go on to adapt to sensor damage and changes in the environment over their lifetimes. In this paper, we introduce similar movement-based autonomous calibration techniques that calibrate place recognition and self-motion processes as well as methods for online multi-sensor weighting and fusion. We present calibration and mapping results from multiple robot platforms and multisensory configurations in an office building, university campus and forest. With moderate assumptions and almost no prior knowledge of the robot, sensor suite or environment, the methods enable the bio-inspired RatSLAM system to generate topologically correct maps in the majority of experiments.", "title": "" }, { "docid": "acff8bc4a955a3a41796138151035e38", "text": "Using data from the Fragile Families and Child Wellbeing Study (N=3,870) and cross-lagged path analysis, the authors examined whether spanking at ages 1 and 3 is adversely associated with cognitive skills and behavior problems at ages 3 and 5. The authors found spanking at age 1 was associated with a higher level of spanking and externalizing behavior at age 3, and spanking at age 3 was associated with a higher level of internalizing and externalizing behavior at age 5. The associations between spanking at age 1 and behavioral problems at age 5 operated predominantly through ongoing spanking at age 3. The authors did not find an association between spanking at age 1 and cognitive skills at age 3 or 5.", "title": "" }, { "docid": "e4893b639d75a6650756927d36fa37f8", "text": "BACKGROUND\nThe length of stay (LOS) is an important indicator of the efficiency of hospital management. 
Reduction in the number of inpatient days results in decreased risk of infection and medication side effects, improvement in the quality of treatment, and increased hospital profit with more efficient bed management. The purpose of this study was to determine which factors are associated with length of hospital stay, based on electronic health records, in order to manage hospital stay more efficiently.\n\n\nMATERIALS AND METHODS\nResearch subjects were retrieved from a database of patients admitted to a tertiary general university hospital in South Korea between January and December 2013. Patients were analyzed according to the following three categories: descriptive and exploratory analysis, process pattern analysis using process mining techniques, and statistical analysis and prediction of LOS.\n\n\nRESULTS\nOverall, 55% (25,228) of inpatients were discharged within 4 days. The department of rehabilitation medicine (RH) had the highest average LOS at 15.9 days. Of all the conditions diagnosed over 250 times, diagnoses of I63.8 (cerebral infarction, middle cerebral artery), I63.9 (infarction of middle cerebral artery territory) and I21.9 (myocardial infarction) were associated with the longest average hospital stay and high standard deviation. Patients with these conditions were also more likely to be transferred to the RH department for rehabilitation. A range of variables, such as transfer, discharge delay time, operation frequency, frequency of diagnosis, severity, bed grade, and insurance type was significantly correlated with the LOS.\n\n\nCONCLUSIONS\nAccurate understanding of the factors associating with the LOS and progressive improvements in processing and monitoring may allow more efficient management of the LOS of inpatients.", "title": "" }, { "docid": "d5641090db7579faff175e4548c25096", "text": "Integration is central to HIV-1 replication and helps mold the reservoir of cells that persists in AIDS patients. HIV-1 interacts with specific cellular factors to target integration to interior regions of transcriptionally active genes within gene-dense regions of chromatin. The viral capsid interacts with several proteins that are additionally implicated in virus nuclear import, including cleavage and polyadenylation specificity factor 6, to suppress integration into heterochromatin. The viral integrase protein interacts with transcriptional co-activator lens epithelium-derived growth factor p75 to principally position integration within gene bodies. The integrase additionally senses target DNA distortion and nucleotide sequence to help fine-tune the specific phosphodiester bonds that are cleaved at integration sites. Research into virus–host interactions that underlie HIV-1 integration targeting has aided the development of a novel class of integrase inhibitors and may help to improve the safety of viral-based gene therapy vectors.", "title": "" }, { "docid": "75b64f9106b2c334c572bc3180d93aef", "text": "This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. 
We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions.", "title": "" }, { "docid": "c3e712cd4c0652e2711b540ecf36f4f6", "text": "Finding image correspondences remains a challenging problem in the presence of intra-class variations and large changes in scene layout. Semantic flow methods are designed to handle images depicting different instances of the same object or scene category. We introduce a novel approach to semantic flow, dubbed proposal flow, that establishes reliable correspondences using object proposals. Unlike prevailing semantic flow approaches that operate on pixels or regularly sampled local regions, proposal flow benefits from the characteristics of modern object proposals, that exhibit high repeatability at multiple scales, and can take advantage of both local and geometric consistency constraints among proposals. We also show that proposal flow can effectively be transformed into a conventional dense flow field. We introduce a new dataset that can be used to evaluate both general semantic flow techniques and region-based approaches such as proposal flow. We use this benchmark to compare different matching algorithms, object proposals, and region features within proposal flow, to the state of the art in semantic flow. This comparison, along with experiments on standard datasets, demonstrates that proposal flow significantly outperforms existing semantic flow methods in various settings.", "title": "" }, { "docid": "d302bfb7c2b95def93525050016ac07c", "text": "Face recognition remains a challenge today as recognition performance is strongly affected by variability such as illumination, expressions and poses. In this work we apply Convolutional Neural Networks (CNNs) on the challenging task of both 2D and 3D face recognition. We constructed two CNN models, namely CNN-1 (two convolutional layers) and CNN-2 (one convolutional layer) for testing on 2D and 3D dataset. A comprehensive parametric study of two CNN models on face recognition is represented in which different combinations of activation function, learning rate and filter size are investigated. We find that CNN-2 has a better accuracy performance on both 2D and 3D face recognition. Our experimental results show that an accuracy of 85.15% was accomplished using CNN-2 on depth images with FRGCv2.0 dataset (4950 images with 557 objectives). An accuracy of 95% was achieved using CNN-2 on 2D raw image with the AT&T dataset (400 images with 40 objectives). The results indicate that the proposed CNN model is capable to handle complex information from facial images in different dimensions. These results provide valuable insights into further application of CNN on 3D face recognition.", "title": "" }, { "docid": "07310c30b78d74a1e237af4dd949d68e", "text": "The vulnerability of face, fingerprint and iris recognition systems to attacks based on morphed biometric samples has been established in the recent past. However, so far a reliable detection of morphed biometric samples has remained an unsolved research challenge. In this work, we propose the first multi-algorithm fusion approach to detect morphed facial images. 
The FRGCv2 face database is used to create a set of 4,808 morphed and 2,210 bona fide face images which are divided into a training and test set. From a single cropped facial image features are extracted using four types of complementary feature extraction algorithms, including texture descriptors, keypoint extractors, gradient estimators and a deep learning-based method. By performing a score-level fusion of comparison scores obtained by four different types of feature extractors, a detection equal error rate (D-EER) of 2.8% is achieved. Compared to the best single algorithm approach achieving a D-EER of 5.5%, the D-EER of the proposed multi-algorithm fusion system is al- most twice as low, confirming the soundness of the presented approach.", "title": "" }, { "docid": "8e0754baed82072945e1bf0c968bb0be", "text": "Previous studies examining the relationship between physical activity levels and broad-based measures of psychological wellbeing in adolescents have been limited by not controlling for potentially confounding variables. The present study examined the relationship between adolescents’ self-reported physical activity level, sedentary behaviour and psychological wellbeing; while controlling for a broad range of sociodemographic, health and developmental factors. The study entailed a cross-sectional school-based survey in ten British towns. Two thousand six hundred and twenty three adolescents (aged 13–16 years) reported physical activity levels, patterns of sedentary behaviour (TV/computer/video usage) and completed the strengths and difficulties questionnaire (SDQ). Lower levels of self-reported physical activity and higher levels of sedentary behaviour showed graded associations with higher SDQ total difficulties scores, both for boys (P < 0.001) and girls (P < 0.02) after adjustment for age and town. Additional adjustment for social class, number of parents, predicted school examination results, body mass index, ethnicity, alcohol intake and smoking status had little effect on these findings. Low levels of self-reported physical activity are independently associated with diminished psychological wellbeing among adolescents. Longitudinal studies may provide further insights into the relationship between wellbeing and activity levels in this population. Ultimately, randomised controlled trials are needed to evaluate the effects of increasing physical activity on psychological wellbeing among adolescents.", "title": "" }, { "docid": "90fdac33a73d1615db1af0c94016da5b", "text": "AIM OF THE STUDY\nThe purpose of this study was to define antidiabetic effects of fruit of Vaccinium arctostaphylos L. (Ericaceae) which is traditionally used in Iran for improving of health status of diabetic patients.\n\n\nMATERIALS AND METHODS\nFirstly, we examined the effect of ethanolic extract of Vaccinium arctostaphylos fruit on postprandial blood glucose (PBG) after 1, 3, 5, 8, and 24h following a single dose administration of the extract to alloxan-diabetic male Wistar rats. Also oral glucose tolerance test was carried out. Secondly, PBG was measured at the end of 1, 2 and 3 weeks following 3 weeks daily administration of the extract. At the end of treatment period the pancreatic INS and cardiac GLUT-4 mRNA expression and also the changes in the plasma lipid profiles and antioxidant enzymes activities were assessed. 
Finally, we examined the inhibitory activity of the extract against rat intestinal α-glucosidase.\n\n\nRESULTS\nThe obtained results showed mild acute (18%) and also significant chronic (35%) decrease in the PBG, significant reduction in triglyceride (47%) and notable rising of the erythrocyte superoxide dismutase (57%), glutathione peroxidase (35%) and catalase (19%) activities due to treatment with the extract. Also we observed increased expression of GLUT-4 and INS genes in plant extract treated Wistar rats. Furthermore, in vitro studies displayed 47% and 56% inhibitory effects of the extract on activity of intestinal maltase and sucrase enzymes, respectively.\n\n\nCONCLUSIONS\nFindings of this study allow us to establish scientifically Vaccinium arctostaphylos fruit as a potent antidiabetic agent with antihyperglycemic, antioxidant and triglyceride lowering effects.", "title": "" }, { "docid": "b71477154243283819d499c381119c2d", "text": "Indonesia is one of countries well-known as the biggest palm oil producers in the world. In 2015, this country succeeded to produce 32.5 million tons of palm oil, and used 26.4 million of it to export to other countries. The quality of Indonesia's palm oil production has become the reason why Indonesia becomes the famous exporter in a global market. For this reason, many Indonesian palm oil companies are trying to improve their quality through smart farming. One of the ways to improve is by using technology such as Internet of Things (IoT). In order to have the actual and real-time condition of the land, using the IoT concept by connecting some sensors. A previous research has accomplished to create some Application Programming Interfaces (API), which can be used to support the use of technology. However, these APIs have not been integrated to a User Interface (UI), as it can only be used by developers or programmers. These APIs have not been able to be used as a monitoring information system for palm oil plantation, which can be understood by the employees. Based on those problems, this research attempts to develop a monitoring information system, which will be integrated with the APIs from the previous research by using the Progressive Web App (PWA) approach. So, this monitoring information system can be accessed by the employees, either by using smartphone or by using desktop. Even, it can work similar with a native application.", "title": "" }, { "docid": "72e0824602462a21781e9a881041e726", "text": "In an effort to develop a genomics-based approach to the prediction of drug response, we have developed an algorithm for classification of cell line chemosensitivity based on gene expression profiles alone. Using oligonucleotide microarrays, the expression levels of 6,817 genes were measured in a panel of 60 human cancer cell lines (the NCI-60) for which the chemosensitivity profiles of thousands of chemical compounds have been determined. We sought to determine whether the gene expression signatures of untreated cells were sufficient for the prediction of chemosensitivity. Gene expression-based classifiers of sensitivity or resistance for 232 compounds were generated and then evaluated on independent sets of data. The classifiers were designed to be independent of the cells' tissue of origin. The accuracy of chemosensitivity prediction was considerably better than would be expected by chance. 
Eighty-eight of 232 expression-based classifiers performed accurately (with P < 0.05) on an independent test set, whereas only 12 of the 232 would be expected to do so by chance. These results suggest that at least for a subset of compounds genomic approaches to chemosensitivity prediction are feasible.", "title": "" }, { "docid": "609806e76f3f919da03900165c2727b8", "text": "Modern and powerful mobile devices comprise an attractive target for any potential intruder or malicious code. The usual goal of an attack is to acquire users’ sensitive data or compromise the device so as to use it as a stepping stone (or bot) to unleash a number of attacks to other targets. In this paper, we focus on the popular iPhone device. We create a new stealth and airborne malware namely iSAM able to wirelessly infect and self-propagate to iPhone devices. iSAM incorporates six different malware mechanisms, and is able to connect back to the iSAM bot master server to update its programming logic or to obey commands and unleash a synchronized attack. Our analysis unveils the internal mechanics of iSAM and discusses the way all iSAM components contribute towards achieving its goals. Although iSAM has been specifically designed for iPhone it can be easily modified to attack any iOS-based device.", "title": "" }, { "docid": "756929d22f107a5ff0b3bf0b19414a06", "text": "Users of social networking sites such as Facebook frequently post self-portraits on their profiles. While research has begun to analyze the motivations for posting such pictures, less is known about how selfies are evaluated by recipients. Although producers of selfies typically aim to create a positive impression, selfies may also be regarded as narcissistic and therefore fail to achieve the intended goal. The aim of this study is to examine the potentially ambivalent reception of selfies compared to photos taken by others based on the Brunswik lens model (Brunswik, 1956). In a between-subjects online experiment (N = 297), Facebook profile mockups were shown which differed with regard to picture type (selfie vs. photo taken by others), gender of the profile owner (female vs. male), and number of individuals within a picture (single person vs. group). Results revealed that selfies were indeed evaluated more negatively than photos taken by others. Persons in selfies were rated as less trustworthy, less socially attractive, less open to new experiences, more narcissistic and more extroverted than the same persons in photos taken by others. In addition, gender differences were observed in the perception of pictures. Male profile owners were rated as more narcissistic and less trustworthy than female profile owners, but there was no significant interaction effect of type of picture and gender. Moreover, a mediation analysis of presumed motives for posting selfies revealed that negative evaluations of selfie posting individuals were mainly driven by the perceived motivation of impression management. Findings suggest that selfies are likely to be evaluated less positively than producers of selfies might suppose.", "title": "" }, { "docid": "4432b8022f49b8cffed8fb6800a98a48", "text": "Recommendation systems play an extremely important role in e-commerce; by recommending products that suit the taste of the consumers, e-commerce companies can generate large profits. 
The most commonly used recommender systems typically produce a list of recommendations through collaborative or content-based filtering; neither of those approaches takes into account the content of the written reviews, which contain rich information about the user’s taste. In this paper, we evaluate the performance of ten different recurrent neural network (RNN) structures on the task of generating recommendations using written reviews. The RNN structures we study include well-known implementations such as multi-stacked bi-directional Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) networks as well as a novel implementation of an attention-based RNN structure. The attention-based structures are not only among the best models in terms of prediction accuracy, they also assign an attention weight to each word in the review; by plotting the attention weight of each word we gain additional insight into the underlying mechanisms involved in the prediction process. We develop and test the recommendation systems using the data provided by the Yelp Data Challenge.", "title": "" }, { "docid": "27c125643ffc8f1fee7ed5ee22025c01", "text": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, IMAGENET-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called IMAGENET-P which enables researchers to benchmark a classifier’s robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations, not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.", "title": "" } ]
scidocsrr