Dataset schema (one record per row):
  query_id            string, fixed length of 32 characters
  query               string, 5 to 5.38k characters
  positive_passages   list of 1 to 23 passages
  negative_passages   list of 4 to 100 passages
  subset              string, one of 7 classes
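The example rows below show that each entry in positive_passages and negative_passages is an object with "docid", "text", and "title" fields. As a minimal sketch of how records with this schema might be read, assuming the rows have been exported as JSON Lines; the file name queries.jsonl is a placeholder, not something stated here:

```python
import json

def load_records(path="queries.jsonl"):
    """Yield one query record per line; field names follow the schema above."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)

if __name__ == "__main__":
    for record in load_records():
        positives = record["positive_passages"]
        negatives = record["negative_passages"]
        print(record["subset"], record["query_id"], record["query"][:60])
        print(f"  {len(positives)} positive / {len(negatives)} negative passages")
        for passage in positives:
            # Titles are often empty in these rows, so fall back to the text.
            print("   +", passage["docid"], passage["title"] or passage["text"][:80])
```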
883716c6c6d3d57c0265cf7c53f8965b
A High-Isolation, Wideband and Dual-Linear Polarization Patch Antenna
[ { "docid": "a5b147f5b3da39fed9ed11026f5974a2", "text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).", "title": "" } ]
[ { "docid": "2b3335d6fb1469c4848a201115a78e2c", "text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.", "title": "" }, { "docid": "bfd94756f73fc7f9eb81437f5d192ac3", "text": "Technological advances in upper-limb prosthetic design offer dramatically increased possibilities for powered movement. The DEKA Arm system allows users 10 powered degrees of movement. Learning to control these movements by utilizing a set of motions that, in most instances, differ from those used to obtain the desired action prior to amputation is a challenge for users. In the Department of Veterans Affairs \"Study to Optimize the DEKA Arm,\" we attempted to facilitate motor learning by using a virtual reality environment (VRE) program. This VRE program allows users to practice controlling an avatar using the controls designed to operate the DEKA Arm in the real world. In this article, we provide highlights from our experiences implementing VRE in training amputees to use the full DEKA Arm. This article discusses the use of VRE in amputee rehabilitation, describes the VRE system used with the DEKA Arm, describes VRE training, provides qualitative data from a case study of a subject, and provides recommendations for future research and implementation of VRE in amputee rehabilitation. Our experience has led us to believe that training with VRE is particularly valuable for upper-limb amputees who must master a large number of controls and for those amputees who need a structured learning environment because of cognitive deficits.", "title": "" }, { "docid": "3ec9f9abda7d8266d9bcbbb34d468fe6", "text": "This paper presents the Homeo-Heterostatic Value Gradients (HHVG) algorithm as a formal account on the constructive interplay between boredom and curiosity which gives rise to effective exploration and superior forward model learning. We offer an instrumental view of action selection, in which an action serves to disclose outcomes that have intrinsic meaningfulness to an agent itself. This motivated two central algorithmic ingredients: devaluation and devaluation progress, both underpin agent's cognition concerning intrinsically generated rewards. The two serve as an instantiation of homeostatic and heterostatic intrinsic motivation. A key insight from our algorithm is that the two seemingly opposite motivations can be reconciled-without which exploration and information-gathering cannot be effectively carried out. 
We supported this claim with empirical evidence, showing that boredom-enabled agents consistently outperformed other curious or explorative agent variants in model building benchmarks based on self-assisted experience accumulation.", "title": "" }, { "docid": "33b281b2f3509a6fdc3fd5f17f219820", "text": "Personal robots will contribute mobile manipulation capabilities to our future smart homes. In this paper, we propose a low-cost object localization system that uses static devices with Bluetooth capabilities, which are distributed in an environment, to detect and localize active Bluetooth beacons and mobile devices. This system can be used by a robot to coarsely localize objects in retrieval tasks. We attach small Bluetooth low energy tags to objects and require at least four static Bluetooth receivers. While commodity Bluetooth devices could be used, we have built low-cost receivers from Raspberry Pi computers. The location of a tag is estimated by lateration of its received signal strengths. In experiments, we evaluate accuracy and timing of our approach, and report on the successful demonstration at the RoboCup German Open 2014 competition in Magdeburg.", "title": "" }, { "docid": "0923e899e5d7091a6da240db21eefad2", "text": "A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction. The method uses changes in specimen position at previous tilt angles to predict the position at the current tilt angle. Actual measurement of the position or focus is skipped if the statistical error of the prediction is low enough. This method allows a tilt series to be acquired rapidly when conditions are good but falls back toward the traditional approach of taking focusing and tracking images when necessary. The method has been implemented in a program, SerialEM, that provides an efficient environment for data acquisition. This program includes control of an energy filter as well as a low-dose imaging mode, in which tracking and focusing occur away from the area of interest. The program can automatically acquire a montage of overlapping frames, allowing tomography of areas larger than the field of the CCD camera. It also includes tools for navigating between specimen positions and finding regions of interest.", "title": "" }, { "docid": "05b449ac5088bdeb543d5a8588f82b79", "text": "A fringing field capacitive sensor is described for measuring the moisture content (MC) and temperature of agricultural commodities. Sensor performance was characterized by mounting the device on handheld probes and in acrylic canisters to determine the dielectric constant and MC of wheat and corn. The handheld probes demonstrated a promising capability to measure the MC of grain in hoppers, truck beds, and cargo holds. It is proposed that the sensors be supported on cables in grain silos and storage bins to acquire in situ data for grain storage management and control of aeration systems. The sensor is watertight and constructed with corrosion resistant materials which allow MC measurements to be made of industrial materials, chemicals, and fuels.", "title": "" }, { "docid": "0be5ab2533511ce002d87ff6a12f7b08", "text": "This paper deals with the solar photovoltaic (SPV) array fed water-pumping system using a Luo converter as an intermediate DC-DC converter and a permanent magnet brushless DC (BLDC) motor to drive a centrifugal water pump. 
Among the different types of DC-DC converters, an elementary Luo converter is selected in order to extract the maximum power available from the SPV array and for safe starting of BLDC motor. The elementary Luo converter with reduced components and single semiconductor switch has inherent features of reducing the ripples in its output current and possessing a boundless region for maximum power point tracking (MPPT). The electronically commutated BLDC motor is used with a voltage source inverter (VSI) operated at fundamental frequency switching thus avoiding the high frequency switching losses resulting in a high efficiency of the system. The SPV array is designed such that the power at rated DC voltage is supplied to the BLDC motor-pump under standard test condition and maximum switch utilization of Luo converter is achieved which results in efficiency improvement of the converter. Performances at various operating conditions such as starting, dynamic and steady state behavior are analyzed and suitability of the proposed system is demonstrated using MATLAB/Simulink based simulation results.", "title": "" }, { "docid": "40aa8b356983686472b3d2871add4491", "text": "Illegal logging is in these days widespread problem. In this paper we propose the system based on principles of WSN for monitoring the forest. Acoustic signal processing and evaluation system described in this paper is dealing with the detection of chainsaw sound with autocorrelation method. This work is describing first steps in building the integrated system.", "title": "" }, { "docid": "3ed5ec863971e04523a7ede434eaa80d", "text": "This article reports on the design, implementation, and usage of the CourseMarker (formerly known as CourseMaster) courseware Computer Based Assessment (CBA) system at the University of Nottingham. Students use CourseMarker to solve (programming) exercises and to submit their solutions. CourseMarker returns immediate results and feedback to the students. Educators author a variety of exercises that benefit the students while offering practical benefits. To date, both educators and students have been hampered by CBA software that has been constructed to assess text-based or multiple-choice answers only. Although there exist a few CBA systems with some capability to automatically assess programming coursework, none assess Java programs and none are as flexible, architecture-neutral, robust, or secure as the CourseMarker CBA system.", "title": "" }, { "docid": "0c01132904f2c580884af1391069addd", "text": "BACKGROUND\nThe inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches.\n\n\nMETHODS\nA systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis.\n\n\nRESULTS\nThe processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. 
Differences between methods lie in the way they dealt with study quality and heterogeneity.\n\n\nCONCLUSION\nOn the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality.", "title": "" }, { "docid": "0f49df994b3bc963d42c960a46137e0d", "text": "Finding the best makeup for a given human face is an art in its own right. Experienced makeup artists train for years to be skilled enough to propose a best-fit makeup for an individual. In this work we propose a system that automates this task. We acquired the appearance of 56 human faces, both without and with professional makeup. To this end, we use a controlled-light setup, which allows to capture detailed facial appearance information, such as diffuse reflectance, normals, subsurface-scattering, specularity, or glossiness. A 3D morphable face model is used to obtain 3D positional information and to register all faces into a common parameterization. We then define makeup to be the change of facial appearance and use the acquired database to find a mapping from the space of human facial appearance to makeup. Our main application is to use this mapping to suggest the best-fit makeup for novel faces that are not in the database. Further applications are makeup transfer, automatic rating of makeup, makeup-training, or makeup-exaggeration. As our makeup representation captures a change in reflectance and scattering, it allows us to synthesize faces with makeup in novel 3D views and novel lighting with high realism. The effectiveness of our approach is further validated in a user-study.", "title": "" }, { "docid": "ef99799bf977ba69a63c9f030fc65c7f", "text": "In this paper, we propose a novel transductive learning framework named manifold-ranking based image retrieval (MRBIR). Given a query image, MRBIR first makes use of a manifold ranking algorithm to explore the relationship among all the data points in the feature space, and then measures relevance between the query and all the images in the database accordingly, which is different from traditional similarity metrics based on pair-wise distance. In relevance feedback, if only positive examples are available, they are added to the query set to improve the retrieval result; if examples of both labels can be obtained, MRBIR discriminately spreads the ranking scores of positive and negative examples, considering the asymmetry between these two types of images. Furthermore, three active learning methods are incorporated into MRBIR, which select images in each round of relevance feedback according to different principles, aiming to maximally improve the ranking result. Experimental results on a general-purpose image database show that MRBIR attains a significant improvement over existing systems from all aspects.", "title": "" }, { "docid": "d80fbd6e24d93991c8a64a8ecfb37d92", "text": "THE DEVELOPMENT OF PHYSICAL FITNESS IN YOUNG ATHLETES IS A RAPIDLY EXPANDING FIELD OF INTEREST FOR STRENGTH AND CONDITIONING COACHES, PHYSICAL EDUCATORS, SPORTS COACHES, AND PARENTS. 
PREVIOUS LONG-TERM ATHLETE DEVELOPMENT MODELS HAVE CLASSIFIED YOUTH-BASED TRAINING METHODOLOGIES IN RELATION TO CHRONOLOGIC AGE GROUPS, AN APPROACH THAT HAS DISTINCT LIMITATIONS. MORE RECENT MODELS HAVE ATTEMPTED TO BRIDGE MATURATION AND PERIODS OF TRAINABILITY FOR A LIMITED NUMBER OF FITNESS QUALITIES, ALTHOUGH SUCH MODELS APPEAR TO BE BASED ON SUBJECTIVE ANALYSIS. THE YOUTH PHYSICAL DEVELOPMENT MODEL PROVIDES A LOGICAL AND EVIDENCE-BASED APPROACH TO THE SYSTEMATIC DEVELOPMENT OF PHYSICAL PERFORMANCE IN YOUNG ATHLETES.", "title": "" }, { "docid": "d0d5081b93f48972c92b3c5a7e69350e", "text": "Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike. This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts, and provide commentary to aid readers in reaching the correct interpretation. We introduce the task of automated lyric annotation (ALA). Like text simplification, a goal of ALA is to rephrase the original text in a more easily understandable manner. However, in ALA the system must often include additional information to clarify niche terminology and abstract concepts. To stimulate research on this task, we release a large collection of crowdsourced annotations for song lyrics. We analyze the performance of translation and retrieval models on this task, measuring performance with both automated and human evaluation. We find that each model captures a unique type of information important to the task.", "title": "" }, { "docid": "25b77292def9ba880fecb58a38897400", "text": "In this paper, we present a successful operation of Gallium Nitride(GaN)-based three-phase inverter with high efficiency of 99.3% for driving motor at 900W under the carrier frequency of 6kHz. This efficiency well exceeds the value by IGBT (Insulated Gate Bipolar Transistor). This demonstrates that GaN has a great potential for power switching application competing with SiC. Fully reduced on-state resistance in a new normally-off GaN transistor called Gate Injection Transistor (GIT) greatly helps to increase the efficiency. In addition, use of the bidirectional operation of the lateral and compact GITs with synchronous gate driving, the inverter is operated free from fly-wheel diodes which have been connected in parallel with IGBTs in a conventional inverter system.", "title": "" }, { "docid": "9c3474b501051fc02232feb3b1da1fc8", "text": "This paper presents a real-time simulator of a permanent magnet synchronous motor (PMSM) drive based on a finite-element analysis (FEA) method and implemented on an FPGA card for HIL testing of motor drive controllers. The proposed PMSM model is a phase domain model with inductances and flux profiles computed from the JMAG-RT finite element analysis software. A 3-phase IGBT inverter drives the PMSM machine. Both models are implemented on an FPGA chip, with no VHDL coding, using the RT-LAB real-time simulation platform from Opal-RT and a Simulink blockset called xilinx system generator (XSG). The PMSM drive, along with an open-loop test source for the pulse width modulation, is coded for an FPGA card. The PMSM drive is completed with various encoder models (quadrature, Hall effects and resolver). The overall model compilation and simulation is entirely automated by RT-LAB. The drive is designed to run in a closed loop with a HIL-interfaced controller connected to the I/O of the real-time simulator. 
The PMSM drive model runs with an equivalent 10 nanosecond time step (100 MHz FPGA card) and has a latency of 300 ns (PMSM machine and inverter) with the exception of the FEA-computed inductance matrix routines which are updated in parallel on a CPU of the real-time simulator at a 40 us rate. The motor drive is directly connected to digital inputs and analog outputs with 1 microsecond settling time on the FPGA card and has a resulting total hardware-in-the-loop latency of 1.3 microseconds.", "title": "" }, { "docid": "ee7473e3b283790c400f7616392e4c33", "text": "Evolutionary computation is emerging as a new engineering computational paradigm, which may significantly change the present structural design practice. For this reason, an extensive study of evolutionary computation in the context of structural design has been conducted in the Information Technology and Engineering School at George Mason University and its results are reported here. First, a general introduction to evolutionary computation is presented and recent developments in this field are briefly described. Next, the field of evolutionary design is introduced and its relevance to structural design is explained. Further, the issue of creativity/novelty is discussed and possible ways of achieving it during a structural design process are suggested. Current research progress in building engineering systems’ representations, one of the key issues in evolutionary design, is subsequently discussed. Next, recent developments in constraint-handling methods in evolutionary optimization are reported. Further, the rapidly growing field of evolutionary multiobjective optimization is presented and briefly described. An emerging subfield of coevolutionary design is subsequently introduced and its current advancements reported. Next, a comprehensive review of the applications of evolutionary computation in structural design is provided and chronologically classified. Finally, a summary of the current research status and a discussion on the most promising paths of future research are also presented.", "title": "" }, { "docid": "b19aab238e0eafef52974a87300750a3", "text": "This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in the operation. A generic model based on fatigue analysis is provided and its parameters adaptation is discussed in detail. The scheme provides the probability of abnormal condition and the presence of a fault is confirmed for a given confidence level. 
The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.", "title": "" }, { "docid": "84c2b96916ce68245cf81bdf8f4b435c", "text": "INTRODUCTION\nComplete and accurate coding of injury causes is essential to the understanding of injury etiology and to the development and evaluation of injury-prevention strategies. While civilian hospitals use ICD-9-CM external cause-of-injury codes, military hospitals use codes derived from the NATO Standardization Agreement (STANAG) 2050.\n\n\nDISCUSSION\nThe STANAG uses two separate variables to code injury cause. The Trauma code uses a single digit with 10 possible values to identify the general class of injury as battle injury, intentionally inflicted nonbattle injury, or unintentional injury. The Injury code is used to identify cause or activity at the time of the injury. For a subset of the Injury codes, the last digit is modified to indicate place of occurrence. This simple system contains fewer than 300 basic codes, including many that are specific to battle- and sports-related injuries not coded well by either the ICD-9-CM or the draft ICD-10-CM. However, while falls, poisonings, and injuries due to machinery and tools are common causes of injury hospitalizations in the military, few STANAG codes correspond to these events. Intentional injuries in general and sexual assaults in particular are also not well represented in the STANAG. Because the STANAG does not map directly to the ICD-9-CM system, quantitative comparisons between military and civilian data are difficult.\n\n\nCONCLUSIONS\nThe ICD-10-CM, which will be implemented in the United States sometime after 2001, expands considerably on its predecessor, ICD-9-CM, and provides more specificity and detail than the STANAG. With slight modification, it might become a suitable replacement for the STANAG.", "title": "" } ]
scidocsrr
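For illustration, a record like the one above can be flattened into labeled query-passage pairs, the usual shape for scoring with a reranker. The 1/0 label convention below is an assumption about how the positives and negatives are meant to be used, not something the data itself specifies.

```python
# Sketch: turn one parsed record (a dict with the fields listed in the schema)
# into (query, passage_text, label) tuples. Labels 1/0 are an assumed convention.
def record_to_pairs(record):
    pairs = []
    for passage in record["positive_passages"]:
        pairs.append((record["query"], passage["text"], 1))
    for passage in record["negative_passages"]:
        pairs.append((record["query"], passage["text"], 0))
    return pairs
```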
4b9765c6922ae392ada4db9aabc5619f
A Survey of Visualization Systems for Malware Analysis
[ { "docid": "c76d8ac34709f84215e365e2412b9f4e", "text": "Anti-virus vendors are confronted with a multitude of potentially malicious samples today. Receiving thousands of new samples every day is not uncommon. The signatures that detect confirmed malicious threats are mainly still created manually, so it is important to discriminate between samples that pose a new unknown threat and those that are mere variants of known malware.\n This survey article provides an overview of techniques based on dynamic analysis that are used to analyze potentially malicious samples. It also covers analysis programs that leverage these It also covers analysis programs that employ these techniques to assist human analysts in assessing, in a timely and appropriate manner, whether a given sample deserves closer manual inspection due to its unknown malicious behavior.", "title": "" }, { "docid": "a717222db438adc4be0fd82f916bacdc", "text": "This paper presents MalwareVis, a utility that provides security researchers a method to browse, filter, view and compare malware network traces as entities.\n Specifically, we propose a cell-like visualization model to view the network traces of a malware sample's execution. This model is a intuitive representation of the heterogeneous attributes (protocol, host ip, transmission size, packet number, duration) of a list of network streams associated with a malware instance. We encode these features into colors and basic geometric properties of common shapes. The list of streams is organized circularly in a clock-wise fashion to form an entity. Our design takes into account of the sparse and skew nature of these attributes' distributions and proposes mapping and layout strategies to allow a clear global view of a malware sample's behaviors.\n We demonstrate MalwareVis on a real-world corpus of malware samples and display their individual activity patterns. We show that it is a simple to use utility that provides intriguing visual representations that facilitate user interaction to perform security analysis.", "title": "" }, { "docid": "ec48c3ba506409be7219320fe8e263ca", "text": "Cyber scanning refers to the task of probing enterprise networks or Internet wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primarily methodology that is adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services from one side and the proliferation of hackers' refined, advanced, and sophisticated techniques from the other side, the task of containing cyber scanning poses serious issues and challenges. Furthermore recently, there has been a flourishing of a cyber phenomenon dubbed as cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities and high coordination - rendering almost all current detection techniques unfeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. 
To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.", "title": "" } ]
[ { "docid": "c3dba6bf97368e6fb707ea622ca5fbfc", "text": "This paper studies the problem of obtaining depth information from focusing and defocusing, which have long been noticed as important sources of depth information for human and machine vision. In depth from focusing, we try to eliminate the local maxima problem which is the main source of inaccuracy in focusing; in depth from defocusing, a new computational model is proposed to achieve higher accuracy. The major contributions of this paper are: (1) In depth from focusing, instead of the popular Fibonacci search which is often trapped in local maxima, we propose the combination of Fibonacci search and curve tting, which leads to an unprecedentedly accurate result; (2) New model of the blurring e ect which takes the geometric blurring as well as the imaging blurring into consideration, and the calibration of the blurring model; (3) In spectrogram-based depth from defocusing, an iterative estimation method is proposed to decrease or eliminate the window e ect. This paper reports focus ranging with less than 1/1000 error and the defocus ranging with about 1/200 error. With this precision, depth from focus ranging is becoming competitive with stereo vision for reconstructing 3D depth information.", "title": "" }, { "docid": "e303b7edea2e32bdc78712efb129588b", "text": "The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species. OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of \"recent\" paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome.", "title": "" }, { "docid": "41c35407c55878910f5dfc2dfe083955", "text": "This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the diagram cycles codifies invariants formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. 
We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.", "title": "" }, { "docid": "e1f8ac0ee1a5ec2175f3420e5874722d", "text": "In this paper we present an approach for the task of author profiling. We propose a coherent grouping of features combined with appropriate preprocessing steps for each group. The groups we used were stylometric and structural, featuring among others, trigrams and counts of twitter specific characteristics. We address gender and age prediction as a classification task and personality prediction as a regression problem using Support Vector Machines and Support Vector Machine Regression respectively on documents created by joining each user’s tweets.", "title": "" }, { "docid": "687caec27d44691a6aac75577b32eb81", "text": "We present unsupervised approaches to the problem of modeling dialog acts in asynchronous conversations; i.e., conversations where participants collaborate with each other at different times. In particular, we investigate a graph-theoretic deterministic framework and two probabilistic conversation models (i.e., HMM and HMM+Mix) for modeling dialog acts in emails and forums. We train and test our conversation models on (a) temporal order and (b) graph-structural order of the datasets. Empirical evaluation suggests (i) the graph-theoretic framework that relies on lexical and structural similarity metrics is not the right model for this task, (ii) conversation models perform better on the graphstructural order than the temporal order of the datasets and (iii) HMM+Mix is a better conversation model than the simple HMM model.", "title": "" }, { "docid": "8e19813c7257c8d8d73867b9a4f9fa8d", "text": "Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. 
In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities.", "title": "" }, { "docid": "3481067aa5e7e10095f4cdb782e061b4", "text": "We empirically explored the roles and scope of knowledge management systems in organizations. Building on a knowledgebased view of the firm, we hypothesized and empirically tested our belief that more integration is needed between technologies intended to support knowledge and those supporting business operations. Findings from a Delphi study and in-depth interviews illustrated this and led us to suggest a revised approach to developing organizational knowledge management systems. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5d15ba47aaa29f388328824fa592addc", "text": "Breast cancer continues to be a significant public health problem in the world. The diagnosing mammography method is the most effective technology for early detection of the breast cancer. However, in some cases, it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications on the mammograms. This paper describes a new method for mammographic image enhancement and denoising based on wavelet transform and homomorphic filtering. The mammograms are acquired from the Faculty of Medicine of the University of Akdeniz and the University of Istanbul in Turkey. Firstly wavelet transform of the mammograms is obtained and the approximation coefficients are filtered by homomorphic filter. Then the detail coefficients of the wavelet associated with noise and edges are modeled by Gaussian and Laplacian variables, respectively. The considered coefficients are compressed and enhanced using these variables with a shrinkage function. Finally using a proposed adaptive thresholding the fine details of the mammograms are retained and the noise is suppressed. The preliminary results of our work indicate that this method provides much more visibility for the suspicious regions.", "title": "" }, { "docid": "ab92c8ded0001d4103be4e7a8ee3a1f7", "text": "Metabolic syndrome defines a cluster of interrelated risk factors for cardiovascular disease and diabetes mellitus. These factors include metabolic abnormalities, such as hyperglycemia, elevated triglyceride levels, low high-density lipoprotein cholesterol levels, high blood pressure, and obesity, mainly central adiposity. In this context, extracellular vesicles (EVs) may represent novel effectors that might help to elucidate disease-specific pathways in metabolic disease. Indeed, EVs (a terminology that encompasses microparticles, exosomes, and apoptotic bodies) are emerging as a novel mean of cell-to-cell communication in physiology and pathology because they represent a new way to convey fundamental information between cells. These microstructures contain proteins, lipids, and genetic information able to modify the phenotype and function of the target cells. 
EVs carry specific markers of the cell of origin that make possible monitoring their fluctuations in the circulation as potential biomarkers inasmuch their circulating levels are increased in metabolic syndrome patients. Because of the mixed components of EVs, the content or the number of EVs derived from distinct cells of origin, the mode of cell stimulation, and the ensuing mechanisms for their production, it is difficult to attribute specific functions as drivers or biomarkers of diseases. This review reports recent data of EVs from different origins, including endothelial, smooth muscle cells, macrophages, hepatocytes, adipocytes, skeletal muscle, and finally, those from microbiota as bioeffectors of message, leading to metabolic syndrome. Depicting the complexity of the mechanisms involved in their functions reinforce the hypothesis that EVs are valid biomarkers, and they represent targets that can be harnessed for innovative therapeutic approaches.", "title": "" }, { "docid": "df5df8eb9b7bdd4dbbcaa4469486fec6", "text": "The human population generates vast quantities of waste material. Macro (>1 mm) and microscopic (<1 mm) fragments of plastic debris represent a substantial contamination problem. Here, we test hypotheses about the influence of wind and depositional regime on spatial patterns of micro- and macro-plastic debris within the Tamar Estuary, UK. Debris was identified to the type of polymer using Fourier-transform infrared spectroscopy (FT-IR) and categorized according to density. In terms of abundance, microplastic accounted for 65% of debris recorded and mainly comprised polyvinylchloride, polyester, and polyamide. Generally, there were greater quantities of plastic at downwind sites. For macroplastic, there were clear patterns of distribution for less dense items, while for microplastic debris, clear patterns were for denser material. Small particles of sediment and plastic are both likely to settle slowly from the water-column and are likely to be transported by the flow of water and be deposited in areas where the movements of water are slower. There was, however, no relationship between the abundance of microplastic and the proportion of clay in sediments from the strandline. These results illustrate how FT-IR spectroscopy can be used to identify the different types of plastic and in this case was used to indicate spatial patterns, demonstrating habitats that are downwind acting as potential sinks for the accumulation of debris.", "title": "" }, { "docid": "896bacf4147c3a597339bb021e0502f9", "text": "We study sliding window multi-join processing in continuous queries over data streams. Several algorithms are reported for performing continuous, incremental joins, under the assumption that all the sliding windows fit in main memory. The algorithms include multiway incremental nested loop joins (NLJs) and multi-way incremental hash joins. We also propose join ordering heuristics to minimize the processing cost per unit time. We test a possible implementation of these algorithms and show that, as expected, hash joins are faster than NLJs for performing equi-joins, and that the overall processing cost is influenced by the strategies used to remove expired tuples from the sliding windows.", "title": "" }, { "docid": "73015dbfed8e1ed03965779a93e14190", "text": "The DataMiningGrid system has been designed to meet the requirements of modern and distributed data mining scenarios. 
Based on the Globus Toolkit and other open technology and standards, the DataMiningGrid system provides tools and services facilitating the grid-enabling of data mining applications without any intervention on the application side. Critical features of the system include flexibility, extensibility, scalability, efficiency, conceptual simplicity and ease of use. The system has been developed and evaluated on the basis of a diverse set of use cases from different sectors in science and technology. The DataMiningGrid software is freely available under Apache License 2.0. c © 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "195a57e6aaf0e8496e808366ff4d1bca", "text": "BACKGROUND AND PURPOSE\nThe Mini-Mental State Examination (MMSE) is insensitive to mild cognitive impairment and executive function. The more recently developed Montreal Cognitive Assessment (MoCA), an alternative, brief 30-point global cognitive screen, might pick up more cognitive abnormalities in patients with cerebrovascular disease.\n\n\nMETHODS\nIn a population-based study (Oxford Vascular Study) of transient ischemic attack and stroke, the MMSE and MoCA were administered to consecutive patients at 6-month or 5-year follow-up. Accepted cutoffs of MMSE <27 and MoCA <26 were taken to indicate cognitive impairment.\n\n\nRESULTS\nOf 493 patients, 413 (84%) were testable. Untestable patients were older (75.5 versus 69.9 years, P<0.001) and often had dysphasia (24%) or dementia (15%). Although MMSE and MoCA scores were highly correlated (r(2)=0.80, P<0.001), MMSE scores were skewed toward higher values, whereas MoCA scores were normally distributed: median and interquartile range 28 (26 to 29) and 23 (20 to 26), respectively. Two hundred ninety-one of 413 (70%) patients had MoCA <26 of whom 162 had MMSE > or =27, whereas only 5 patients had MoCA > or =26 and MMSE <27 (P<0.0001). In patients with MMSE > or =27, MoCA <26 was associated with higher Rankin scores (P=0.0003) and deficits in delayed recall, abstraction, visuospatial/executive function, and sustained attention.\n\n\nCONCLUSIONS\nThe MoCA picked up substantially more cognitive abnormalities after transient ischemic attack and stroke than the MMSE, demonstrating deficits in executive function, attention, and delayed recall.", "title": "" }, { "docid": "b0ebcd7a340725713e90d05e9a50ae24", "text": "Analogies are ubiquitous in science, both in theory and experiments. Based on an ethnographic study of a research lab in neural engineering, we focus on a case of conceptual innovation where the cross-breeding of two types of analogies led to a breakthrough. In vivo phenomena were recreated in two analogical forms: one, as an in vitro physical model, and the other, as a computational model of the first physical model. The computational model also embodied constraints drawn from the neuroscience and engineering literature. Cross connections and linkages were then made between these two analogical models, over time, to solve problems. We describe how the development of the intermediary, hybrid computational model led to a conceptual innovation, and subsequent engineering innovations. 
Using this case study, we highlight some of the peculiar features of such hybrid analogies that are now used widely in the sciences and engineering sciences, and the significant questions they raise for current theories of analogy.", "title": "" }, { "docid": "57256bce5741b23fa4827fad2ad9e321", "text": "This study assessed the depth of online learning, with a focus on the nature of online interaction in four distance education course designs. The Study Process Questionnaire was used to measure the shift in students’ approach to learning from the beginning to the end of the courses. Design had a significant impact on the nature of the interaction and whether students approached learning in a deep and meaningful manner. Structure and leadership were found to be crucial for online learners to take a deep and meaningful approach to learning.", "title": "" }, { "docid": "5b5345a894d726186ba7f6baf76cb65e", "text": "In many applications of classifier learning, training data suffers from label noise. Deep networks are learned using huge training data where the problem of noisy labels is particularly relevant. The current techniques proposed for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate approach would be to look for loss functions that are inherently noise-tolerant. For binary classification there exist theoretical results on loss functions that are robust to label noise. In this paper, we provide some sufficient conditions on a loss function so that risk minimization under that loss function would be inherently tolerant to label noise for multiclass classification problems. These results generalize the existing results on noise-tolerant loss functions for binary classification. We study some of the widely used loss functions in deep networks and show that the loss function based on mean absolute value of error is inherently robust to label noise. Thus standard back propagation is enough to learn the true classifier even under label noise. Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks.", "title": "" }, { "docid": "d015cb7a9afaac66909243de840a446b", "text": "In the classical job shop scheduling problem (JSSP), n jobs are processed to completion on m unrelated machines. Each job requires processing on each machine exactly once. For each job, technology constraints specify a complete, distinct routing which is fixed and known in advance. Processing times are sequence-independent, fixed, and known in advance. Each machine is continuously available from time zero, and operations are processed without preemption. The objective is to minimize the maximum completion time (makespan). The flexible-routing job shop (FRJS) scheduling problem, or job shop with multipurpose machines, extends JSSP by assuming that a machine may be capable of performing more than one type of operation. (For a given operation, there must exist at least one machine capable of performing it.) FRJS approximates a flexible manufacturing environment with numerically controlled work centers equipped with interchangeable tool magazines. This report extends a dynamic, adaptive tabu search (TS) strategy previously described for job shops with single and multiple instances of single-purpose machines, and applies it to FRJS. 
We present “proof-of-concept” results for three problems constructed from difficult JSSP instances.", "title": "" }, { "docid": "72bc688726c5fc26b2dd7e63d3b28ac0", "text": "In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state of-the-art performance on both detection and pose estimation on commonly used benchmarks.", "title": "" }, { "docid": "113cf957b47a8b8e3bbd031aa9a28ff2", "text": "We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use nonpropositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally we show and interpret the recognition rates for both proposals using different classification algorithms.", "title": "" }, { "docid": "2faf7fedadfd8b24c4740f7100cf5fec", "text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "title": "" } ]
scidocsrr
b8ccff5d43fff0bfe283360f05f13c9c
Digital watermarking: algorithms and applications
[ { "docid": "d27724584b2bfe92cf9f5a38f0f60599", "text": "The growth of new imaging technologies has created a need for techniques that can be used for copyright protection of digital images. One approach for copyright protection is to introduce an invisible signal known as a digital watermark in the image. In this paper, we describe digital image watermarking techniques known as perceptually watermarks that are designed to exploit aspects of the human visual system in order to produce a transparent, yet robust watermark.", "title": "" } ]
[ { "docid": "e9621784df5009b241c563a54583bab9", "text": "CONTEXT\nPsychopathic antisocial individuals have previously been characterized by abnormal interhemispheric processing and callosal functioning, but there have been no studies on the structural characteristics of the corpus callosum in this group.\n\n\nOBJECTIVES\nTo assess whether (1) psychopathic individuals with antisocial personality disorder show structural and functional impairments in the corpus callosum, (2) group differences are mirrored by correlations between dimensional measures of callosal structure and psychopathy, (3) callosal abnormalities are associated with affective deficits, and (4) callosal abnormalities are independent of psychosocial deficits.\n\n\nDESIGN\nCase-control study.\n\n\nSETTING\nCommunity sample.\n\n\nPARTICIPANTS\nFifteen men with antisocial personality disorder and high psychopathy scores and 25 matched controls, all from a larger sample of 83 community volunteers.\n\n\nMAIN OUTCOME MEASURES\nStructural magnetic resonance imaging measures of the corpus callosum (volume estimate of callosal white matter, thickness, length, and genu and splenium area), functional callosal measures (2 divided visual field tasks), electrodermal and cardiovascular activity during a social stressor, personality measures of affective and interpersonal deficits, and verbal and spatial ability.\n\n\nRESULTS\nPsychopathic antisocial individuals compared with controls showed a 22.6% increase in estimated callosal white matter volume (P<.001), a 6.9% increase in callosal length (P =.002), a 15.3% reduction in callosal thickness (P =.04), and increased functional interhemispheric connectivity (P =.02). Correlational analyses in the larger unselected sample confirmed the association between antisocial personality and callosal structural abnormalities. Larger callosal volumes were associated with affective and interpersonal deficits, low autonomic stress reactivity, and low spatial ability. Callosal abnormalities were independent of psychosocial deficits.\n\n\nCONCLUSIONS\nCorpus callosum abnormalities in psychopathic antisocial individuals may reflect atypical neurodevelopmental processes involving an arrest of early axonal pruning or increased white matter myelination. These findings may help explain affective deficits and previous findings of abnormal interhemispheric transfer in psychopathic individuals.", "title": "" }, { "docid": "b429b37623a690cd4b224a334985f7dd", "text": "Data centers play a key role in the expansion of cloud computing. However, the efficiency of data center networks is limited by oversubscription. The typical unbalanced traffic distributions of a DCN further aggravate the problem. Wireless networking, as a complementary technology to Ethernet, has the flexibility and capability to provide feasible approaches to handle the problem. In this article, we analyze the challenges of DCNs and articulate the motivations of employing wireless in DCNs. We also propose a hybrid Ethernet/wireless DCN architecture and a mechanism to dynamically schedule wireless transmissions based on traffic demands. Our simulation study demonstrates the effectiveness of the proposed wireless DCN.", "title": "" }, { "docid": "680306f2f5a4e54e1b024f5cd47f60f4", "text": "Age is one of the important biometric traits for reinforcing the identity authentication. 
The challenge of facial age estimation mainly comes from two difficulties: (1) the wide diversity of visual appearance existing even within the same age group and (2) the limited number of labeled face images in real cases. Motivated by previous research on human cognition, human beings can confidently rank the relative ages of facial images, we postulate that the age rank plays a more important role in the age estimation than visual appearance attributes. In this paper, we assume that the age ranks can be characterized by a set of ranking features lying on a low-dimensional space. We propose a simple and flexible subspace learning method by solving a sequence of constrained optimization problems. With our formulation, both the aging manifold, which relies on exact age labels, and the implicit age ranks are jointly embedded in the proposed subspace. In addition to supervised age estimation, our method also extends to semi-supervised age estimation via automatically approximating the age ranks of unlabeled data. Therefore, we can successfully include more available data to improve the feature discriminability. In the experiments, we adopt the support vector regression on the proposed ranking features to learn our age estimators. The results on the age estimation demonstrate that our method outperforms classic subspace learning approaches, and the semi-supervised learning successfully incorporates the age ranks from unlabeled data under different scales and sources of data set.", "title": "" }, { "docid": "019d465534b9229c2a97f1727c400832", "text": "OBJECTIVE\nResearch on learning from feedback has produced ambiguous guidelines for feedback design--some have advocated minimal feedback, whereas others have recommended more extensive feedback that highly supported performance. The objective of the current study was to investigate how individual differences in cognitive resources may predict feedback requirements and resolve previous conflicted findings.\n\n\nMETHOD\nCognitive resources were controlled for by comparing samples from populations with known differences, older and younger adults.To control for task demands, a simple rule-based learning task was created in which participants learned to identify fake Windows pop-ups. Pop-ups were divided into two categories--those that required fluid ability to identify and those that could be identified using crystallized intelligence.\n\n\nRESULTS\nIn general, results showed participants given higher feedback learned more. However, when analyzed by type of task demand, younger adults performed comparably with both levels of feedback for both cues whereas older adults benefited from increased feedbackfor fluid ability cues but from decreased feedback for crystallized ability cues.\n\n\nCONCLUSION\nOne explanation for the current findings is feedback requirements are connected to the cognitive abilities of the learner-those with higher abilities for the type of demands imposed by the task are likely to benefit from reduced feedback.\n\n\nAPPLICATION\nWe suggest the following considerations for feedback design: Incorporate learner characteristics and task demands when designing learning support via feedback.", "title": "" }, { "docid": "a2d65e627505d2eb44544ffe910b398c", "text": "With the increasing utilization and popularity of the cloud infrastructure, more and more data are moved to the cloud storage systems. 
This makes the availability of cloud storage services critically important, particularly given the fact that outages of cloud storage services have indeed happened from time to time. Thus, solely depending on a single cloud storage provider for storage services can risk violating the service-level agreement (SLA) due to the weakening of service availability. This has led to the notion of Cloud-of-Clouds, where data redundancy is introduced to distribute data among multiple independent cloud storage providers, to address the problem. The key in the effectiveness of the Cloud-of-Clouds approaches lies in how the data redundancy is incorporated and distributed among the clouds. However, the existing Cloud-of-Clouds approaches utilize either replication or erasure codes to redundantly distribute data across multiple clouds, thus incurring either high space or high performance overheads. In this paper, we propose a hybrid redundant data distribution approach, called HyRD, to improve the cloud storage availability in Cloud-of-Clouds by exploiting the workload characteristics and the diversity of cloud providers. In HyRD, large files are distributed in multiple cost-efficient cloud storage providers with erasure-coded data redundancy while small files and file system metadata are replicated on multiple high-performance cloud storage providers. The experiments conducted on our lightweight prototype implementation of HyRD show that HyRD improves the cost efficiency by 33.4% and 20.4%, and reduces the access latency by 58.7% and 34.8% than the DuraCloud and RACS schemes, respectively.", "title": "" }, { "docid": "554a3f5f19503a333d3788cf46ffcef2", "text": "Hospital overcrowding has been a problem in Thai public healthcare system. The main cause of this problem is the limited available resources, including a limited number of doctors, nurses, and limited capacity and availability of medical devices. There have been attempts to alleviate the problem through various strategies. In this paper, a low-cost system was developed and tested in a public hospital with limited budget. The system utilized QR code and smartphone application to capture as-is hospital processes and the time spent on individual activities. With the available activity data, two algorithms were developed to identify two quantities that are valuable to conduct process improvement: the most congested time and bottleneck activities. The system was implemented in a public hospital and results were presented.", "title": "" }, { "docid": "e7c8abf3387ba74ca0a6a2da81a26bc4", "text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. 
The findings stress the importance of studying the aesthetic aspect of human–computer interaction (HCI) design and its relationships to other design dimensions. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "76d029c669e84e420c8513bd837fb59b", "text": "Since its original publication, the Semi-Global Matching (SGM) technique has been re-implemented by many researchers and companies. The method offers a very good trade-off between runtime and accuracy, especially at object borders and fine structures. It is also robust against radiometric differences and not sensitive to the choice of parameters. Therefore, it is well suited for solving practical problems. The applications reach from remote sensing, like deriving digital surface models from aerial and satellite images, to robotics and driver assistance systems. This paper motivates and explains the method, shows current developments as well as examples from various applications.", "title": "" }, { "docid": "5b36ec4a7282397402d582de7254d0c1", "text": "Recurrent neural network language models (RNNLMs) have become increasingly popular in many applications such as automatic speech recognition (ASR). Significant performance improvements in both perplexity and word error rate over standard n-gram LMs have been widely reported on ASR tasks. In contrast, published research on using RNNLMs for keyword search systems has been relatively limited. In this paper the application of RNNLMs for the IARPA Babel keyword search task is investigated. In order to supplement the limited acoustic transcription data, large amounts of web texts are also used in large vocabulary design and LM training. Various training criteria were then explored to improve RNNLMs' efficiency in both training and evaluation. Significant and consistent improvements on both keyword search and ASR tasks were obtained across all languages.", "title": "" }, { "docid": "c6a30835ce21b418f5f097e6e4533332", "text": "© 2000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.", "title": "" }, { "docid": "ddd8b7e40fab76ad41696e9fb407c0b2", "text": "This paper presents a new method for extracting cylinders from an unorganized set of 3D points. The originality of this approach is to separate the extraction problem into two distinct steps. The first step consists in extracting a constrained plane in the Gaussian image. This yields a subset of 3D points along with a direction. In the second step, cylinders of known direction are extracted in the corresponding subset of points. Robustness is achieved by the use of a random sampling method in both steps. 
Experimental results showing the extraction of pipes in digitized industrial environments are presented.", "title": "" }, { "docid": "61225cc75aac3bd6b61d7a45ad4ceb1f", "text": "We present a pipeline of algorithms that decomposes a given polygon model into parts such that each part can be 3D printed with high (outer) surface quality. For this we exploit the fact that most 3D printing technologies have an anisotropic resolution and hence the surface smoothness varies significantly with the orientation of the surface. Our pipeline starts by segmenting the input surface into patches such that their normals can be aligned perpendicularly to the printing direction. A 3D Voronoi diagram is computed such that the intersections of the Voronoi cells with the surface approximate these surface patches. The intersections of the Voronoi cells with the input model's volume then provide an initial decomposition. We further present an algorithm to compute an assembly order for the parts and generate connectors between them. A post processing step further optimizes the seams between segments to improve the visual quality. We run our pipeline on a wide range of 3D models and experimentally evaluate the obtained improvements in terms of numerical, visual, and haptic quality.", "title": "" }, { "docid": "0102748c7f9969fb53a3b5ee76b6eefe", "text": "Face verification is the task of deciding, by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face verification accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML), for learning a distance metric for facial verification. The use of cosine similarity in our method leads to an effective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face verification has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face verification comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very difficult problem, especially using images captured in a totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance the state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the first popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face verification algorithms. Unlike FERET, LFW is designed for unconstrained face verification. 
Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to develop learning-based face verification methods [8,9]. (Fig. 1: From FERET to LFW.) One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semidefinite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between differently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classification. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classifier on a training set. Because it uses a softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. [12] proposed a method that learns a matrix designed to improve the performance of kNN classification. The objective function is composed of two terms. The first term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The first contribution is that we have shown cosine similarity to be an effective alternative to Euclidean distance in the metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric significantly in most cases. Our method is different from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity which leads to a simple and effective metric learning method. The rest of this paper is structured as follows. Section 2 presents the CSML method in detail. Section 3 presents how CSML can be applied to face verification. Experimental results are presented in section 4. Finally, the conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 
1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is defined as: CS(x, y) = (x^T y) / (‖x‖ ‖y‖). Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and effective. 1.2 Metric learning formulation Let {x_i, y_i, l_i}_{i=1}^{s} denote a training set of s labeled samples with pairs of input vectors x_i, y_i ∈ R^m and binary class labels l_i ∈ {1, 0} which indicate whether x_i and y_i match or not. The goal is to learn a linear transformation A : R^m → R^d (d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y, A) = ((Ax)^T (Ay)) / (‖Ax‖ ‖Ay‖) = (x^T A^T A y) / (√(x^T A^T A x) √(y^T A^T A y)). Specifically, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by defining the objective function. 1.3 Objective function First, we define positive and negative sample index sets Pos and Neg as:", "title": "" }, { "docid": "c839542db0e80ce253a170a386d91bab", "text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).", "title": "" }, { "docid": "bf1bcf55307b02adca47ff696be6f801", "text": "INTRODUCTION\nMobile phones are ubiquitous in society and owned by a majority of psychiatric patients, including those with severe mental illness. Their versatility as a platform can extend mental health services in the areas of communication, self-monitoring, self-management, diagnosis, and treatment. However, the efficacy and reliability of publicly available applications (apps) have yet to be demonstrated. Numerous articles have noted the need for rigorous evaluation of the efficacy and clinical utility of smartphone apps, which are largely unregulated. 
Professional clinical organizations do not provide guidelines for evaluating mobile apps.\n\n\nMATERIALS AND METHODS\nGuidelines and frameworks are needed to evaluate medical apps. Numerous frameworks and evaluation criteria exist from the engineering and informatics literature, as well as interdisciplinary organizations in similar fields such as telemedicine and healthcare informatics.\n\n\nRESULTS\nWe propose criteria for both patients and providers to use in assessing not just smartphone apps, but also wearable devices and smartwatch apps for mental health. Apps can be evaluated by their usefulness, usability, and integration and infrastructure. Apps can be categorized by their usability in one or more stages of a mental health provider's workflow.\n\n\nCONCLUSIONS\nUltimately, leadership is needed to develop a framework for describing apps, and guidelines are needed for both patients and mental health providers.", "title": "" }, { "docid": "6e2239ebdf662f33b81b665b20516eec", "text": "We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.", "title": "" }, { "docid": "51760cbc4145561e23702b6624bfa9f8", "text": "Plant Diseases and Pests are a major challenge in the agriculture sector. An accurate and a faster detection of diseases and pests in plants could help to develop an early treatment technique while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the more suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called \"deep learning meta-architectures\". 
We combine each of these meta-architectures with \"deep feature extractors\" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.", "title": "" }, { "docid": "fa604c528539ac5cccdbd341a9aebbf7", "text": "BACKGROUND\nAn understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts.\n\n\nMETHODS\nThe uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles.\n\n\nRESULTS/CONCLUSIONS\nP-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.", "title": "" }, { "docid": "934b1a0959389d32382978cdd411ba87", "text": "Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like \"bleed\" and \"punch\" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.", "title": "" }, { "docid": "e4e2bb8bf8cc1488b319a59f82a71f08", "text": "We aim to dismantle the prevalent black-box neural architectures used in complex visual reasoning tasks, into the proposed eXplainable and eXplicit Neural Modules (XNMs), which advance beyond existing neural module networks towards using scene graphs — objects as nodes and the pairwise relationships as edges — for explainable and explicit reasoning with structured knowledge. 
XNMs allow us to pay more attention to teach machines how to “think”, regardless of what they “look”. As we will show in the paper, by using scene graphs as an inductive bias, 1) we can design XNMs in a concise and flexible fashion, i.e., XNMs merely consist of 4 meta-types, which significantly reduce the number of parameters by 10 to 100 times, and 2) we can explicitly trace the reasoning-flow in terms of graph attentions. XNMs are so generic that they support a wide range of scene graph implementations with various qualities. For example, when the graphs are detected perfectly, XNMs achieve 100% accuracy on both CLEVR and CLEVR CoGenT, establishing an empirical performance upper-bound for visual reasoning; when the graphs are noisily detected from real-world images, XNMs are still robust to achieve a competitive 67.5% accuracy on VQAv2.0, surpassing the popular bag-of-objects attention models without graph structures.", "title": "" } ]
scidocsrr
570a67532b697e98bf25bf66c128a2f8
Usage, costs, and benefits of continuous integration in open-source projects
[ { "docid": "aef81485359f80e7960a36c828095d71", "text": "Software processes comprise many steps; coding is followed by building, integration testing, system testing, deployment, operations, among others. Software process integration and automation have been areas of key concern in software engineering, ever since the pioneering work of Osterweil; market pressures for Agility, and open, decentralized, software development have provided additional pressures for progress in this area. But do these innovations actually help projects? Given the numerous confounding factors that can influence project performance, it can be a challenge to discern the effects of process integration and automation. Software project ecosystems such as GitHub provide a new opportunity in this regard: one can readily find large numbers of projects in various stages of process integration and automation, and gather data on various influencing factors as well as productivity and quality outcomes. In this paper we use large, historical data on process metrics and outcomes in GitHub projects to discern the effects of one specific innovation in process automation: continuous integration. Our main finding is that continuous integration improves the productivity of project teams, who can integrate more outside contributions, without an observable diminishment in code quality.", "title": "" } ]
[ { "docid": "2d3d56123896a61433f8bc4029e1bb72", "text": "Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policybased methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.", "title": "" }, { "docid": "c9bab6f494d8c01e47449141526daeab", "text": "In this letter, we propose a conceptually simple and intuitive learning objective function, i.e., additive margin softmax, for face verification. In general, face verification tasks can be viewed as metric learning problems, even though lots of face verification models are trained in classification schemes. It is possible when a large-margin strategy is introduced into the classification model to encourage intraclass variance minimization. As one alternative, angular softmax has been proposed to incorporate the margin. In this letter, we introduce another kind of margin to the softmax loss function, which is more intuitive and interpretable. Experiments on LFW and MegaFace show that our algorithm performs better when the evaluation criteria are designed for very low false alarm rate.", "title": "" }, { "docid": "6c7c96b90bc00d420be09740b32b474d", "text": "Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. 
These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data.", "title": "" }, { "docid": "201273e307d5c5fe5b5498937bd7e848", "text": "The technology of Augmented Reality exists for several decades and it is attributed the potential to provide an ideal, efficient and intuitive way of presenting information. However it is not yet widely used. This is because the realization of such Augmented Reality systems requires to solve many principal problems from various areas. Much progress has been made in solving problems of core technologies, which enables us now to intensively explore the development of Augmented Reality applications. As an exemplary industrial use case for this exploration, I selected the order picking process in logistics applications. This thesis reports on the development of an application to support this task, by iteratively improving Augmented Reality-based metaphors. In such order picking tasks, workers collect sets of items from assortments in warehouses according to work orders. This order picking process has been subject to optimization for a long time, as it occurs a million times a day in industrial life. For this Augmented Reality application development, workers have been equipped with mobile hardware, consisting of a wearable computer (in a back-pack) and tracked head-mounted displays (HMDs). This thesis presents the iterative approach of exploring, evaluating and refining the Augmented Reality system, focusing on usability and utility. It starts in a simple laboratory setup and goes up to a realistic industrial setup in a factory hall. The Augmented Reality visualization shown in the HMD was the main subject of optimization in this thesis. Overall, the task was challenging, as workers have to be guided on different levels, from very coarse to very fine granularity and accuracy. The resulting setup consists of a combined and adaptive visualization to precisely and efficiently guide the user, even if the actual target of the augmentation is not always in the field of view of the HMD. A side-effect of this iterative evaluation and refinement of visualizations in an industrial setup is the report on many lessons learned and an advice on the way Augmented Reality user interfaces should be improved and refined.", "title": "" }, { "docid": "05518ac3a07fdfb7bfede8df8a7a500b", "text": "The prevalence of food allergy is rising for unclear reasons, with prevalence estimates in the developed world approaching 10%. Knowledge regarding the natural course of food allergies is important because it can aid the clinician in diagnosing food allergies and in determining when to consider evaluation for food allergy resolution. Many food allergies with onset in early childhood are outgrown later in childhood, although a minority of food allergy persists into adolescence and even adulthood. More research is needed to improve food allergy diagnosis, treatment, and prevention.", "title": "" }, { "docid": "b134824f6c135a331e503b77d17380c0", "text": "Social media sites (e.g., Flickr, YouTube, and Facebook) are a popular distribution outlet for users looking to share their experiences and interests on the Web. These sites host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events of different type and scale. 
By automatically identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can enable event browsing and search in state-of-the-art search engines. To address this problem, we exploit the rich \"context\" associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). Using this rich context, which includes both textual and non-textual features, we can define appropriate document similarity metrics to enable online clustering of media to events. As a key contribution of this paper, we explore a variety of techniques for learning multi-feature similarity metrics for social media documents in a principled manner. We evaluate our techniques on large-scale, real-world datasets of event images from Flickr. Our evaluation results suggest that our approach identifies events, and their associated social media documents, more effectively than the state-of-the-art strategies on which we build.", "title": "" }, { "docid": "33a436e4b987093fdd5f1fcc1a4b74cf", "text": "Observational methods are fundamental to the study of human behavior in the behavioral sciences. For example, in the context of research on intimate relationships, psychologists’ hypotheses are often empirically tested by video recording interactions of couples and manually coding relevant behaviors using standardized coding systems. This coding process can be time-consuming, and the resulting coded data may have a high degree of variability because of a number of factors (e.g., inter-evaluator differences). These challenges provide an opportunity to employ engineering methods to aid in automatically coding human behavioral data. In this work, we analyzed a large corpus of married couples’ problem-solving interactions. Each spouse was manually coded with multiple session-level behavioral observations (e.g., level of blame toward other spouse), and we used acoustic speech features to automatically classify extreme instances for six selected codes (e.g., “low” vs. “high” blame). Specifically, we extracted prosodic, spectral, and voice quality features to capture global acoustic properties for each spouse and trained gender-specific and gender-independent classifiers. The best overall automatic system correctly classified 74.1% of the instances, an improvement of 3.95% absolute (5.63% relative) over our previously reported best results. We compare performance for the various factors: across codes, gender, classifier type, and feature type. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "544213a66bf5a42906cb69bc49f59b21", "text": "Smooth pursuit eye movements provide meaningful insights and information on subject’s behavior and health and may, in particular situations, disturb the performance of typical fixation/saccade classification algorithms. Thus, an automatic and efficient algorithm to identify these eye movements is paramount for eye-tracking research involving dynamic stimuli. In this paper, we propose the Bayesian Decision Theory Identification (I-BDT) algorithm, a novel algorithm for ternary classification of eye movements that is able to reliably separate fixations, saccades, and smooth pursuits in an online fashion, even for low-resolution eye trackers. 
The proposed algorithm is evaluated on four datasets with distinct mixtures of eye movements, including fixations, saccades, as well as straight and circular smooth pursuits; data was collected with a sample rate of 30 Hz from six subjects, totaling 24 evaluation datasets. The algorithm exhibits high and consistent performance across all datasets and movements relative to a manual annotation by a domain expert (recall: μ = 91.42%, σ = 9.52%; precision: μ = 95.60%, σ = 5.29%; specificity μ = 95.41%, σ = 7.02%) and displays a significant improvement when compared to I-VDT, an state-of-the-art algorithm (recall: μ = 87.67%, σ = 14.73%; precision: μ = 89.57%, σ = 8.05%; specificity μ = 92.10%, σ = 11.21%). For the algorithm implementation and annotated datasets, please contact the first author. CR Categories: I.5.1 [Computing Methodologies]: Pattern Recognition – Models; I.6.4 [Computing Methodologies]: Simulation and Modeling – Model Validation and Analysis; J.7 [Computer Applications]: Computers in Other Systems – Real Time;", "title": "" }, { "docid": "45a98a82d462d8b12445cbe38f20849d", "text": "Proliferative verrucous leukoplakia (PVL) is an aggressive form of oral leukoplakia that is persistent, often multifocal, and refractory to treatment with a high risk of recurrence and malignant transformation. This article describes the clinical aspects and histologic features of a case that demonstrated the typical behavior pattern in a long-standing, persistent lesion of PVL of the mandibular gingiva and that ultimately developed into squamous cell carcinoma. Prognosis is poor for this seemingly harmless-appearing white lesion of the oral mucosa.", "title": "" }, { "docid": "01d741911809305e0b03dad92f5accd2", "text": "Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for handcoded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.", "title": "" }, { "docid": "e632895c1ab1b994f64ef03260b91acb", "text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Bostrom procedure with an semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. 
The ankle joint laxity was measured on the plane radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and an semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum. The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.", "title": "" }, { "docid": "1d606f39d429c5f344d5d3bc6810f2f9", "text": "Cryptography is increasingly applied to the E-commerce world, especially to the untraceable payment system and the electronic voting system. Protocols for these systems strongly require the anonymous digital signature property, and thus a blind signature strategy is the answer to it. Chaum stated that every blind signature protocol should hold two fundamental properties, blindness and intractableness. All blind signature schemes proposed previously almost are based on the integer factorization problems, discrete logarithm problems, or the quadratic residues, which are shown by Lee et al. that none of the schemes is able to meet the two fundamental properties above. Therefore, an ECC-based blind signature scheme that possesses both the above properties is proposed in this paper.", "title": "" }, { "docid": "fbf30d2032b0695b5ab2d65db2fe8cbc", "text": "Artificial Intelligence for computer games is an interesting topic which attracts intensive attention recently. In this context, Mario AI Competition modifies a Super Mario Bros game to be a benchmark software for people who program AI controller to direct Mario and make him overcome the different levels. This competition was handled in the IEEE Games Innovation Conference and the IEEE Symposium on Computational Intelligence and Games since 2009. In this paper, we study the application of Reinforcement Learning to construct a Mario AI controller that learns from the complex game environment. We train the controller to grow stronger for dealing with several difficulties and types of levels. 
In the controller development phase, we design the states and actions carefully to reduce the search space, and make Reinforcement Learning suitable for the requirements of online learning.", "title": "" }, { "docid": "b7e1ec816feb41738140914d766d96e3", "text": "This paper describes an occupational therapy independent living skills program for adults with developmental disabilities living in group homes. Four clients have participated in this program for 1 year. Verbal reports from house and workshop staffs and written documentation in the clients' records were examined to see if the clients' behaviors changed over the course of their first year in the program. These reports indicate that the clients have moved toward increased independence by showing greater initiative in directing their own care. Treatment issues in group home systems are also discussed.", "title": "" }, { "docid": "e777ccaaeade3c4fe66c2bd23dec920b", "text": "Text classification is becoming more important with the proliferation of the Internet and the huge amount of data it transfers. We present an efficient algorithm for text classification using hierarchical classifiers based on a concept hierarchy. The simple TFIDF classifier is chosen to train sample data and to classify other new data. Despite its simplicity, results of experiments on Web pages and TV closed captions demonstrate high classification accuracy. Application of feature subset selection techniques improves the performance. Our algorithm is computationally efficient, being bounded by O(n log n) for n samples.", "title": "" }, { "docid": "74b163a2c2f149dce9850c6ff5d7f1f6", "text": "The vast majority of cutaneous canine nonepitheliotropic lymphomas are of T cell origin. Nonepithelial B-cell lymphomas are extremely rare. The present case report describes a 10-year-old male Golden retriever that was presented with slowly progressive nodular skin lesions on the trunk and limbs. Histopathology of skin biopsies revealed small periadnexal dermal nodules composed of rather pleomorphic round cells with round or contorted nuclei. The diagnosis of nonepitheliotropic cutaneous B-cell lymphoma was based on histopathological morphology and case follow-up, and was supported immunohistochemically by CD79a positivity.", "title": "" }, { "docid": "efd2843175ad0b860ad1607f337addc5", "text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.", "title": "" }, { "docid": "8830ac3811056de2e5a9656504c7aa0c", "text": "Mobile Music Touch (MMT) helps teach users to play piano melodies while they perform other tasks. MMT is a lightweight, wireless haptic music instruction system consisting of fingerless gloves and a mobile Bluetooth enabled computing device, such as a mobile phone. Passages to be learned are loaded into the mobile phone and are played repeatedly while the user performs other tasks. 
As each note of the music plays, vibrators on each finger in the gloves activate, indicating which finger is used to play each note. We present two studies on the efficacy of MMT. The first measures 16 subjects' ability to play a passage after using MMT for 30 minutes while performing a reading comprehension test. The MMT system was significantly more effective than a control condition where the passage was played repeatedly but the subjects' fingers were not vibrated. The second study compares the amount of time required for 10 subjects to replay short, randomly generated passages using passive training versus active training. Participants with no piano experience could repeat the passages after passive training while subjects with piano experience often could not.", "title": "" }, { "docid": "4fd23e5498d3699ab771f705efacc340", "text": "Estimating the cardinality of unions and intersections of sets is a problem of interest in OLAP. Large data applications often require the use of approximate methods based on small sketches of the data. We give new estimators for the cardinality of unions and intersection and show they approximate an optimal estimation procedure. These estimators enable the improved accuracy of the streaming MinCount sketch to be exploited in distributed settings. Both theoretical and empirical results demonstrate substantial improvements over existing methods.", "title": "" }, { "docid": "39ccabf465c39547852e58f7a691e88a", "text": "Warburg's observation that cancer cells exhibit a high rate of glycolysis even in the presence of oxygen (aerobic glycolysis) sparked debate over the role of glycolysis in normal and cancer cells. Although it has been established that defects in mitochondrial respiration are not the cause of cancer or aerobic glycolysis, the advantages of enhanced glycolysis in cancer remain controversial. Many cells ranging from microbes to lymphocytes use aerobic glycolysis during rapid proliferation, which suggests it may play a fundamental role in supporting cell growth. Here, we review how glycolysis contributes to the metabolic processes of dividing cells. We provide a detailed accounting of the biosynthetic requirements to construct a new cell and illustrate the importance of glycolysis in providing carbons to generate biomass. We argue that the major function of aerobic glycolysis is to maintain high levels of glycolytic intermediates to support anabolic reactions in cells, thus providing an explanation for why increased glucose metabolism is selected for in proliferating cells throughout nature.", "title": "" } ]
scidocsrr
8a3436f291c7d1246a52f2f973dbfa24
Learning Semantic Patterns for Question Generation and Question Answering
[ { "docid": "fead5d31f441dd95ce3ec0fafab4e3e7", "text": "Texts that convey the same or close meaning can be written in many different ways. On the other hand, computer programs are not good at algorithmically processing meaning equivalence of short texts, without relying on knowledge. Toward addressing this problem, researchers have been investigating methods for automatically acquiring paraphrase templates from a corpus. The goal of this thesis work is to develop a paraphrase acquisition framework that can acquire lexically-diverse paraphrase templates, given small (5-20) seed instances and a small (1-10GB) plain monolingual corpus. The framework works in an iterative fashion where the seed instances are used to find paraphrase patterns from the corpus, and the patterns are used to harvest more seed instances to be used in the next iteration. Unlike previous works, lexical diversity of resulting paraphrase patterns can be controlled with a parameter. Our corpus requirement is decent as compared to previous works that require a parallel/comparable corpus or a huge parsed monolingual corpus, which is ideal for languageand domain-portability.", "title": "" } ]
[ { "docid": "eb06c0af1ea9de72f27f995d54590443", "text": "Random acceleration vibration specifications for subsystems, i.e. instruments, equipment, are most times based on measurement during acoustic noise tests on system level, i.e. a spacecraft and measured by accelerometers, placed in the neighborhood of the interface between spacecraft and subsystem. Tuned finite element models can be used to predict the random acceleration power spectral densities at other locations than available via the power spectral density measurements of the acceleration. The measured and predicted power spectral densities do represent the modal response characteristics of the system and show many peaks and valleys. The equivalent random acceleration vibration test specification is a smoothed, enveloped, peak-clipped version of the measured and predicted power spectral densities of the acceleration spectrum. The original acceleration vibration spectrum can be characterized by a different number response spectra: Shock Response Spectrum (SRS) , Extreme Response Spectrum (ERS), Vibration Response Spectrum (VRS), and Fatigue Damage Spectrum (FDS). An additional method of non-stationary random vibrations is based on the Rayleigh distribution of peaks. The response spectra represent the responses of series of SDOF systems excited at the base by random acceleration, both in time and frequency domain. The synthesis of equivalent random acceleration vibration specifications can be done in a very structured manner and are more suitable than equivalent random acceleration vibration specifications obtained by simple enveloping. In the synthesis process Miles’ equation plays a dominant role to invert the response spectra into equivalent random acceleration vibration spectra. A procedure is proposed to reduce the number of data point in the response spectra curve by dividing the curve in a numbers of fields. The synthesis to an equivalent random acceleration J.J. Wijker, M.H.M. Ellenbroek, and A. de Boer spectrum is performed on a reduced selected set of data points. The recalculated response spectra curve envelops the original response spectra curves. A real life measured random acceleration spectrum (PSD) with quite a number of peaks and valleys is taken to generate, applying response spectra SRS, ERS, VRS, FDS and the Rayleigh distribution of peaks, equivalent random acceleration vibration specifications. Computations are performed both in time and frequency domain. J.J. Wijker, M.H.M. Ellenbroek, and A. de Boer", "title": "" }, { "docid": "05527d807914ad45b321c8e512fbd346", "text": "www.frontiersinecology.org © The Ecological Society of America S research can provide important and timely insights into environmental issues, but scientists face many personal and institutional challenges to effectively synthesize and transmit their findings to relevant stakeholders. In this paper, we address how “interface” or “boundary” organizations – organizations created to foster the use of science knowledge in environmental policy making and environmental management, as well as to encourage changes in behavior, further learning, inquiry, discovery, or enjoyment – can help scientists improve and facilitate effective communication and the application of scientific information (Gieryn 1999). Interface organizations are synergistic and operate across a range of scales, purposes, and intensities of information flow between scientists and audiences. 
Considerable attention has focused on how to involve scientists in the decision-making process regarding natural resource management issues related to their area of expertise (Andersson 2004; Roth et al. 2004; Rinaudo and Garin 2005; Bacic et al. 2006; Olsson and Andersson 2007). These efforts have resulted in scientific input to environmental issues, including ecosystem management (Meffe et al. 2002), adaptive collaborative management (Buck et al. 2001; Colfer 2005), and integrated watershed management (Jeffrey and Gearey 2006). A common element of many of these approaches is the use of an organization or group to manage and facilitate the interaction between the scientists and the “users” or “managers” of a natural resource. Cash et al. (2003) identified key functions of successful “boundary management” organizations. These functions include communication, translation, and mediation (convening groups, as well as resolving differences). Successful efforts are characterized by having clear lines of responsibility and accountability on both sides of the boundary, and by providing a forum in which information can be co-produced by scientists and information users. Interface organizations typically: (1) Engage: seeking out scientists with important findings and then building or filling a demand for their insights among different communities and for various niches, contexts, and scales. The organization usually serves as a convener. SCIENCE, COMMUNICATION, AND CONTROVERSIES", "title": "" }, { "docid": "058515182c568c8df202542f28c15203", "text": "Plant diseases have turned into a dilemma as it can cause significant reduction in both quality and quantity of agricultural products. Automatic detection of plant diseases is an essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. The proposed system is a software solution for automatic detection and classification of plant leaf diseases. The developed processing scheme consists of four main steps, first a color transformation structure for the input RGB image is created, then the green pixels are masked and removed using specific threshold value followed by segmentation process, the texture statistics are computed for the useful segments, finally the extracted features are passed through the classifier. The proposed algorithm’s efficiency can successfully detect and classify the examined diseases with an accuracy of 94%. Experimental results on a database of about 500 plant leaves confirm the robustness of the proposed approach.", "title": "" }, { "docid": "05e754e0567bf6859d7a68446fc81bad", "text": "Bad presentation of medical statistics such as the risks associated with a particular intervention can lead to patients making poor decisions on treatment. Particularly confusing are single event probabilities, conditional probabilities (such as sensitivity and specificity), and relative risks. How can doctors improve the presentation of statistical information so that patients can make well informed decisions?", "title": "" }, { "docid": "41f9b137b7e7a0b1b02c45d1eef216f1", "text": "Personality is an important psychological construct accounting for individual differences in people. Computational personality recognition from online social networks is gaining increased research attention in recent years. 
However, the majority of existing methodologies mainly focused on human-designed shallow statistical features and didn’t make full use of the rich semantic information in user-generated texts, while those texts are exactly the most direct way for people to translate their internal thoughts and emotions into a form that others can understand. This paper proposes a deep learning-based approach for personality recognition from text posts of online social network users. We first utilize a hierarchical deep neural network composed of our newly designed AttRCNN structure and a variant of the Inception structure to learn the deep semantic features of each user’s text posts. Then we concatenate the deep semantic features with the statistical linguistic features obtained directly from the text posts, and feed them into traditional regression algorithms to predict the real-valued Big Five personality scores. Experimental results show that the deep semantic feature vectors learned from our proposed neural network are more effective than the other four kinds of non-trivial baseline features; the approach that utilizes the concatenation of our deep semantic features and the statistical linguistic features as the input of the gradient boosting regression algorithm achieves the lowest average prediction error among all the approaches tested by us.", "title": "" }, { "docid": "3e6e72747036ca7255b449f4c93e15f7", "text": "In this paper a planar antenna is studied for ultrawide-band (UWB) applications. This antenna consists of a wide-band tapered-slot feeding structure, curved radiators and a parasitic element. It is a modification of the conventional dual exponential tapered slot antenna and can be viewed as a printed dipole antenna with tapered slot feed. The design guideline is introduced, and the antenna parameters including return loss, radiation patterns and gain are investigated. To demonstrate the applicability of the proposed antenna to UWB applications, the transfer functions of a transmitting-receiving system with a pair of identical antennas are measured. Transient waveforms as the transmitting-receiving system being excited by a simulated pulse are discussed at the end of this paper.", "title": "" }, { "docid": "25751673cedf36c5e8b7ae310b66a8f2", "text": "BACKGROUND\nMuscle dysmorphia (MD) describes a condition characterised by a misconstrued body image in which individuals who interpret their body size as both small or weak even though they may look normal or highly muscular.MD has been conceptualized as a type of body dysmorphic disorder, an eating disorder, and obsessive–compulsive disorder symptomatology. METHOD AND AIM: Through a review of the most salient literature on MD, this paper proposes an alternative classification of MD--the ‘Addiction to Body Image’ (ABI) model--using Griffiths (2005)addiction components model as the framework in which to define MD as an addiction.\n\n\nRESULTS\nIt is argued the addictive activity in MD is the maintaining of body image via a number of different activities such as bodybuilding, exercise,eating certain foods, taking specific drugs (e.g., anabolic steroids), shopping for certain foods, food supplements,and the use or purchase of physical exercise accessories). 
In the ABI model, the perception of the positive effects on the self-body image is accounted for as a critical aspect of the MD condition (rather than addiction to exercise or certain types of eating disorder).\n\n\nCONCLUSIONS\nBased on empirical evidence to date, it is proposed that MD could be re-classified as an addiction due to the individual continuing to engage in maintenance behaviours that may cause long-term harm.", "title": "" }, { "docid": "1d64f04b9c3d1579cbff94a2d8dce623", "text": "In the present work, the performance of indoor deployment solutions based on the combination of Distributed Antenna Systems (DAS) and MIMO transmission techniques (Interleaved-MIMO DAS solutions) is investigated for high-order MIMO schemes with the aid of LTE link level simulations. Planning guidelines for linear and 2D coverage solutions based on Interleaved-MIMO DAS are then derived.", "title": "" }, { "docid": "c20da334e799139e08c6ce9c4cac6cee", "text": "BACKGROUND\nAdvanced lung cancer is indicated with rapid disease development. Interleukin 27 (IL-27) is regarded as a cytokine with anti-tumour activities.\n\n\nAIM\nSince, the impact of type of lung cancer on the level of IL-27 in patient's serum has not yet been investigated; current study evaluated the clinical stages according to American Joint Committee on Cancer (AJCC) criteria, Tumor-Node-Metastasis (TNM) stage and the lung cancer spread (localized or widespread) and it's correlation with serum IL-27.MATERIAL AND METHODS: Thirty patients with confirmed histopathological lung cancer and 30 cancer-free healthy individuals as the control group were included in the current study. Patients group were assigned to either small cell lung cancer group (SCLC) or non-small cell lung cancer (NSCLC) according to the clinical features and the results of lung biopsy specimens. Level of IL-27 was quantified with enzyme-linked immunosorbent assay (ELISA) test in serum samples.\n\n\nRESULTS\nA significant increase in serum IL-27 level was noticed in individuals with lung cancer in comparison with the control group. The level of serum IL-27 in the NSCL squamous carcinoma (NSCLC-Sc) type was significantly greater than in the NSCLC adenocarcinoma (NSCLC-Ad) type, and in both groups, this variable was more than the control group. The serum IL-27 content level was greater in stage III versus stage IV.\n\n\nCONCLUSION\nThe current research confirmed the existence of the anti-tumour components in patients with NSCLC. IL-27 can be utilised in diagnosis and screening in early stages of lung cancer along with the management of patients. Different levels of IL-27 in different types of lung cancers in the current study can lead to design more comprehensive studies in the future.", "title": "" }, { "docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd", "text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. 
We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.", "title": "" }, { "docid": "497088def9f5f03dcb32e33d1b6fcb64", "text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.", "title": "" }, { "docid": "ffc46e7c94a6b97313ad5c29618e8396", "text": "Magnetic Induction (MI) techniques enable efficient wireless communications in dense media with high material absorptions, such as underground soil medium and oil reservoirs. A wide range of novel and important applications in such RF-challenged environments can be realized based on the MI communication mechanism. Despite the potential advantages, the major bottleneck of the MI communication is the limited channel capacity due to the low MI bandwidth. In this paper, the Spread Resonance (RS) strategy is developed for the MI communication in RF-challenged environments which greatly increases the MI channel capacity. Specifically, instead of using the same resonant frequency for all the MI coils, the spread resonance strategy allocates different resonant frequencies for different MI relay and transceiver coils. An optimization solution for the resonant frequency allocation is formulated to maximize the MI channel capacity which captures multiple unique MI effects, including the parasitic capacitor in each MI coil, the Eddy currents in various transmission media with limited conductivities, and the random direction of each coil. Numerical evaluations are provided to validate the significant channel capacity improvements by the proposed SR strategy for MI communication systems.", "title": "" }, { "docid": "dc41ef183ec93f9baae8c0fcee6b979a", "text": "PATRICE GENEVET, FEDERICO CAPASSO,* FRANCESCO AIETA, MOHAMMADREZA KHORASANINEJAD, AND ROBERT DEVLIN Université Côte d’Azur, CNRS, CRHEA, rue Bernard Gregory, Sophia Antipolis 06560 Valbonne, France John A. 
Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA Hewlett-Packard Laboratories, Palo Alto, California 94304, USA e-mail: Patrice.genevet@crhea.cnrs.fr *Corresponding author: capasso@seas.harvard.edu", "title": "" }, { "docid": "b4166b57419680e348d7a8f27fbc338a", "text": "OBJECTIVES\nTreatments of female sexual dysfunction have been largely unsuccessful because they do not address the psychological factors that underlie female sexuality. Negative self-evaluative processes interfere with the ability to attend and register physiological changes (interoceptive awareness). This study explores the effect of mindfulness meditation training on interoceptive awareness and the three categories of known barriers to healthy sexual functioning: attention, self-judgment, and clinical symptoms.\n\n\nMETHODS\nForty-four college students (30 women) participated in either a 12-week course containing a \"meditation laboratory\" or an active control course with similar content or laboratory format. Interoceptive awareness was measured by reaction time in rating physiological response to sexual stimuli. Psychological barriers were assessed with self-reported measures of mindfulness and psychological well-being.\n\n\nRESULTS\nWomen who participated in the meditation training became significantly faster at registering their physiological responses (interoceptive awareness) to sexual stimuli compared with active controls (F(1,28) = 5.45, p = .03, η(p)(2) = 0.15). Female meditators also improved their scores on attention (t = 4.42, df = 11, p = .001), self-judgment, (t = 3.1, df = 11, p = .01), and symptoms of anxiety (t = -3.17, df = 11, p = .009) and depression (t = -2.13, df = 11, p < .05). Improvements in interoceptive awareness were correlated with improvements in the psychological barriers to healthy sexual functioning (r = -0.44 for attention, r = -0.42 for self-judgment, and r = 0.49 for anxiety; all p < .05).\n\n\nCONCLUSIONS\nMindfulness-based improvements in interoceptive awareness highlight the potential of mindfulness training as a treatment of female sexual dysfunction.", "title": "" }, { "docid": "37d00c781e463dc3d908fb1bbfcd36de", "text": "OBJECTIVE\nIn both dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD), attentional dysfunction is a core clinical feature together with disrupted episodic memory. This study evaluated the cognitive effects of memantine in DLB and PDD using automated tests of attention and episodic memory.\n\n\nMETHODS\nA randomised double-blind, placebo-controlled, 24-week three centre trial of memantine (20 mg/day) was conducted in which tests of attention (simple and choice reaction time) and word recognition (immediate and delayed) from the CDR System were administered prior to dosing and again at 12 and 24 weeks. Although other results from this study have been published, the data from the CDR System tests were not included and are presented here for the first time.\n\n\nRESULTS\nData were available for 51 patients (21 DLB and 30 PDD). 
In both populations, memantine produced statistically significant medium to large effect sized improvements to choice reaction time, immediate and delayed word recognition.\n\n\nCONCLUSIONS\nThese are the first substantial improvements on cognitive tests of attention and episodic recognition memory identified with memantine in either DLB or PDD.", "title": "" }, { "docid": "f4616ce19907f8502fb7520da68c6852", "text": "Mining outliers in database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2, 11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detects top-n outliers based on our definition. A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers.", "title": "" }, { "docid": "013325b5f83e73efdbaa2d0b9ac14afb", "text": "Electricity prices are known to be very volatile and subject to frequent jumps due to system breakdown, demand shocks, and inelastic supply. Appropriate pricing, portfolio, and risk management models should incorporate these spikes. We develop a framework to price European-style options that are consistent with the possibility of market spikes. The pricing framework is based on a regime jump model that disentangles mean-reversion from the spikes. In the model the spikes are truly time-specific events and therefore independent from the meanreverting price process. This closely resembles the characteristics of electricity prices, as we show with Dutch APX spot price data in the period January 2001 till June 2002. Thanks to the independence of the two price processes in the model, we break derivative prices down in a mean-reverting value and a spike value. We use this result to show how the model can be made consistent with forward prices in the market and present closed-form formulas for European-style options. 
5001-6182 Business 5601-5689 4001-4280.7 Accountancy, Bookkeeping Finance Management, Business Finance, Corporation Finance Library of Congress Classification (LCC) HG 6024+ Options M Business Administration and Business Economics M 41 G 3 Accounting Corporate Finance and Governance Journal of Economic Literature (JEL) G 19 General Financial Markets: Other 85 A Business General 225 A 220 A Accounting General Financial Management European Business Schools Library Group (EBSLG) 220 R Options market Gemeenschappelijke Onderwerpsontsluiting (GOO) 85.00 Bedrijfskunde, Organisatiekunde: algemeen 85.25 85.30 Accounting Financieel management, financiering Classification GOO 85.30 Financieel management, financiering Bedrijfskunde / Bedrijfseconomie Accountancy, financieel management, bedrijfsfinanciering, besliskunde", "title": "" }, { "docid": "815950cb5c3d3c8bc489c34c2598c626", "text": "In four studies, the authors investigated the proposal that in the context of an elite university, individuals from relatively lower socioeconomic status (SES) backgrounds possess a stigmatized identity and, as such, experience (a) concerns regarding their academic fit and (b) self-regulatory depletion as a result of managing these concerns. Study 1, a correlational study, revealed the predicted associations between SES, concerns about academic fit, and self-regulatory strength. Results from Studies 2 and 3 suggested that self-presentation involving the academic domain is depleting for lower (but not higher) SES students: After a self-presentation task about academic achievement, lower SES students consumed more candy (Study 2) and exhibited poorer Stroop performance (Study 3) relative to their higher SES peers; in contrast, the groups did not differ after discussing a nonacademic topic (Study 3). Study 4 revealed the potential for eliminating the SES group difference in depletion via a social comparison manipulation. Taken together, these studies support the hypothesis that managing concerns about marginality can have deleterious consequences for self-regulatory resources.", "title": "" }, { "docid": "5552216832bb7315383d1c4f2bfe0635", "text": "Semantic parsing maps sentences to formal meaning representations, enabling question answering, natural language interfaces, and many other applications. However, there is no agreement on what the meaning representation should be, and constructing a sufficiently large corpus of sentence-meaning pairs for learning is extremely challenging. In this paper, we argue that both of these problems can be avoided if we adopt a new notion of semantics. For this, we take advantage of symmetry group theory, a highly developed area of mathematics concerned with transformations of a structure that preserve its key properties. We define a symmetry of a sentence as a syntactic transformation that preserves its meaning. Semantically parsing a sentence then consists of inferring its most probable orbit under the language’s symmetry group, i.e., the set of sentences that it can be transformed into by symmetries in the group. The orbit is an implicit representation of a sentence’s meaning that suffices for most applications. Learning a semantic parser consists of discovering likely symmetries of the language (e.g., paraphrases) from a corpus of sentence pairs with the same meaning. 
Once discovered, symmetries can be composed in a wide variety of ways, potentially resulting in an unprecedented degree of immunity to syntactic variation.", "title": "" }, { "docid": "741d08b1527dc07a3dc175a20914f1a8", "text": "Knowledge is considered to be an economic driver in today's economy. It has become a commodity, a resource that can be packed and transferred. The objective of this paper is to provide a comprehensive review of the scope, trends and major actors (firms, organizations, government, consultants, academia, etc.) in the development and use of methods to manage innovation in a knowledge-driven economy. The paper identifies the main innovation management techniques (IMTs) aiming at the improvement of firm competitiveness by means of knowledge management. It will specifically focus on those IMTs for which knowledge is a relevant part of the innovation process. The research study, based on a survey at the European level, concludes that a knowledge-driven economy affects the innovation process and approach. The traditional idea that innovation is based on research (technology-push theory) and interaction between firms and other actors has been replaced by the current social network theory of innovation, where knowledge plays a crucial role in fostering innovation. Simultaneously, organizations in both public and private sectors have launched initiatives to develop methodologies and tools to support business innovation management. Higher education establishments, business schools and consulting companies are developing innovative and adequate methodologies and tools, while public authorities are designing and setting up education and training schemes aimed at disseminating best practices among all kinds of businesses.", "title": "" } ]
scidocsrr
b0175e25c637596bb8630f7636b325d7
Did the Model Understand the Question?
[ { "docid": "2be66aab202c50a35c1e98fe16442ab7", "text": "Deep neural networks have been playing an essential role in many computer vision tasks including Visual Question Answering (VQA). Until recently, the study of their accuracy has been the main focus of research and now there is a huge trend toward assessing the robustness of these models against adversarial attacks by evaluating the accuracy of these models under increasing levels of noisiness. In VQA, the attack can target the image and/or the proposed main question and yet there is a lack of proper analysis of this aspect of VQA. In this work, we propose a new framework that uses semantically relevant questions, dubbed basic questions, acting as noise to evaluate the robustness of VQA models. We hypothesize that as the similarity of a basic question to the main question decreases, the level of noise increases. So, to generate a reasonable noise level for a given main question, we rank a pool of basic questions based on their similarity with this main question. We cast this ranking problem as a LASSO optimization problem. We also propose a novel robustness measure Rscore and two large-scale question datasets, General Basic Question Dataset and Yes/No Basic Question Dataset in order to standardize robustness analysis of VQA models. We analyze the robustness of several state-of-the-art VQA models and show that attention-based VQA models are more robust than other methods in general. The main goal of this framework is to serve as a benchmark to help the community in building more accurate and robust VQA models.", "title": "" }, { "docid": "cdad4ee7017fb232425aceff8b50dca4", "text": "At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model’s behavior.", "title": "" }, { "docid": "71b5c8679979cccfe9cad229d4b7a952", "text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). 
We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "title": "" } ]
[ { "docid": "b45a9d29d055b021df3b14609124b39c", "text": "The Graphics Processing Unit (GPU) is now commonly used for graphics and data-parallel computing. As more and more applications tend to accelerate on the GPU in multi-tasking environments where multiple tasks access the GPU concurrently, operating systems must provide prioritization and isolation capabilities in GPU resource management, particularly in real-time setups. We present TimeGraph, a real-time GPU scheduler at the device-driver level for protecting important GPU workloads from performance interference. TimeGraph adopts a new event-driven model that synchronizes the GPU with the CPU to monitor GPU commands issued from the user space and control GPU resource usage in a responsive manner. TimeGraph supports two prioritybased scheduling policies in order to address the tradeoff between response times and throughput introduced by the asynchronous and non-preemptive nature of GPU processing. Resource reservation mechanisms are also employed to account and enforce GPU resource usage, which prevent misbehaving tasks from exhausting GPU resources. Prediction of GPU command execution costs is further provided to enhance isolation. Our experiments using OpenGL graphics benchmarks demonstrate that TimeGraph maintains the frame-rates of primary GPU tasks at the desired level even in the face of extreme GPU workloads, whereas these tasks become nearly unresponsive without TimeGraph support. Our findings also include that the performance overhead imposed on TimeGraph can be limited to 4-10%, and its event-driven scheduler improves throughput by about 30 times over the existing tick-driven scheduler.", "title": "" }, { "docid": "77d0786af4c5eee510a64790af497e25", "text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.", "title": "" }, { "docid": "7eff56a5f17cef0b15b4ddc737ceeeed", "text": "Many analysis tasks involve linked nodes, such as people connected by friendship links. Research on link-based classification (LBC) has studied how to leverage these connections to improve classification accuracy. Most such prior research has assumed the provision of a densely labeled training network. 
Instead, this article studies the common and challenging case when LBC must use a single sparsely labeled network for both learning and inference, a case where existing methods often yield poor accuracy. To address this challenge, we introduce a novel method that enables prediction via “neighbor attributes,” which were briefly considered by early LBC work but then abandoned due to perceived problems. We then explain, using both extensive experiments and loss decomposition analysis, how using neighbor attributes often significantly improves accuracy. We further show that using appropriate semi-supervised learning (SSL) is essential to obtaining the best accuracy in this domain and that the gains of neighbor attributes remain across a range of SSL choices and data conditions. Finally, given the challenges of label sparsity for LBC and the impact of neighbor attributes, we show that multiple previous studies must be re-considered, including studies regarding the best model features, the impact of noisy attributes, and strategies for active learning.", "title": "" }, { "docid": "1183b3ea7dd929de2c18af49bf549ceb", "text": "Robust and time-efficient skeletonization of a (planar) shape, which is connectivity preserving and based on Euclidean metrics, can be achieved by first regularizing the Voronoi diagram (VD) of a shape’s boundary points, i.e., by removal of noise-sensitive parts of the tessellation and then by establishing a hierarchic organization of skeleton constituents . Each component of the VD is attributed with a measure of prominence which exhibits the expected invariance under geometric transformations and noise. The second processing step, a hierarchic clustering of skeleton branches, leads to a multiresolution representation of the skeleton, termed skeleton pyramid. Index terms — Distance transform, hierarchic skeletons, medial axis, regularization, shape description, thinning, Voronoi tessellation.", "title": "" }, { "docid": "f24ae7417397656dc5f2c6e14d8e33ba", "text": "Recent developments indicate that the forces acting on the papillary muscles can be a measure of the severity of mitral valve regurgitation. Pathological conditions, such as ischemic heart disease, cause changes in the geometry of the left ventricle and the mitral valve annulus, often resulting in displacement of the papillary muscles relative to the annulus. This can lead to increased tension in the chordae tendineae. This increased tension is transferred to the leaflets, and can disturb the coaptation pattern of the mitral valve. The force balance on the individual components governs the function of the mitral valve. The ability to measure changes in the force distribution from normal to pathological conditions may give insight into the mechanisms of mitral valve insufficiency. A unique in vitro model has been developed that allows quantification of the papillary muscle spatial position and quantification of the three-dimensional force vector applied to the left ventricular wall by the papillary muscles. This system allows for the quantification of the global force exerted on the posterior left ventricular wall from the papillary muscles during simulation of normal and diseased conditions. © 2001 Biomedical Engineering Society. 
PAC01: 8719Rr, 8719Ff, 8719Hh, 8719Xx, 8710+e", "title": "" }, { "docid": "71fe54bb0b016732067f43150d6a4d0b", "text": "Context: Studies on global software development have documented severe coordination and communication problems among coworkers due to geographic dispersion and consequent dependency on technology. These problems are exacerbated by increase in the complexity of work undertaken by global teams. However, despite these problems, global software development is on the rise and firms are adopting global practices across the board, raising the question: What does successful global software development look like and what can we learn from its practitioners? Objective: This study draws on practice-based studies of work to examine successful work practices of global software developers. The primary aim of this study was to understand how workers develop practices that allow them to function effectively across geographically dispersed locations. Method: An ethnographically-informed field study was conducted with data collection at two international locations of a firm. Interview, observation and archival data were collected. A total of 42 interviews and 3 weeks of observations were conducted. Results: Teams spread across different locations around the world developed work practices through sociomaterial bricolage. Two facets of technology use were necessary for the creation of these practices: multiplicity of media and relational personalization at dyadic and team levels. New practices were triggered by the need to achieve a work-life balance, which was disturbed by global development. Reflecting on my role as a researcher, I underscore the importance of understanding researchers’ own frames of reference and using research practices that mirror informants’ work practices. Conclusion: Software developers on global teams face unique challenges which necessitate a shift in their work practices. Successful teams are able to create practices that span locations while still being tied to location based practices. Inventive use of material and social resources is central to the creation of these", "title": "" }, { "docid": "179d8f41102862710595671e5a819d70", "text": "Detecting changes in time series data is an important data analysis task with application in various scientific domains. In this paper, we propose a novel approach to address the problem of change detection in time series data, which can find both the amplitude and degree of changes. Our approach is based on wavelet footprints proposed originally by the signal processing community for signal compression. We, however, exploit the properties of footprints to efficiently capture discontinuities in a signal. We show that transforming time series data using footprint basis up to degree D generates nonzero coefficients only at the change points with degree up to D. Exploiting this property, we propose a novel change detection query processing scheme which employs footprint-transformed data to identify change points, their amplitudes, and degrees of change efficiently and accurately. We also present two methods for exact and approximate transformation of data. Our analytical and empirical results with both synthetic and real-world data show that our approach outperforms the best known change detection approach in terms of both performance and accuracy. 
Furthermore, unlike the state of the art approaches, our query response time is independent from the number of change points in the data and the user-defined change threshold.", "title": "" }, { "docid": "abe5bdf6a17cf05b49ac578347a3ca5d", "text": "To realize the broad vision of pervasive computing, underpinned by the “Internet of Things” (IoT), it is essential to break down application and technology-based silos and support broad connectivity and data sharing; the cloud being a natural enabler. Work in IoT tends toward the subsystem, often focusing on particular technical concerns or application domains, before offloading data to the cloud. As such, there has been little regard given to the security, privacy, and personal safety risks that arise beyond these subsystems; i.e., from the wide-scale, cross-platform openness that cloud services bring to IoT. In this paper, we focus on security considerations for IoT from the perspectives of cloud tenants, end-users, and cloud providers, in the context of wide-scale IoT proliferation, working across the range of IoT technologies (be they things or entire IoT subsystems). Our contribution is to analyze the current state of cloud-supported IoT to make explicit the security considerations that require further work.", "title": "" }, { "docid": "855b34b0db99446f980ddb9b96e52001", "text": "based, companies increasingly derive revenue from the creation and sustenance of long-term relationships with their customers. In such an environment, marketing serves the purpose of maximizing customer lifetime value (CLV) and customer equity, which is the sum of the lifetime values of the company’s customers. This article reviews a number of implementable CLV models that are useful for market segmentation and the allocation of marketing resources for acquisition, retention, and crossselling. The authors review several empirical insights that were obtained from these models and conclude with an agenda of areas that are in need of further research.", "title": "" }, { "docid": "e3bc9bbc48115120af885127e644153f", "text": "We present a neural network technique for the analysis and extrapolation of time-series data called neural decomposition (ND). Units with a sinusoidal activation function are used to perform a Fourier-like decomposition of training samples into a sum of sinusoids, augmented by units with nonperiodic activation functions to capture linear trends and other nonperiodic components. We show how careful weight initialization can be combined with regularization to form a simple model that generalizes well. Our method generalizes effectively on the Mackey–Glass series, a data set of unemployment rates as reported by the U.S. Department of Labor Statistics, a time-series of monthly international airline passengers, the monthly ozone concentration in downtown Los Angeles, and an unevenly sampled time series of oxygen isotope measurements from a cave in north India. We find that ND outperforms popular time-series forecasting techniques, including long short-term memory network, echo-state networks, autoregressive integrated moving average (ARIMA), seasonal ARIMA, support vector regression with a radial basis function, and Gashler and Ashmore’s model.", "title": "" }, { "docid": "bca8e93c1fc728fb3187556b3241520c", "text": "For decades, intelligent tutoring systems researchers have been developing various methods of student modeling. 
Most of the models, including two of the most popular approaches: Knowledge Tracing model and Performance Factor Analysis, all have similar assumption: the information needed to model the student is the student’s performance. However, there are other sources of information that are not utilized, such as the performance on other students in same class. This paper extends the Student-Skill extension of Knowledge Tracing, to take into account the class information, and learns four parameters: prior knowledge, learn, guess and slip for each class of students enrolled in the system. The paper then compares the accuracy using the four parameters for each class versus the four parameters for each student to find out which parameter set works better in predicting student performance. The result shows that modeling at coarser grain sizes can actually result in higher predictive accuracy, and data about classmates’ performance is results in a higher predictive accuracy on unseen test data.", "title": "" }, { "docid": "5275184686a8453a1922cec7a236b66d", "text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.", "title": "" }, { "docid": "cbb538ec8db6575200828d3119027960", "text": "Grammar induction is the task of learning a grammar from a set of examples. Recently, neural networks have been shown to be powerful learning machines that can identify patterns in streams of data. In this work we investigate their effectiveness in inducing a regular grammar from data, without any assumptions about the grammar. We train a recurrent neural network to distinguish between strings that are in or outside a regular language, and utilize an algorithm for extracting the learned finitestate automaton. We apply this method to several regular languages and find unexpected results regarding the connections between the network’s states that may be regarded as evidence for generalization.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. 
Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "de721f4b839b0816f551fa8f8ee2065e", "text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.", "title": "" }, { "docid": "161fad0fa7430ccc5ebf0aca146e624d", "text": "The medical literature describes lipoatrophia semicircularis as a rare, idiopathic condition, that consists clinically of a semicircular zone of atrophy of the subcutaneous fatty tissue located mostly on the front of the thighs. The disorder is mainly afflicting office workers. Since 1995, we have diagnosed more than 900 cases in our company. Also in other companies (national and international) lipoatrophia semicircularis is diagnosed.", "title": "" }, { "docid": "15a804f7495689caba758d1f418a54ed", "text": "Replay attacks presents a great risk for Automatic Speaker Verification (ASV) system. In this paper, we propose a novel replay detector based on Variable length Teager Energy OperatorEnergy Separation Algorithm-Instantaneous Frequency Cosine Coefficients (VESA-IFCC) for the ASV spoof 2017 challenge. The key idea here is to exploit the contribution of IF in each subband energy via ESA to capture possible changes in spectral envelope (due to transmission and channel characteristics of replay device) of replayed speech. The IF is computed from narrowband components of speech signal, and DCT is applied in IF to get proposed feature set. We compare the performance of the proposed VESA-IFCC feature set with the features developed for detecting synthetic and voice converted speech. This includes the CQCC, CFCCIF and prosody-based features. On the development set, the proposed VESA-IFCC features when fused at score-level with a variant of CFCCIF and prosodybased features gave the least EER of 0.12 %. On the evaluation set, this combination gave an EER of 18.33 %. However, post-evaluation results of challenge indicate that VESA-IFCC features alone gave the relatively least EER of 14.06 % (i.e., relatively 16.11 % less compared to baseline CQCC) and hence, is a very useful countermeasure to detect replay attacks.", "title": "" }, { "docid": "25c3121a0a482b4cc3e670db49ddb10c", "text": "Acquired real-time image sequences, in their original form may not have good viewing quality due to lack of proper lighting or inherent noise. For example, in X-ray imaging, when continuous exposure is used to obtain an image sequence or video, usually low-level exposure is administered until the region of interest is identified. 
In this case, and many other similar situations, it is desired to improve the image quality in real-time. One particular method of interest, which extensively is used for enhancement of still images, is Contrast Limited Adaptive Histogram Equalization (CLAHE) proposed in [1] and summarized in [2]. This approach is computationally extensive and it is usually used for off-line image enhancement. Because of its performance, hardware implementation of this algorithm for enhancement of real-time image sequences is sought. In this paper, a system level realization of CLAHE is proposed, which is suitable for VLSI or FPGA implementation. The goal for this realization is to minimize the latency without sacrificing precision.", "title": "" }, { "docid": "5bf2662b043011999fa0c1cbb5099387", "text": "With the introduction of new technology in our daily life, it is essential that this technology is used for the aid of the elderly. Falls cause a very high risk to the elderly's life. Accordingly, this paper's focus is on technology that would aid the elderly. These technologies include: Wearable- based, audio- based, and video-based fall detection systems. This paper surveys the literature regarding fall detection algorithms using those three branches and the various sensors they employ. Looking at wearable technology, the technology is cheap and accurate but inconvenient. Audio-based technology on the other hand is more convenient and is cheaper than video-based technology. However audio-based technology is hard to set up compared to video and wearable-based technologies. Video- based technology is accurate and easy to set up. At the moment, video-based technology is the most expensive compared to the other two, and it is also prone to occlusion. However as homes become smarter and prices for cameras continue to drop, it is expected that this technology will be the best of the three due to its versatility.", "title": "" }, { "docid": "6349e0444220d4a8ea3c34755954a58a", "text": "We present QuickNet, a fast and accurate network architecture that is both faster and significantly more accurate than other “fast” deep architectures like SqueezeNet. Furthermore, it uses less parameters than previous networks, making it more memory efficient. We do this by making two major modifications to the reference “Darknet” model (Redmon et al, 2015): 1) The use of depthwise separable convolutions and 2) The use of parametric rectified linear units. We make the observation that parametric rectified linear units are computationally equivalent to leaky rectified linear units at test time and the observation that separable convolutions can be interpreted as a compressed Inception network (Chollet, 2016). Using these observations, we derive a network architecture, which we call QuickNet, that is both faster and more accurate than previous models. Our architecture provides at least four major advantages: (1) A smaller model size, which is more tenable on memory constrained systems; (2) A significantly faster network which is more tenable on computationally constrained systems; (3) A high accuracy of 95.7% on the CIFAR-10 Dataset which outperforms all but one result published so far, although we note that our works are orthogonal approaches and can be combined (4) Orthogonality to previous model compression approaches allowing for further speed gains to be realized.", "title": "" } ]
scidocsrr
0f150c6df8452b9207e6a2a425d6c8be
An Overview of Localization Methods for Multi-Agent Systems
[ { "docid": "55160cc3013b03704555863c710e6d21", "text": "Localization is one of the most important capabilities for autonomous mobile agents. Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than M L . CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than M L . In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.", "title": "" } ]
[ { "docid": "46a1dd05e29e206b9744bf15d48f5a5e", "text": "In this paper, we propose a distributed version of the Hungarian method to solve the well-known assignment problem. In the context of multirobot applications, all robots cooperatively compute a common assignment that optimizes a given global criterion (e.g., the total distance traveled) within a finite set of local computations and communications over a peer-to-peer network. As a motivating application, we consider a class of multirobot routing problems with “spatiotemporal” constraints, i.e., spatial targets that require servicing at particular time instants. As a means of demonstrating the theory developed in this paper, the robots cooperatively find online suboptimal routes by applying an iterative version of the proposed algorithm in a distributed and dynamic setting. As a concrete experimental test bed, we provide an interactive “multirobot orchestral” framework, in which a team of robots cooperatively plays a piece of music on a so-called orchestral floor.", "title": "" }, { "docid": "38666c5299ee67e336dc65f23f528a56", "text": "Different modalities of magnetic resonance imaging (MRI) can indicate tumor-induced tissue changes from different perspectives, thus benefit brain tumor segmentation when they are considered together. Meanwhile, it is always interesting to examine the diagnosis potential from single modality, considering the cost of acquiring multi-modality images. Clinically, T1-weighted MRI is the most commonly used MR imaging modality, although it may not be the best option for contouring brain tumor. In this paper, we investigate whether synthesizing FLAIR images from T1 could help improve brain tumor segmentation from the single modality of T1. This is achieved by designing a 3D conditional Generative Adversarial Network (cGAN) for FLAIR image synthesis and a local adaptive fusion method to better depict the details of the synthesized FLAIR images. The proposed method can effectively handle the segmentation task of brain tumors that vary in appearance, size and location across samples.", "title": "" }, { "docid": "1dac710a7c845bd3a55d8d92c18e3648", "text": "PURPOSE\nWe have conducted experiments with an innovatively designed robot endoscope holder for laparoscopic surgery that is small and low cost.\n\n\nMATERIALS AND METHODS\nA compact light endoscope robot (LER) that is placed on the patient's skin and can be used with the patient in the lateral or dorsal supine position was tested on cadavers and laboratory pigs in order to allow successive modifications. The current control system is based on voice recognition. The range of vision is 360 degrees with an angle of 160 degrees . Twenty-three procedures were performed.\n\n\nRESULTS\nThe tests made it possible to advance the prototype on a variety of aspects, including reliability, steadiness, ergonomics, and dimensions. The ease of installation of the robot, which takes only 5 minutes, and the easy handling made it possible for 21 of the 23 procedures to be performed without an assistant.\n\n\nCONCLUSION\nThe LER is a camera holder guided by the surgeon's voice that can eliminate the need for an assistant during laparoscopic surgery. The ease of installation and manufacture should make it an effective and inexpensive system for use on patients in the lateral and dorsal supine positions. 
Randomized clinical trials will soon validate a new version of this robot prior to marketing.", "title": "" }, { "docid": "c0b67f38519fa37f9bf13dddd421b82d", "text": "A humanoid robot navigating in an unstructured environment requires knowledge of the affordances which allow it to make contact with the environment. This knowledge often comes from a perception system, which processes data from 3D sensors such as LIDAR and extracts available areas for the robot to make contact. Because perception systems run independently of the robot's planner, without knowledge of the robot's goal, they must process the entire visible area. In large environments, or those with complex geometry, the perception system may spend significant time processing areas of the environment that the planner will never consider visiting. By integrating the perception process with the planner, we are able to improve the speed with which the robot can compute a motion plan by only processing those areas of the environment which are considered by the planner for navigation. Two experiments with simulated and real-world point cloud data suggest that our framework can produce comparable plans up to seven times faster than the perceive-then-plan approach.", "title": "" }, { "docid": "f01e41cda3fc8dc0385a1d376cd887ce", "text": "This paper reports a planar induction motor that can output 70 N translational thrust and 9 Nm torque with a response time of 10 ms. The motor consists of three linear induction armatures with vector control drivers and three optical mouse sensors. First, an idea to combine multiple linear induction elements is proposed. The power distribution to each element is derived from the position and orientation of that element. A discussion of the developed system and its measured characteristics follows. The experimental results highlight the potential of its direct drive features.", "title": "" }, { "docid": "4d8f38413169a572c0087fd180a97e44", "text": "As continued scaling of silicon FETs grows increasingly challenging, alternative paths for improving digital system energy efficiency are being pursued. These paths include replacing the transistor channel with emerging nanomaterials (such as carbon nanotubes), as well as utilizing negative capacitance effects in ferroelectric materials in the FET gate stack, e.g., to improve sub-threshold slope beyond the 60 mV/decade limit. However, which path provides the largest energy efficiency benefits—and whether these multiple paths can be combined to achieve additional energy efficiency benefits—is still unclear. Here, we experimentally demonstrate the first negative capacitance carbon nanotube FETs (CNFETs), combining the benefits of both carbon nanotube channels and negative capacitance effects. We demonstrate negative capacitance CNFETs, achieving sub-60 mV/decade sub-threshold slope with an average sub-threshold slope of 55 mV/decade at room temperature. The average ON-current ($I_{\mathrm{ON}}$) of these negative capacitance CNFETs improves by $2.1\times$ versus baseline CNFETs (i.e., without negative capacitance) for the same OFF-current ($I_{\mathrm{OFF}}$). 
This work demonstrates a promising path forward for future generations of energy-efficient electronic systems.", "title": "" }, { "docid": "7888fdb4698faca5c4b2dd8c79932df2", "text": "A quadruped robot “Baby Elephant” with parallel legs has been developed. It is about 1 m tall, 1.2 m long and 0.5m wide. It weighs about 130 kg. Driven by a new type of hydraulic actuation system, the Baby Elephant is designed to work as a mechanical carrier. It can carry a payload more than 50 kg. In this study, the structure of the legs is introduced first. Then the effect of the springs for increasing the loading capability is explained and discovered. The design problem of the spring parameters is also discussed. Finally, simulations and experiments are carried out to confirm the effect.", "title": "" }, { "docid": "a0d2ea9b5653d6ca54983bb3d679326e", "text": "A dynamic reasoning system (DRS) is an adaptation of a conventional formal logical system that explicitly portrays reasoning as a temporal activity, with each extralogical input to the system and each inference rule application being viewed as occurring at a distinct timestep. Every DRS incorporates some well-defined logic together with a controller that serves to guide the reasoning process in response to user inputs. Logics are generic, whereas controllers are application specific. Every controller does, nonetheless, provide an algorithm for nonmonotonic belief revision. The general notion of a DRS comprises a framework within which one can formulate the logic and algorithms for a given application and prove that the algorithms are correct, that is, that they serve to (1) derive all salient information and (2) preserve the consistency of the belief set. This article illustrates the idea with ordinary first-order predicate calculus, suitably modified for the present purpose, and two examples. The latter example revisits some classic nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be resolved in the context of a DRS, using an expanded version of first-order logic that incorporates typed predicate symbols. All concepts are rigorously defined and effectively computable, thereby providing the foundation for a future software implementation.", "title": "" }, { "docid": "9ed31c8a584fdc5548b3aa2df10ba30b", "text": "This paper investigates if the Activity-Theoretical methods of work development used by Engeström and others can be transformed into a day-to-day methodology for information systems practitioners. We first present and justify our theoretical framework of Activity Analysis and Development fairly extensively. In the second part we compare work development with information systems development and argue that in its less technological areas, the latter can potentially use the same methodologies as the former. In the third part, small experiments on using Activity Analysis during the earliest phases of information systems development in Nigeria and Finland are reported. In conclusion, we argue that the experiments were encouraging, but the methodology needs to be supported by further illustrative examples and training material. We argue that compared to currently used methods in the earliest and latest “phases” of systems development, Activity Analysis and Development is comprehensive, theoretically well founded, detailed and practicable. 
©Scandinavian Journal of Information Systems, 2000, 12: 191191", "title": "" }, { "docid": "96dd59674377d7d19adc36b2ad87fc43", "text": "Two section wideband hybrid coupler with short-circuited coupled lines in the middle branch is demonstrated. The coupled lines are designed with a parallel-coupled 3-line, which has tight coupling and symmetric transmission phase over the center frequency. Without defected ground planes and a multilayer process, the proposed two-section coupler has the widest fractional bandwidth (FBW) than other two or even three section hybrid couplers modified from branch-line couplers. The designed wideband coupler has 55% FBW at the center frequency of 1.9 GHz. The bandwidth is limited by 1-dB power imbalance and the worst return loss, isolation, and phase imbalance within the bandwidth are 20.1 dB, 20.8 dB, and 3.2°, respectively.", "title": "" }, { "docid": "9772d6f0173d88ce000974f912e458f5", "text": "To be tractable and robust to data noise, existing metric learning algorithms commonly rely on PCA as a pre-processing step. How can we know, however, that PCA, or any other specific dimensionality reduction technique, is the method of choice for the problem at hand? The answer is simple: We cannot! To address this issue, in this paper, we develop a Riemannian framework to jointly learn a mapping performing dimensionality reduction and a metric in the induced space. Our experiments evidence that, while we directly work on high-dimensional features, our approach yields competitive runtimes with and higher accuracy than state-of-the-art metric learning algorithms.", "title": "" }, { "docid": "c67a7eab2370315159200ac65c3fe52b", "text": "Convolutional neural networks (CNNs) are the core of most state-of-the-art deep learning algorithms specialized for object detection and classification. CNNs are both computationally complex and embarrassingly parallel. Two properties that leave room for potential software and hardware optimizations for embedded systems. Given a programmable hardware accelerator with a CNN oriented custom instructions set, the compiler’s task is to exploit the hardware’s full potential, while abiding with the hardware constraints and maintaining generality to run different CNN models with varying workload properties. Snowflake is an efficient and scalable hardware accelerator implemented on programmable logic devices. It implements a control pipeline for a custom instruction set. The goal of this paper is to present Snowflake’s compiler that generates machine level instructions from Torch7 model description files. The main software design points explored in this work are: model structure parsing, CNN workload breakdown, loop rearrangement for memory bandwidth optimizations and memory access balancing. The performance achieved by compiler generated instructions matches against hand optimized code for convolution layers. Generated instructions also efficiently execute AlexNet and ResNet18 inference on Snowflake. Snowflake with 256 processing units was synthesized on Xilinx’s Zynq XC7Z045 FPGA. At 250 MHz, AlexNet achieved in 93.6 frames/s and 1.2 GB/s of off-chip memory bandwidth, and 21.4 frames/s and 2.2 GB/s for ResNet18. Total on-chip power is 5 W.", "title": "" }, { "docid": "8a560246be1a816b232415fa237499f9", "text": "Analytical SQL queries are a valuable source of information. Query log analysis can provide insight into the usage of datasets and uncover knowledge that cannot be inferred from source schemas or content alone. 
To unlock this potential, flexible mechanisms for meta-querying are required. Syntactic and semantic aspects of queries must be considered along with contextual information.\n We present an extensible framework for analyzing SQL query logs. Query logs are mapped to a multi-relational graph model and queried using domain-specific traversal expressions. To enable concise and expressive meta-querying, semantic analyses are conducted on normalized relational algebra trees with accompanying schema lineage graphs. Syntactic analyses can be conducted on corresponding query texts and abstract syntax trees. Additional metadata allows to inspect the temporal and social context of each query.\n In this demonstration, we show how query log analysis with our framework can support data source discovery and facilitate collaborative data science. The audience can explore an exemplary query log to locate queries relevant to a data analysis scenario, conduct graph analyses on the log and assemble a customized logmonitoring dashboard.", "title": "" }, { "docid": "d8f21e77a60852ea83f4ebf74da3bcd0", "text": "In recent years different lines of evidence have led to the idea that motor actions and movements in both vertebrates and invertebrates are composed of elementary building blocks. The entire motor repertoire can be spanned by applying a well-defined set of operations and transformations to these primitives and by combining them in many different ways according to well-defined syntactic rules. Motor and movement primitives and modules might exist at the neural, dynamic and kinematic levels with complicated mapping among the elementary building blocks subserving these different levels of representation. Hence, while considerable progress has been made in recent years in unravelling the nature of these primitives, new experimental, computational and conceptual approaches are needed to further advance our understanding of motor compositionality.", "title": "" }, { "docid": "4c603b3e490cfce2180189e6bc972a28", "text": "Currently, multi-organ segmentation (MOS) in abdominal CT can fail to handle clinical patient population with missing organs due to surgical resection. In order to enable the state-of-the-art MOS for these clinically important cases, we propose (1) automatic missing organ detection (MOD) by testing abnormality of post-surgical organ motion and organ-specific intensity homogeneity, and (2) atlas-based MOS of 10 abdominal organs that handles missing organs automatically. The proposed methods are validated with 44 abdominal CT scans including 9 diseased cases with surgical organ resections, resulting in 93.3% accuracy for MOD and improved overall segmentation accuracy by the proposed MOS method when tested on difficult diseased cases,", "title": "" }, { "docid": "42ea7c0ba51c3d0da09e15b61592eb86", "text": "While labeled data is expensive to prepare, ever increasing amounts of unlabeled data is becoming widely available. In order to adapt to this phenomenon, several semi-supervised learning (SSL) algorithms, which learn from labeled as well as unlabeled data, have been developed. In a separate line of work, researchers have started to realize that graphs provide a natural way to represent data in a variety of domains. Graph-based SSL algorithms, which bring together these two lines of work, have been shown to outperform the state-of-the-art in many applications in speech processing, computer vision, natural language processing, and other areas of Artificial Intelligence. 
Recognizing this promising and emerging area of research, this synthesis lecture focuses on graphbased SSL algorithms (e.g., label propagation methods). Our hope is that after reading this book, the reader will walk away with the following: (1) an in-depth knowledge of the current stateof-the-art in graph-based SSL algorithms, and the ability to implement them; (2) the ability to decide on the suitability of graph-based SSL methods for a problem; and (3) familiarity with different applications where graph-based SSL methods have been successfully applied.", "title": "" }, { "docid": "4b156066e72d0e8bf220c3e13738d91c", "text": "We present an unsupervised approach for abnormal event detection in videos. We propose, given a dictionary of features learned from local spatiotemporal cuboids using the sparse coding objective, the abnormality of an event depends jointly on two factors: the frequency of each feature in reconstructing all events (or, rarity of a feature) and the strength by which it is used in reconstructing the current event (or, the absolute coefficient). The Incremental Coding Length (ICL) of a feature is a measure of its entropy gain. Given a dictionary, the ICL computation does not involve any parameter, is computationally efficient and has been used for saliency detection in images with impressive results. In this paper, the rarity of a dictionary feature is learned online as its average energy, a function of its ICL. The proposed approach is applicable to real world streaming videos. Experiments on three benchmark datasets and evaluations in comparison with a number of mainstream algorithms show that the approach is comparable to the state-of-the-art.", "title": "" }, { "docid": "49e1dc71e71b45984009f4ee20740763", "text": "The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find \"vulnerable\" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hour and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. 
This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.", "title": "" } ]
scidocsrr
5574ad33afab241f5cd03373eaea39fc
AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes
[ { "docid": "388ce05c20f725de6b620b1efea546df", "text": "Recent work has shown success in learning word embeddings with neural network language models (NNLM). However, the majority of previous NNLMs represent each word with a single embedding, which fails to capture polysemy. In this paper, we address this problem by representing words with multiple and sense-specific embeddings, which are learned from bilingual parallel data. We evaluate our embeddings using the word similarity measurement and show that our approach is significantly better in capturing the sense-level word similarities. We further feed our embeddings as features in Chinese named entity recognition and obtain noticeable improvements against single embeddings.", "title": "" }, { "docid": "d2a1ecb8ad28ed5ba75460827341f741", "text": "Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense.1 The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other: (1) highquality WSR will capture rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD will provide reliable disambiguated corpora for learning better sense representations. Experimental results show that, our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms stateof-the-art supervised methods on domainspecific WSD, and achieves competitive performance on coarse-grained all-words WSD.", "title": "" } ]
[ { "docid": "bc07015b2a2624a75a656ae50d3b4e07", "text": "Current NAC technologies implement a pre-connect phase whe re t status of a device is checked against a set of policies before being granted access to a network, an d a post-connect phase that examines whether the device complies with the policies that correspond to its rol e in the network. In order to enhance current NAC technologies, we propose a new architecture based on behaviorsrather thanrolesor identity, where the policies are automatically learned and updated over time by the membe rs of the network in order to adapt to behavioral changes of the devices. Behavior profiles may be presented as identity cards that can change over time. By incorporating an Anomaly Detector (AD) to the NAC server or t each of the hosts, their behavior profile is modeled and used to determine the type of behaviors that shou ld be accepted within the network. These models constitute behavior-based policies. In our enhanced NAC ar chitecture, global decisions are made using a group voting process. Each host’s behavior profile is used to compu te a partial decision for or against the acceptance of a new profile or traffic. The aggregation of these partial vote s amounts to the model-group decision. This voting process makes the architecture more resilient to attacks. E ven after accepting a certain percentage of malicious devices, the enhanced NAC is able to compute an adequate deci sion. We provide proof-of-concept experiments of our architecture using web traffic from our department netwo rk. Our results show that the model-group decision approach based on behavior profiles has a 99% detection rate o f nomalous traffic with a false positive rate of only 0.005%. Furthermore, the architecture achieves short latencies for both the preand post-connect phases.", "title": "" }, { "docid": "689c2bac45b0933994337bd28ce0515d", "text": "Jealousy is a powerful emotional force in couples' relationships. In just seconds it can turn love into rage and tenderness into acts of control, intimidation, and even suicide or murder. Yet it has been surprisingly neglected in the couples therapy field. In this paper we define jealousy broadly as a hub of contradictory feelings, thoughts, beliefs, actions, and reactions, and consider how it can range from a normative predicament to extreme obsessive manifestations. We ground jealousy in couples' basic relational tasks and utilize the construct of the vulnerability cycle to describe processes of derailment. We offer guidelines on how to contain the couple's escalation, disarm their ineffective strategies and power struggles, identify underlying vulnerabilities and yearnings, and distinguish meanings that belong to the present from those that belong to the past, or to other contexts. The goal is to facilitate relational and personal changes that can yield a better fit between the partners' expectations.", "title": "" }, { "docid": "59d1d3073d2f56b35c6c54bc034d3f1a", "text": "Nowadays, many new social networks offering specific services spring up overnight. In this paper, we want to detect communities for emerging networks. Community detection for emerging networks is very challenging as information in emerging networks is usually too sparse for traditional methods to calculate effective closeness scores among users and achieve good community detection results. 
Meanwhile, users nowadays usually join multiple social networks simultaneously, some of which are developed and can share common information with the emerging networks. Based on both link and attribution information across multiple networks, a new general closeness measure, intimacy, is introduced in this paper. With both micro and macro controls, an effective and efficient method, CAD (Cold stArt community Detector), is proposed to propagate information from developed network to calculate effective intimacy scores among users in emerging networks. Extensive experiments conducted on real-world social networks demonstrate that CAD can perform very well in addressing the emerging network community detection problem.", "title": "" }, { "docid": "db2160b80dd593c33661a16ed2e404d1", "text": "Steganalysis tools play an important part in saving time and providing new angles of attack for forensic analysts. StegExpose is a solution designed for use in the real world, and is able to analyse images for LSB steganography in bulk using proven attacks in a time efficient manner. When steganalytic methods are combined intelligently, they are able generate even more accurate results. This is the prime focus of StegExpose.", "title": "" }, { "docid": "ede8a7a2ba75200dce83e17609ec4b5b", "text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.", "title": "" }, { "docid": "6e0a19a9bc744aa05a64bd7450cc4c1b", "text": "The success of deep neural networks hinges on our ability to accurately and efficiently optimize high-dimensional, non-convex functions. In this paper, we empirically investigate the loss functions of state-of-the-art networks, and how commonlyused stochastic gradient descent variants optimize these loss functions. To do this, we visualize the loss function by projecting them down to low-dimensional spaces chosen based on the convergence points of different optimization algorithms. Our observations suggest that optimization algorithms encounter and choose different descent directions at many saddle points to find different final weights. Based on consistency we observe across re-runs of the same stochastic optimization algorithm, we hypothesize that each optimization algorithm makes characteristic choices at these saddle points.", "title": "" }, { "docid": "8231458a76b5dc99b60d6a5b6ddaf5c8", "text": "In this research, we develop a fuzzy multi-objective mathematical model to identify and rank the candidate suppliers and find the optimal number of new and refurbished parts and final products in a reverse logistics network configuration. This modeling approach captures the inherent uncertainty in customers’ demand, suppliers’ capacity, and percentage of returned products as well as existence of conflicting objectives in reverse logistics systems. 
The objective functions in this study are defined as total profit, total defective parts, total late delivered parts, and economic risk factors associated with the candidate suppliers whereas the uncertainties are treated in a fuzzy environment. In order to avoid the subjective weighting from decision makers when solving the multi-objective model, a Monte Carlo simulation integrated with fuzzy goal programming is developed to determine the entire set of Paretooptimal solutions of the proposed model. The effectiveness of the mathematical model and the proposed solution method in obtaining Pareto-optimal solutions is demonstrated in a numerical example from a real case study. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4f64e7ff2bed569d73da9cae011e995d", "text": "Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, which are called as SDN units, are stacked one by one to integrate contextual information and bring the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which enhances the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve the new state-ofthe- art results on four datasets, including PASCAL VOC 2012, CamVid, GATECH, COCO Stuff. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6% in the test set.", "title": "" }, { "docid": "1965bdb0db078dd618a5b69f362701e3", "text": "Action parsing in videos with complex scenes is an interesting but challenging task in computer vision. In this paper, we propose a novel deep model based on 3D CNN (convolutional neural network) and LSTM (long short-term memory) module with a multi-task learning manner for effective Deep Action Parsing (DAP3D-Net) in videos. Particularly in the training phase, each action clip, sliced to several short consecutive segments, is fed into 3D CNN followed by LSTM to model the whole action dynamic information, so that action localization, classification and attributes learning can be jointly optimized via our deep model. Once the DAP3D-Net is trained, for an upcoming test video, we can describe each individual action in the video simultaneously as: Where the action occurs; What the action is and How the action is performed. To well demonstrate the effectiveness of the proposed DAP3D-Net, we also contribute a new Numerous-category Aligned Synthetic Action dataset, i.e., NASA, which consists of 200,000 action clips of 300 categories and with 33 pre-defined action attributes in two hierarchical levels (i.e., low-level attributes of basic body part movements and high-level attributes related to action motion). We learn DAP3D-Net using the NASA dataset and then evaluate it on our collected Human Action Understanding (HAU) dataset and the public THUMOS dataset. 
Experimental results show that our approach can accurately localize, categorize and describe multiple actions in realistic videos.", "title": "" }, { "docid": "2c19e34ba53e7eb8631d979c83ee3e55", "text": "This paper is the first attempt to learn the policy of an inquiry dialog system (IDS) by using deep reinforcement learning (DRL). Most IDS frameworks represent dialog states and dialog acts with logical formulae. In order to make learning inquiry dialog policies more effective, we introduce a logical formula embedding framework based on a recursive neural network. The results of experiments to evaluate the effect of 1) the DRL and 2) the logical formula embedding framework show that the combination of the two are as effective or even better than existing rule-based methods for inquiry dialog policies.", "title": "" }, { "docid": "ac4b6ec32fe607e5e9981212152901f5", "text": "As an important matrix factorization model, Nonnegative Matrix Factorization (NMF) has been widely used in information retrieval and data mining research. Standard Nonnegative Matrix Factorization is known to use the Frobenius norm to calculate the residual, making it sensitive to noises and outliers. It is desirable to use robust NMF models for practical applications, in which usually there are many data outliers. It has been studied that the 2,1, or 1-norm can be used for robust NMF formulations to deal with data outliers. However, these alternatives still suffer from the extreme data outliers. In this paper, we present a novel robust capped norm orthogonal Nonnegative Matrix Factorization model, which utilizes the capped norm for the objective to handle these extreme outliers. Meanwhile, we derive a new efficient optimization algorithm to solve the proposed non-convex non-smooth objective. Extensive experiments on both synthetic and real datasets show our proposed new robust NMF method consistently outperforms related approaches.", "title": "" }, { "docid": "1bd058af9437119fc2aee4678c848802", "text": "In this article we gave an overview of vision-based measurement (VBM), its various components, and uncertainty in the correct IM (instrumentation and measurement) metrological perspective. VBM is a fast rising technology due to the increasing affordability and capability of camera and computing hardware/software. While originally a specialized application, VBM is expected to become more ubiquitous in our everyday lives as apparent from the applications described in this article.", "title": "" }, { "docid": "9eee8ce8717f9a8c679d30d6d1db2b25", "text": "Contactless dosing of minute amounts of molten metals in the form of micron sized droplets is a promising technology with applications in the area of three dimensional printing (3DP) and electronics manufacturing. However the generation of droplets of higher melting point metals, such as aluminum and its alloys, has proven to be a challenging task. Difficulties arise mainly due to the high temperatures needed to get the metal into its liquid state and the inherent chemical aggressiveness of fluids like molten aluminum with its high reduction potential. 
Those conditions preclude the use of most of the common drop-on-demand (DoD) operating principles for the generation of metal droplets.", "title": "" }, { "docid": "1bdd7392e4fc5d78c7976bd3497cce64", "text": "PURPOSE\nInterests have been rapidly growing in the field of radiotherapy to replace CT with magnetic resonance imaging (MRI), due to superior soft tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR-only radiotherapy also simplifies clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT-equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and DRR-based patient positioning. Synthetic CT estimation is also important for PET attenuation correction in hybrid PET-MR systems. We propose in this work a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images.\n\n\nMETHODS\nThe proposed method builds upon recent developments of deep learning and convolutional neural networks in the computer vision literature. The proposed DCNN model has 27 convolutional layers interleaved with pooling and unpooling layers and 35 million free parameters, which can be trained to learn a direct end-to-end mapping from MR images to their corresponding CTs. Training such a large model on our limited data is made possible through the principle of transfer learning and by initializing model weights from a pretrained model. Eighteen brain tumor patients with both CT and T1-weighted MR images are used as experimental data and a sixfold cross-validation study is performed. Each sCT generated is compared against the real CT image of the same patient on a voxel-by-voxel basis. Comparison is also made with respect to an atlas-based approach that involves deformable atlas registration and patch-based atlas fusion.\n\n\nRESULTS\nThe proposed DCNN method produced a mean absolute error (MAE) below 85 HU for 13 of the 18 test subjects. The overall average MAE was 84.8 ± 17.3 HU for all subjects, which was found to be significantly better than the average MAE of 94.5 ± 17.8 HU for the atlas-based method. The DCNN method also provided significantly better accuracy when being evaluated using two other metrics: the mean squared error (188.6 ± 33.7 versus 198.3 ± 33.0) and the Pearson correlation coefficient(0.906 ± 0.03 versus 0.896 ± 0.03). Although training a DCNN model can be slow, training only need be done once. Applying a trained model to generate a complete sCT volume for each new patient MR image only took 9 s, which was much faster than the atlas-based approach.\n\n\nCONCLUSIONS\nA DCNN model method was developed, and shown to be able to produce highly accurate sCT estimations from conventional, single-sequence MR images in near real time. Quantitative results also showed that the proposed method competed favorably with an atlas-based method, in terms of both accuracy and computation speed at test time. Further validation on dose computation accuracy and on a larger patient cohort is warranted. Extensions of the method are also possible to further improve accuracy or to handle multi-sequence MR images.", "title": "" }, { "docid": "ee9cb495280dc6e252db80c23f2f8c2b", "text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. 
However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.", "title": "" }, { "docid": "61165d1a6a76233fca33e61b81c33da6", "text": "In recent years, deep convolutional neural networks have achieved state of the art performance in various computer vision tasks such as classification, detection or segmentation. Due to their outstanding performance, CNNs are more and more used in the field of document image analysis as well. In this work, we present a CNN architecture that is trained with the recently proposed PHOC representation. We show empirically that our CNN architecture is able to outperform state-of-the-art results for various word spotting benchmarks while exhibiting short training and test times.", "title": "" }, { "docid": "3682143e9cfe7dd139138b3b533c8c25", "text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.", "title": "" }, { "docid": "7a3f69a9da7fc754f6de7e5720147857", "text": "We compare four high-profile waterfall security-engineering processes (CLASP, Microsoft SDL, Cigital Touchpoints and Common Criteria) with the available preconditions within agile processes. Then, using a survey study, agile security activities are identified and evaluated by practitioners from large companies, e.g. software and telecommunication companies. Those activities are compared and a specific security engineering process is suggested for an agile process setting that can provide high benefit with low integration cost.", "title": "" }, { "docid": "3a3f3e1c0eac36d53a40d7639c3d65cc", "text": "The aim of this paper is to present a hybrid approach to accurate quantification of vascular structures from magnetic resonance angiography (MRA) images using level set methods and deformable geometric models constructed with 3-D Delaunay triangulation. Multiple scale filtering based on the analysis of local intensity structure using the Hessian matrix is used to effectively enhance vessel structures with various diameters. 
The level set method is then applied to automatically segment vessels enhanced by the filtering with a speed function derived from enhanced MRA images. Since the goal of this paper is to obtain highly accurate vessel borders, suitable for use in fluid flow simulations, in a subsequent step, the vessel surface determined by the level set method is triangulated using 3-D Delaunay triangulation and the resulting surface is used as a parametric deformable model. Energy minimization is then performed within a variational setting with a first-order internal energy; the external energy is derived from 3-D image gradients. Using the proposed method, vessels are accurately segmented from MRA data.", "title": "" }, { "docid": "b5dcb1496143f31526b3bd07b1045add", "text": "Crowdturfing has recently been identified as a sinister counterpart to the enormous positive opportunities of crowdsourcing. Crowdturfers leverage human-powered crowdsourcing platforms to spread malicious URLs in social media, form “astroturf” campaigns, and manipulate search engines, ultimately degrading the quality of online information and threatening the usefulness of these systems. In this paper we present a framework for “pulling back the curtain” on crowdturfers to reveal their underlying ecosystem. Concretely, we analyze the types of malicious tasks and the properties of requesters and workers in crowdsourcing sites such as Microworkers.com, ShortTask.com and Rapidworkers.com, and link these tasks (and their associated workers) on crowdsourcing sites to social media, by monitoring the activities of social media participants. Based on this linkage, we identify the relationship structure connecting these workers in social media, which can reveal the implicit power structure of crowdturfers identified on crowdsourcing sites. We identify three classes of crowdturfers – professional workers, casual workers, and middlemen – and we develop statistical user models to automatically differentiate these workers and regular social media users.", "title": "" } ]
scidocsrr
134669d305324b66a4cc2d523a8c1d61
The effect of Twitter on college student engagement and grades
[ { "docid": "d083e8ebddf43bcd8f1efd05aa708658", "text": "Even a casual reading of the extensive literature on student development in higher education can create confusion and perplexity. One finds not only that the problems being studied are highly diverse but also that investigators who claim to be studying the same problem frequently do not look at the same variables or employ the same methodologies. And even when they are investigating the same variables, different investigators may use completely different terms to describe and discuss these variables. My own interest in articulating a theory of student development is partly practical—I would like to bring some order into the chaos of the literature—and partly self-protective. I and increasingly bewildered by the muddle of f indings that have emerged from my own research in student development, research that I have been engaged in for more than 20 years. The theory of student involvement that I describe in this article appeals to me for several reasons. First, it is simple: I have not needed to draw a maze consisting of dozens of boxes interconnected by two-headed arrows to explain the basic elements of the theory to others. Second, the theory can explain most of the empirical knowledge about environmental influences on student development that researchers have gained over the years. Third, it is capable of embracing principles from such widely divergent sources as psychoanalysis and classical learning theory. Finally, this theory of student involvement can be used both by researchers to guide their investigation of student development—and by college administrators and", "title": "" } ]
[ { "docid": "b6d5849d7950438716e31880860f835c", "text": "The promotion of reflective capacity within the teaching of clinical skills and professionalism is posited as fostering the development of competent health practitioners. An innovative approach combines structured reflective writing by medical students and individualized faculty feedback to those students to augment instruction on reflective practice. A course for preclinical students at the Warren Alpert Medical School of Brown University, entitled \"Doctoring,\" combined reflective writing assignments (field notes) with instruction in clinical skills and professionalism and early clinical exposure in a small-group format. Students generated multiple e-mail field notes in response to structured questions on course topics. Individualized feedback from a physician-behavioral scientist dyad supported the students' reflective process by fostering critical-thinking skills, highlighting appreciation of the affective domain, and providing concrete recommendations. The development and implementation of this innovation are presented, as is an analysis of the written evaluative comments of students taking the Doctoring course. Theoretical and clinical rationales for features of the innovation and supporting evidence of their effectiveness are presented. Qualitative analyses of students' evaluations yielded four themes of beneficial contributions to their learning experience: promoting deeper and more purposeful reflection, the value of (interdisciplinary) feedback, the enhancement of group process, and personal and professional development. Evaluation of the innovation was the fifth theme; some limitations are described, and suggestions for improvement are provided. Issues of the quality of the educational paradigm, generalizability, and sustainability are addressed.", "title": "" }, { "docid": "1dcbd0c9fad30fcc3c0b6f7c79f5d04c", "text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.", "title": "" }, { "docid": "b3e90fdfda5346544f769b6dd7c3882b", "text": "Bromelain is a complex mixture of proteinases typically derived from pineapple stem. Similar proteinases are also present in pineapple fruit. Beneficial therapeutic effects of bromelain have been suggested or proven in several human inflammatory diseases and animal models of inflammation, including arthritis and inflammatory bowel disease. However, it is not clear how each of the proteinases within bromelain contributes to its anti-inflammatory effects in vivo. Previous in vivo studies using bromelain have been limited by the lack of assays to control for potential differences in the composition and proteolytic activity of this naturally derived proteinase mixture. 
In this study, we present model substrate assays and assays for cleavage of bromelain-sensitive cell surface molecules can be used to assess the activity of constituent proteinases within bromelain without the need for biochemical separation of individual components. Commercially available chemical and nutraceutical preparations of bromelain contain predominately stem bromelain. In contrast, the proteinase activity of pineapple fruit reflects its composition of fruit bromelain>ananain approximately stem bromelain. Concentrated bromelain solutions (>50 mg/ml) are more resistant to spontaneous inactivation of their proteolytic activity than are dilute solutions, with the proteinase stability in the order of stem bromelain>fruit bromelain approximately ananain. The proteolytic activity of concentrated bromelain solutions remains relatively stable for at least 1 week at room temperature, with minimal inactivation by multiple freeze-thaw cycles or exposure to the digestive enzyme trypsin. The relative stability of concentrated versus dilute bromelain solutions to inactivation under physiologically relevant conditions suggests that delivery of bromelain as a concentrated bolus would be the preferred method to maximize its proteolytic activity in vivo.", "title": "" }, { "docid": "9d9e06e02465fbe1a0dbbc62cf17c9cb", "text": "We present work-optimal PRAM algorithms for Burrows-Wheeler compression and decompression of strings over a constant alphabet. For a string of length n, the depth of the compression algorithm is O(log n), and the depth of the the corresponding decompression algorithm is O(log n). These appear to be the first polylogarithmic-time work-optimal parallel algorithms for any standard lossless compression scheme. The algorithms for the individual stages of compression and decompression may also be of independent interest: 1. a novel O(log n)-time, O(n)-work PRAM algorithm for Huffman decoding; 2. original insights into the stages of the BW compression and decompression problems, bringing out parallelism that was not readily apparent, allowing them to be mapped to elementary parallel routines that have O(log n)-time, O(n)-work solutions, such as: (i) prefix-sums problems with an appropriately-defined associative binary operator for several stages, and (ii) list ranking for the final stage of decompression.", "title": "" }, { "docid": "e13dd00f1bb5ba3a83caf8830714bc79", "text": "The concensus view has traditionally been that brains evolved to process information of ecological relevance. This view, however, ignores an important consideration: Brains are exceedingly expensive both to evolve and to maintain. The adult human brain weighs about 2% of body weight but consumes about 20% of total energy intake.2 In the light of this, it is difficult to justify the claim that primates, and especially humans, need larger brains than other species merely to do the same ecological job. Claims that primate ecological strategies involve more complex problem-solving3,4 are plausible when applied to the behaviors of particular species, such as termite-extraction by chimpanzees and nut-cracking by Cebus monkeys, but fail to explain why all primates, including those that are conventional folivores, require larger brains than those of all other mammals. 
An alternative hypothesis offered during the late 1980s was that primates’ large brains reflect the computational demands of the complex social systems that characterize the order.5,6 Prima facie, this suggestion seems plausible: There is ample evidence that primate social systems are more complex than those of other species. These systems can be shown to involve processes such as tactical deception5 and coalition-formation,7,8 which are rare or occur only in simpler forms in other taxonomic groups. Because of this, the suggestion was rapidly dubbed the Machiavellian intelligence hypothesis, although there is a growing preference to call it the social brain hypothesis.9,10 Plausible as it seems, the social brain hypothesis faced a problem that was recognized at an early date. Specifically, what quantitative empirical evidence there was tended to favor one or the other of the ecological hypotheses,1 whereas the evidence adduced in favor of the social brain hypothesis was, at best, anecdotal.6 In this article, I shall first show how we can test between the competing hypotheses more conclusively and then consider some of the implications of the social brain hypothesis for humans. Finally, I shall briefly consider some of the underlying cognitive mechanisms that might be involved.", "title": "" }, { "docid": "6c3be94fe73ef79d711ef5f8b9c789df", "text": "• Belief update based on m last rewards • Gaussian belief model instead of Beta • Limited lookahead to h steps and a myopic function in the horizon. • Noisy rewards Motivation: Correct sequential decision-making is critical for life success, and optimal approaches require signi!cant computational look ahead. However, simple models seem to explain people’s behavior. Questions: (1) Why we seem so simple compared to a rational agent? (2) What is the built-in model that we use to sequentially choose between courses of actions?", "title": "" }, { "docid": "f0a36e965ed2aa24add239e6498a0fd0", "text": "Currently two models of innovation are prevalent in organization science. The \"private investment\" model assumes returns to the innovator results from private goods and efficient regimes of intellectual property protection. The \"collective action\" model assumes that under conditions of market failure, innovators collaborate in order to produce a public good. The phenomenon of open source software development shows that users program to solve their own as well as shared technical problems, and freely reveal their innovations without appropriating private returns from selling the software. In this paper we propose that open source software development is an exemplar of a compound model of innovation that contains elements of both the private investment and the collective action models. We describe a new set of research questions this model raises for scholars in organization science. We offer some details regarding the types of data available for open source projects in order to ease access for researchers who are unfamiliar with these, and also offer some advice on conducting empirical studies on open source software development processes.", "title": "" }, { "docid": "7d896fc0defac1bd5f11d19f555536cc", "text": "Distributed processing frameworks, such as Yahoo!'s Hadoop and Google's MapReduce, have been successful at harnessing expansive datacenter resources for large-scale data analysis. However, their effect on datacenter energy efficiency has not been scrutinized. 
Moreover, the filesystem component of these frameworks effectively precludes scale-down of clusters deploying these frameworks (i.e. operating at reduced capacity). This paper presents our early work on modifying Hadoop to allow scale-down of operational clusters. We find that running Hadoop clusters in fractional configurations can save between 9% and 50% of energy consumption, and that there is a tradeoff between performance energy consumption. We also outline further research into the energy-efficiency of these frameworks.", "title": "" }, { "docid": "12a8d007ca4dce21675ddead705c7b62", "text": "This paper presents an ethnographic account of the implementation of Lean service redesign methodologies in one UK NHS hospital operating department. It is suggested that this popular management 'technology', with its emphasis on creating value streams and reducing waste, has the potential to transform the social organisation of healthcare work. The paper locates Lean healthcare within wider debates related to the standardisation of clinical practice, the re-configuration of occupational boundaries and the stratification of clinical communities. Drawing on the 'technologies-in-practice' perspective the study is attentive to the interaction of both the intent to transform work and the response of clinicians to this intent as an ongoing and situated social practice. In developing this analysis this article explores three dimensions of social practice to consider the way Lean is interpreted and articulated (rhetoric), enacted in social practice (ritual), and experienced in the context of prevailing lines of power (resistance). Through these interlinked analytical lenses the paper suggests the interaction of Lean and clinical practice remains contingent and open to negotiation. In particular, Lean follows in a line of service improvements that bring to the fore tensions between clinicians and service leaders around the social organisation of healthcare work. The paper concludes that Lean might not be the easy remedy for making both efficiency and effectiveness improvements in healthcare.", "title": "" }, { "docid": "00de76b9a27182c5551598871326f6b2", "text": "The development of computational thinking skills through computer programming is a major topic in education, as governments around the world are introducing these skills in the school curriculum. In consequence, educators and students are facing this discipline for the first time. Although there are many technologies that assist teachers and learners in the learning of this competence, there is a lack of tools that support them in the assessment tasks. This paper compares the computational thinking score provided by Dr. Scratch, a free/libre/open source software assessment tool for Scratch, with McCabe's Cyclomatic Complexity and Halstead's metrics, two classic software engineering metrics that are globally recognized as a valid measurement for the complexity of a software system. The findings, which prove positive, significant, moderate to strong correlations between them, could be therefore considered as a validation of the complexity assessment process of Dr. Scratch.", "title": "" }, { "docid": "438313d0634a4ac173b9fe2f2324e975", "text": "This study examines how a firm’s overall reputation status (reputation hereafter) affects its tax planning. 
Drawing on the moral licensing theory, we posit that managers’ and other stakeholders’ perception of a firm’s questionable behavior may be affected by the firm’s reputation and that a good reputation may help a firm to justify, or “license”, such behavior. This licensing effect may reduce a firm’s concerns about its tax avoidance behavior and incentivize reputable firms to engage in more tax reduction activities that have ambiguities in transgression. The empirical findings support our conjecture. Specifically, we test the association between a firm’s established reputation and its tax planning using multiple tax avoidance measures, which capture different tax reduction technologies that either fall into the gray area or violate tax and financial reporting rules. Relative to less reputable firms, more reputable firms on average avoid more taxes by using tax reduction technologies that have ambiguity in transgression, but are less likely to engage in tax-related activities that are blatant transgressions. We further investigate whether the licensing effect of reputation is more pronouced under the more principles-based or rules-based standards. Our findings suggest that the licensing effect is more pronounced under the more principle-based standards.", "title": "" }, { "docid": "17055a66f80354bf5a614a510a4ef689", "text": "People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for crossmodal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.", "title": "" }, { "docid": "0d8c6d7637582f4dfa05cca611c4736f", "text": "Because of their continuous and natural motion, fluidically powered soft actuators have shown potential in a range of robotic applications, including prosthetics and orthotics. Despite these advantages, robots using these actuators require stretchable sensors that can be embedded in their bodies for sophisticated functions. Presently, stretchable sensors usually rely on the electrical properties of materials and composites for measuring a signal; many of these sensors suffer from hysteresis, fabrication complexity, chemical safety and environmental instability, and material incompatibility with soft actuators. Many of these issues are solved if the optical properties of materials are used for signal transduction. We report the use of stretchable optical waveguides for strain sensing in a prosthetic hand. These optoelectronic strain sensors are easy to fabricate, are chemically inert, and demonstrate low hysteresis and high precision in their output signals. As a demonstration of their potential, the photonic strain sensors were used as curvature, elongation, and force sensors integrated into a fiber-reinforced soft prosthetic hand. 
The optoelectronically innervated prosthetic hand was used to conduct various active sensation experiments inspired by the capabilities of a real hand. Our final demonstration used the prosthesis to feel the shape and softness of three tomatoes and select the ripe one.", "title": "" }, { "docid": "cffca9fbd3a5c93175e06547831755e2", "text": "Many challenges in natural language processing require generating text, including language translation, dialogue generation, and speech recognition. For all of these problems, text generation becomes more difficult as the text becomes longer. Current language models often struggle to keep track of coherence for long pieces of text. Here, we attempt to have the model construct and use an outline of the text it generates to keep it focused. We find that the usage of an outline improves perplexity. We do not find that using the outline improves human evaluation over a simpler baseline, revealing a discrepancy in perplexity and human perception. Similarly, hierarchical generation is not found to improve human evaluation scores.", "title": "" }, { "docid": "3392de7e3182420e882617f0baff389a", "text": "BACKGROUND\nIndividuals who initiate cannabis use at an early age, when the brain is still developing, might be more vulnerable to lasting neuropsychological deficits than individuals who begin use later in life.\n\n\nMETHODS\nWe analyzed neuropsychological test results from 122 long-term heavy cannabis users and 87 comparison subjects with minimal cannabis exposure, all of whom had undergone a 28-day period of abstinence from cannabis, monitored by daily or every-other-day observed urine samples. We compared early-onset cannabis users with late-onset users and with controls, using linear regression controlling for age, sex, ethnicity, and attributes of family of origin.\n\n\nRESULTS\nThe 69 early-onset users (who began smoking before age 17) differed significantly from both the 53 late-onset users (who began smoking at age 17 or later) and from the 87 controls on several measures, most notably verbal IQ (VIQ). Few differences were found between late-onset users and controls on the test battery. However, when we adjusted for VIQ, virtually all differences between early-onset users and controls on test measures ceased to be significant.\n\n\nCONCLUSIONS\nEarly-onset cannabis users exhibit poorer cognitive performance than late-onset users or control subjects, especially in VIQ, but the cause of this difference cannot be determined from our data. The difference may reflect (1). innate differences between groups in cognitive ability, antedating first cannabis use; (2). an actual neurotoxic effect of cannabis on the developing brain; or (3). poorer learning of conventional cognitive skills by young cannabis users who have eschewed academics and diverged from the mainstream culture.", "title": "" }, { "docid": "c02e7ece958714df34539a909c2adb7d", "text": "Despite the growing evidence of the association between shame experiences and eating psychopathology, the specific effect of body image-focused shame memories on binge eating remains largely unexplored. The current study examined this association and considered current body image shame and self-criticism as mediators. A multi-group path analysis was conducted to examine gender differences in these relationships. 
The sample included 222 women and 109 men from the Portuguese general and college student populations who recalled an early body image-focused shame experience and completed measures of the centrality of the shame memory, current body image shame, binge eating symptoms, depressive symptoms, and self-criticism. For both men and women, the effect of the centrality of shame memories related to body image on binge eating symptoms was fully mediated by body image shame and self-criticism. In women, these effects were further mediated by self-criticism focused on a sense of inadequacy and also on self-hatred. In men, only the form of self-criticism focused on a sense of inadequacy mediated these associations. The present study has important implications for the conceptualization and treatment of binge eating symptoms. Findings suggest that, in both genders, body image-focused shame experiences are associated with binge eating symptoms via their effect on current body image shame and self-criticism.", "title": "" }, { "docid": "d719fb1fe0faf76c14d24f7587c5345f", "text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model— selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47 (appeared inIEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 20, No. 11, pp. 1186-1205, November 1998) Shape Evolution with Structural and Topological Changes using Blending Douglas DeCarlo and Dimitris Metaxas †", "title": "" }, { "docid": "5b73883a0bec8434fef8583143dac645", "text": "RC4 is the most widely deployed stream cipher in software applications. In this paper we describe a major statistical weakness in RC4, which makes it trivial to distinguish between short outputs of RC4 and random strings by analyzing their second bytes. This weakness can be used to mount a practical ciphertext-only attack on RC4 in some broadcast applications, in which the same plaintext is sent to multiple recipients under different keys.", "title": "" }, { "docid": "1378ab6b9a77dba00beb63c27b1addf6", "text": "Whenever we listen to or meet a new person we try to predict personality attributes of the person. Our behavior towards the person is hugely influenced by the predictions we make. Personality is made up of the characteristic patterns of thoughts, feelings and behaviors that make a person unique. Your personality affects your success in the role. Recognizing about yourself and reflecting on your personality can help you to understand how you might shape your future. 
Various approaches like personality prediction through speech, facial expression, video, and text are proposed in the literature to recognize personality. Personality predictions can be made out of one’s handwriting as well. The objective of this paper is to discuss the methodology used to identify personality through handwriting analysis and present the current state of the art related to it.", "title": "" }, { "docid": "b6a5cb59faea3e32d0046c0809ff715b", "text": "This paper discusses a novel fast approach for moving object detection in the H.264/AVC compressed domain for video surveillance applications. The proposed algorithm initially segments out edges from regions with motion at the macroblock level by utilizing the gradient of the quantization parameter over 2D-image space. A spatial median filtering of the segmented edges followed by weighted temporal accumulation accounts for whole object segmentation. To attain sub-macroblock (4×4) level precision, the size of macroblocks (in bits) is interpolated using a two-tap filter. Partial decoding rules out the complexity involved in full decoding and gives fast foreground segmentation results. Compared to other compressed domain techniques, the proposed approach allows the video streams to be encoded with different quantization parameters across macroblocks, thereby increasing flexibility in bit rate adjustment.", "title": "" } ]
scidocsrr
bd36cc3e4df180aaa44a286cb9ae0459
Learning task-specific models for dexterous, in-hand manipulation with simple, adaptive robot hands
[ { "docid": "a76826da7f077cf41aaa7c8eca9be3fe", "text": "In this paper we present an open-source design for the development of low-complexity, anthropomorphic, underactuated robot hands with a selectively lockable differential mechanism. The differential mechanism used is a variation of the whiffletree (or seesaw) mechanism, which introduces a set of locking buttons that can block the motion of each finger. The proposed design is unique since with a single motor and the proposed differential mechanism the user is able to control each finger independently and switch between different grasping postures in an intuitive manner. Anthropomorphism of robot structure and motion is achieved by employing in the design process an index of anthropomorphism. The proposed robot hands can be easily fabricated using low-cost, off-the-shelf materials and rapid prototyping techniques. The efficacy of the proposed design is validated through different experimental paradigms involving grasping of everyday life objects and execution of daily life activities. The proposed hands can be used as affordable prostheses, helping amputees regain their lost dexterity.", "title": "" }, { "docid": "9e88b710d55b90074a98ba70527e0cea", "text": "In this paper we present a series of design directions for the development of affordable, modular, light-weight, intrinsically-compliant, underactuated robot hands, that can be easily reproduced using off-the-shelf materials. The proposed robot hands, efficiently grasp a series of everyday life objects and are considered to be general purpose, as they can be used for various applications. The efficiency of the proposed robot hands has been experimentally validated through a series of experimental paradigms, involving: grasping of multiple everyday life objects with different geometries, myoelectric (EMG) control of the robot hands in grasping tasks, preliminary results on a grasping capable quadrotor and autonomous grasp planning under object position and shape uncertainties.", "title": "" } ]
[ { "docid": "8933d7d0f57a532ef27b9dbbb3727a88", "text": "All people can not do as they plan, it happens because of their habits. Therefore, habits and moods may affect their productivity. Hence, the habits and moods are the important parts of person's life. Such habits may be analyzed with various machine learning techniques as available nowadays. Now the question of analyzing the Habits and moods of a person with a goal of increasing one's productivity comes to mind. This paper discusses one such technique called HDML (Habit Detection with Machine Learning). HDML model analyses the mood which helps us to deal with a bad mood or a state of unproductivity, through suggestions about such activities that alleviate our mood. The overall accuracy of the model is about 87.5 %.", "title": "" }, { "docid": "643599f9b0dcfd270f9f3c55567ed985", "text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.", "title": "" }, { "docid": "5481f319296c007412e62129d2ec5943", "text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.", "title": "" }, { "docid": "532980d1216f9f10332cc13b6a093fb4", "text": "Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words. 
Current DSMs, however, represent context words as separate features, which causes the loss of important information for word expectations, such as word order and interrelations. In this paper, we present a DSM which addresses the issue by defining verb contexts as joint dependencies. We test our representation in a verb similarity task on two datasets, showing that joint contexts are more efficient than single dependencies, even with a relatively small amount of training data.", "title": "" }, { "docid": "cc99e806503b158aa8a41753adecd50c", "text": "Semantic Mutation Testing (SMT) is a technique that aims to capture errors caused by possible misunderstandings of the semantics of a description language. It is intended to target a class of errors which is different from those captured by traditional Mutation Testing (MT). This paper describes our experiences in the development of an SMT tool for the C programming language: SMT-C. In addition to implementing the essential requirements of SMT (generating semantic mutants and running SMT analysis) we also aimed to achieve the following goals: weak MT/SMT for C, good portability between different configurations, seamless integration into test routines of programming with C and an easy to use front-end.", "title": "" }, { "docid": "643358b55155cab539188423c2b92713", "text": "Recently, DevOps has emerged as an alternative for software organizations inserted into a dynamic market to handle daily software demands. As claimed, it intends to make the software development and operations teams to work collaboratively. However, it is hard to observe a shared understanding of DevOps, what potentially hinders the discussions in the literature and can confound observations when conducting empirical studies. Therefore, we performed a Multivocal Literature Review aiming at characterizing DevOps in multiple perspectives, including data sources from technical and gray literature. Grounded Theory procedures were used to rigorous analyze the collected data. It allowed us to achieve a grounded definition for DevOps, as well as to identify its recurrent principles, practices, required skills, potential benefits, challenges and what motivates the organizations to adopt it. Finally, we understand the DevOps movement has identified relevant issues in the state-of-the-practice. However, we advocate for the scientific investigations concerning the potential benefits and drawbacks as a consequence of adopting the suggested principles and practices.", "title": "" }, { "docid": "0a09f894029a0b8730918c14906dca9e", "text": "In the last few years, machine learning has become a very popular tool for analyzing financial text data, with many promising results in stock price forecasting from financial news, a development with implications for the E cient Markets Hypothesis (EMH) that underpins much economic theory. In this work, we explore recurrent neural networks with character-level language model pre-training for both intraday and interday stock market forecasting. In terms of predicting directional changes in the Standard & Poor’s 500 index, both for individual companies and the overall index, we show that this technique is competitive with other state-of-the-art approaches.", "title": "" }, { "docid": "13ab6462ca59ca8618174aa00c15ba58", "text": "In Brazil, around 2 000 000 families have not been connected to an electricity grid yet. Out of these, a significant number of villages may never be connected to the national grid due to their remoteness. 
For the people living in these communities, access to renewable energy sources is the only solution to meet their energy needs. In these communes, the electricity is mainly used for household purposes such as lighting. There is little scope for the productive use of energy. It is recognized that electric service contributes particularly to inclusive social development and to a lesser extent to pro-poor growth as well as to environmental sustainability. In this paper, we present the specification, design, and development of a standalone micro-grid supplied by a hybrid wind-solar generating source. The goal of the project was to provide a reliable, continuous, sustainable, and good-quality electricity service to users, as provided in bigger cities. As a consequence, several technical challenges arose and were overcome successfully as will be related in this paper, contributing to increase of confidence in renewable systems to isolated applications.", "title": "" }, { "docid": "bd32bda2e79d28122f424ec4966cde15", "text": "This paper holds a survey on plant leaf diseases classification using image processing. Digital image processing has three basic steps: image processing, analysis and understanding. Image processing contains the preprocessing of the plant leaf as segmentation, color extraction, diseases specific data extraction and filtration of images. Image analysis generally deals with the classification of diseases. Plant leaf can be classified based on their morphological features with the help of various classification techniques such as PCA, SVM, and Neural Network. These classifications can be defined various properties of the plant leaf such as color, intensity, dimensions. Back propagation is most commonly used neural network. It has many learning, training, transfer functions which is used to construct various BP networks. Characteristics features are the performance parameter for image recognition. BP networks shows very good results in classification of the grapes leaf diseases. This paper provides an overview on different image processing techniques along with BP Networks used in leaf disease classification.", "title": "" }, { "docid": "a57caf61fdae1ab9c1fc4d944ebe03cd", "text": "The handiness and ease of use of tele-technology like mobile phones has surged the growth of ICT in developing countries like India than ever. Mobile phones are showing overwhelming responses and have helped farmers to do the work on timely basis and stay connected with the outer farming world. But mobile phones are of no use when it comes to the real-time farm monitoring or accessing the accurate information because of the little research and application of mobile phone in agricultural field for such uses. The current demand of use of WSN in agricultural fields has revolutionized the farming experiences. In Precision Agriculture, the contribution of WSN are numerous staring from monitoring soil health, plant health to the storage of crop yield. Due to pressure of population and economic inflation, a lot of pressure is on farmers to produce more out of their fields with fewer resources. This paper gives brief insight into the relation of plant disease prediction with the help of wireless sensor networks. Keywords— Plant Disease Monitoring, Precision Agriculture, Environmental Parameters, Wireless Sensor Network (WSN)", "title": "" }, { "docid": "83f1fc22d029b3a424afcda770a5af23", "text": "Three species of Xerolycosa: Xerolycosa nemoralis (Westring, 1861), Xerolycosa miniata (C.L. 
Koch, 1834) and Xerolycosa mongolica (Schenkel, 1963), occurring in the Palaearctic Region are surveyed, illustrated and redescribed. Arctosa mongolica Schenkel, 1963 is removed from synonymy with Xerolycosa nemoralis and transferred to Xerolycosa, and the new combination Xerolycosa mongolica (Schenkel, 1963) comb. n. is established. One new synonymy, Xerolycosa undulata Chen, Song et Kim, 1998 syn.n. from Heilongjiang = Xerolycosa mongolica (Schenkel, 1963), is proposed. In addition, one more new combination is established, Trochosa pelengena (Roewer, 1960) comb. n., ex Xerolycosa.", "title": "" }, { "docid": "22cc9e5487975f8b7ca400ad69504107", "text": "IMSI Catchers are tracking devices that break the privacy of the subscribers of mobile access networks, with disruptive effects to both the communication services and the trust and credibility of mobile network operators. Recently, we verified that IMSI Catcher attacks are really practical for the state-of-the-art 4G/LTE mobile systems too. Our IMSI Catcher device acquires subscription identities (IMSIs) within an area or location within a few seconds of operation and then denies access of subscribers to the commercial network. Moreover, we demonstrate that these attack devices can be easily built and operated using readily available tools and equipment, and without any programming. We describe our experiments and procedures that are based on commercially available hardware and unmodified open source software.", "title": "" }, { "docid": "fa065201fb8c95487eb6a55942befc41", "text": "Numerous machine learning algorithms applied on Intrusion Detection System (IDS) to detect enormous attacks. However, it is difficult for machine to learn attack properties globally since there are huge and complex input features. Feature selection can overcome this problem by selecting the most important features only to reduce the dimensionality of input features. We leverage Artificial Neural Network (ANN) for the feature selection. In addition, in order to be suitable for resource-constrained devices, we can divide the IDS into smaller parts based on TCP/IP layer since different layer has specific attack types. We show the IDS for transport layer only as a prove of concept. We apply Stacked Auto Encoder (SAE) which belongs to deep learning algorithm as a classifier for KDD99 Dataset. Our experiment shows that the reduced input features are sufficient for classification task. 한국정보보호학회 하계학술대회 논문집 Vol. 26, No. 1", "title": "" }, { "docid": "6a1a62a5c586f0abd08a94a19371004f", "text": "Tourism is perceived as an appropriate solution for pursuing sustainable economic growth due to its main characteristics. In the context of sustainable tourism, gamification can act as an interface between tourists (clients), organisations (companies, NGOs, public institutions) and community, an interface built in a responsible and ethical way. The main objective of this study is to identify gamification techniques and applications used by organisations in the hospitality and tourism industry to improve their sustainable activities. The first part of the paper examines the relationship between gamification and sustainability, highlighting the links between these two concepts. The second part identifies success stories of gamification applied in hospitality and tourism and reviews gamification benefits by analysing the relationship between tourism organisations and three main tourism stakeholders: tourists, tourism employees and local community. 
The analysis is made in connection with the main pillars of sustainability: economic, social and environmental. This study is positioning the role of gamification in the tourism and hospitality industry and further, into the larger context of sustainable development.", "title": "" }, { "docid": "6465b2af36350a444fbc6682540ff21d", "text": "We present an algorithm for finding an <i>s</i>-sparse vector <i>x</i> that minimizes the <i>square-error</i> ∥<i>y</i> -- Φ<i>x</i>∥<sup>2</sup> where Φ satisfies the <i>restricted isometry property</i> (RIP), with <i>isometric constant</i> Δ<sub>2<i>s</i></sub> < 1/3. Our algorithm, called <b>GraDeS</b> (Gradient Descent with Sparsification) iteratively updates <i>x</i> as: [EQUATION]\n where γ > 1 and <i>H<sub>s</sub></i> sets all but <i>s</i> largest magnitude coordinates to zero. <b>GraDeS</b> converges to the correct solution in constant number of iterations. The condition Δ<sub>2<i>s</i></sub> < 1/3 is most general for which a <i>near-linear time</i> algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known, is Δ<sub>2<i>s</i></sub> < √2 -- 1.\n Our Matlab implementation of <b>GraDeS</b> outperforms previously proposed algorithms like Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but <b>GraDeS</b> finds the correct solution.", "title": "" }, { "docid": "440e45de4d13e89e3f268efa58f8a51a", "text": "This letter describes the concept, design, and measurement of a low-profile integrated microstrip antenna for dual-band applications. The antenna operates at both the GPS L1 frequency of 1.575 GHz with circular polarization and 5.88 GHz with a vertical linear polarization for dedicated short-range communication (DSRC) application. The antenna is low profile and meets stringent requirements on pattern/polarization performance in both bands. The design procedure is discussed, and full measured data are presented.", "title": "" }, { "docid": "50c639dfa7063d77cda26666eabeb969", "text": "This paper addresses the problem of detecting people in two dimensional range scans. Previous approaches have mostly used pre-defined features for the detection and tracking of people. We propose an approach that utilizes a supervised learning technique to create a classifier that facilitates the detection of people. In particular, our approach applies AdaBoost to train a strong classifier from simple features of groups of neighboring beams corresponding to legs in range data. Experimental results carried out with laser range data illustrate the robustness of our approach even in cluttered office environments", "title": "" }, { "docid": "3ecd1c083d256c7fd88991f1e442cb8b", "text": "It has long been observed that database management systems focus on traditional business applications, and that few people use a database management system outside their workplace. Many have wondered what it will take to enable the use of data management technology by a broader class of users and for a much wider range of applications.\n Google Fusion Tables represents an initial answer to the question of how data management functionality that focused on enabling new users and applications would look in today's computing environment. 
This paper characterizes such users and applications and highlights the resulting principles, such as seamless Web integration, emphasis on ease of use, and incentives for data sharing, that underlie the design of Fusion Tables. We describe key novel features, such as the support for data acquisition, collaboration, visualization, and web-publishing.", "title": "" }, { "docid": "3c82ba94aa4d717d51c99cfceb527f22", "text": "Manipulator collision avoidance using genetic algorithms is presented. Control gains in the collision avoidance control model are selected based on genetic algorithms. A repulsive force is artificially created using the distances between the robot links and obstacles, which are generated by a distance computation algorithm. Real-time manipulator collision avoidance control has achieved. A repulsive force gain is introduced through the approaches for definition of link coordinate frames and kinematics computations. The safety distance between objects is affected by the repulsive force gain. This makes the safety zone adjustable and provides greater intelligence for robotic tasks under the ever-changing environment.", "title": "" }, { "docid": "61c4146ac8b55167746d3f2b9c8b64e8", "text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.", "title": "" } ]
scidocsrr
c228feb490a660d1769263462d214886
Are these Ads Safe: Detecting Hidden Attacks through the Mobile App-Web Interfaces
[ { "docid": "0ee09adae30459337f8e7261165df121", "text": "Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats, but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformation over known malware with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.", "title": "" } ]
[ { "docid": "7d472441fb112f0851bcfe6854b8663e", "text": "Detection and recognition of traffic sign, including various road signs and text, play an important role in autonomous driving, mapping/navigation and traffic safety. In this paper, we proposed a traffic sign detection and recognition system by applying deep convolutional neural network (CNN), which demonstrates high performance with regard to detection rate and recognition accuracy. Compared with other published methods which are usually limited to a predefined set of traffic signs, our proposed system is more comprehensive as our target includes traffic signs, digits, English letters and Chinese characters. The system is based on a multi-task CNN trained to acquire effective features for the localization and classification of different traffic signs and texts. In addition to the public benchmarking datasets, the proposed approach has also been successfully evaluated on a field-captured Chinese traffic sign dataset, with performance confirming its robustness and suitability to real-world applications.", "title": "" }, { "docid": "15999217dea6ba3ab33ed193f83a42a3", "text": "This paper describes a very low cost MMIC high power amplifier (HPA) with output power of over 7W. The MMIC was fabricated using a GaAs PHEMT process with a state-of-the-art compact die area of 13.7mm2. The HPA MMIC contains a phase and amplitude compensated output power combiner and super low loss phase compensated inter-stage matching networks. A four stage amplifier demonstrated commercially available GaN PHEMT based HPA equivalent performance with 7W saturated output power and 24dB small signal gain from 27.5GHz to 30GHz with peak output power of 8.3W and power added efficiency (PAE) of 27%. This low cost MMIC HPA achieved approximately 10-times lower production cost than GaN PHEMT based MMIC HPAs.", "title": "" }, { "docid": "8abf8ef3e789b9dc2852228dd330609f", "text": "We propose a simple and useful idea based on cross-ratio constraint for wide-baseline matching and 3D reconstruction. Most existing methods exploit feature points and planes from images. Lines have always been considered notorious for both matching and reconstruction due to the lack of good line descriptors. We propose a method to generate and match new points using virtual lines constructed using pairs of keypoints, which are obtained using standard feature point detectors. We use cross-ratio constraints to obtain an initial set of new point matches, which are subsequently used to obtain line correspondences. We develop a method that works for both calibrated and uncalibrated camera configurations. We show compelling line-matching and large-scale 3D reconstruction.", "title": "" }, { "docid": "e82c0826863ccd9cd647725fc00a2137", "text": "Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. 
This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.", "title": "" }, { "docid": "3c4e3d86df819aea592282b171191d0d", "text": "Memory forensic analysis collects evidence for digital crimes and malware attacks from the memory of a live system. It is increasingly valuable, especially in cloud computing. However, memory analysis on on commodity operating systems (such as Microsoft Windows) faces the following key challenges: (1) a partial knowledge of kernel data structures; (2) difficulty in handling ambiguous pointers; and (3) lack of robustness by relying on soft constraints that can be easily violated by kernel attacks. To address these challenges, we present MACE, a memory analysis system that can extract a more complete view of the kernel data structures for closed-source operating systems and significantly improve the robustness by only leveraging pointer constraints (which are hard to manipulate) and evaluating these constraint globally (to even tolerate certain amount of pointer attacks). We have evaluated MACE on 100 memory images for Windows XP SP3 and Windows 7 SP0. Overall, MACE can construct a kernel object graph from a memory image in just a few minutes, and achieves over 95% recall and over 96% precision. Our experiments on real-world rootkit samples and synthetic attacks further demonstrate that MACE outperforms other external memory analysis tools with respect to wider coverage and better robustness.", "title": "" }, { "docid": "2dd273dc2c5b0d849cca13187419e373", "text": "As people across the globe are becoming more interested in watching their weight, eating more healthy, and avoiding obesity, a system that can measure calories and nutrition in every day meals can be very useful. In this paper, we propose a food calorie and nutrition measurement system that can help patients and dietitians to measure and manage daily food intake. Our system is built on food image processing and uses nutritional fact tables. Recently, there has been an increase in the usage of personal mobile technology such as smartphones or tablets, which users carry with them practically all the time. Via a special calibration technique, our system uses the built-in camera of such mobile devices and records a photo of the food before and after eating it to measure the consumption of calorie and nutrient components. Our results show that the accuracy of our system is acceptable and it will greatly improve and facilitate current manual calorie measurement techniques.", "title": "" }, { "docid": "cb2df8e27a3c284028d0fbb86652ae14", "text": "The large bulk of packets/flows in future core networks will require a highly efficient header processing in the switching elements. Simplifying lookup in core network switching elements is capital to transport data at high rates and with low latency. Flexible network hardware combined with agile network control is also an essential property for future software-defined networking. 
We argue that only further decoupling between the control and data planes will unlock the flexibility and agility in SDN for the design of new network solutions for core networks. This article proposes a new approach named KeyFlow to build a flexible network-fabricbased model. It replaces the table lookup in the forwarding engine by elementary operations relying on a residue number system. This provides us tools to design a stateless core network by still using OpenFlow centralized control. A proof of concept prototype is validated using the Mininet emulation environment and OpenFlow 1.0. The results indicate RTT reduction above 50 percent, especially for networks with densely populated flow tables. KeyFlow achieves above 30 percent reduction in keeping active flow state in the network.", "title": "" }, { "docid": "572453e5febc5d45be984d7adb5436c5", "text": "An analysis of several role playing games indicates that player quests share common elements, and that these quests may be abstractly represented using a small expressive language. One benefit of this representation is that it can guide procedural content generation by allowing quests to be generated using this abstraction, and then later converting them into a concrete form within a game’s domain.", "title": "" }, { "docid": "4f3e37db8d656fe1e746d6d3a37878b5", "text": "Shorter product life cycles and aggressive marketing, among other factors, have increased the complexity of sales forecasting. Forecasts are often produced using a Forecasting Support System that integrates univariate statistical forecasting with managerial judgment. Forecasting sales under promotional activity is one of the main reasons to use expert judgment. Alternatively, one can replace expert adjustments by regression models whose exogenous inputs are promotion features (price, display, etc.). However, these regression models may have large dimensionality as well as multicollinearity issues. We propose a novel promotional model that overcomes these limitations. It combines Principal Component Analysis to reduce the dimensionality of the problem and automatically identifies the demand dynamics. For items with limited history, the proposed model is capable of providing promotional forecasts by selectively pooling information across established products. The performance of the model is compared against forecasts provided by experts and statistical benchmarks, on weekly data; outperforming both substantially.", "title": "" }, { "docid": "668b8d1475bae5903783159a2479cc32", "text": "As environmental concerns and energy consumption continue to increase, utilities are looking at cost effective strategies for improved network operation and consumer consumption. Smart grid is a collection of next generation power delivery concepts that includes new power delivery components, control and monitoring throughout the power grid and more informed customer options. This session will cover utilization of AMI networks to realize some of the smart grid goals.", "title": "" }, { "docid": "2a4cb6dac01c4388b4b8d8a80e30fc2b", "text": "Chemotaxis toward amino-acids results from the suppression of directional changes which occur spontaneously in isotropic solutions.", "title": "" }, { "docid": "7003d59d401bce0f6764cc6aa25b5dd2", "text": "This paper presents a 13 bit 50 MS/s fully differential ring amplifier based SAR-assisted pipeline ADC, implemented in 65 nm CMOS. 
We introduce a new fully differential ring amplifier, which solves the problems of single-ended ring amplifiers while maintaining the benefits of high gain, fast slew based charging and an almost rail-to-rail output swing. We implement a switched-capacitor (SC) inter-stage residue amplifier that uses this new fully differential ring amplifier to give accurate amplification without calibration. In addition, a new floated detect-and-skip (FDAS) capacitive DAC (CDAC) switching method reduces the switching energy and improves linearity of first-stage CDAC. With these techniques, the prototype ADC achieves measured SNDR, SNR, and SFDR of 70.9 dB (11.5b), 71.3 dB and 84.6 dB, respectively, with a Nyquist frequency input. The prototype achieves 13 bit linearity without calibration and consumes 1 mW. This measured performance is equivalent to Walden and Schreier FoMs of 6.9 fJ/conversion ·step and 174.9 dB, respectively.", "title": "" }, { "docid": "0a4392285df7ddb92458ffa390f36867", "text": "A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground/background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.", "title": "" }, { "docid": "c7cfc79579704027bf28fc7197496b8c", "text": "There is a growing trend nowadays for patients to seek the least invasive treatments possible with less risk of complications and downtime to correct rhytides and ptosis characteristic of aging. Nonsurgical face and neck rejuvenation has been attempted with various types of interventions. Suture suspension of the face, although not a new idea, has gained prominence with the advent of the so called \"lunch-time\" face-lift. Although some have embraced this technique, many more express doubts about its safety and efficacy limiting its widespread adoption. The present review aims to evaluate several clinical parameters pertaining to thread suspensions such as longevity of results of various types of polypropylene barbed sutures, their clinical efficacy and safety, and the risk of serious adverse events associated with such sutures. Early results of barbed suture suspension remain inconclusive. Adverse events do occur though mostly minor, self-limited, and of short duration. Less clear are the data on the extent of the peak correction and the longevity of effect, and the long-term effects of the sutures themselves. The popularity of barbed suture lifting has waned for the time being. 
Certainly, it should not be presented as an alternative to a face-lift.", "title": "" }, { "docid": "256afadf1604bd8c5c1413555cb892a4", "text": "A continuous-time dynamic model of a network of N nonlinear elements interacting via random asymmetric couplings is studied. A self-consistent mean-field theory, exact in the N ~ limit, predicts a transition from a stationary phase to a chaotic phase occurring at a critical value of the gain parameter. The autocorrelations of the chaotic flow as well as the maximal Lyapunov exponent are calculated.", "title": "" }, { "docid": "3c7fe036fe65d5e045a61813b6f01622", "text": "BACKGROUND\nImplant-supported restorations have become the most popular therapeutic option for professionals and patients for the treatment of total and partial edentulism. When implants are placed in an ideal position, with adequate prosthetic loading and proper maintenance, they can have success rates >90% over 15 years of function. Implants may be considered a better therapeutic alternative than performing more extensive conservative procedures in an attempt to save or maintain a compromised tooth. Inadequate indication for tooth extraction has resulted in the sacrifice of many sound savable teeth. This article presents a chart that can assist clinicians in making the right decision when they are deciding which route to take.\n\n\nMETHODS\nArticles published in peer-reviewed English journals were selected using several scientific databases and subsequently reviewed. Book sources were also searched. Individual tooth- and patient-related features were thoroughly analyzed, particularly when determining if a tooth should be indicated for extraction.\n\n\nRESULTS\nA color-based decision-making chart with six different levels, including several factors, was developed based upon available scientific literature. The rationale for including these factors is provided, and its interpretation is justified with literature support.\n\n\nCONCLUSION\nThe decision-making chart provided may serve as a reference guide for dentists when making the decision to save or extract a compromised tooth.", "title": "" }, { "docid": "c50e7d16cfc2f71c256d952391dfb8ec", "text": "Fuzzy Cognitive Maps (FCMs) are a flexible modeling technique with the goal of modeling causal relationships. Traditionally FCMs are developed by experts. We need to learn FCMs directly from data when expert knowledge is not available. The FCM learning problem can be described as the minimization of the difference between the desired response of the system and the estimated response of the learned FCM model. Learning FCMs from data can be a difficult task because of the large number of candidate FCMs. A FCM learning algorithm based on Ant Colony Optimization (ACO) is presented in order to learn FCM models from multiple observed response sequences. Experiments on simulated data suggest that the proposed ACO based FCM learning algorithm is capable of learning FCM with at least 40 nodes. The performance of the algorithm was tested on both single response sequence and multiple response sequences. The test results are compared to several algorithms, such as genetic algorithms and nonlinear Hebbian learning rule based algorithms. The performance of the ACO algorithm is better than these algorithms in several different experiment scenarios in terms of model errors, sensitivities and specificities. 
The effect of number of response sequences and number of nodes is discussed.", "title": "" }, { "docid": "22c72f94040cd65dde8e00a7221d2432", "text": "Research on “How to create a fair, convenient attendance management system” is being pursued fervently by academics and government departments. This study is based on biometric recognition technology. The hand geometry machine captures the personal hand geometry data as the biometric code and applies this data in the attendance management system as the attendance record. The attendance records that use this technology are difficult for others to replicate. It can improve the reliability of the attendance records and avoid fraudulent issues that happen when a register is used. This research uses the social survey method (questionnaire) to evaluate the theory and practice of introducing biometric recognition technology (hand geometry capturing) into the attendance management system.", "title": "" }, { "docid": "4da68af0db0b1e16f3597c8820b2390d", "text": "We study the task of verifiable delegation of computation on encrypted data. We improve previous definitions in order to tolerate adversaries that learn whether or not clients accept the result of a delegated computation. In this strong model, we construct a scheme for arbitrary computations and highly efficient schemes for delegation of various classes of functions, such as linear combinations, high-degree univariate polynomials, and multivariate quadratic polynomials. Notably, the latter class includes many useful statistics. Using our solution, a client can store a large encrypted dataset on a server, query statistics over this data, and receive encrypted results that can be efficiently verified and decrypted.\n As a key contribution for the efficiency of our schemes, we develop a novel homomorphic hashing technique that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead which would occur with a naive approach. We support our theoretical constructions with extensive implementation tests that show the practical feasibility of our schemes.", "title": "" }, { "docid": "4f74d7e1d7d8a98f0228e0c87c0d85d8", "text": "This paper proposes a novel method for multivehicle detection and tracking using a vehicle-mounted monocular camera. In the proposed method, the features of vehicles are learned as a deformable object model through the combination of a latent support vector machine (LSVM) and histograms of oriented gradients (HOGs). The detection algorithm combines both global and local features of the vehicle as a deformable object model. Detected vehicles are tracked through a particle filter, which estimates the particles' likelihood by using a detection scores map and template compatibility for both root and parts of the vehicle while considering the deformation cost caused by the movement of vehicle parts. Tracking likelihoods are iteratively used as a priori probability to generate vehicle hypothesis regions and update the detection threshold to reduce false negatives of the algorithm presented before. Extensive experiments in urban scenarios showed that the proposed method can achieve an average vehicle detection rate of 97% and an average vehicle-tracking rate of 86% with a false positive rate of less than 0.26%.", "title": "" } ]
scidocsrr
a9386719345e74a55fcb87e8efd5fbe5
New color GPHOG descriptors for object and scene image classification
[ { "docid": "f551b3d24d1f6083e17ee60b925b0475", "text": "This paper presents new image descriptors based on color, texture, shape, and wavelets for object and scene image classification. First, a new three Dimensional Local Binary Patterns (3D-LBP) descriptor, which produces three new color images, is proposed for encoding both color and texture information of an image. The 3D-LBP images together with the original color image then undergo the Haar wavelet and local features. Second, a novel H-descriptor, which integrates the 3D-LBP and the HOG of its wavelet transform, is presented to encode color, texture, shape, as well as local information. Feature extraction for the H-descriptor is implemented by means of Principal Component Analysis (PCA) and Enhanced Fisher Model (EFM) and classification by the nearest neighbor rule for object and scene image classification. And finally, an innovative H-fusion descriptor is proposed by fusing the PCA features of the H-descriptors in seven color spaces in order to further incorporate color information. Experimental results using three datasets, the Caltech 256 object categories dataset, the UIUC Sports Event dataset, and the MIT Scene dataset, show that the proposed new image descriptors achieve better image classification performance than other popular image descriptors, such as the Scale Invariant Feature Transform (SIFT), the Pyramid Histograms of visual Words (PHOW), the Pyramid Histograms of Oriented Gradients (PHOG), Spatial Envelope, Color SIFT four Concentric Circles (C4CC), Object Bank, the Hierarchical Matching Pursuit, as well as LBP. © 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b125649628d46871b2212c61e355ec43", "text": "A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about lo”'.", "title": "" }, { "docid": "432fe001ec8f1331a4bd033e9c49ccdf", "text": "Recently, methods based on local image features have shown promise for texture and object recognition tasks.
This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on four texture and five object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance via extensive tests on the PASCAL database, for which ground-truth object localization information is available. Our experiments demonstrate that image representations based on distributions of local features are surprisingly effective for classification of texture and object images under challenging real-world conditions, including significant intra-class variations and substantial background clutter.", "title": "" } ]
[ { "docid": "28bd24c54b3e2ab2fc4902965fe9ebc6", "text": "With Android application packing technology evolving, there are more and more ways to harden APPs. Manually unpacking APPs becomes more difficult as the time needed for analyzing increase exponentially. At the beginning, the packing technology is designed to prevent APPs from being easily decompiled, tampered and re-packed. But unfortunately, many malicious APPs start to use packing service to protect themselves. At present, most of the antivirus software focus on APPs that are unpacked, which means if malicious APPs apply the packing service, they can easily escape from a lot of antivirus software. Therefore, we should not only emphasize the importance of packing, but also concentrate on the unpacking technology. Only by doing this can we protect the normal APPs, and not miss any harmful APPs at the same time. In this paper, we first systematically study a lot of DEX packing and unpacking technologies, then propose and develop a universal unpacking system, named CrackDex, which is capable of extracting the original DEX file from the packed APP. We propose three core technologies: simulation execution, DEX reassembling, and DEX restoration, to get the unpacked DEX file. CrackDex is a part of the Dalvik virtual machine, and it monitors the execution of functions to locate the unpacking point in the portable interpreter, then launches the simulation execution, collects the data of original DEX file through corresponding structure pointer, finally fulfills the unpacking process by reassembling the data collected. The results of our experiments show that CrackDex can be used to effectively unpack APPs that are packed by packing service in a universal approach without any other knowledge of packing service.", "title": "" }, { "docid": "d49ea26480f4170ec3684ddbf3272306", "text": "Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce “entropy-based” features—approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform, for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. 
Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.", "title": "" }, { "docid": "73252fdecc2a01699bdadb4962b4b376", "text": "Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D motion from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.", "title": "" }, { "docid": "8e1591b98c14a182125969bc12eda730", "text": "A growing proportion of citizens rely on social media to gather political information and to engage in political discussions within their personal networks. Existing studies argue that social media create “echo-chambers,” where individuals are primarily exposed to likeminded views. However, this literature has ignored that social media platforms facilitate exposure to messages from those with whom individuals have weak ties, which are more likely to provide novel information to which individuals would not be exposed otherwise through offline interactions. Because weak ties tend to be with people who are more politically heterogeneous than citizens’ immediate personal networks, this exposure reduces political extremism. To test this hypothesis, I develop a new method to estimate dynamic ideal points for social media users. I apply this method to measure the ideological positions of millions of individuals in Germany, Spain, and the United States over time, as well as the ideological composition of their personal networks. Results from this panel design show that most social media users are embedded in ideologically diverse networks, and that exposure to political diversity has a positive effect on political moderation. This result is robust to the inclusion of covariates measuring offline political behavior, obtained by matching Twitter user profiles with publicly available voter files in several U.S. states. I also provide evidence from survey data in these three countries that bolsters these findings. Contrary to conventional wisdom, my analysis provides evidence that social media usage reduces mass political polarization. ∗Pablo Barberá (www.pablobarbera.com) is a Moore-Sloan Fellow at the NYU Center for Data Science. Mass political polarization is a signature phenomenon of our time. 
As such, it has received considerable scholarly and journalistic attention in recent years (see e.g. Abramowitz and Saunders, 2008 and Fiorina and Abrams, 2008). A growing body of work argues that the introduction of the Internet as a relevant communication tool is contributing to this trend (Farrell, 2012). Empirical evidence of persistent ideological sorting in online communication networks (Adamic and Glance, 2005; Conover et al., 2012; Colleoni, Rozza and Arvidsson, 2014) has been taken to suggest that Internet use may exacerbate mass political polarization. As Sunstein (2001) or Hindman (2008) argue, the Internet appears to create communities of like-minded individuals where cross-ideological interactions and exposure to political diversity are rare. This argument builds upon a long tradition of research that shows that political discussion in homogenous communication networks reinforces individuals’ existing attitudes (Berelson, Lazarsfeld and McPhee, 1954; Huckfeldt, 1995; Mutz, 2006) In this paper I challenge this conventional wisdom. I contend that social media usage – one of the most frequent online activities – reduces political polarization, and I provide empirical evidence to support this claim. My argument is two-fold. First, social media platforms like Facebook or Twitter increase incidental exposure to political messages shared by peers. Second, these sites facilitate exposure to messages from those with whom individuals have weak social ties (Granovetter, 1973), which are more likely to provide novel information. Consequently, despite the homophilic nature of personal networks (McPherson, Smith-Lovin and Cook, 2001), social media leads to exposure to a wider range of political opinions than one would normally encounter offline. This induces political moderation at the individual level and, counter intuitively, helps to decrease mass political polarization. To test this hypothesis, I develop a new method to measure the ideological positions of Twitter users at any point in time, and apply it to estimate the ideal points of millions of citizens in three countries with different levels of mass political polarization (Germany, Spain, and the United States). This measure allows me to observe not only how their political preferences evolve, but also the ideological composition of their communication networks. My approach represents a crucial improvement over survey studies of political networks, which often ask only about close discussion partners and in practice exclude weak ties, limiting researchers’ ability to study their influence. In addition, I rely on name identification techniques to match Twitter users with publicly available voter files in the states of Arkansas, California, Florida, Ohio, and Pennsylvania. This allows me to demonstrate that my results are not confounded by covariates measuring", "title": "" }, { "docid": "54abb89b518916b86b306c4a6996dc5c", "text": "Recent clinical trials of gene therapy have shown remarkable therapeutic benefits and an excellent safety record. They provide evidence for the long-sought promise of gene therapy to deliver 'cures' for some otherwise terminal or severely disabling conditions. Behind these advances lie improved vector designs that enable the safe delivery of therapeutic genes to specific cells. 
Technologies for editing genes and correcting inherited mutations, the engagement of stem cells to regenerate tissues and the effective exploitation of powerful immune responses to fight cancer are also contributing to the revitalization of gene therapy.", "title": "" }, { "docid": "55fd332aa38c3240813e5947c65c867d", "text": "Skin detection is an important process in many of computer vision algorithms. It usually is a process that starts at a pixel-level, and that involves a pre-process of colorspace transformation followed by a classification process. A colorspace transformation is assumed to increase separability between skin and non-skin classes, to increase similarity among different skin tones, and to bring a robust performance under varying illumination conditions, without any sound reasonings. In this work, we examine if the colorspace transformation does bring those benefits by measuring four separability measurements on a large dataset of 805 images with different skin tones and illumination. Surprising results indicate that most of the colorspace transformations do not bring the benefits which have been assumed.", "title": "" }, { "docid": "d97b4905e1e06e521fe797df7499a521", "text": "This paper studied a remote control system based on the LabVIEW and ZLG PCI-5110 CAN card, in which students could perform experiments by remote control laboratory via the Internet. Due to the fact that the internet becomes more integrated into our daily lives, several possibilities have arisen to use this cost-effective worldwide standard for distributing data. National Instruments LabVIEW is available to publish data from the development environment to the Web. The student can access the remote laboratory and perform experiments without any limitation of time and location. They can also observe the signals by changing the parameters of the experiment and evaluating the results. During the session, the teacher can watch and communicate with students who perform their experiment. The usefulness of remote laboratory in teaching environments is already known: it saves equipment, personnel for the institution and it saves time and money for the remote students. It also allows the same equipment to be used in research purposes by many teams, through Internet. The experiments proved the feasibility of technical solutions, as well as the correctness of implementation in this paper.", "title": "" }, { "docid": "84a7592ccf4c79cb5cb4ed7dbbcc1af7", "text": "AIM\nTo examine the relationships between workplace bullying, destructive leadership and team conflict, and physical health, strain, self-reported performance and intentions to quit among veterinarians in New Zealand, and how these relationships could be moderated by psychological capital and perceived organisational support.\n\n\nMETHODS\nData were collected by means of an online survey, distributed to members of the New Zealand Veterinary Association. Participation was voluntary and all responses were anonymous and confidential. Scores for the variables measured were based on responses to questions or statements with responses categorised on a linear scale. A series of regression analyses were used to assess mediation or moderation by intermediate variables on the relationships between predictor variables and dependent variables.\n\n\nRESULTS\nCompleted surveys were provided by 197 veterinarians, of which 32 (16.2%) had been bullied at work, i.e. 
they had experienced two or more negative acts at least weekly over the previous 6 months, and nine (4.6%) had experienced cyber-bullying. Mean scores for workplace bullying were higher for female than male respondents, and for non-managers than managers (p<0.01). Scores for workplace bullying were positively associated with scores for destructive leadership and team conflict, physical health, strain, and intentions to quit (p<0.001). Workplace bullying and team conflict mediated the relationship between destructive leadership and strain, physical health and intentions to quit. Perceived organisational support moderated the effects of workplace bullying on strain and self-reported job performance (p<0.05).\n\n\nCONCLUSIONS\nRelatively high rates of negative behaviour were reported by veterinarians in this study, with 16% of participants meeting an established criterion for having been bullied. The negative effects of destructive leadership on strain, physical health and intentions to quit were mediated by team conflict and workplace bullying. It should be noted that the findings of this study were based on a survey of self-selected participants and the findings may not represent the wider population of New Zealand veterinarians.", "title": "" }, { "docid": "5a28fbdcce61256fd67d97fc353b138b", "text": "Use of encryption to achieve authenticated communication in computer networks is discussed. Example protocols are presented for the establishment of authenticated connections, for the management of authenticated mail, and for signature verification and document integrity guarantee. Both conventional and public-key encryption algorithms are considered as the basis for protocols.", "title": "" }, { "docid": "01490975c291a64b40484f6d37ea1c94", "text": "Context-aware systems offer entirely new opportunities for application developers and for end users by gathering context data and adapting systems’ behavior accordingly. Especially in combination with mobile devices such mechanisms are of great value and claim to increase usability tremendously. In this paper, we present a layered architectural framework for context-aware systems. Based on our suggested framework for analysis, we introduce various existing context-aware systems focusing on context-aware middleware and frameworks, which ease the development of context-aware applications. We discuss various approaches and analyze important aspects in context-aware computing on the basis of the presented systems.", "title": "" }, { "docid": "ffaa8edb1fccf68e6b7c066fb994510a", "text": "A fast and precise determination of the DOA (direction of arrival) for immediate object classification becomes increasingly important for future automotive radar generations. Hereby, the elevation angle of an object is considered as a key parameter especially in complex urban environments. An antenna concept allowing the determination of object angles in azimuth and elevation is proposed and discussed in this contribution. This antenna concept consisting of a linear patch array and a cylindrical dielectric lens is implemented into a radar sensor and characterized in terms of angular accuracy and ambiguities using correlation algorithms and the CRLB (Cramer Rao Lower Bound).", "title": "" }, { "docid": "f30e54728a10e416d61996c082197f5b", "text": "This paper describes an efficient and straightforward methodology for OCR-ing and post-correcting Arabic text material on Islamic embryology collected for the COBHUNI project. 
As the target texts of the project include diverse diachronic stages of the Arabic language, the team of annotators for performing the OCR post-correction requires well-trained experts on language skills. While technical skills are also desirable, highly trained language experts typically lack enough technical knowledge. Furthermore, a relatively small portion of the target texts needed to be OCR-ed, as most of the material was already on some digital form. Thus, the OCR task could only require a small amount of resources in terms of time and work complexity. Both the low technical skills of the annotators and the resource constraints made it necessary for us to find an easy-to-develop and suitable workflow for performing the OCR and post-correction tasks. For the OCR phase, we chose Tesseract Open Source OCR Engine, because it achieves state-of-the-art levels of accuracy. For the post-correction phase, we decided to use the Proofread Page extension of the MediaWiki software, as it strikes a perfect balance between usability and efficiency. The post-correction task was additionally supported by the implementation of an error checker based on simple heuristics. The application of this methodology resulted in the successful and fast OCR-ing and post-correction of a corpus of 36,132 tokens.", "title": "" }, { "docid": "2e0d4680cf5953d81f7e8bf8e932e64d", "text": "Ontological Semantics is an approach to automatically extracting the meaning of natural language texts. The OntoSem text analysis system, developed according to this approach, generates ontologically grounded, disambiguated text meaning representations that can serve as input to intelligent agent reasoning. This article focuses on two core subtasks of overall semantic analysis: lexical disambiguation and the establishment of the semantic dependency structure. In addition to describing the knowledge bases and processors used to carry out these tasks, we introduce a novel evaluation suite suited specifically to knowledge-based systems. To situate this contribution in the field, we critically compare the goals, methods and tasks of Ontological Semantics with those of the currently dominant paradigm of natural language processing, which relies on machine learning.", "title": "" }, { "docid": "755d726148171cc03d794188059274aa", "text": "TopoToolbox contains a set of Matlab functions that provide utilities for relief analysis in a nonGeographical Information System (GIS) environment. The tools have been developed to support the work flow in combined spatial and non-spatial numerical analysis. They offer flexible and user-friendly software for hydrological and geomorphological research that involves digital elevation model analysis and focuses on material fluxes and spatial variability of water, sediment, chemicals and nutrients. The objective of this paper is to give an introduction to the linear algebraic concept behind the software that employs sparse matrix computations for digital elevation model analysis. Moreover, we outline the functionality of the toolbox. The source codes are freely available in Matlab language on the authors’ webpage (physiogeo.unibas.ch/topotoolbox). 2009 Elsevier Ltd. All rights reserved. 
Software availability Program title: TopoToolbox Developer: Wolfgang Schwanghart First available: 2009 Source language: MATLAB Requirements: MATLAB R2009a, Image Processing Toolbox Availability: TopoToolbox is available free of charge and can be downloaded on http://physiogeo.unibas.ch/topotoolbox.", "title": "" }, { "docid": "cf54533bc317b960fc80f22baa26d7b1", "text": "The state-of-the-art named entity recognition (NER) systems are statistical machine learning models that have strong generalization capability (i.e., can recognize unseen entities that do not appear in training data) based on lexical and contextual information. However, such a model could still make mistakes if its features favor a wrong entity type. In this paper, we utilize Wikipedia as an open knowledge base to improve multilingual NER systems. Central to our approach is the construction of high-accuracy, highcoverage multilingual Wikipedia entity type mappings. These mappings are built from weakly annotated data and can be extended to new languages with no human annotation or language-dependent knowledge involved. Based on these mappings, we develop several approaches to improve an NER system. We evaluate the performance of the approaches via experiments on NER systems trained for 6 languages. Experimental results show that the proposed approaches are effective in improving the accuracy of such systems on unseen entities, especially when a system is applied to a new domain or it is trained with little training data (up to 18.3 F1 score improvement).", "title": "" }, { "docid": "b4cd9a58e96095c0339acca8c2c8776f", "text": "To cope with the accelerating pace of technological changes, talents are urged to add and refresh their skills for staying in active and gainful employment. This raises a natural question: what are the right skills to learn? Indeed, it is a nontrivial task to measure the popularity of job skills due to the diversified criteria of jobs and the complicated connections within job skills. To that end, in this paper, we propose a data driven approach for modeling the popularity of job skills based on the analysis of large-scale recruitment data. Specifically, we first build a job skill network by exploring a large corpus of job postings. Then, we develop a novel Skill Popularity based Topic Model (SPTM) for modeling the generation of the skill network. In particular, SPTM can integrate different criteria of jobs (e.g., salary levels, company size) as well as the latent connections within skills, thus we can effectively rank the job skills based on their multi-faceted popularity. Extensive experiments on real-world recruitment data validate the effectiveness of SPTM for measuring the popularity of job skills, and also reveal some interesting rules, such as the popular job skills which lead to high-paid employment.", "title": "" }, { "docid": "5692d2ee410c804e32ebebbcc129c8d6", "text": "Aimed at the industrial sorting technology problems, this paper researched correlative algorithm of image processing and analysis, and completed the construction of robot vision sense. the operational process was described as follows: the camera acquired image sequences of the metal work piece in the sorting region. Image sequence was analyzed to use algorithms of image pre-processing, Hough circle detection, corner detection and contour recognition. 
in the mean time, this paper also explained the characteristics of three main function model (image pre-processing, corner detection and contour recognition), and proposed algorithm of multi-objective center and a corner recognition. the simulated results show that the sorting system can effectively solve the sorting problem of regular geometric work piece, and accurately calculate center and edge of geometric work piece to achieve the sorting purpose.", "title": "" }, { "docid": "842cd58edd776420db869e858be07de4", "text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.", "title": "" }, { "docid": "aef25b8bc64bb624fb22ce39ad7cad89", "text": "Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.", "title": "" }, { "docid": "04d06629a3683536fb94228f6295a7d3", "text": "User profiling is an important step for solving the problem of personalized news recommendation. Traditional user profiling techniques often construct profiles of users based on static historical data accessed by users. However, due to the frequent updating of news repository, it is possible that a user’s finegrained reading preference would evolve over time while his/her long-term interest remains stable. Therefore, it is imperative to reason on such preference evaluation for user profiling in news recommenders. Besides, in content-based news recommenders, a user’s preference tends to be stable due to the mechanism of selecting similar content-wise news articles with respect to the user’s profile. To activate users’ reading motivations, a successful recommender needs to introduce ‘‘somewhat novel’’ articles to", "title": "" } ]
scidocsrr
da7f2e025ae42bbbf667feccd57fc027
Why Have Divorce Rates Fallen? The Role of Women's Age at Marriage
[ { "docid": "6c2a74a6709b5f7355da3afec15cc751", "text": "\"This chapter critically examines the hypothesis that women's rising employment levels have increased their economic independence and hence have greatly reduced the desirability of marriage. Little firm empirical support for this hypothesis is found. The apparent congruence in time-series data of women's rising employment with declining marriage rates and increasing marital instability is partly a result of using the historically atypical early postwar behavior of the baby boom era as the benchmark for comparisons and partly due to confounding trends in delayed marriage with those of nonmarriage.\"", "title": "" } ]
[ { "docid": "e2dbcae54c48a88f840e09112c55fa86", "text": "This paper aims to improve the throughput of a broadcasting system that supports the transmission of multiple services with differentiated minimum signal-to-noise ratios (SNRs) required for successful receptions simultaneously. We propose a novel multiplexing method called bit division multiplexing (BDM), which outperforms the conventional time division multiplexing (TDM) counterpart by extending the multiplexing from symbol level to bit level. Benefiting from multiple error protection levels of bits within each high-order constellation symbol, BDM can provide so-called nonlinear allocation of the channel resources. Both average mutual information (AMI) analysis and simulation results demonstrate that, compared with TDM, BDM can significantly improve the overall transmission rate of multiple services subject to the differentiated minimum SNRs required for successful receptions, or decrease the minimum SNRs required for successful receptions subject to the transmission rate requirements of multiple services.", "title": "" }, { "docid": "0b78f52352580bdb7788635697614276", "text": "When operating higher up in frequency, the copper losses in transformer windings will significantly rise due to enhanced skin and proximity effect. This leads to a high need to develop new methods to accurately evaluate winding losses at higher frequencies. This paper investigates the effect of different geometrical parameters at a wide range of frequencies in order to propose a pseudoempirical formula for winding loss calculation in high-frequency transformers. A comprehensive analysis of the edge effect and ac resistance is done by performing more than 12 300 2-D finite element simulations on foil and round conductors. Unlike previous studies which mostly focused on specific case studies with limited applications, this model provides very high accuracy, especially where the most common analytical models drastically underestimate the winding losses, with a wide-range applicability which could be of interest for designers to avoid time consuming FEM simulation without compromising with the accuracy. Several transformers are built and the model is experimentally verified with a good agreement.", "title": "" }, { "docid": "9563b47a73e41292599c368e1dfcd40a", "text": "Non-functional requirements are an important, and often critical, aspect of any software system. However, determining the degree to which any particular software system meets such requirements and incorporating such considerations into the software design process is a difficult challenge. This paper presents a modification of the NFR framework that allows for the discovery of a set of system functionalities that optimally satisfice a given set of non-functional requirements. This new technique introduces an adaptation of softgoal interdependency graphs, denoted softgoal interdependency ruleset graphs, in which label propagation can be done consistently. This facilitates the use of optimisation algorithms to determine the best set of bottom-level operationalizing softgoals that optimally satisfice the highest-level NFR softgoals. The proposed method also introduces the capacity to incorporate both qualitative and quantitative information.", "title": "" }, { "docid": "69dc5c8fb4378002738991b49ec6e1d5", "text": "Functional and stereotactic neurosurgery has always been regarded as a subspecialty based on and driven by technological advances. 
However until recently, the fundamentals of deep brain stimulation (DBS) hardware and software design had largely remained stagnant since its inception almost three decades ago. Recent improved understanding of disease processes in movement disorders as well clinician and patient demands has resulted in new avenues of development for DBS technology. This review describes new advances both related to hardware and software for neuromodulation. New electrode designs with segmented contacts now enable sophisticated shaping and sculpting of the field of stimulation, potentially allowing multi-target stimulation and avoidance of side effects. To avoid lengthy programming sessions utilising multiple lead contacts, new user-friendly software allows for computational modelling and individualised directed programming. Therapy delivery is being improved with the next generation of smaller profile, longer-lasting, re-chargeable implantable pulse generators (IPGs). These include IPGs capable of delivering constant current stimulation or personalised closed-loop adaptive stimulation. Post-implantation Magnetic Resonance Imaging (MRI) has long been an issue which has been partially overcome with 'MRI conditional devices' and has enabled verification of DBS lead location. Surgical technique is considering a shift from frame-based to frameless stereotaxy or greater role for robot assisted implantation. The challenge for these contemporary techniques however, will be in demonstrating equivalent safety and accuracy to conventional methods. We also discuss potential future direction utilising wireless technology allowing for miniaturisation of hardware.", "title": "" }, { "docid": "e97f494b2eed2b14e2d4c0fd80e38170", "text": "We present a stochastic gradient descent optimisation method for image registration with adaptive step size prediction. The method is based on the theoretical work by Plakhov and Cruz (J. Math. Sci. 120(1):964–973, 2004). Our main methodological contribution is the derivation of an image-driven mechanism to select proper values for the most important free parameters of the method. The selection mechanism employs general characteristics of the cost functions that commonly occur in intensity-based image registration. Also, the theoretical convergence conditions of the optimisation method are taken into account. The proposed adaptive stochastic gradient descent (ASGD) method is compared to a standard, non-adaptive Robbins-Monro (RM) algorithm. Both ASGD and RM employ a stochastic subsampling technique to accelerate the optimisation process. Registration experiments were performed on 3D CT and MR data of the head, lungs, and prostate, using various similarity measures and transformation models. The results indicate that ASGD is robust to these variations in the registration framework and is less sensitive to the settings of the user-defined parameters than RM. The main disadvantage of RM is the need for a predetermined step size function. The ASGD method provides a solution for that issue.", "title": "" }, { "docid": "bc48242b9516948dc0ab95f1bead053f", "text": "This article presents the semantic portal MuseumFinland for publishing heterogeneous museum collections on the Semantic Web. It is shown how museums with their semantically rich and interrelated collection content can create a large, consolidated semantic collection portal together on the web. 
By sharing a set of ontologies, it is possible to make collections semantically interoperable, and provide the museum visitors with intelligent content-based search and browsing services to the global collection base. The architecture underlying MuseumFinland separates generic search and browsing services from the underlying application dependent schemas and metadata by a layer of logical rules. As a result, the portal creation framework and software developed has been applied successfully to other domains as well. MuseumFinland got the Semantic Web Challence Award (second prize) in 2004. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "46ff38a51f766cd5849a537cc0632660", "text": "BACKGROUND\nLinear IgA bullous dermatosis (LABD) is an acquired autoimmune sub-epidermal vesiculobullous disease characterized by continuous linear IgA deposit on the basement membrane zone, as visualized on direct immunofluorescence microscopy. LABD can affect both adults and children. The disease is very uncommon, with a still unknown incidence in the South American population.\n\n\nMATERIALS AND METHODS\nAll confirmed cases of LABD by histological and immunofluorescence in our hospital were studied.\n\n\nRESULTS\nThe confirmed cases were three females and two males, aged from 8 to 87 years. Precipitant events associated with LABD were drug consumption (non-steroid inflammatory agents in two cases) and ulcerative colitis (one case). Most of our patients were treated with dapsone, resulting in remission.\n\n\nDISCUSSION\nOur series confirms the heterogeneous clinical features of this uncommon disease in concordance with a larger series of patients reported in the literature.", "title": "" }, { "docid": "0a78c9305d4b5584e87327ba2236d302", "text": "This paper presents GeoS, a new algorithm for the efficient segmentation of n-dimensional image and video data. The segmentation problem is cast as approximate energy minimization in a conditional random field. A new, parallel filtering operator built upon efficient geodesic distance computation is used to propose a set of spatially smooth, contrast-sensitive segmentation hypotheses. An economical search algorithm finds the solution with minimum energy within a sensible and highly restricted subset of all possible labellings. Advantages include: i) computational efficiency with high segmentation accuracy; ii) the ability to estimate an approximation to the posterior over segmentations; iii) the ability to handle generally complex energy models. Comparison with max-flow indicates up to 60 times greater computational efficiency as well as greater memory efficiency. GeoS is validated quantitatively and qualitatively by thorough comparative experiments on existing and novel ground-truth data. 
Numerous results on interactive and automatic segmentation of photographs, video and volumetric medical image data are presented.", "title": "" }, { "docid": "648b03c59c8e976a6fa936cef6af8ec0", "text": "Ratha Pech, Hao Dong1,2,∗, Liming Pan, Hong Cheng, Zhou Tao1,2,∗ 1 CompleX Lab, University of Electronic Science and Technology of China, Chengdu 611731, People’s Republic of China 2 Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, People’s Republic of China and 3 Center for Robotics, University of Electronic Science and Technology of China, Chengdu 611731, People’s Republic of China", "title": "" }, { "docid": "83bdb6760483dd5f5ad45725cd61b7e7", "text": "Gaucher disease (GD, ORPHA355) is a rare, autosomal recessive genetic disorder. It is caused by a deficiency of the lysosomal enzyme, glucocerebrosidase, which leads to an accumulation of its substrate, glucosylceramide, in macrophages. In the general population, its incidence is approximately 1/40,000 to 1/60,000 births, rising to 1/800 in Ashkenazi Jews. The main cause of the cytopenia, splenomegaly, hepatomegaly, and bone lesions associated with the disease is considered to be the infiltration of the bone marrow, spleen, and liver by Gaucher cells. Type-1 Gaucher disease, which affects the majority of patients (90% in Europe and USA, but less in other regions), is characterized by effects on the viscera, whereas types 2 and 3 are also associated with neurological impairment, either severe in type 2 or variable in type 3. A diagnosis of GD can be confirmed by demonstrating the deficiency of acid glucocerebrosidase activity in leukocytes. Mutations in the GBA1 gene should be identified as they may be of prognostic value in some cases. Patients with type-1 GD-but also carriers of GBA1 mutation-have been found to be predisposed to developing Parkinson's disease, and the risk of neoplasia associated with the disease is still subject to discussion. Disease-specific treatment consists of intravenous enzyme replacement therapy (ERT) using one of the currently available molecules (imiglucerase, velaglucerase, or taliglucerase). Orally administered inhibitors of glucosylceramide biosynthesis can also be used (miglustat or eliglustat).", "title": "" }, { "docid": "746058addd16adea08ec8b33ff9a86c2", "text": "The effective ranking of documents in search engines is based on various document features, such as the frequency of the query terms in each document, the length, or the authoritativeness of each document. In order to obtain a better retrieval performance, instead of using a single or a few features, there is a growing trend to create a ranking function by applying a learning to rank technique on a large set of features. Learning to rank techniques aim to generate an effective document ranking function by combining a large number of document features. Different ranking functions can be generated by using different learning to rank techniques or on different document feature sets. While the generated ranking function may be uniformly applied to all queries, several studies have shown that different ranking functions favour different queries, and that the retrieval performance can be significantly enhanced if an appropriate ranking function is selected for each individual query. 
This thesis proposes Learning to Select (LTS), a novel framework that selectively applies an appropriate ranking function on a per-query basis, regardless of the given query’s type and the number of candidate ranking functions. In the learning to select framework, the effectiveness of a ranking function for an unseen query is estimated from the available neighbouring training queries. The proposed framework employs a classification technique (e.g. k-nearest neighbour) to identify neighbouring training queries for an unseen query by using a query feature. In particular, a divergence measure (e.g. Jensen-Shannon), which determines the extent to which a document ranking function alters the scores of an initial ranking of documents for a given query, is proposed for use as a query feature. The ranking function which performs the best on the identified training query set is then chosen for the unseen query. The proposed framework is thoroughly evaluated on two different TREC retrieval tasks (namely, Web search and adhoc search tasks) and on two large standard LETOR feature sets, which contain as many as 64 document features, deriving conclusions concerning the key components of LTS, namely the query feature and the identification of neighbouring queries components. Two different types of experiments are conducted. The first one is to select an appropriate ranking function from a number of candidate ranking functions. The second one is to select multiple appropriate document features from a number of candidate document features, for building a ranking function. Experimental results show that our proposed LTS framework is effective in both selecting an appropriate ranking function and selecting multiple appropriate document features, on a per-query basis. In addition, the retrieval performance is further enhanced when increasing the number of candidates, suggesting the robustness of the learning to select framework. This thesis also demonstrates how the LTS framework can be deployed to other search applications. These applications include the selective integration of a query independent feature into a document weighting scheme (e.g. BM25), the selective estimation of the relative importance of different query aspects in a search diversification task (the goal of the task is to retrieve a ranked list of documents that provides a maximum coverage for a given query, while avoiding excessive redundancy), and the selective application of an appropriate resource for expanding and enriching a given query for document search within an enterprise. The effectiveness of the LTS framework is observed across these search applications, and on different collections, including a large scale Web collection that contains over 50 million", "title": "" }, { "docid": "d3069dbe4da6057d15cc0f7f6e5244cc", "text": "We take the generation of Chinese classical poem lines as a sequence-to-sequence learning problem, and build a novel system based on the RNN Encoder-Decoder structure to generate quatrains (Jueju in Chinese), with a topic word as input. Our system can jointly learn semantic meaning within a single line, semantic relevance among lines in a poem, and the use of structural, rhythmical and tonal patterns, without utilizing any constraint templates. Experimental results show that our system outperforms other competitive systems. 
We also find that the attention mechanism can capture the word associations in Chinese classical poetry and inverting target lines in training can improve", "title": "" }, { "docid": "db1f1144c3e3c1b6acb2820e5257f056", "text": "A double-ended tuning fork (DETF) fabricated in a 0.35 um commercial CMOS technology is presented. Resonator performance for the application of this device in a RF front-end is measured using electrical test. DEFT offers a higher isolation between ports than clamped -clamped beams and the possibility to create a band-pass for frequency filtering or mixing using a single resonator. Discrepancies between expected and obtained results are studied using FEM mechanical simulations.", "title": "" }, { "docid": "79daebb05da3994d44250f60e4e1153d", "text": "The future of 3D integration and packaging of power electronics using printed circuit board (PCB) technology is presented. This is to show how power electronics can benefit from the same advantages that have been exploited by the microelectronic industry, for some time already, regarding high density packaging, as implemented in modern digital photo and video cameras for example. Complementary technologies, enhancing the role of the printed circuit board from mere electrical interconnect medium to means to perform electromagnetic integration of passives, using specialised laminate material; to perform volumetric optimisation, using flexible foil; and to perform thermal management, using existing material and volume; are investigated as well as the specific material and technology limitations being faced by design engineers today. The investigation lead to a PCB assembled power converter implementing electromagnetic integration of passives, a flexible foil integrated-LCT winding realisation as well as a novel 3D-folding packaging method to obtain increased power density. The impact which the exploitation of these combined technologies introduce has been quantified using recently developed performance indicators, from which a critical look on aspects of improvement is possible. Experimental validation of the presented performance indicators are performed by means of a technology demonstrator. The design criteria and technology choices for this demonstrator, based on planar core-, in combination with flexible foil winding realisation technology, are also addressed.", "title": "" }, { "docid": "927a4aa3377962d1fa1834d43ac899de", "text": "User security education and training is one of the most important aspects of an organizations security posture. Using security exercises to reinforce this aspect is frequently done by education and industry alike; however these exercises usually enlist willing participants. We have taken the concept of using an exercise and modified it in application to evaluate a users propensity to respond to email phishing attacks in an unannounced test. This paper describes the considerations in establishing and the process used to create and implement an evaluation of one aspect of our user information assurance education program. The evaluation takes the form of a exercise, where we send out a phishing styled email record the responses. Published by Elsevier Ltd.", "title": "" }, { "docid": "5816f70a7f4d7d0beb6e0653db962df3", "text": "Packaging appearance is extremely important in cigarette manufacturing. 
Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.", "title": "" }, { "docid": "f32ede03617159c0549b3475d9448096", "text": "Chatbots have rapidly become a mainstay in software development. A range of chatbots contribute regularly to the creation of actual production software. It is somewhat difficult, however, to precisely delineate hype from reality. Questions arise as to what distinguishes a chatbot from an ordinary software tool, what might be desirable properties of chatbots, and where their future may lie. This position paper introduces a starting framework through which we examine the current state of chatbots and identify directions for future work.", "title": "" }, { "docid": "13ae30bc5bcb0714fe752fbe9c7e5de8", "text": "The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.", "title": "" }, { "docid": "229288405fbbc0779c42fb311754ca1d", "text": "We present a system for monocular simultaneous localization and mapping (mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is completely incremental: at a given time frame, only the current location is estimated while the previous camera positions are never modified. In particular, we do not perform any simultaneous iterative optimization of the camera positions and estimated 3D structure (local bundle adjustment). The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint. We show that the latter leads to a much more stable estimation of the camera trajectory than the conventional approach. 
We perform high precision camera trajectory estimation in urban scenes with a large amount of clutter. Using an omnidirectional camera placed on a vehicle, we cover one of the longest distance ever reported, up to 2.5 kilometers.", "title": "" }, { "docid": "46f623cea7c1f643403773fc5ed2508d", "text": "The use of machine learning tools has become widespread in medical diagnosis. The main reason for this is the effective results obtained from classification and diagnosis systems developed to help medical professionals in the diagnosis phase of diseases. The primary objective of this study is to improve the accuracy of classification in medical diagnosis problems. To this end, studies were carried out on 3 different datasets. These datasets are heart disease, Parkinson’s disease (PD) and BUPA liver disorders. Key feature of these datasets is that they have a linearly non-separable distribution. A new method entitled k-medoids clustering-based attribute weighting (kmAW) has been proposed as a data preprocessing method. The support vector machine (SVM) was preferred in the classification phase. In the performance evaluation stage, classification accuracy, specificity, sensitivity analysis, f-measure, kappa statistics value and ROC analysis were used. Experimental results showed that the developed hybrid system entitled kmAW + SVM gave better results compared to other methods described in the literature. Consequently, this hybrid intelligent system can be used as a useful medical decision support tool.", "title": "" } ]
scidocsrr
8c6ed91a636dc9882769d0faa93bf9b8
The Affordances of Business Analytics for Strategic Decision-Making and Their Impact on Organisational Performance
[ { "docid": "ba4121003eb56d3ab6aebe128c219ab7", "text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.", "title": "" } ]
[ { "docid": "879b58634bd71c8eee6c37350c196dc3", "text": "This paper presents a novel high-voltage gain boost converter topology based on the three-state commutation cell for battery charging using PV panels and a reduced number of conversion stages. The presented converter operates in zero-voltage switching (ZVS) mode for all switches. By using the new concept of single-stage approaches, the converter can generate a dc bus with a battery bank or a photovoltaic panel array, allowing the simultaneous charge of the batteries according to the radiation level. The operation principle, design specifications, and experimental results from a 500-W prototype are presented in order to validate the proposed structure.", "title": "" }, { "docid": "2ae773f548c1727a53a7eb43550d8063", "text": "Today's Internet hosts are threatened by large-scale distributed denial-of-service (DDoS) attacks. The path identification (Pi) DDoS defense scheme has recently been proposed as a deterministic packet marking scheme that allows a DDoS victim to filter out attack packets on a per packet basis with high accuracy after only a few attack packets are received (Yaar , 2003). In this paper, we propose the StackPi marking, a new packet marking scheme based on Pi, and new filtering mechanisms. The StackPi marking scheme consists of two new marking methods that substantially improve Pi's incremental deployment performance: Stack-based marking and write-ahead marking. Our scheme almost completely eliminates the effect of a few legacy routers on a path, and performs 2-4 times better than the original Pi scheme in a sparse deployment of Pi-enabled routers. For the filtering mechanism, we derive an optimal threshold strategy for filtering with the Pi marking. We also develop a new filter, the PiIP filter, which can be used to detect Internet protocol (IP) spoofing attacks with just a single attack packet. Finally, we discuss in detail StackPi's compatibility with IP fragmentation, applicability in an IPv6 environment, and several other important issues relating to potential deployment of StackPi", "title": "" }, { "docid": "e71402bed9c526d9152885ef86c30bb5", "text": "Narratives structure our understanding of the world and of ourselves. They exploit the shared cognitive structures of human motivations, goals, actions, events, and outcomes. We report on a computational model that is motivated by results in neural computation and captures fine-grained, context sensitive information about human goals, processes, actions, policies, and outcomes. We describe the use of the model in the context of a pilot system that is able to interpret simple stories and narrative fragments in the domain of international politics and economics. 
We identify problems with the pilot system and outline extensions required to incorporate several crucial dimensions of narrative structure.", "title": "" }, { "docid": "9a6e7b49ddfa98520af1bb33bfb5fafa", "text": "Spell Description Schl Comp Time Range Target, Effect, Area Duration Save SR PHB £ Acid Fog Fog deals 2d6/rnd acid damage Conj V,S,M/DF 1 a Medium 20-ft radius 1 rnd/lvl-196 £ Acid Splash Acid Missile 1d3 damage Conj V,S 1 a Close Acid missile Instantaneous-196 £ Aid +1 att,+1 fear saves,1d8 +1/lvl hps Ench V,S,DF 1 a Touch One living creature 1 min/lvl-Yes 196 £ Air Walk Target treads on air as if solid Trans V,S,DF 1 a Touch One creature 10 min/lvl-Yes 196 £ Alarm Wards an area for 2 hr/lvl Abjur V,S,F/DF 1 a Close 20-ft radius 2 hr/lvl (D)-197 £ Align Weapon Adds alignment to weapon Trans V,S,DF 1 a Touch Weapon 1 min/lvl Will negs Yes 197 £ Alter Self Changes appearance Trans V,S 1 a Self Caster, +10 disguise 10 min/lvl (D)-197 £ Analyze Dweomer Reveals magical aspects of target Div V,S,F 1 a Close Item or creature/lvl 1 rnd/lvl (D) Will negs-197 £ Animal Growth Animal/2 lvls increases size category Trans V,S 1 a Medium 1 animal/2 lvls 1 min/lvl Fort negs Yes 198 £ Animal Messenger Send a tiny animal to specific place Ench V,S,M 1 a Close One tiny animal 1 day/lvl-Yes 198 £ Animal Shapes 1 ally/lvl polymorphs into animal Trans V,S,DF 1 a Close One creature/lvl 1 hr/lvl (D)-Yes 198 £ Animal Trance Fascinates 2d6 HD of animals Ench V,S 1 a Close Animals, Int 1 or 2 Conc Will negs Yes 198 £ Animate Dead Creates skeletons and zombies Necro V,S,M 1 a Touch Max 2HD/lvl Instantaneous-198 £ Animate Objects Items attack your foes Trans V,S 1 a Medium One small item/lvl 1 rnd/lvl-199 £ Animate Plants Animated plant Trans V 1 a Close 1 plant/3lvls 1 rnd/lvl-199 £ Animate Rope Rope moves at your command Trans V,S 1 a Medium 1 ropelike item 1 rnd/lvl-199 £ Antilife Shell 10-ft field excludes living creatures Abjur V,S,DF Round 10-ft 10-ft radius 10 min/lvl (D)-Yes 199 £ Antimagic Field Negates magic within 10-ft Abjur V,S,M/DF 1 a 10-ft 10-ft radius 10 min/lvl (D)-Sp 200 £ Antipathy Item or location repels creatures Ench V,S,M/DF 1 hr Close Location or item 2 hr/lvl (D) Will part Yes 200 £ Antiplant Shell Barrier protects against plants Abjur V,S,DF 1 a 10-ft 10-ft radius 10 min/lvl (D)-Yes 200 £ Arcane Eye Floating eye, moves 30ft/rnd Div V,S,M 10 min Unlimited Magical sensor 1 min/lvl (D)-200 …", "title": "" }, { "docid": "de6e139d0b5dc295769b5ddb9abcc4c6", "text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.", "title": "" }, { "docid": "1b60ded506c85edd798fe0759cce57fa", "text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. 
A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.", "title": "" }, { "docid": "15cfa9005e68973cbca60f076180b535", "text": "Much of the literature on fair classifiers considers the case of a single classifier used once, in isolation. We initiate the study of composition of fair classifiers. In particular, we address the pitfalls of näıve composition and give general constructions for fair composition. Focusing on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], we also extend our results to a large class of group fairness definitions popular in the recent literature. We exhibit several cases in which group fairness definitions give misleading signals under composition and conclude that additional context is needed to evaluate both group and individual fairness under composition.", "title": "" }, { "docid": "9a73e9bc7c0dc343ad9dbe1f3dfe650c", "text": "The word robust has been used in many contexts in signal processing. Our treatment concerns statistical robustness, which deals with deviations from the distributional assumptions. Many problems encountered in engineering practice rely on the Gaussian distribution of the data, which in many situations is well justified. This enables a simple derivation of optimal estimators. Nominal optimality, however, is useless if the estimator was derived under distributional assumptions on the noise and the signal that do not hold in practice. Even slight deviations from the assumed distribution may cause the estimator's performance to drastically degrade or to completely break down. The signal processing practitioner should, therefore, ask whether the performance of the derived estimator is acceptable in situations where the distributional assumptions do not hold. Isn't it robustness that is of a major concern for engineering practice? Many areas of engineering today show that the distribution of the measurements is far from Gaussian as it contains outliers, which cause the distribution to be heavy tailed. Under such scenarios, we address single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data. A rather extensive treatment of the important and challenging case of dependent data for the signal processing practitioner is also included. For these problems, a comparative analysis of the most important robust methods is carried out by evaluating their performance theoretically, using simulations as well as real-world data.", "title": "" }, { "docid": "290b56471b64e150e40211f7a51c1237", "text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. 
The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.", "title": "" }, { "docid": "3405c4808237f8d348db27776d6e9b61", "text": "Pheochromocytomas are catecholamine-releasing tumors that can be found in an extraadrenal location in 10% of the cases. Almost half of all pheochromocytomas are now discovered incidentally during cross-sectional imaging for unrelated causes. We present a case of a paragaglioma of the organ of Zuckerkandl that was discovered incidentally during a magnetic resonance angiogram performed for intermittent claudication. Subsequent investigation with computed tompgraphy and I-123 metaiodobenzylguanine scintigraphy as well as an overview of the literature are also presented.", "title": "" }, { "docid": "fadbfcc98ad512dd788f6309d0a932af", "text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of mobile social networks, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device (D2D) communications. Specifically, as handheld devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game-theoretic framework to devise social-tie-based cooperation strategies for D2D communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, truthful, and computationally efficient. We evaluate the performance of the mechanism by using real social data traces. Simulation results corroborate that the proposed mechanism can achieve significant performance gain over the case without D2D cooperation.", "title": "" }, { "docid": "4f3b91bfaa2304e78ad5cd305fb5d377", "text": "The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. 
The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time. 1993 Academic Press, Inc.", "title": "" }, { "docid": "cc3fbbff0a4d407df0736ef9d1be5dd0", "text": "The purpose of this study is to examine the effect of brand image benefits on satisfaction and loyalty intention in the context of color cosmetic product. Five brand image benefits consisting of functional, social, symbolic, experiential and appearance enhances were investigated. A survey carried out on 97 females showed that functional and appearance enhances significantly affect loyalty intention. Four of brand image benefits: functional, social, experiential, and appearance enhances are positively related to overall satisfaction. The results also indicated that overall satisfaction does influence customers' loyalty. The results imply that marketers should focus on brand image benefits in their effort to achieve customer loyalty.", "title": "" }, { "docid": "f07c06a198547aa576b9a6350493e6d4", "text": "In this paper we examine the diffusion of competing rumors in social networks. Two players select a disjoint subset of nodes as initiators of the rumor propagation, seeking to maximize the number of persuaded nodes. We use concepts of game theory and location theory and model the selection of starting nodes for the rumors as a strategic game. We show that computing the optimal strategy for both the first and the second player is NP-complete, even in a most restricted model. Moreover we prove that determining an approximate solution for the first player is NP-complete as well. We analyze several heuristics and show that—counter-intuitively—being the first to decide is not always an advantage, namely there exist networks where the second player can convince more nodes than the first, regardless of the first player’s decision.", "title": "" }, { "docid": "186145f38fd2b0e6ff41bb50cdeace13", "text": "Automatic sarcasm detection is the task of predicting sarcasm in text. This is a crucial step to sentiment analysis, considering prevalence and challenges of sarcasm in sentiment-bearing text. Beginning with an approach that used speech-based features, automatic sarcasm detection has witnessed great interest from the sentiment analysis community. This article is a compilation of past work in automatic sarcasm detection. We observe three milestones in the research so far: semi-supervised pattern extraction to identify implicit sentiment, use of hashtag-based supervision, and incorporation of context beyond target text. In this article, we describe datasets, approaches, trends, and issues in sarcasm detection. We also discuss representative performance values, describe shared tasks, and provide pointers to future work, as given in prior works. 
In terms of resources to understand the state-of-the-art, the survey presents several useful illustrations—most prominently, a table that summarizes past papers along different dimensions such as the types of features, annotation techniques, and datasets used.", "title": "" }, { "docid": "ee141b7fd5c372fb65d355fe75ad47af", "text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.", "title": "" }, { "docid": "ad56422f7dc5c9ebf8451e17565a79e8", "text": "Morphological changes of retinal vessels such as arteriovenous (AV) nicking are signs of many systemic diseases. In this paper, an automatic method for AV-nicking detection is proposed. The proposed method includes crossover point detection and AV-nicking identification. Vessel segmentation, vessel thinning, and feature point recognition are performed to detect crossover point. A method of vessel diameter measurement is proposed with processing of removing voids, hidden vessels and micro-vessels in segmentation. The AV-nicking is detected based on the features of vessel diameter measurement. The proposed algorithms have been tested using clinical images. The results show that nicking points in retinal images can be detected successfully in most cases.", "title": "" }, { "docid": "ac657141ed547f870ad35d8c8b2ba8f5", "text": "Induced by “big data,” “topic modeling” has become an attractive alternative to mapping cowords in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument (“The Leiden Manifesto”) and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.", "title": "" }, { "docid": "a0547eae9a2186d4c6f1b8307317f061", "text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. 
Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
8639281a8bea48ad1c41a6776530fc6e
Simulation of Impulse Voltage Generator and Impulse Testing of Insulator using MATLAB Simulink
[ { "docid": "d4aca467d0014b2c2359f5609a1a199b", "text": "MATLAB is specifically designed for simulating dynamic systems. This paper describes a method of modelling impulse voltage generator using Simulink, an extension of MATLAB. The equations for modelling have been developed and a corresponding Simulink model has been constructed. It shows that Simulink program becomes very useful in studying the effect of parameter changes in the design to obtain the desired impulse voltages and waveshapes from an impulse generator.", "title": "" } ]
[ { "docid": "80263ae3f1557378dea72dda9bfcf4a9", "text": "Recent state-of-the-art algorithms have achieved good performance on normal pedestrian detection tasks. However, pedestrian detection in crowded scenes is still challenging due to the significant appearance variation caused by heavy occlusions and complex spatial interactions. In this paper we propose a unified probabilistic framework to globally describe multiple pedestrians in crowded scenes in terms of appearance and spatial interaction. We utilize a mixture model, where every pedestrian is assumed in a special subclass and described by the sub-model. Scores of pedestrian parts are used to represent appearance and quadratic kernel is used to represent relative spatial interaction. For efficient inference, multi-pedestrian detection is modeled as a MAP problem and we utilize greedy algorithm to get an approximation. For discriminative parameter learning, we formulate it as a learning to rank problem, and propose Latent Rank SVM for learning from weakly labeled data. Experiments on various databases validate the effectiveness of the proposed approach.", "title": "" }, { "docid": "812fe4ad957d52b9d93ec396fc571173", "text": "The US physician Bratman’s remarkable observation that some individuals were becoming highly obsessive about healthy eating that drives them into pathology lead him to coin the term Orthorexia Nervosa (ON) (Bratman & Knight, 2000). Since this first report, however, research on ON has only been of mediocre quality. In fact, most studies have been conducted using flawed methodological approaches. As recently reported in an article published in the Journal Appetite, yet another study on ON investigating the “Prevalence of Orthorexia Nervosa among College Students Based on Bratman’s Test and Associated Tendencies“, based its findings on an assumingly flawed methodology (Bundros, Clifford, Silliman, & Morris, 2016). But first, let us start with an anecdote. Imagine being at a dinner partywith your friends. There is a large buffet lined upwith all sorts of different foods. A smorgasbord with several salads, cheeses and meats is deliciously arranged. A lot of the presented food is either marked “without XXX”, “free from YYY”, or is described as “vegan”, “paleo” or “gluten-free”. What people bring to dinner parties", "title": "" }, { "docid": "b56b90d98b4b1b136e283111e9acf732", "text": "Mobile phones are widely used nowadays and during the last years developed from simple phones to small computers with an increasing number of features. These result in a wide variety of data stored on the devices which could be a high security risk in case of unauthorized access. A comprehensive user survey was conducted to get information about what data is really stored on the mobile devices, how it is currently protected and if biometric authentication methods could improve the current state. This paper states the results from about 550 users of mobile devices. The analysis revealed a very low securtiy level of the devices. This is partly due to a low security awareness of their owners and partly due to the low acceptance of the offered authentication method based on PIN. 
Further results, like the experiences with mobile thefts and the willingness to use biometric authentication methods as an alternative to PIN authentication, are also stated.", "title": "" }, { "docid": "eed6db13b57d9e510c22b4a95936ea5b", "text": "Today data mining is widely used by companies with a strong consumer focus like retail, financial, communication and marketing organizations. Technically, data mining is the process of extracting the required information from huge databases. It allows users to analyze data from many different dimensions or angles, categorize it and summarize the relationships identified. The ultimate goal of this paper is to propose a methodology for improving the DB-SCAN algorithm to increase clustering accuracy. The proposed improvement is based on a back-propagation algorithm that calculates the Euclidean distance in a dynamic manner. This paper also presents the results obtained by implementing the proposed and existing methods and compares them in terms of execution time and accuracy.", "title": "" }, { "docid": "857132b27d87727454ec3019e52039ba", "text": "In this paper we will introduce an ensemble of codes called irregular repeat-accumulate (IRA) codes. IRA codes are a generalization of the repeat-accumulate codes introduced in [1], and as such have a natural linear-time encoding algorithm. We shall prove that on the binary erasure channel, IRA codes can be decoded reliably in linear time, using iterative sum-product decoding, at rates arbitrarily close to channel capacity. A similar result appears to be true on the AWGN channel, although we have no proof of this. We illustrate our results with numerical and experimental examples.", "title": "" }, { "docid": "a85803f14639bef7f4539bad631d088c", "text": "5.", "title": "" }, { "docid": "a4731b9d3bfa2813858ff9ea97668577", "text": "Both the Swenson and the Soave procedures have been adapted as transanal approaches. Our purpose is to compare the outcomes and complications between the transanal Swenson and Soave procedures. This clinical analysis involved a retrospective series of 148 pediatric patients with HD from Dec, 2001, to Dec, 2015. Perioperative/operative characteristics, postoperative complications, and outcomes between the 2 groups were analyzed. Students' t-test and chi-squared analysis were performed. In total 148 patients (Soave 69, Swenson 79) were included in our study. Mean follow-up was 3.5 years. There are no significant differences in overall hospital stay and bowel function. We noted significant differences in mean operating time, blood loss, and overall complications in favor of the Swenson group when compared to the Soave group (P < 0.05). According to our results, although the transanal pull-through Swenson procedure cannot reduce overall hospital stay or improve bowel function compared with the Soave procedure, it results in less blood loss, shorter operation time, and a lower complication rate.", "title": "" }, { "docid": "7d507a0b754a8029d28216e795cb7286", "text": "a Lake Michigan Field Station/Great Lakes Environmental Research Laboratory/NOAA, 1431 Beach St, Muskegon, MI 49441, USA b Great Lakes Environmental Research Laboratory/NOAA, 4840 S.
State Rd., Ann Arbor, MI 48108, USA c School Forest Resources, Pennsylvania State University, 434 Forest Resources Building, University Park, PA 16802, USA d School of Natural Resources and Environment, University of Michigan, 440 Church St., Ann Arbor, MI 48109, USA", "title": "" }, { "docid": "7ca8483e91485d29b58f0f98194c13a3", "text": "Managing Network Function (NF) service chains requires careful system resource management. We propose NFVnice, a user space NF scheduling and service chain management framework to provide fair, efficient and dynamic resource scheduling capabilities on Network Function Virtualization (NFV) platforms. The NFVnice framework monitors load on a service chain at high frequency (1000Hz) and employs backpressure to shed load early in the service chain, thereby preventing wasted work. Borrowing concepts such as rate proportional scheduling from hardware packet schedulers, CPU shares are computed by accounting for heterogeneous packet processing costs of NFs, I/O, and traffic arrival characteristics. By leveraging cgroups, a user space process scheduling abstraction exposed by the operating system, NFVnice is capable of controlling when network functions should be scheduled. NFVnice improves NF performance by complementing the capabilities of the OS scheduler but without requiring changes to the OS's scheduling mechanisms. Our controlled experiments show that NFVnice provides the appropriate rate-cost proportional fair share of CPU to NFs and significantly improves NF performance (throughput and loss) by reducing wasted work across an NF chain, compared to using the default OS scheduler. NFVnice achieves this even for heterogeneous NFs with vastly different computational costs and for heterogeneous workloads.", "title": "" }, { "docid": "ddd353b5903f12c14cc3af1163ac617c", "text": "Unmanned Aerial Vehicles (UAVs) have recently received notable attention because of their wide range of applications in urban civilian use and in warfare. With air traffic densities increasing, it is more and more important for UAVs to be able to predict and avoid collisions. The main goal of this research effort is to adjust real-time trajectories for cooperative UAVs to avoid collisions in three-dimensional airspace. To explore potential collisions, predictive state space is utilized to present the waypoints of UAVs in the upcoming situations, which makes the proposed method generate the initial collision-free trajectories satisfying the necessary constraints in a short time. Further, a rolling optimization algorithm (ROA) can improve the initial waypoints, minimizing its total distance. Several scenarios are illustrated to verify the proposed algorithm, and the results show that our algorithm can generate initial collision-free trajectories more efficiently than other methods in the common airspace.", "title": "" }, { "docid": "9998497c000fa194bf414604ff0d69b2", "text": "By embedding shorting vias, a dual-feed and dual-band L-probe patch antenna, with flexible frequency ratio and relatively small lateral size, is proposed. Dual resonant frequency bands are produced by two radiating patches located in different layers, with the lower patch supported by shorting vias. The measured impedance bandwidths, determined by 10 dB return loss, of the two operating bands reach 26.6% and 42.2%, respectively. Also the radiation patterns are stable over both operating bands. Simulation results are compared well with experiments. 
This antenna is highly suitable to be used as a base station antenna for multiband operation.", "title": "" }, { "docid": "d291df8866e40745591730b85a802f13", "text": "A revision of the nearly 8-year-old World Health Organization classification of the lymphoid neoplasms and the accompanying monograph is being published. It reflects a consensus among hematopathologists, geneticists, and clinicians regarding both updates to current entities as well as the addition of a limited number of new provisional entities. The revision clarifies the diagnosis and management of lesions at the very early stages of lymphomagenesis, refines the diagnostic criteria for some entities, details the expanding genetic/molecular landscape of numerous lymphoid neoplasms and their clinical correlates, and refers to investigations leading to more targeted therapeutic strategies. The major changes are reviewed with an emphasis on the most important advances in our understanding that impact our diagnostic approach, clinical expectations, and therapeutic strategies for the lymphoid neoplasms.", "title": "" }, { "docid": "ae2473ab9c004afd6908f32c7be1fd90", "text": "Financial fraud is an issue with far reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods of detection involve extensive use of auditing, where a trained individual manually observes reports or transactions in an attempt to discover fraudulent behaviour. This method is not only time consuming, expensive and inaccurate, but in the age of big data it is also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive investigation on financial fraud detection practices using such data mining methods, with a particular focus on computational intelligence-based techniques. Classification of the practices based on key aspects such as detection algorithm used, fraud type investigated, and success rate have been covered. Issues and challenges associated with the current practices and potential future direction of research have also been identified.", "title": "" }, { "docid": "01a649c8115810c8318e572742d9bd00", "text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. 
We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.", "title": "" }, { "docid": "95e212c0b9b40b4dcb7dc4a94b0c0fd2", "text": "In this paper we introduce and discuss a concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in the manner how we construct them, i.e., what elements are considered neighbors. In case of sn-grams, the neighbors are taken by following syntactic relations in syntactic trees, and not by taking words as they appear in a text, i.e., sn-grams are constructed by following paths in syntactic trees. In this manner, sn-grams allow bringing syntactic knowledge into machine learning methods; still, previous parsing is necessary for their construction. Sn-grams can be applied in any natural language processing (NLP) task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. We used as baseline traditional n-grams of words, part of speech (POS) tags and characters; three classifiers were applied: support vector machines (SVM), naive Bayes (NB), and tree classifier J48. Sn-grams give better results with SVM classifier. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "626470bd5182dd2a6d4e8a09b31731df", "text": "In this paper, we present a semi-supervised method for automatic speech act recognition in email and forums. The major challenge of this task is due to lack of labeled data in these two genres. Our method leverages labeled data in the SwitchboardDAMSL and the Meeting Recorder Dialog Act database and applies simple domain adaptation techniques over a large amount of unlabeled email and forum data to address this problem. Our method uses automatically extracted features such as phrases and dependency trees, called subtree features, for semi-supervised learning. Empirical results demonstrate that our model is effective in email and forum speech act recognition.", "title": "" }, { "docid": "748abc573febb27f9b9eae92ec68fff7", "text": "In this paper we develop a computational model of adaptation and spatial vision for realistic tone reproduction. The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system. We incorporate the model into a tone reproduction operator that maps the vast ranges of radiances found in real and synthetic scenes into the small fixed ranges available on conventional display devices such as CRT’s and printers. The model allows the operator to address the two major problems in realistic tone reproduction: wide absolute range and high dynamic range scenes can be displayed; and the displayed images match our perceptions of the scenes at both threshold and suprathreshold levels to the degree possible given a particular display device. Although in this paper we apply our visual model to the tone reproduction problem, the model is general and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms. CR Categories: I.3.0 [Computer Graphics]: General;", "title": "" }, { "docid": "25ad730b651ce9168fb008a6013e184f", "text": "Model-Based Engineering (MBE) is a promising approach to cope with the challenges of designing the next-generation automotive systems. 
The increasing complexity of automotive electronics, the platform, distributed real-time embedded software, and the need for continuous evolution from one generation to the next has necessitated highly productive design approaches. However, heterogeneity, interoperability, and the lack of formal semantic underpinning in modeling, integration, validation and optimization make design automation a big challenge, which becomes a hindrance to the wider application of MBE in the industry. This paper briefly presents the interoperability challenges in the context of MBE and summarizes our current contribution to address these challenges with regard to automotive control software systems. A novel model-based formal integration framework is being developed to enable architecture modeling, timing specification, formal semantics, design by contract and optimization in the system-level design. The main advantages of the proposed approach include its pervasive use of formal methods, architecture analysis and design language (AADL) and associated tools, a novel timing annex for AADL with an expressive timing relationship language, a formal contract language to express component-level requirements and validation of component integration, and the resulting high assurance system delivery.", "title": "" }, { "docid": "3d2e170b4cd31d0e1a28c968f0b75cf6", "text": "Fog Computing is a new variety of the cloud computing paradigm that brings virtualized cloud services to the edge of the network to control the devices in the IoT. We present a pattern for fog computing which describes its architecture, including its computing, storage and networking services. Fog computing is implemented as an intermediate platform between end devices and cloud computing data centers. The recent popularity of the Internet of Things (IoT) has made fog computing a necessity to handle a variety of devices. It has been recognized as an important platform to provide efficient, location aware, close to the edge, cloud services. Our model includes most of the functionality found in current fog architectures.", "title": "" }, { "docid": "f5ebdd45dfbdb865c163845c49031e1b", "text": "Recent years have seen deep neural networks (DNNs) becoming wider and deeper to achieve better performance in many applications of AI. Such DNNs however require huge amounts of memory to store weights and intermediate results (e.g., activations, feature maps, etc.) in propagation. This requirement makes it difficult to run the DNNs on devices with limited, hard-to-extend memory, degrades the running time performance, and restricts the design of network models. We address this challenge by developing a novel profile-guided memory optimization to efficiently and quickly allocate memory blocks during the propagation in DNNs. The optimization utilizes a simple and fast heuristic algorithm based on the two-dimensional rectangle packing problem. Experimenting with well-known neural network models, we confirm that our method not only reduces the memory consumption by up to 49.5% but also accelerates training and inference by up to a factor of four thanks to the rapidity of the memory allocation and the ability to use larger mini-batch sizes.", "title": "" } ]
scidocsrr
448b9aca3ae556eef41c4dafe3f77a66
VIRAL ADVERTISING IN SOCIAL MEDIA : PARTICIPATION IN FACEBOOK GROUPS AND RESPONSES AMONG COLLEGE-AGED USERS
[ { "docid": "9948738a487ed899ec50ac292e1f9c6d", "text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.", "title": "" } ]
[ { "docid": "86454998a4fd8091712e16cfc2783966", "text": "Modern digital cameras rely on sequential execution of separate image processing steps to produce realistic images. The first two steps are usually related to denoising and demosaicking where the former aims to reduce noise from the sensor and the latter converts a series of light intensity readings to color images. Modern approaches try to jointly solve these problems, i.e joint denoising-demosaicking which is an inherently ill-posed problem given that two-thirds of the intensity information are missing and the rest are perturbed by noise. While there are several machine learning systems that have been recently introduced to tackle this problem, in this work we propose a novel algorithm which is inspired by powerful classical image regularization methods, large-scale optimization and deep learning techniques. Consequently, our derived iterative neural network has a transparent and clear interpretation compared to other black-box data driven approaches. The extensive comparisons that we report demonstrate the superiority of our proposed network, which outperforms any previous approaches on both noisy and noise-free data across many different datasets using less training samples. This improvement in reconstruction quality is attributed to the principled way we design and train our network architecture, which as a result requires fewer trainable parameters than the current state-of-the-art solution.", "title": "" }, { "docid": "8fe1869ea4865f6ad73b96aa4c0e5e3e", "text": "This study assessed the hypothesis that popularity in adolescence takes on a twofold role, marking high levels of concurrent adaptation but predicting increases over time in both positive and negative behaviors sanctioned by peer norms. Multimethod, longitudinal data, on a diverse community sample of 185 adolescents (13 to 14 years), addressed these hypotheses. As hypothesized, popular adolescents displayed higher concurrent levels of ego development, secure attachment, and more adaptive interactions with mothers and best friends. Longitudinal analyses supported a popularity-socialization hypothesis, however, in which popular adolescents were more likely to increase behaviors that receive approval in the peer group (e.g., minor levels of drug use and delinquency) and decrease behaviors unlikely to be well received by peers (e.g., hostile behavior with peers).", "title": "" }, { "docid": "d18787b890b6e6a0a9337eb2b3d3e6a8", "text": "This paper introduces the end-to-end embedding of a CNN into a HMM, while interpreting the outputs of the CNN in a Bayesian fashion. The hybrid CNN-HMM combines strong discriminative abilities of CNNs with sequence modeling capabilities of HMMs. Most current approaches in the field of gesture and sign language recognition disregard the necessity of dealing with sequence data both for training and evaluation. With our presented end-to-end embedding we are able to improve over the state-of-the-art on three challenging benchmark continuous sign language recognition tasks by between 15% & 38% relative & up to 13.3% absolute.", "title": "" }, { "docid": "ccf8e1f627af3fe1327a4fa73ac12125", "text": "One of the most common needs in manufacturing plants is rejecting products not coincident with the standards as anomalies. Accurate and automatic anomaly detection improves product reliability and reduces inspection cost. Probabilistic models have been employed to detect test samples with lower likelihoods as anomalies in unsupervised manner. 
Recently, a probabilistic model called deep generative model (DGM) has been proposed for end-to-end modeling of natural images and already achieved a certain success. However, anomaly detection of machine components with complicated structures is still challenging because they produce a wide variety of normal image patches with low likelihoods. For overcoming this difficulty, we propose unregularized score for the DGM. As its name implies, the unregularized score is the anomaly score of the DGM without the regularization terms. The unregularized score is robust to the inherent complexity of a sample and has a smaller risk of rejecting a sample appearing less frequently but being coincident with the standards.", "title": "" }, { "docid": "f941c1f5e5acd9865e210b738ff1745a", "text": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.", "title": "" }, { "docid": "cd73d3acb274d179b52ec6930f6f26bd", "text": "We present the design and implementation of new inexact Newton type Bundle Adjustment algorithms that exploit hardware parallelism for efficiently solving large scale 3D scene reconstruction problems. We explore the use of multicore CPU as well as multicore GPUs for this purpose. We show that overcoming the severe memory and bandwidth limitations of current generation GPUs not only leads to more space efficient algorithms, but also to surprising savings in runtime. Our CPU based system is up to ten times and our GPU based system is up to thirty times faster than the current state of the art methods [1], while maintaining comparable convergence behavior. The code and additional results are available at http://grail.cs. washington.edu/projects/mcba.", "title": "" }, { "docid": "bd5e8b3e74660644ed23549be6d247c5", "text": "Recent work in ultra-low-power sensor platforms has enabled a number of new applications in medical, infrastructure, and environmental monitoring. Due to their limited energy storage volume, these sensors operate with long idle times and ultra-low standby power ranging from 10s of nW down to 100s of pW [1–2]. Since radio transmission is relatively expensive, even at the lowest reported power of 0.2mW [3], wireless communication between sensor nodes must be performed infrequently. Accurate measurement of the time interval between communication events (i.e. the synchronization cycle) is of great importance. Inaccuracy in the synchronization cycle time results in a longer period of uncertainty where sensor nodes are required to enable their radios to establish communication (Fig. 2.7.1), quickly making radios dominate the energy budget. Quartz crystal oscillators and CMOS harmonic oscillators exhibit very small sensitivity to supply voltage and temperature [4] but cannot be used in the target application space since they operate at very high frequencies and exhibit power consumption that is several orders of magnitude larger (>300nW) than the needed idle power. 
A gate-leakage-based timer was proposed [5] that leveraged small gate leakage currents to achieve power consumption within the required budget (< 1nW). However, this timer incurs high RMS jitter (1400ppm) and temperature sensitivity (0.16%/ºC). A 150pW program-and-hold timer was proposed [6] to reduce temperature sensitivity but its drifting clock frequency limits its use for synchronization. The quality of a timer is not captured well by RMS jitter since it ignores the averaging of jitter over multiple timer clock periods in a single synchronization cycle. Instead, we propose the uncertainty in a single synchronization cycle of length T as new metric and use this synchronization uncertainty (SU) to evaluate different timer approaches. The timer period is a random variable X(n), with mean and sigma, μ and σ. Given a synchronization cycle time T, consisting of N timer periods, we define SU as the standard deviation of T as given by √T/μ × σ, assuming X(n) is Gaussian. Note that a smaller clock period increases N and results in more averaging and a lower SU with fixed jitter (σ/μ).", "title": "" }, { "docid": "985c7b11637706e60726cf168790e594", "text": "This Exploratory paper’s second part reveals the detail technological aspects of Hand Gesture Recognition (HGR) System. It further explored HGR basic building blocks, its application areas and challenges it faces. The paper also provides literature review on latest upcoming techniques like – Point Grab, 3D Mouse and Sixth-Sense etc. The paper concluded with focus on major Application fields.", "title": "" }, { "docid": "df095688abccd8cc8e84b873684c8729", "text": "Information technologies (ITs) prevail all functions of strategic and operational management. As information is the lifeblood of tourism, ITs provide both opportunities and challenges for the industry. Despite the uncertainty experienced in the developments of ITs in tourism, the \"only constant will be change\". Increasingly, organisations and destinations, which need to compete will be forced to compute. Unless the current tourism industry improves its competitiveness, by utilising the emerging ITs and innovative management methods, there is a danger for exogenous players to enter the marketplace, jeopardising the position of the existing ones. Only creative and innovative suppliers will be able to survive the competition in the new millennium. This paper provides a framework for the utilisation of technology in tourism by adopting a strategic perspective. A continuous business process re-engineering is proposed in order to ensure that a wide range of prerequisites such as vision, rational organisation, commitment and training are in place, so they can enable destinations and principals to capitalise on the unprecedented opportunities emerging through ITs.", "title": "" }, { "docid": "eaae33cb97b799eff093a7a527143346", "text": "RGB Video now is one of the major data sources of traffic surveillance applications. In order to detect the possible traffic events in the video, traffic-related objects, such as vehicles and pedestrians, should be first detected and recognized. However, due to the 2D nature of the RGB videos, there are technical difficulties in efficiently detecting and recognizing traffic-related objects from them. For instance, the traffic-related objects cannot be efficiently detected in separation while parts of them overlap, and complex background will influence the accuracy of the object detection. 
In this paper, we propose a robust RGB-D data based traffic scene understanding algorithm. By integrating depth information, we can calculate more discriminative object features and spatial information can be used to separate the objects in the scene efficiently. Experimental results show that integrating depth data can improve the accuracy of object detection and recognition. We also show that the analyzed object information plus depth data facilitate two important traffic event detection applications: overtaking warning and collision", "title": "" }, { "docid": "d41ac7c4301e5efa591f1949327acb38", "text": "During even the most quiescent behavioral periods, the cortex and thalamus express rich spontaneous activity in the form of slow (<1 Hz), synchronous network state transitions. Throughout this so-called slow oscillation, cortical and thalamic neurons fluctuate between periods of intense synaptic activity (Up states) and almost complete silence (Down states). The two decades since the original characterization of the slow oscillation in the cortex and thalamus have seen considerable advances in deciphering the cellular and network mechanisms associated with this pervasive phenomenon. There are, nevertheless, many questions regarding the slow oscillation that await more thorough illumination, particularly the mechanisms by which Up states initiate and terminate, the functional role of the rhythmic activity cycles in unconscious or minimally conscious states, and the precise relation between Up states and the activated states associated with waking behavior. Given the substantial advances in multineuronal recording and imaging methods in both in vivo and in vitro preparations, the time is ripe to take stock of our current understanding of the slow oscillation and pave the way for future investigations of its mechanisms and functions. My aim in this Review is to provide a comprehensive account of the mechanisms and functions of the slow oscillation, and to suggest avenues for further exploration.", "title": "" }, { "docid": "188c55ef248f7021a66c1f2e05c2fc98", "text": "The objective of the proposed study is to explore the performance of credit scoring using a two-stage hybrid modeling procedure with artificial neural networks and multivariate adaptive regression splines (MARS). The rationale under the analyses is firstly to use MARS in building the credit scoring model, the obtained significant variables are then served as the input nodes of the neural networks model. To demonstrate the effectiveness and feasibility of the proposed modeling procedure, credit scoring tasks are performed on one bank housing loan dataset using cross-validation approach. As the results reveal, the proposed hybrid approach outperforms the results using discriminant analysis, logistic regression, artificial neural networks and MARS and hence provides an alternative in handling credit scoring tasks. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "39ea1ed1011425eb092b6ebccb39d5f2", "text": "One of the basic goals of software engineering is the establishment of useful models and equations to predict the cost of any given programming project. Many models have been proposed over the last several years, but, because of differences in the data collected, types of projects and environmental factors among software development sites, these models are not transportable and are only valid within the organization where they were developed. 
This result seems reasonable when one considers that a model developed at a certain environment will only be able to capture the impact of the factors which have a variable effect within that environment. Those factors which are constant at that environment, and therefore do not cause variations in the productivity among projects produced there, may have different or variable effects at another environment.\n This paper presents a model-generation process which permits the development of a resource estimation model for any particular organization. The model is based on data collected by that organization which captures its particular environmental factors and the differences among its particular projects. The process provides the capability of producing a model tailored to the organization which can be expected to be more effective than any model originally developed for another environment. It is demonstrated here using data collected from the Software Engineering Laboratory at the NASA/Goddard Space Flight Center.", "title": "" }, { "docid": "20d754528009ebce458eaa748312b2fe", "text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.", "title": "" }, { "docid": "ddd15d0b877d3f9ae8c8cb104178fcf0", "text": "Senescence, defined as irreversible cell-cycle arrest, is the main driving force of aging and age-related diseases. Here, we performed high-throughput screening to identify compounds that alleviate senescence and identified the ataxia telangiectasia mutated (ATM) inhibitor KU-60019 as an effective agent. To elucidate the mechanism underlying ATM's role in senescence, we performed a yeast two-hybrid screen and found that ATM interacted with the vacuolar ATPase V1 subunits ATP6V1E1 and ATP6V1G1. Specifically, ATM decreased E-G dimerization through direct phosphorylation of ATP6V1G1. Attenuation of ATM activity restored the dimerization, thus consequently facilitating assembly of the V1 and V0 domains with concomitant reacidification of the lysosome. In turn, this reacidification induced the functional recovery of the lysosome/autophagy system and was coupled with mitochondrial functional recovery and metabolic reprogramming. Together, our data reveal a new mechanism through which senescence is controlled by the lysosomal-mitochondrial axis, whose function is modulated by the fine-tuning of ATM activity.", "title": "" }, { "docid": "f2b3643ca7a9a1759f038f15847d7617", "text": "Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. 
Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of largescale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. Results are presented on images from the publicly available Berkeley Segmentation dataset.", "title": "" }, { "docid": "c971c19f8006f92cb013adca941e36aa", "text": "In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixelwise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN over other state-of-the-arts for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72 percent by Co-CNN, significantly higher than 62.81 percent and 64.38 percent by the state-of-the-art algorithms, M-CNN [2] and ATR [1], respectively. By utilizing our newly collected large dataset for training, our Co-CNN can achieve 85.36 percent in F-1 score.", "title": "" }, { "docid": "15ec613478f058ef1ad2e213d32001ec", "text": "In this paper we present the first planner for the problem of Navigation Among Movable Obstacles (NAMO) on a real robot that can handle environments with under-specified object dynamics. This result makes use of recent progress from two threads of the Reinforcement Learning literature. The first is a hierarchical Markov-Decision Process formulation of the NAMO problem designed to handle dynamics uncertainty. The second is a physics-based Reinforcement Learning framework which offers a way to ground this uncertainty in a compact model space that can be efficiently updated from data received by the robot online. 
Our results demonstrate the ability of a robot to adapt to unexpected object behavior in a real office scenario.", "title": "" }, { "docid": "aa16ca139a7648f7d9bb3ff81aaf0bbc", "text": "Atherosclerosis has an important inflammatory component and acute cardiovascular events can be initiated by inflammatory processes occurring in advanced plaques. Fatty acids influence inflammation through a variety of mechanisms; many of these are mediated by, or associated with, the fatty acid composition of cell membranes. Human inflammatory cells are typically rich in the n-6 fatty acid arachidonic acid, but the contents of arachidonic acid and of the marine n-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) can be altered through oral administration of EPA and DHA. Eicosanoids produced from arachidonic acid have roles in inflammation. EPA also gives rise to eicosanoids and these are usually biologically weak. EPA and DHA give rise to resolvins which are anti-inflammatory and inflammation resolving. EPA and DHA also affect production of peptide mediators of inflammation (adhesion molecules, cytokines, etc.). Thus, the fatty acid composition of human inflammatory cells influences their function; the contents of arachidonic acid, EPA and DHA appear to be especially important. The anti-inflammatory effects of marine n-3 polyunsaturated fatty acids (PUFAs) may contribute to their protective actions towards atherosclerosis and plaque rupture.", "title": "" } ]
scidocsrr
99f0bbd36783b02ec4fad99363c4483a
A Machine Learning Framework to Identify Students at Risk of Adverse Academic Outcomes
[ { "docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79", "text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.", "title": "" } ]
[ { "docid": "72a2a4f84a685a01a8a9df332734e90a", "text": "We propose a method for detecting obstacles by comparing input and reference train frontal view camera images. In the field of obstacle detection, most methods employ a machine learning approach, so they can only detect pre-trained classes, such as pedestrian, bicycle, etc. This means that obstacles of unknown classes cannot be detected. To overcome this problem, we propose a background subtraction method that can be applied to moving cameras. First, the proposed method computes frame-by-frame correspondences between the current and the reference (database) image sequences. Then, obstacles are detected by applying image subtraction to corresponding frames. To confirm the effectiveness of the proposed method, we conducted an experiment using several image sequences captured on an experimental track. Its results showed that the proposed method could detect various obstacles accurately and effectively.", "title": "" }, { "docid": "50b9ee0abf12d87e4a0de727bbd2c1d7", "text": "This paper summarizes the concepts of IOT (the internet of things), including the structure of IOT and the implementations of IOT functions. This paper also introduces the telemedicine, including the advantages of telemedicines and the telemedicine in China. At last the paper illustrates the technologies of IOT used in medical system.", "title": "" }, { "docid": "96aced9d0a24431f303eec8b5293c93f", "text": "The discourse properties of text have long been recognized as critical to language technology, and over the past 40 years, our understanding of and ability to exploit the discourse properties of text has grown in many ways. This essay briefly recounts these developments, the technology they employ, the applications they support, and the new challenges that each subsequent development has raised. We conclude with the challenges faced by our current understanding of discourse, and the applications that meeting these challenges will promote. 1 Why bother with discourse? Research in Natural Language Processing (NLP) has long benefitted from the fact that text can often be treated as simply a bag of words or a bag of sentences. But not always: Position often matters — e.g., It is well-known that the first one or two sentences in a news report usually comprise its best extractive summary. Order often matters – e.g., very different events are conveyed depending on how clauses and sentences are ordered. (1) a. I said the magic words, and a genie appeared. b. A genie appeared, and I said the magic words. Adjacency often matters — e.g., attributed material may span a sequence of adjacent sentences, and contrasts are visible through sentence juxtaposition. Context always matters — e.g., All languages achieve economy through minimal expressions that can only convey intended meaning when understood in context. Position, order, adjacency and context are intrinsic features of discourse, and research on discourse processing attempts to solve the challenges posed by context-bound expressions and the discourse structures that give rise, when linearized, to position, order and adjacency. But challenges are not why Language Technology (LT) researchers should care about discourse: Rather, discourse can enable LT to overcome known obstacles to better performance. Consider automated summarization and machine translation: Humans regularly judge output quality in terms that include referential clarity and coherence. 
Systems can only improve here by paying attention to discourse — i.e., to linguistic features above the level of ngrams and single sentences. (In fact, we predict that as soon as cheap — i.e., non-manual – methods are found for reliably assessing these features — for example, using proxies like those suggested in (Pitler et al., 2010) — they will supplant, or at least complement today’s common metrics, Bleu and Rouge that say little about what matters to human text understanding (Callison-Burch et al., 2006).) Consider also work on automated text simplification: One way that human editors simplify text is by re-expressing a long complex sentence as a discourse sequence of simple sentences. Researchers should be able to automate this through understanding the various ways that information is conveyed in discourse. Other examples of LT applications already benefitting from recognizing and applying discourse-level information include automated assessment of student essays (Burstein and Chodorow, 2010); summarization (Thione et al., 2004), infor-", "title": "" }, { "docid": "c0e70347999c028516eb981a15b8a6c8", "text": "Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last. fm, Library Thing, and Amazon.", "title": "" }, { "docid": "219461d5edbcf1c71c3fe7eb70028c65", "text": "Sparse matrixlization, an innovative programming style for MATLAB, is introduced and used to develop an efficient software package, iFEM, on adaptive finite element methods. In this novel coding style, the sparse matrix and its operation is used extensively in the data structure and algorithms. Our main algorithms are written in one page long with compact data structure following the style “Ten digit, five seconds, and one page” proposed by Trefethen. The resulting code is simple, readable, and efficient. A unique strength of iFEM is the ability to perform three dimensional local mesh refinement and two dimensional mesh coarsening which are not available in existing MATLAB packages. Numerical examples indicate that iFEM can solve problems with size 105 unknowns in few seconds in a standard laptop. iFEM can let researchers considerably reduce development time than traditional programming methods.", "title": "" }, { "docid": "235e1f328a847fa7b6e074a58defed0b", "text": "A stemming algorithm, a procedure to reduce all words with the same stem to a common form, is useful in many areas of computational linguistics and information-retrieval work. While the form of the algorithm varies with its application, certain linguistic problems are common to any stemming procedure. As a basis for evaluation of previous attempts to deal with these problems, this paper first discusses the theoretical and practical attributes of stemming algorithms. 
Then a new version of a context-sensitive, longest-match stemming algorithm for English is proposed; though developed for use in a library information transfer system, it is of general application. A major linguistic problem in stemming, variation in spelling of stems, is discussed in some detail and several feasible programmed solutions are outlined, along with sample results of one of these methods.", "title": "" }, { "docid": "08ab7142ae035c3594d3f3ae339d3e27", "text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.", "title": "" }, { "docid": "29750fbd19b2e513729e240d7ce78eda", "text": "Coinciding with the widespread adoption of 3G and 4G smartphones among consumers, mobile marketing has increasingly become a staple tactic in brands’ advertising and promotional efforts. Target, Ralph Lauren, Dunkin Donuts, Starbucks, Volkswagen, Chanel, FIFA, and Puma represent just a few consumer brands from the United States, Europe, and Asia that have begun to aggressively adopt untethered mobile marketing platforms to forge closer and more relevant connections with specific audiences. In the U.S. alone, companies’ spending on mobile advertising and promotions and their ability to deliver brands to consumers is forecast to grow approximately 600% from $9.3 billion in 2010 to $56.5 billion by 2015 (Marketing Charts, 2011). Business Horizons (2012) 55, 485—493", "title": "" }, { "docid": "b13d4d5253a116153778d0f343bf76d7", "text": "OBJECTIVES\nThe purpose of this study was to investigate the effect of dynamic soft tissue mobilisation (STM) on hamstring flexibility in healthy male subjects.\n\n\nMETHODS\nForty five males volunteered to participate in a randomised, controlled single blind design study. Volunteers were randomised to either control, classic STM, or dynamic STM intervention. The control group was positioned prone for 5 min. The classic STM group received standard STM techniques performed in a neutral prone position for 5 min. The dynamic STM group received all elements of classic STM followed by distal to proximal longitudinal strokes performed during passive, active, and eccentric loading of the hamstring. Only specific areas of tissue tightness were treated during the dynamic phase. Hamstring flexibility was quantified as hip flexion angle (HFA) which was the difference between the total range of straight leg raise and the range of pelvic rotation. Pre- and post-testing was conducted for the subjects in each group. A one-way ANCOVA followed by pairwise post-hoc comparisons was used to determine whether change in HFA differed between groups. 
The alpha level was set at 0.05.\n\n\nRESULTS\nIncrease in hamstring flexibility was significantly greater in the dynamic STM group than either the control or classic STM groups with mean (standard deviation) increase in degrees in the HFA measures of 4.7 (4.8), -0.04 (4.8), and 1.3 (3.8), respectively.\n\n\nCONCLUSIONS\nDynamic soft tissue mobilisation (STM) significantly increased hamstring flexibility in healthy male subjects.", "title": "" }, { "docid": "d5ddc141311afb6050a58be88303b577", "text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.", "title": "" }, { "docid": "c0a05cad5021b1e779682b50a53f25fd", "text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area. ∗Supported by NSF, MURI, and the Packard foundation. †Supported by NSF CNS-0716199, CNS-0915361, and CNS-0952692, Air Force Office of Scientific Research (AFO SR) under the MURI award for “Collaborative policies and assured information sharing” (Project PRESIDIO), Department of Homeland Security Grant 2006-CS-001-000001-02 (subaward 641), and the Alfred P. 
Sloan Foundation.", "title": "" }, { "docid": "fa4f9e00ae199f34f2c28cb56799c7e5", "text": "OBJECTIVE\nTo examine how concurrent partnerships amplify the rate of HIV spread, using methods that can be supported by feasible data collection.\n\n\nMETHODS\nA fully stochastic simulation is used to represent a population of individuals, the sexual partnerships that they form and dissolve over time, and the spread of an infectious disease. Sequential monogamy is compared with various levels of concurrency, holding all other features of the infection process constant. Effective summary measures of concurrency are developed that can be estimated on the basis of simple local network data.\n\n\nRESULTS\nConcurrent partnerships exponentially increase the number of infected individuals and the growth rate of the epidemic during its initial phase. For example, when one-half of the partnerships in a population are concurrent, the size of the epidemic after 5 years is 10 times as large as under sequential monogamy. The primary cause of this amplification is the growth in the number of people connected in the network at any point in time: the size of the largest \"component'. Concurrency increases the size of this component, and the result is that the infectious agent is no longer trapped in a monogamous partnership after transmission occurs, but can spread immediately beyond this partnership to infect others. The summary measure of concurrency developed here does a good job in predicting the size of the amplification effect, and may therefore be a useful and practical tool for evaluation and intervention at the beginning of an epidemic.\n\n\nCONCLUSION\nConcurrent partnerships may be as important as multiple partners or cofactor infections in amplifying the spread of HIV. The public health implications are that data must be collected properly to measure the levels of concurrency in a population, and that messages promoting one partner at a time are as important as messages promoting fewer partners.", "title": "" }, { "docid": "9dc52cd5a58077f74868f48021b390af", "text": "Background: Motor development allows infants to gain knowledge of the world but its vital role in social development is often ignored. Method: A systematic search for papers investigating the relationship between motor and social skills was conducted , including research in typical development and in Developmental Coordination Disorder, Autism Spectrum Disorders and Specific Language Impairment. R sults: The search identified 42 studies, many of which highlighted a significant relationship between motor skills and the development of social cognition, language and social interactions. Conclusions: This complex relationship requires more attention from researchers and practitioners, allowing the development of more tailored intervention techniques for those at risk of motor, social and language difficulties. 
Key Practitioner Message: • Significant relationships exist between the development of motor skills, social cognition, language and social interactions in typical and atypical development. • Practitioners should be aware of the relationships between these aspects of development and understand the impact that early motor difficulties may have on later social skills. • Complex relationships between motor and social skills are evident in children with ASD, DCD and SLI. • Early screening and more targeted interventions may be appropriate.", "title": "" }, { "docid": "42a412b11300ec8d7721c1f532dadfb9", "text": "Most data-driven dependency parsing approaches assume that sentence structure is represented as trees. Although trees have several desirable properties from both computational and linguistic perspectives, the structure of linguistic phenomena that goes beyond shallow syntax often cannot be fully captured by tree representations. We present a parsing approach that is nearly as simple as current data-driven transition-based dependency parsing frameworks, but outputs directed acyclic graphs (DAGs). We demonstrate the benefits of DAG parsing in two experiments where its advantages over dependency tree parsing can be clearly observed: predicate-argument analysis of English and syntactic analysis of Danish with a representation that includes long-distance dependencies and anaphoric reference links.", "title": "" }, { "docid": "a21678593e9edf3eb80ae80e1d4e6947", "text": "Together with some remarkable breakthroughs in recent months, such as Watson or the Google Car, we stand on the threshold of a new era in Artificial Intelligence. “Distributed Artificial Intelligence” plays a special role here, and Cyber Physical Systems and the Internet of Things are “booming”. At their core, the former are a network of mostly technical subcomponents that communicate with one another over an Internet-based data infrastructure; the latter is the extension of the Internet's “participation concept”: participants are no longer exclusively humans but also “things” – such as a car's sensors, climate data stations, process data computers in production engineering, and other systems that carry information and/or interact directly with the environment.", "title": "" }, { "docid": "48b1fdb9343aee6582f11013d63667de", "text": "Most of the state-of-the-art work and research on automatic sentiment analysis and opinion mining of texts collected from social networks and microblogging websites is oriented towards the classification of texts into positive and negative. In this paper, we propose a pattern-based approach that goes deeper in the classification of texts collected from Twitter (i.e., tweets). We classify the tweets into 7 different classes; however, the approach can be run to classify into more classes. Experiments show that our approach reaches an accuracy of classification equal to 56.9% and a precision level of sentimental tweets (other than neutral and sarcastic) equal to 72.58%. 
Nevertheless, the approach proves to be very accurate in binary classification (i.e., classification into “positive” and “negative”) and ternary classification (i.e., classification into “positive”, “negative” and “neutral”): in the former case, we reach an accuracy of 87.5% for the same dataset used after removing neutral tweets, and in the latter case, we reached an accuracy of classification of 83.0%.", "title": "" }, { "docid": "65ac52564041b0c2e173560d49ec762f", "text": "Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally-meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called “educational data mining” or “learning analytics” (henceforth, EDM) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inference about learning and learners, while providing EDM researchers with many interesting new research questions and challenges. In recent years, project-based, student-centered approaches to education have gained prominence, due in part to an increased demand for higher-level skills in the job market (Levi and Murname, 2004), positive research findings on the effectiveness of such approaches (Barron, Pearson, et al., 2008), and a broader acceptance in public policy circles, as shown, for example, by the Next Generation Science Standards (NGSS Lead States, 2013). While several approaches for this type of learning exist, Constructionism is one of the most popular and well-developed ones (Papert, 1980). In this paper, we investigate the relevance of a set of approaches called “educational data mining” or “learning analytics” (henceforth abbreviated as ‘EDM’) (R. Baker & Yacef, 2009; Romero & Ventura, 2010a; R. Baker & Siemens, in press) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. As such, EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition and to the wider community of learning scientists and policymakers. EDM, broadly, is a set of methods that apply data mining and machine learning techniques such as prediction, classification, and discovery of latent structural regularities to rich, voluminous, and idiosyncratic educational data, potentially similar to those data generated by many constructionist learning environments which allows students to explore and build their own artifacts, computer programs, and media pieces. As such, we identify four axes in which EDM methods may be helpful for constructionist research: 1. 
EDM methods do not require constructionists to abandon deep qualitative analysis for simplistic summative or confirmatory quantitative analysis; 2. EDM methods can generate different and complementary new analyses to support qualitative research; 3. By enabling precise formative assessments of complex constructs, EDM methods can support an increase in methodological rigor and replicability; 4. EDM can be used to present comprehensible and actionable data to learners and teachers in situ. In order to investigate those axes, we start by describing our perspective on compatibilities and incompatibilities between constructionism and EDM. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn that complex content in connected, meaningful ways. Constructionist projects often emphasize making those artifacts (and often data) public, socially relevant, and personally meaningful to learners, and encourage working in social spaces such that learners engage each other to accelerate the learning process. diSessa and Cobb (2004) argue that constructionism serves a framework for action, as it describes its own praxis (i.e., how it matches theory to practice). The learning theory supporting constructionism is classically constructivist, combining concepts from Piaget and Vygotsky (Fosnot, 2005). As constructionism matures as a constructivist framework for action and expands in scale, constructionist projects are becoming both more complex (Reynolds & Caperton, 2011), more scalable (Resnick, Maloney, et al., 2009), and more affordable for schools following significant development in low cost “construction” technologies such as robotics and 3D printers. As such, there have been increasing opportunities to learn more about how students learn in constructionist contexts, advancing the science of learning. These discoveries will have the potential to improve the quality of all constructivist learning experiences. For example, Wilensky and Reisman (2006) have shown how constructionist modeling and simulation can make science learning more accessible, Resnick (1998) has shown how constructionism can reframe programming as art at scale, Buechley & Eisenberg (2008) have used e-textiles to engage female students in robotics, Eisenberg (2011) and Blikstein (2013, 2014) use constructionist digital fabrication to successfully teach programming, engineering, and electronics in a novel, integrated way. The findings of these research and design projects have the potential to be useful to a wide external community of teachers, researchers, practitioners, and other stakeholders. However, connecting findings from the constructionist tradition to the goals of policymakers can be challenging, due to the historical differences in methodology and values between these communities. The resources needed to study such interventions at scale are considerable, given the need to carefully document, code, and analyze each student’s work processes and artifacts. The designs of constructionist research often result in findings that do not map to what researchers, outside interests, and policymakers are expecting, in contrast to conventional controlled studies, which are designed to (more conclusively) answer a limited set of sharply targeted research questions. 
Due the lack of a common ground to discuss benefits and scalability of constructionist and project-based designs, these designs have been too frequently sidelined to niche institutions such as private schools, museums, or atypical public schools. To understand what the role EDM methods can play in constructionist research, we must frame what we mean by constructionist research more precisely. We follow Papert and Harel (1991) in their situating of constructionism, but they do not constrain the term to one formal definition. The definition is further complicated by the fact that constructionism has many overlaps with other research and design traditions, such as constructivism and socio-constructivism themselves, as well as project-based pedagogies and inquiry-based designs. However, we believe that it is possible to define the subset of constructionism amenable to EDM, a focus we adopt in this article for brevity. In this paper, we focus on the constructionist literature dealing with students learning to construct understandings by constructing (physical or virtual) artifacts, where the students' learning environments are designed and constrained such that building artifacts in/with that environment is designed to help students construct their own understandings. In other words, we are focusing on creative work done in computational environments designed to foster creative and transformational learning, such as NetLogo (Wilensky, 1999), Scratch (Resnick, Maloney, et al., 2009), or LEGO Mindstorms. This sub-category of constructionism can and does generate considerable formative and summative data. It also has the benefit of having a history of success in the classroom. From Papert’s seminal (1972) work through today, constructionist learning has been shown to promote the development of deep understanding of relatively complex content, with many examples ranging from mathematics (Harel, 1990; Wilensky, 1996) to history (Zahn, Krauskopf, Hesse, & Pea, 2010). However, constructionist learning environments, ideas, and findings have yet to reach the majority of classrooms and have had incomplete influence in the broader education research community. There are several potential reasons for this. One of them may be a lack of demonstration that findings are generalizable across populations and across specific content. Another reason is that constructionist activities are seen to be timeconsuming for teachers (Warschauer & Matuchniak, 2010), though, in practice, it has been shown that supporting understanding through project-based work could actually save time (Fosnot, 2005) and enable classroom dynamics that may streamline class preparation (e.g., peer teaching or peer feedback). A last reason is that constructionists almost universally value more deep understanding of scientific principles than facts or procedural skills even in contexts (e.g., many classrooms) in which memorization of facts and procedural skills is the target to be evaluated (Abelson & diSessa, 1986; Papert & Harel, 1991). Therefore, much of what is learned in constructionist environments does not directly translate to test scores or other established metrics. Constructionist research can be useful and convincing to audiences that do not yet take full advantage of the scientific findings of this community, but it requires careful consideration of framing and evidence to reach them. 
Educational data mining methods pose the potential to both enhance constructionist research, and to support constructionist researchers in communicating their findings in a fashion that other researchers consider valid. Blikstein (2011, p. 110) made ", "title": "" }, { "docid": "0ec0b6797069ee5bd737ea787cba43ef", "text": "Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented. MULLER, Henning, et al. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. Genève : 1999", "title": "" }, { "docid": "61d80b5b0c6c2b3feb1ce667babd2236", "text": "In a recent article published in this journal, Lombard, Snyder-Duch, and Bracken (2002) surveyed 200 content analyses for their reporting of reliability tests; compared the virtues and drawbacks of five popular reliability measures; and proposed guidelines and standards for their use. Their discussion revealed that numerous misconceptions circulate in the content analysis literature regarding how these measures behave and can aid or deceive content analysts in their effort to ensure the reliability of their data. This paper proposes three conditions for statistical measures to serve as indices of the reliability of data and examines the mathematical structure and the behavior of the five coefficients discussed by the authors, plus two others. It compares common beliefs about these coefficients with what they actually do and concludes with alternative recommendations for testing reliability in content analysis and similar data-making efforts. In a recent paper published in a special issue of Human Communication Research devoted to methodological topics (Vol. 28, No. 4), Lombard, Snyder-Duch, and Bracken (2002) presented their findings of how reliability was treated in 200 content analyses indexed in Communication Abstracts between 1994 and 1998. In essence, their results showed that only 69% of the articles report reliabilities. This amounts to no significant improvements in reliability concerns over earlier studies (e.g., Pasadeos et al., 1995; Riffe & Freitag, 1996). Lombard et al. attribute the failure of consistent reporting of reliability of content analysis data to a lack of available guidelines, and they end up proposing such guidelines. Having come to their conclusions by content analytic means, Lombard et al. also report their own reliabilities, using not one, but four, indices for comparison: %-agreement; Scott‟s (1955)  (pi); Cohen‟s (1960)  (kappa); and Krippendorff‟s (1970, 2004)  (alpha). Faulty software 1 initially led the authors to miscalculations, now corrected (Lombard et al., 2003). 
However, in their original article, the authors cite several common beliefs about these coefficients and make recommendations that I contend can seriously mislead content analysis researchers, thus prompting my corrective response. To put the discussion of the purpose of these indices into a larger perspective, I will have to go beyond the arguments presented in their article. Readers who might find the technical details tedious are invited to go to the conclusion, which is in the form of four recommendations. The Conservative/Liberal Continuum Lombard et al. report “general agreement (in the literature) that indices which do not account for chance agreement (%-agreement and Holsti's [1969] CR – actually Osgood's [1959, p.44] index) are too liberal while those that do (π, κ, and α) are too conservative” (2002, p. 593). For liberal or “more lenient” coefficients, the authors recommend adopting higher critical values for accepting data as reliable than for conservative or “more stringent” ones (p. 600) – as if differences between these coefficients were merely a problem of locating them on a shared scale. Discussing reliability coefficients in terms of a conservative/liberal continuum is not widespread in the technical literature. It entered the writing on content analysis not so long ago. Neuendorf (2002) used this terminology, but only in passing. Before that, Potter and Lewine-Donnerstein (1999, p. 287) cited Perreault and Leigh's (1989, p. 138) assessment of κ as being “overly conservative” and “difficult to compare (with) ... Cronbach's (1951) alpha,” for example – as if the comparison with a correlation coefficient mattered. I contend that trying to understand diverse agreement coefficients by their numerical results alone, conceptually placing them on a conservative/liberal continuum, is seriously misleading. Statistical coefficients are mathematical functions. They apply to a collection of data (records, values, or numbers) and result in one numerical index intended to inform its users about something – here about whether they can rely on their data. Differences among coefficients are due to responding to (a) different patterns in data and/or (b) the same patterns but in different ways. How these functions respond to which patterns of agreement and how their numerical results relate to the risk of drawing false conclusions from unreliable data – not just the numbers they produce – must be understood before selecting one coefficient over another. Issues of Scale Let me start with the ranges of the two broad classes of agreement coefficients, chance-corrected agreement and raw or %-agreement. While both kinds equal 1.000 or 100% when agreement is perfect, and data are considered reliable, %-agreement is zero when absolutely no agreement is observed; when one coder's categories unfailingly differ from the categories used by the other; or disagreement is systematic and extreme. Extreme disagreement is statistically almost as unexpected as perfect agreement. It should not occur, however, when coders apply the same coding instruction to the same set of units of analysis and work independently of each other, as is required when generating data for testing reliability. 
Where the reliability of data is an issue, the worst situation is not when one coder looks over the shoulder of another coder and selects a non-matching category, but when coders do not understand what they are asked to interpret, categorize by throwing dice, or examine unlike units of analysis, causing research results that are indistinguishable from chance events. While zero %-agreement has no meaningful reliability interpretation, chance-corrected agreement coefficients, by contrast, become zero when coders' behavior bears no relation to the phenomena to be coded, leaving researchers clueless as to what their data mean. Thus, the scales of chance-corrected agreement coefficients are anchored at two points of meaningful reliability interpretations, zero and one, whereas %-like agreement indices are anchored in only one, 100%, which renders all deviations from 100% uninterpretable, as far as data reliability is concerned. %-agreement has other undesirable properties; for example, it is limited to nominal data; can compare only two coders; and high %-agreement becomes progressively unlikely as more categories are available. I am suggesting that the convenience of calculating %-agreement, which is often cited as its advantage, cannot compensate for its meaninglessness. Let me hasten to add that chance-correction is not a panacea either. Chance-corrected agreement coefficients do not form a uniform class. Benini (1901), Bennett, Alpert, and Goldstein (1954), Cohen (1960), Goodman and Kruskal (1954), Krippendorff (1970, 2004), and Scott (1955) build different corrections into their coefficients, thus measuring reliability on slightly different scales. Chance can mean different things. Discussing these coefficients in terms of being conservative (yielding lower values than expected) or liberal (yielding higher values than expected) glosses over their crucial mathematical differences and privileges an intuitive sense of the kind of magnitudes that are somehow considered acceptable. If it were the issue of striking a balance between conservative and liberal coefficients, it would be easy to follow statistical practices and modify larger coefficients by squaring them and smaller coefficients by applying the square root to them. However, neither transformation would alter what these mathematical functions actually measure; only the sizes of the intervals between 0 and 1. Lombard et al., by contrast, attempt to resolve their dilemma by recommending that content analysts use several reliability measures. In their own report, they use α, “an index ... known to be conservative,” but when α measures below .700, they revert to %-agreement, “a liberal index,” and accept data as reliable as long as the latter is above .900 (2002, p. 596). They give no empirical justification for their choice. I shall illustrate below the kind of data that would pass their criterion. Relation Between Agreement and Reliability To be clear, agreement is what we measure; reliability is what we wish to infer from it. In content analysis, reproducibility is arguably the most important interpretation of reliability (Krippendorff, 2004, p.215). I am suggesting that an agreement coefficient can become an index of reliability only when (1) It is applied to proper reliability data. Such data result from duplicating the process of describing, categorizing, or measuring a sample of data obtained from the population of data whose reliability is in question. 
Typically, but not exclusively, duplications are achieved by employing two or more widely available coders or observers who, working independently of each other, apply the same coding instructions or recording devices to the same set of units of analysis. (2) It treats units of analysis as separately describable or categorizable, without, however, presuming any knowledge about the correctness of their descriptions or categories. What matters, therefore, is not truths, correlations, subjectivity, or the predictability of one particular coder's use of categories from that by another coder, but agreements or disagreements among multiple descriptions generated by a coding procedure, regardless of who enacts that procedure. Reproducibility is about data making, not about coders. A coefficient for assessing the reliability of data must treat coders as interchangeable and count observable coder idiosyncrasies as disagreement. (3) Its values correlate with the conditions under which one is willing to rely on imperfect data. The correlation between a measure of agreement and the rely-ability on data involves two kinds of inferences. Estimating the (dis)agreement in a population of data from the (dis)agreements observed and meas", "title": "" }, { "docid": "0209627cd57745dc5c06dc5ff9723352", "text": "Cloud computing provides on-demand services over the Internet with the help of a large amount of virtual storage. The main feature of cloud computing is that the user does not have to set up any expensive computing infrastructure and the cost of its services is low. In recent years, cloud computing has been integrating with industry and many other areas, which has encouraged researchers to investigate new related technologies. Due to the availability of its services and its scalability for computing processes, individual users and organizations transfer their applications, data and services to the cloud storage server. Regardless of its advantages, the transformation from local computing to remote computing has brought many security issues and challenges for both consumer and provider. Many cloud services are provided by trusted third parties, which raises new security threats. The cloud provider delivers its services through the Internet and uses many web technologies that raise new security issues. This paper discusses the basic features of cloud computing, security issues, threats and their solutions. Additionally, the paper describes several key topics related to the cloud, namely the cloud architecture framework, service and deployment models, cloud technologies, cloud security concepts, threats, and attacks. The paper also discusses many open research issues related to cloud security. Keywords—Cloud Computing, Cloud Framework, Cloud Security, Cloud Security Challenges, Cloud Security Issues", "title": "" } ]
scidocsrr
6cc992b6f461c985d9e300b2064fc3a1
Data-Driven Color Augmentation Techniques for Deep Skin Image Analysis
[ { "docid": "5a0490da6af72e60fea43433320a7505", "text": "Synthesizing images of the eye fundus is a challenging task that has been previously approached by formulating complex models of the anatomy of the eye. New images can then be generated by sampling a suitable parameter space. In this work, we propose a method that learns to synthesize eye fundus images directly from data. For that, we pair true eye fundus images with their respective vessel trees, by means of a vessel segmentation technique. These pairs are then used to learn a mapping from a binary vessel tree to a new retinal image. For this purpose, we use a recent image-to-image translation technique, based on the idea of adversarial learning. Experimental results show that the original and the generated images are visually different in terms of their global appearance, in spite of sharing the same vessel tree. Additionally, a quantitative quality analysis of the synthetic retinal images confirms that the produced images retain a high proportion of the true image set quality.", "title": "" }, { "docid": "5824a316f20751183676850c119c96cd", "text": " Proposed method – Max-RGB & Gray-World • Instantiations of Minkowski norm – Optimal illuminant estimate • L6 norm: Working best overall", "title": "" }, { "docid": "895f53c40a115740f840992656b60794", "text": "Melanoma is the deadliest form of skin cancer. While curable with early detection, only highly trained specialists are capable of accurately recognizing the disease. As expertise is in limited supply, automated systems capable of identifying disease could save lives, reduce unnecessary biopsies, and reduce costs. Toward this goal, we propose a system that combines recent developments in deep learning with established machine learning approaches, creating ensembles of methods that are capable of segmenting skin lesions, as well as analyzing the detected area and surrounding tissue for melanoma detection. The system is evaluated using the largest publicly available benchmark dataset of dermoscopic images, containing 900 training and 379 testing images. New state-of-the-art performance levels are demonstrated, leading to an improvement in the area under receiver operating characteristic curve of 7.5% (0.843 vs. 0.783), in average precision of 4% (0.649 vs. 0.624), and in specificity measured at the clinically relevant 95% sensitivity operating point 2.9 times higher than the previous state-of-the-art (36.8% specificity compared to 12.5%). Compared to the average of 8 expert dermatologists on a subset of 100 test images, the proposed system produces a higher accuracy (76% vs. 70.5%), and specificity (62% vs. 59%) evaluated at an equivalent sensitivity (82%).", "title": "" } ]
[ { "docid": "f29d0ea5ff5c96dadc440f4d4aa229c6", "text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.", "title": "" }, { "docid": "33e6ad4697c28a613d9e852e70b56877", "text": "The idea that cognitive activity can be understood using nonlinear dynamics has been intensively discussed at length for the last 15 years. One of the popular points of view is that metastable states play a key role in the execution of cognitive functions. Experimental and modeling studies suggest that most of these functions are the result of transient activity of large-scale brain networks in the presence of noise. Such transients may consist of a sequential switching between different metastable cognitive states. The main problem faced when using dynamical theory to describe transient cognitive processes is the fundamental contradiction between reproducibility and flexibility of transient behavior. In this paper, we propose a theoretical description of transient cognitive dynamics based on the interaction of functionally dependent metastable cognitive states. The mathematical image of such transient activity is a stable heteroclinic channel, i.e., a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and unstable separatrices that connect their surroundings. We suggest a basic mathematical model, a strongly dissipative dynamical system, and formulate the conditions for the robustness and reproducibility of cognitive transients that satisfy the competing requirements for stability and flexibility. Based on this approach, we describe here an effective solution for the problem of sequential decision making, represented as a fixed time game: a player takes sequential actions in a changing noisy environment so as to maximize a cumulative reward. As we predict and verify in computer simulations, noise plays an important role in optimizing the gain.", "title": "" }, { "docid": "b3c1372c6d315ee187b3600e4254afd5", "text": "Successful countries provide economy and society with infrastructure needed to maintain growth. Development experience suggests that investing 7 percent of GDP in infrastructure is the right order of magnitude for high and sustained growth. Over the last twelve years, the government of Vietnam was able to sustain infrastructure investment at 10 percent of GDP. This remarkably high level of investment has resulted in a rapid expansion of infrastructure stocks and improved access. Despite this achievement, Vietnam is experiencing more and more infrastructure weaknesses that negatively affect its ability to sustain high economic growth in the long term. 
Transport and electricity – the two most essential infrastructure activities – appear to be the weakest infrastructure sectors in Vietnam with blackouts and traffic jams occurring more and more frequently. In transport, many large-scale railway, seaport and airport projects are being planned in near total disregard of the emergence of fast growing industrial clusters. These wrongheaded projects will need to be terminated in order to make funds available for a few crucial projects in the most rapidly growing regions that currently face severe transport bottlenecks. The private sector participation in transport development will help identify and execute the most viable projects. But its potential will only be realized if the returns to private investors come from the projects’ own cash flow, rather than from government subsidies in the form of land. In electricity, the investment pattern of over-reliance on hydro needs to be changed. If hydro continues to be the single largest production source, then extensive idle time will be inevitable for thermal stations, since the wet/dry season power output ratio is so uneven. Vietnam must determine the appropriate mix of hydro and thermal generating capacity that can reliably supply the country’s demand. Electricity prices have to be raised to levels that enable EVN or a single buyer in the future to contract for new generating capacity through competitive bidding. The roadmap for liberalization in the energy sector contemplated in the 2004 Electricity Law needs to be implemented if Viet Nam is to successfully attract the volume of investment and promote the levels of competition and private sector participation required to meet Viet Nam’s long term energy and, hence, developmental needs.", "title": "" }, { "docid": "07d9956101af44fd8bcf2e133d2624ae", "text": "This paper studies a specific low-power wireless technology capable of reaching a long range, namely long range (LoRa). Such a technology can be used by different applications in cities involving many transmitting devices while requiring loose communication constrains. We focus on electricity grids, where LoRa end-devices are smart meters that send the average power demanded by their respective households during a given period. The successfully decoded data by the LoRa gateway are used by an aggregator to reconstruct the daily households’ profiles. We show how the interference from concurrent transmissions from both LoRa and non-LoRa devices negatively affect the communication outage probability and the link effective bit-rate. Besides, we use actual electricity consumption data to compare time-based and event-based sampling strategies, showing the advantages of the latter. We then employ this analysis to assess the gateway range that achieves an average outage probability that leads to a signal reconstruction with a given requirement. We also discuss that, although the proposed analysis focuses on electricity metering, it can be easily extended to any other smart city application with similar requirements, such as water metering or traffic monitoring.", "title": "" }, { "docid": "616ac87318a75c430149e254f4a0b931", "text": "Research on large shared medical datasets and data-driven research are gaining fast momentum and provide major opportunities for improving health systems as well as individual care. 
Such open data can shed light on the causes of disease and the effects of treatment, including adverse reactions/side-effects of treatments, while also facilitating analyses tailored to an individual's characteristics, known as personalized or \"stratified medicine.\" Developments, such as crowdsourcing, participatory surveillance, and individuals pledging to become \"data donors\" and the \"quantified self\" movement (where citizens share data through mobile device-connected technologies), have great potential to contribute to our knowledge of disease, improving diagnostics, and delivery of healthcare and treatment. There is not only great potential but also major concerns over privacy, confidentiality, and control of data about individuals once they are shared. Issues, such as user trust, data privacy, transparency over the control of data ownership, and the implications of data analytics for personal privacy with potentially intrusive inferences, are becoming increasingly scrutinized at national and international levels. This can be seen in the recent backlash over the proposed implementation of care.data, which enables individuals' NHS data to be linked, retained, and shared for other uses, such as research and, more controversially, with businesses for commercial exploitation. By way of contrast, through the increasing popularity of social media, GPS-enabled mobile apps and tracking/wearable devices, the IT industry and MedTech giants are pursuing new projects without clear public and policy discussion about ownership and responsibility for user-generated data. In the absence of transparent regulation, this paper addresses the opportunities of Big Data in healthcare together with issues of responsibility and accountability. It also aims to pave the way for public policy to support a balanced agenda that safeguards personal information while enabling the use of data to improve public health.", "title": "" }, { "docid": "356dbb5e8e576cfa49153962a6e3be93", "text": "Knowing how many people occupy a building, and where they are located, is a key component of smart building services. Commercial, industrial and residential buildings often incorporate systems used to determine occupancy. However, relatively simple sensor technology and control algorithms limit the effectiveness of smart building services. In this paper we propose to replace sensor technology with time series models that can predict the number of occupants at a given location and time. We use Wi-Fi datasets readily available in abundance for smart building services and train Autoregressive Integrated Moving Average (ARIMA) models and Long Short-Term Memory (LSTM) time series models. As a use case scenario of smart building services, these models allow forecasting of the number of people at a given time and location in 15, 30 and 60 minute time intervals at the building as well as Access Point (AP) level. For LSTM, we build our models in two ways: a separate model for every time scale, and a combined model for the three time scales. Our experiments show that the LSTM combined model reduced the computational resources with respect to the number of neurons by 74.48 % for the AP level, and by 67.13 % for the building level. 
Further, the root mean square error (RMSE) was reduced by 88.2%–93.4% for LSTM in comparison to ARIMA for the building levels models and by 80.9 %–87% for the AP level models.", "title": "" }, { "docid": "af6b26efef62f3017a0eccc5d2ae3c33", "text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.", "title": "" }, { "docid": "116d0735ded06ba1dc9814f21236b7b1", "text": "In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.", "title": "" }, { "docid": "967ebcd284a6a4dc58adf11eec0b10f0", "text": "An innovative LDS 4G antenna solution operating in the 698-960 MHz band is presented. It is composed of two radiating elements recombined in a broadband single feed antenna system using a multiband matching circuit design. Matching interfaces are synthesized thanks to lumped components placed on the FR4 PCB supporting the LDS antenna. Measurement shows a reflection coefficient better than -6 dB over the 698-960 MHz band, with a 30% peak total efficiency. Measurement using a realistic phone casing showed the same performances. 
The proposed approach can be extended to additional bands, offering an innovative antenna solution able to address the multi band challenge related to 4G applications.", "title": "" }, { "docid": "8a20ea85c44f66c0f63ee25f1abd0630", "text": "In this study, a human tissue-implantable compact folded dipole antenna of 19.6 * 2 * 0.254 mm3 operating in the Medical Implant Communication Service (MICS) frequency band (402-405 MHz) is presented. The antenna design and analysis is carried out inside a homogeneous flat phantom with electrical properties equivalent to those of 2/3 human muscle tissue. The dipole antenna, printed on a high-dielectric substrate layer, exhibits a frequency resonance at 402 MHz with a wide 10-dB impedance bandwidth of 105 MHz. The proposed antenna radiates an omnidirectional far-field radiation pattern with a maximum realized gain of -31.2 dB. In addition, the Specific Absorption Rate (SAR) assessment indicates the maximum input power deliverable to the antenna in order to meet the required safety regulations.", "title": "" }, { "docid": "629b774e179a446ac2cbaef683daef25", "text": "Flux-switching permanent magnet (FSPM) motors have a doubly salient structure, the magnets being housed in the stator and the stator winding comprising concentrated coils. They have attracted considerable interest due to their essentially sinusoidal phase back electromotive force (EMF) waveform. However, to date, the inherent nature of this desirable feature has not been investigated in detail. Thus, a typical three-phase FSPM motor with 12 stator teeth and ten rotor poles is considered. It is found that, since there is a significant difference in the magnetic flux paths associated with the coils of each phase, this results in harmonics in the coil back EMF waveforms being cancelled, resulting in essentially sinusoidal phase back EMF waveforms. In addition, the influence of the rotor pole-arc on the phase back EMF waveform is evaluated by finite-element analysis, and an optimal pole-arc for minimum harmonic content in the back EMF is obtained and verified experimentally.", "title": "" }, { "docid": "21c3f6d61eeeb4df1bdb500f388f71f3", "text": "Status of This Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract The Extensible Authentication Protocol (EAP), defined in RFC 3748, enables extensible network access authentication. This document specifies the EAP key hierarchy and provides a framework for the transport and usage of keying material and parameters generated by EAP authentication algorithms, known as \"methods\". It also provides a detailed system-level security analysis, describing the conditions under which the key management guidelines described in RFC 4962 can be satisfied.", "title": "" }, { "docid": "70fd930a2a6504404bec67779cba71b2", "text": "This article discusses the logical implementation of the media access control and the physical layer of 100 Gb/s Ethernet. The target are a MAC/PCS LSI, supporting MAC and physical coding sublayer, and a gearbox LSI, providing 10:4 parallel lane-width exchange inside an optical module. The two LSIs are connected by a 100 gigabit attachment unit interface, which consists of ten 10 Gb/s lines. 
We realized a MAC/PCS logical circuit with a low-frequency clock on a FPGA, whose size is 250 kilo LUTs with a 5.7 Mbit RAM, and the power consumption of the gearbox LSI estimated to become 2.3 W.", "title": "" }, { "docid": "34a21bf5241d8cc3a7a83e78f8e37c96", "text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from a possible light emitting during the programming period and potentially suppresses the degradation of the OLED.", "title": "" }, { "docid": "d67ee0219625f02ff7023e4d0d39e8d8", "text": "In information retrieval, pseudo-relevance feedback (PRF) refers to a strategy for updating the query model using the top retrieved documents. PRF has been proven to be highly effective in improving the retrieval performance. In this paper, we look at the PRF task as a recommendation problem: the goal is to recommend a number of terms for a given query along with weights, such that the final weights of terms in the updated query model better reflect the terms' contributions in the query. To do so, we propose RFMF, a PRF framework based on matrix factorization which is a state-of-the-art technique in collaborative recommender systems. Our purpose is to predict the weight of terms that have not appeared in the query and matrix factorization techniques are used to predict these weights. In RFMF, we first create a matrix whose elements are computed using a weight function that shows how much a term discriminates the query or the top retrieved documents from the collection. Then, we re-estimate the created matrix using a matrix factorization technique. Finally, the query model is updated using the re-estimated matrix. RFMF is a general framework that can be employed with any retrieval model. In this paper, we implement this framework for two widely used document retrieval frameworks: language modeling and the vector space model. Extensive experiments over several TREC collections demonstrate that the RFMF framework significantly outperforms competitive baselines. These results indicate the potential of using other recommendation techniques in this task.", "title": "" }, { "docid": "9c3218ce94172fd534e2a70224ee564f", "text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. 
In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.", "title": "" }, { "docid": "a550969fc708fa6d7898ea29c0cedef8", "text": "This paper describes the findings of a research project whose main objective is to compile a character frequency list based on a very large collection of Chinese texts collected from various online sources. As compared with several previous studies on Chinese character frequencies, this project uses a much larger corpus that not only covers more subject fields but also contains a better proportion of informative versus imaginative Modern Chinese texts. In addition, this project also computes two bigram frequency lists that can be used for compiling a list of most frequently used two-character words in Chinese.", "title": "" }, { "docid": "5d3977c0a7e3e1a4129693342c6be3d3", "text": "With the fast advances in nextgen sequencing technology, high-throughput RNA sequencing has emerged as a powerful and cost-effective way for transcriptome study. De novo assembly of transcripts provides an important solution to transcriptome analysis for organisms with no reference genome. However, there lacked understanding on how the different variables affected assembly outcomes, and there was no consensus on how to approach an optimal solution by selecting software tool and suitable strategy based on the properties of RNA-Seq data. To reveal the performance of different programs for transcriptome assembly, this work analyzed some important factors, including k-mer values, genome complexity, coverage depth, directional reads, etc. Seven program conditions, four single k-mer assemblers (SK: SOAPdenovo, ABySS, Oases and Trinity) and three multiple k-mer methods (MK: SOAPdenovo-MK, trans-ABySS and Oases-MK) were tested. While small and large k-mer values performed better for reconstructing lowly and highly expressed transcripts, respectively, MK strategy worked well for almost all ranges of expression quintiles. Among SK tools, Trinity performed well across various conditions but took the longest running time. Oases consumed the most memory whereas SOAPdenovo required the shortest runtime but worked poorly to reconstruct full-length CDS. ABySS showed some good balance between resource usage and quality of assemblies. Our work compared the performance of publicly available transcriptome assemblers, and analyzed important factors affecting de novo assembly. Some practical guidelines for transcript reconstruction from short-read RNA-Seq data were proposed. De novo assembly of C. sinensis transcriptome was greatly improved using some optimized methods.", "title": "" }, { "docid": "c75ee3e700806bcb098f6e1c05fdecfc", "text": "This study examines patterns of cellular phone adoption and usage in an urban setting. 
One hundred and seventy-six cellular telephone users were surveyed about their patterns of usage, demographic and socioeconomic characteristics, perceptions about the technology, and their motivations to use cellular services. The results of this study confirm that users' perceptions are significantly associated with their motivation to use cellular phones. Specifically, perceived ease of use was found to have significant effects on users' extrinsic and intrinsic motivations; apprehensiveness about cellular technology had a negative effect on intrinsic motivations. Implications of these findings for practice and research are examined.", "title": "" }, { "docid": "13685fa8e74d57d05d5bce5b1d3d4c93", "text": "Children left behind by parents who are overseas Filipino workers (OFW) benefit from parental migration because their financial status improves. However, OFW families might emphasize the economic benefits to compensate for their separation, which might lead to materialism among children left behind. Previous research indicates that materialism is associated with lower well-being. The theory is that materialism focuses attention on comparing one's possessions to others, making one constantly dissatisfied and wanting more. Research also suggests that gratitude mediates this link, with the focus on acquiring more possessions that make one less grateful for current possessions. This study explores the links between materialism, gratitude, and well-being among 129 adolescent children of OFWs. The participants completed measures of materialism, gratitude, and well-being (life satisfaction, self-esteem, positive and negative affect). Results showed that gratitude mediated the negative relationship between materialism and well-being (and its positive relationship with negative affect). Children of OFWs who have a strong materialist orientation seek well-being from possessions they do not have and might find it difficult to be grateful for their situation, contributing to lower well-being. The findings provide further evidence for the mediated relationship between materialism and well-being in a population that has not been previously studied in the related literature. The findings also point to two possible targets for psychosocial interventions for families and children of OFWs.", "title": "" } ]
scidocsrr
d95a8434aaceca67d2c0b55e8278a21b
A Longitudinal Study of Trust and Perceived Usefulness in Consumer Acceptance of an eService: The Case of Online Health Services
[ { "docid": "22d9ae82a09a212eb5dcd48ad77cc7a9", "text": "The purpose of this study is to propose an extended model of Theory of Planned Behavior (TPB) by incorporating constructs drawn from the model of Expectation Disconfirmation Theory (EDT) and to examine the antecedents of users’ intention to continue using online shopping (continuance intention). Prior research has demonstrated that TPB constructs, including attitude, subjective norm, and perceived behavioral control, are important factors in determining the acceptance and use of various information technologies. These factors, however, are insufficient to explain a user’s continuance intention in the online shopping context. In this study we extended TPB with two EDT constructs—disconfirmation and satisfaction—for studying users’ continuance intention in the online shopping context. By employing longitudinal method with two-stage survey, we empirically validated the proposed model and research hypotheses. r 2006 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "fc08c706567d10686fc2208ae329f969", "text": "Object technology is believed to be crucial in achieving the long sought-after goal of widespread reuse. This goal is the most frequently stated reason for adopting OT. Unfortunately, many people naively equate reuse with objects, expecting it to “automatically” ensure reuse, but often do not get much reuse. Based on my experience with reuse at HP, Objectory and Rational, and with many customers, I know that without extensive changes to support component-based development and systematic reuse, OT as used today will not succeed in giving users reuse. Without an explicit reuse agenda, and a systematic approach to the design and use of reusable components and frameworks, 00 reuse will not succeed. In almost all cases of successful reuse, architecture, a dedicated component development and support group, management support, and a stable domain were the keys to success. These largely non-technical issues seem to be more important to successful reuse than the specific language or design chosen.", "title": "" }, { "docid": "98ab9279efd8aeee6bb58fe84f5142f3", "text": "BACKGROUND\nBreast hypertrophy presents at puberty or thereafter. It is a condition of abnormal enlargement of the breast tissue in excess of the normal proportion. Gland hypertrophy, excessive fatty tissue or a combination of both may cause this condition. Macromastia can be unilateral or bilateral.\n\n\nOBJECTIVE\nTo present a case of massive bilateral gigantomastia with huge bilateral hypertrophy of the axillary breasts.\n\n\nMETHODS\nReview of the prentation, clinical and investigative findings aswell as the outcome of surgical intervention of a young Nigerian woman with bilateral severe breast hypertrophy and severe hypertrophy of axillary breasts.\n\n\nRESULT\nThe patient was a 26-year-old woman who presented with massive swelling of her breasts and bilateral axillary swellings, both of six years duration.. In addition to the breast pathology, she also suffered significant psychological problems. The breast ultrasonography confirmed only diffuse swellings, with no visible lumps or areas of calcifiCation. She had total bilateral excision of the hypertrophied axillary breasts, and bilateral breast amputation with composite nipple-areola complex graft of the normally located breasts.The total weight of the breast tissues removed was 44.8 kilogram.\n\n\nCONCLUSION\nMacromastia of this size is very rare. This case to date is probably the largest in the world literature. Surgical treatment of the condition gives a satisfactory outcome.", "title": "" }, { "docid": "fd22f81af03d9dbcd746ebdfed5277c6", "text": "Numerous NLP applications rely on search-engine queries, both to extract information from and to compute statistics over the Web corpus. But search engines often limit the number of available queries. As a result, query-intensive NLP applications such as Information Extraction (IE) distribute their query load over several days, making IE a slow, offline process. This paper introduces a novel architecture for IE that obviates queries to commercial search engines. The architecture is embodied in a system called KNOWITNOW that performs high-precision IE in minutes instead of days. We compare KNOWITNOW experimentally with the previouslypublished KNOWITALL system, and quantify the tradeoff between recall and speed. KNOWITNOW’s extraction rate is two to three orders of magnitude higher than KNOWITALL’s. 
1 Background and Motivation Numerous modern NLP applications use the Web as their corpus and rely on queries to commercial search engines to support their computation (Turney, 2001; Etzioni et al., 2005; Brill et al., 2001). Search engines are extremely helpful for several linguistic tasks, such as computing usage statistics or finding a subset of web documents to analyze in depth; however, these engines were not designed as building blocks for NLP applications. As a result, the applications are forced to issue literally millions of queries to search engines, which limits the speed, scope, and scalability of the applications. Further, the applications must often then fetch some web documents, which at scale can be very time-consuming. In response to heavy programmatic search engine use, Google has created the “Google API” to shunt programmatic queries away from Google.com and has placed hard quotas on the number of daily queries a program can issue to the API. Other search engines have also introduced mechanisms to limit programmatic queries, forcing applications to introduce “courtesy waits” between queries and to limit the number of queries they issue. To understand these efficiency problems in more detail, consider the KNOWITALL information extraction system (Etzioni et al., 2005). KNOWITALL has a generateand-test architecture that extracts information in two stages. First, KNOWITALL utilizes a small set of domainindependent extraction patterns to generate candidate facts (cf. (Hearst, 1992)). For example, the generic pattern “NP1 such as NPList2” indicates that the head of each simple noun phrase (NP) in NPList2 is a member of the class named in NP1. By instantiating the pattern for class City, KNOWITALL extracts three candidate cities from the sentence: “We provide tours to cities such as Paris, London, and Berlin.” Note that it must also fetch each document that contains a potential candidate. Next, extending the PMI-IR algorithm (Turney, 2001), KNOWITALL automatically tests the plausibility of the candidate facts it extracts using pointwise mutual information (PMI) statistics computed from search-engine hit counts. For example, to assess the likelihood that “Yakima” is a city, KNOWITALL will compute the PMI between Yakima and a set of k discriminator phrases that tend to have high mutual information with city names (e.g., the simple phrase “city”). Thus, KNOWITALL requires at least k search-engine queries for every candidate extraction it assesses. Due to KNOWITALL’s dependence on search-engine queries, large-scale experiments utilizing KNOWITALL take days and even weeks to complete, which makes research using KNOWITALL slow and cumbersome. Private access to Google-scale infrastructure would provide sufficient access to search queries, but at prohibitive cost, and the problem of fetching documents (even if from a cached copy) would remain (as we discuss in Section 2.1). Is there a feasible alternative Web-based IE system? If so, what size Web index and how many machines are required to achieve reasonable levels of precision/recall? What would the architecture of this IE system look like, and how fast would it run? To address these questions, this paper introduces a novel architecture for web information extraction. It consists of two components that supplant the generateand-test mechanisms in KNOWITALL. 
To generate extractions rapidly we utilize our own specialized search engine, called the Bindings Engine (or BE), which efficiently returns bindings in response to variabilized queries. For example, in response to the query “Cities such as ProperNoun(Head(〈NounPhrase〉))”, BE will return a list of proper nouns likely to be city names. To assess these extractions, we use URNS, a combinatorial model, which estimates the probability that each extraction is correct without using any additional search engine queries.1 For further efficiency, we introduce an approximation to URNS, based on frequency of extractions’ occurrence in the output of BE, and show that it achieves comparable precision/recall to URNS. Our contributions are as follows: 1. We present a novel architecture for Information Extraction (IE), embodied in the KNOWITNOW system, which does not depend on Web search-engine queries. 2. We demonstrate experimentally that KNOWITNOW is the first system able to extract tens of thousands of facts from the Web in minutes instead of days. 3. We show that KNOWITNOW’s extraction rate is two to three orders of magnitude greater than KNOWITALL’s, but this increased efficiency comes at the cost of reduced recall. We quantify this tradeoff for KNOWITNOW’s 60,000,000 page index and extrapolate how the tradeoff would change with larger indices. Our recent work has described the BE search engine in detail (Cafarella and Etzioni, 2005), and also analyzed the URNS model’s ability to compute accurate probability estimates for extractions (Downey et al., 2005). However, this is the first paper to investigate the composition of these components to create a fast IE system, and to compare it experimentally to KNOWITALL in terms of time, In contrast, PMI-IR, which is built into KNOWITALL, requires multiple search engine queries to assess each potential extraction. recall, precision, and extraction rate. The frequencybased approximation to URNS and the demonstration of its success are also new. The remainder of the paper is organized as follows. Section 2 provides an overview of BE’s design. Section 3 describes the URNS model and introduces an efficient approximation to URNS that achieves similar precision/recall. Section 4 presents experimental results. We conclude with related and future work in Sections 5 and 6. 2 The Bindings Engine This section explains how relying on standard search engines leads to a bottleneck for NLP applications, and provides a brief overview of the Bindings Engine (BE)—our solution to this problem. A comprehensive description of BE appears in (Cafarella and Etzioni, 2005). Standard search engines are computationally expensive for IE and other NLP tasks. IE systems issue multiple queries, downloading all pages that potentially match an extraction rule, and performing expensive processing on each page. For example, such systems operate roughly as follows on the query (“cities such as 〈NounPhrase〉”): 1. Perform a traditional search engine query to find all URLs containing the non-variable terms (e.g., “cities such as”) 2. For each such URL: (a) obtain the document contents, (b) find the searched-for terms (“cities such as”) in the document text, (c) run the noun phrase recognizer to determine whether text following “cities such as” satisfies the linguistic type requirement, (d) and if so, return the string We can divide the algorithm into two stages: obtaining the list of URLs from a search engine, and then processing them to find the 〈NounPhrase〉 bindings. 
Each stage poses its own scalability and speed challenges. The first stage makes a query to a commercial search engine; while the number of available queries may be limited, a single one executes relatively quickly. The second stage fetches a large number of documents, each fetch likely resulting in a random disk seek; this stage executes slowly. Naturally, this disk access is slow regardless of whether it happens on a locally-cached copy or on a remote document server. The observation that the second stage is slow, even if it is executed locally, is important because it shows that merely operating a “private” search engine does not solve the problem (see Section 2.1). The Bindings Engine supports queries containing typed variables (such as NounPhrase) and string-processing functions (such as “head(X)” or “ProperNoun(X)”) as well as standard query terms. BE processes a variable by returning every possible string in the corpus that has a matching type, and that can be substituted for the variable and still satisfy the user’s query. If there are multiple variables in a query, then all of them must simultaneously have valid substitutions. (So, for example, the query “<NounPhrase> is located in <NounPhrase>” only returns strings when noun phrases are found on both sides of “is located in”.) We call a string that meets these requirements a binding for the variable in question. These queries, and the bindings they elicit, can usefully serve as part of an information extraction system or other common NLP tasks (such as gathering usage statistics). Figure 1 illustrates some of the queries that BE can handle. president Bush <Verb> cities such as ProperNoun(Head(<NounPhrase>)) <NounPhrase> is the CEO of <NounPhrase> Figure 1: Examples of queries that can be handled by BE. Queries that include typed variables and stringprocessing functions allow NLP tasks to be done efficiently without downloading the original document during query processing. BE’s novel neighborhood index enables it to process these queries with O(k) random disk seeks and O(k) serial disk reads, where k is the number of non-variable terms in its query. As a result, BE can yield orders of magnitude speedup as shown in the asymptotic analysis later in this section. The neighborhood index is an augme", "title": "" }, { "docid": "fed02a0854e954009feadd9ff1a417c0", "text": "Video streaming dominates the Internet's overall traffic mix, with reports stating that it will constitute 90% of all consumer traffic by 2019. Most of this video is delivered by Content Delivery Networks (CDNs), and, while they optimize QoE metrics such as buffering ratio and start-up time, no single CDN provides optimal performance. In this paper we make the case for elastic CDNs, the ability to build virtual CDNs on-the-fly on top of shared, third-party infrastructure at a scale. To bring this idea closer to reality we begin by large-scale simulations to quantify the effects that elastic CDNs would have if deployed, and build and evaluate MiniCache, a specialized, minimalistic virtualized content cache that runs on the Xen hypervisor. 
MiniCache is able to serve content at rates of up to 32 Gb/s and handle up to 600K reqs/sec on a single CPU core, as well as boot in about 90 milliseconds on x86 and around 370 milliseconds on ARM32.", "title": "" }, { "docid": "22fe98f01a5379a9ea280c22028da43f", "text": "Linux containers showed great superiority when compared to virtual machines and hypervisors in terms of networking, disk and memory management, start-up and compilation speed, and overall processing performance. In this research, we are questioning whether it is more secure to run services inside Linux containers than running them directly on a host base operating system or not. We used Docker v1.10 to conduct a series of experiments to assess the attack surface of hosts running services inside Docker containers compared to hosts running the same services on the base operating system represented in our paper as Debian Jessie. Our vulnerability assessment shows that using Docker containers increase the attack surface of a given host, not the other way around.", "title": "" }, { "docid": "9673939625a3caafecf3da68a19742b0", "text": "Automatic detection of road regions in aerial images remains a challenging research topic. Most existing approaches work well on the requirement of users to provide some seedlike points/strokes in the road area as the initial location of road regions, or detecting particular roads such as well-paved roads or straight roads. This paper presents a fully automatic approach that can detect generic roads from a single unmanned aerial vehicles (UAV) image. The proposed method consists of two major components: automatic generation of road/nonroad seeds and seeded segmentation of road areas. To know where roads probably are (i.e., road seeds), a distinct road feature is proposed based on the stroke width transformation (SWT) of road image. To the best of our knowledge, it is the first time to introduce SWT as road features, which show the effectiveness on capturing road areas in images in our experiments. Different road features, including the SWT-based geometry information, colors, and width, are then combined to classify road candidates. Based on the candidates, a Gaussian mixture model is built to produce road seeds and background seeds. Finally, starting from these road and background seeds, a convex active contour model segmentation is proposed to extract whole road regions. Experimental results on varieties of UAV images demonstrate the effectiveness of the proposed method. Comparison with existing techniques shows the robustness and accuracy of our method to different roads.", "title": "" }, { "docid": "ba6873627b976fa1a3899303b40eae3c", "text": "Most plant seeds are dispersed in a dry, mature state. If these seeds are non-dormant and the environmental conditions are favourable, they will pass through the complex process of germination. In this review, recent progress made with state-of-the-art techniques including genome-wide gene expression analyses that provided deeper insight into the early phase of seed germination, which includes imbibition and the subsequent plateau phase of water uptake in which metabolism is reactivated, is summarized. The physiological state of a seed is determined, at least in part, by the stored mRNAs that are translated upon imbibition. Very early upon imbibition massive transcriptome changes occur, which are regulated by ambient temperature, light conditions, and plant hormones. 
The hormones abscisic acid and gibberellins play a major role in regulating early seed germination. The early germination phase of Arabidopsis thaliana culminates in testa rupture, which is followed by the late germination phase and endosperm rupture. An integrated view on the early phase of seed germination is provided and it is shown that it is characterized by dynamic biomechanical changes together with very early alterations in transcript, protein, and hormone levels that set the stage for the later events. Early seed germination thereby contributes to seed and seedling performance important for plant establishment in the natural and agricultural ecosystem.", "title": "" }, { "docid": "c71f3284872169d1f506927000df557b", "text": "Natural rewards and drugs of abuse can alter dopamine signaling, and ventral tegmental area (VTA) dopaminergic neurons are known to fire action potentials tonically or phasically under different behavioral conditions. However, without technology to control specific neurons with appropriate temporal precision in freely behaving mammals, the causal role of these action potential patterns in driving behavioral changes has been unclear. We used optogenetic tools to selectively stimulate VTA dopaminergic neuron action potential firing in freely behaving mammals. We found that phasic activation of these neurons was sufficient to drive behavioral conditioning and elicited dopamine transients with magnitudes not achieved by longer, lower-frequency spiking. These results demonstrate that phasic dopaminergic activity is sufficient to mediate mammalian behavioral conditioning.", "title": "" }, { "docid": "88a8f162017f80c17be58faad16a6539", "text": "Instruction List (IL) is a simple typed assembly language commonly used in embedded control. There is little tool support for IL and, although defined in the IEC 61131-3 standard, there is no formal semantics. In this work we develop a formal operational semantics. Moreover, we present an abstract semantics, which allows approximative program simulation for a (possibly infinte) set of inputs in one simulation run. We also extended this framework to an abstract interpretation based analysis, which is implemented in our tool Homer. All these analyses can be carried out without knowledge of formal methods, which is typically not present in the IL community.", "title": "" }, { "docid": "32afde90b1bf577aa07135db66250b38", "text": "We present a generic method for augmenting unsupervised query segmentation by incorporating Parts-of-Speech (POS) sequence information to detect meaningful but rare n-grams. Our initial experiments with an existing English POS tagger employing two different POS tagsets and an unsupervised POS induction technique specifically adapted for queries show that POS information can significantly improve query segmentation performance in all these cases.", "title": "" }, { "docid": "548b9580c2b36bd1730392a92f6640c2", "text": "Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of magnetic resonance (MR) images. Unfortunately, MR images always contain a significant amount of noise caused by operator performance, equipment, and the environment, which can lead to serious inaccuracies with segmentation. A robust segmentation technique based on an extension to the traditional fuzzy c-means (FCM) clustering algorithm is proposed in this paper. 
A neighborhood attraction, which is dependent on the relative location and features of neighboring pixels, is shown to improve the segmentation performance dramatically. The degree of attraction is optimized by a neural-network model. Simulated and real brain MR images with different noise levels are segmented to demonstrate the superiority of the proposed technique compared to other FCM-based methods. This segmentation method is a key component of an MR image-based classification system for brain tumors, currently being developed.", "title": "" }, { "docid": "fb63ab21fa40b125c1a85b9c3ed1dd8d", "text": "The two central topics of information theory are the compression and the transmission of data. Shannon, in his seminal work, formalized both these problems and determined their fundamental limits. Since then the main goal of coding theory has been to find practical schemes that approach these limits. Polar codes, recently invented by Arıkan, are the first “practical” codes that are known to achieve the capacity for a large class of channels. Their code construction is based on a phenomenon called “channel polarization”. The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the blocklength of the code. We show that polar codes are suitable not only for channel coding but also achieve optimal performance for several other important problems in information theory. The first problem we consider is lossy source compression. We construct polar codes that asymptotically approach Shannon’s rate-distortion bound for a large class of sources. We achieve this performance by designing polar codes according to the “test channel”, which naturally appears in Shannon’s formulation of the rate-distortion function. The encoding operation combines the successive cancellation algorithm of Arıkan with a crucial new ingredient called “randomized rounding”. As for channel coding, both the encoding as well as the decoding operation can be implemented with O(N log N) complexity. This is the first known “practical” scheme that approaches the optimal rate-distortion trade-off. We also construct polar codes that achieve the optimal performance for the Wyner-Ziv and the Gelfand-Pinsker problems. Both these problems can be tackled using “nested” codes and polar codes are naturally suited for this purpose. We further show that polar codes achieve the capacity of asymmetric channels, multi-terminal scenarios like multiple access channels, and degraded broadcast channels. For each of these problems, our constructions are the first known “practical” schemes that approach the optimal performance. The original polar codes of Arıkan achieve a block error probability decaying exponentially in the square root of the block length. For source coding, the gap between the achieved distortion and the limiting distortion also vanishes exponentially in the square root of the blocklength. We explore other polarlike code constructions with better rates of decay. With this generalization,", "title": "" }, { "docid": "06dea1f666eb80cd6b05e12ef3d2b3ee", "text": "Highly competitive environments are leading companies to implement Supply Chain Management (SCM) to improve performance and gain a competitive advantage. SCM involves integration, co-ordination and collaboration across organisations and throughout the supply chain. It means that SCM requires internal (intraorganisational) and external (interorganisational) integration. 
This paper examines the Logistics-Production and Logistics-Marketing interfaces and their relation with the external integration process. The study also investigates the causal impact of these internal and external relationships on the company’s logistical service performance. To analyse this, an empirical study was conducted in the Spanish Fast Moving Consumer Goods (FMCG) sector.", "title": "" }, { "docid": "59021dcb134a2b25122b3be73243bea6", "text": "The path taken by a packet traveling across the Internet depends on a large number of factors, including routing protocols and per-network routing policies. The impact of these factors on the end-to-end performance experienced by users is poorly understood. In this paper, we conduct a measurement-based study comparing the performance seen using the \"default\" path taken in the Internet with the potential performance available using some alternate path. Our study uses five distinct datasets containing measurements of \"path quality\", such as round-trip time, loss rate, and bandwidth, taken between pairs of geographically diverse Internet hosts. We construct the set of potential alternate paths by composing these measurements to form new synthetic paths. We find that in 30-80% of the cases, there is an alternate path with significantly superior quality. We argue that the overall result is robust and we explore two hypotheses for explaining it.", "title": "" }, { "docid": "7fc6ffb547bc7a96e360773ce04b2687", "text": "Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.", "title": "" }, { "docid": "e16fbf0917103601a3cda01fab6dbc1b", "text": "In recent years L-functions and their analytic properties have assumed a central role in number theory and automorphic forms. In this expository article, we describe the two major methods for proving the analytic continuation and functional equations of L-functions: the method of integral representations, and the method of Fourier expansions of Eisenstein series. Special attention is paid to technical properties, such as boundedness in vertical strips; these are essential in applying the converse theorem, a powerful tool that uses analytic properties of L-functions to establish cases of Langlands functoriality conjectures. We conclude by describing striking recent results which rest upon the analytic properties of L-functions.", "title": "" }, { "docid": "a7bbf188c7219ff48af391a5f8b140b8", "text": "The paper presents the results of studies concerning the designation of COD fraction in raw wastewater. The research was conducted in three mechanical-biological sewage treatment plants. The results were compared with data assumed in the ASM models. 
During the investigation, the following fractions of COD were determined: dissolved non-biodegradable SI, dissolved easily biodegradable SS, in organic suspension slowly degradable XS, and in organic suspension non-biodegradable XI. The methodology for determining the COD fraction was based on the ATV-A 131 guidelines. The real concentration of fractions in raw wastewater and the percentage of each fraction in total COD are different from data reported in the literature.", "title": "" }, { "docid": "6db5de1bb37513c3c251624947ee4e8f", "text": "The proliferation of Ambient Intelligence (AmI) devices and services and their integration in smart environments creates the need for a simple yet effective way of controlling and communicating with them. Towards that direction, the application of the Trigger -- Action model has attracted a lot of research with many systems and applications having been developed following that approach. This work introduces ParlAmI, a multimodal conversational interface aiming to give its users the ability to determine the behavior of AmI environments, by creating rules using natural language as well as a GUI. The paper describes ParlAmI, its requirements and functionality, and presents the findings of a user-based evaluation which was conducted.", "title": "" }, { "docid": "d698d49a82829a2bb772d1c3f6c2efc5", "text": "The concepts of Data Warehouse, Cloud Computing and Big Data have been proposed during the era of data flood. By reviewing current progress in data warehouse studies, this paper introduces a framework to achieve better visualization for Big Data. This framework can reduce the cost of building Big Data warehouses by dividing data into sub-datasets and visualizing them respectively. Meanwhile, based on the powerful visualization tool of D3.js and directed by the principle of Whole-Parts, current data can be presented to users from different dimensions by different rich statistical graphics.", "title": "" }, { "docid": "b1b56020802d11d1f5b2badb177b06b9", "text": "The explosive growth of the world-wide-web and the emergence of e-commerce have led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technologies for building recommender systems to date and are extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail, where new products are introduced and existing products disappear from the catalog. Another such application domain is the home improvement retail industry, where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very few duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendations in domains where sufficient historical data do not exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms.
The experimental evaluation of the proposed algorithms on real-life data sets shows great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows a 75% increase in the recommendation revenue for the first 2-month period.", "title": "" } ]
scidocsrr
b1faead2db0c000b0a4fcbb7325a5ad0
A Geometry-Appearance-Based Pupil Detection Method for Near-Infrared Head-Mounted Cameras
[ { "docid": "e946deae6e1d441c152dca6e52268258", "text": "The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donder's law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error ~ 1deg), using more light sources and calibration points can result in lower average errors.", "title": "" } ]
[ { "docid": "b7a459e830d69f8360196641ddc2daec", "text": "Understanding software project risk can help in reducing the incidence of failure. Building on prior work, software project risk was conceptualized along six dimensions. A questionnaire was built and 507 software project managers were surveyed. A cluster analysis was then performed to identify aspects of low, medium, and high risk projects. An examination of risk dimensions across the levels revealed that even low risk projects have a high level of complexity risk. For high risk projects, the risks associated with requirements, planning and control, and the organization become more obvious. The influence of project scope, sourcing practices, and strategic orientation on project risk dimensions was also examined. Results suggested that project scope affects all dimensions of risk, whereas sourcing practices and strategic orientation had a more limited impact. A conceptual model of project risk and performance was presented. # 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a02294c9b732b0d58cb7b25faa5136c8", "text": "We consider the problem of spectrum trading with multiple licensed users (i.e., primary users) selling spectrum opportunities to multiple unlicensed users (i.e., secondary users). The secondary users can adapt the spectrum buying behavior (i.e., evolve) by observing the variations in price and quality of spectrum offered by the different primary users or primary service providers. The primary users or primary service providers can adjust their behavior in selling the spectrum opportunities to secondary users to achieve the highest utility. In this paper, we model the evolution and the dynamic behavior of secondary users using the theory of evolutionary game. An algorithm for the implementation of the evolution process of a secondary user is also presented. To model the competition among the primary users, a noncooperative game is formulated where the Nash equilibrium is considered as the solution (in terms of size of offered spectrum to the secondary users and spectrum price). For a primary user, an iterative algorithm for strategy adaptation to achieve the solution is presented. The proposed game-theoretic framework for modeling the interactions among multiple primary users (or service providers) and multiple secondary users is used to investigate network dynamics under different system parameter settings and under system perturbation.", "title": "" }, { "docid": "60dd1689962a702e72660b33de1f2a17", "text": "A grammar formalism called GHRG based on CHR is proposed analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. A CHRG executes as a robust bottom-up parser with an inherent treatment of ambiguity. The rules of a CHRG may refer to grammar symbols on either side of a sequence to be matched and this provides a powerful way to let parsing and attribute evaluation depend on linguistic context; examples show disambiguation of simple and ambiguous context-free rules and a handling of coordination in natural language. CHRGs may have rules to produce and consume arbitrary hypothesis and as an important application is shown an implementation of Assumption Grammars.", "title": "" }, { "docid": "c319111c7ed9e816ba8db253cf9a5bcd", "text": "Soft actuators made of highly elastic polymers allow novel robotic system designs, yet application-specific soft robotic systems are rarely reported. 
Taking notice of the characteristics of soft pneumatic actuators (SPAs) such as high customizability and low inherent stiffness, we report in this work the use of soft pneumatic actuators for a biomedical use - the development of a soft robot for rodents, aimed to provide a physical assistance during gait rehabilitation of a spinalized animal. The design requirements to perform this unconventional task are introduced. Customized soft actuators, soft joints and soft couplings for the robot are presented. Live animal experiment was performed to evaluate and show the potential of SPAs for their use in the current and future biomedical applications.", "title": "" }, { "docid": "fcbddff6b048bc93fd81e363d08adc6d", "text": "Question Answering (QA) system is the task where arbitrary question IS posed in the form of natural language statements and a brief and concise text returned as an answer. Contrary to search engines where a long list of relevant documents returned as a result of a query, QA system aims at providing the direct answer or passage containing the answer. We propose a general purpose question answering system which can answer wh-interrogated questions. This system is using Wikipedia data as its knowledge source. We have implemented major components of a QA system which include challenging tasks of Named Entity Tagging, Question Classification, Information Retrieval and Answer Extraction. Implementation of state-of-the-art Entity Tagging mechanism has helped identify entities where systems like OpenEphyra or DBpedia spotlight have failed. The information retrieval task includes development of a framework to extract tabular information known as Infobox from Wikipedia pages which has ensured availability of latest updated information. Answer Extraction module has implemented an attributes mapping mechanism which is helpful to extract answer from data. The system is comparable in results with other available general purpose QA systems.", "title": "" }, { "docid": "38ea50d7e6e5e1816005b3197828dbae", "text": "Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The Grid project has developed the Taverna workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists’ experimental context. The lessons reflect an evolving understanding of life scientists’ requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science.", "title": "" }, { "docid": "f4bb9f769659436c79b67765145744ac", "text": "Sparse Principal Component Analysis (S-PCA) is a novel framework for learning a linear, orthonormal basis representation for structure intrinsic to an ensemble of images. S-PCA is based on the discovery that natural images exhibit structure in a low-dimensional subspace in a sparse, scale-dependent form. 
The S-PCA basis optimizes an objective function which trades off correlations among output coefficients for sparsity in the description of basis vector elements. This objective function is minimized by a simple, robust and highly scalable adaptation algorithm, consisting of successive planar rotations of pairs of basis vectors. The formulation of S-PCA is novel in that multi-scale representations emerge for a variety of ensembles including face images, images from outdoor scenes and a database of optical flow vectors representing a motion class.", "title": "" }, { "docid": "49333b20791e934ba2a4a6b5fc6382d9", "text": "Angiopoietins are ligands of the Tie2 receptor that control angiogenic remodeling in a context-dependent manner. Tie signaling is involved in multiple steps of the angiogenic remodeling process during development, including destabilization of existing vessels, endothelial cell migration, tube formation and the subsequent stabilization of newly formed tubes by mesenchymal cells. Beyond this critical role in blood vessel development, recent studies suggest a wider role for Tie2 and angiopoietins in lymphangiogenesis and the development of the hematopoietic system, as well as a possible role in the regulation of certain non-endothelial cells. The outcome of Tie signaling depends on which vascular bed is involved, and crosstalk between different VEGFs has an important modulating effect on the properties of the angiopoietins. Signaling through the Tie1 receptor is not well understood, but Tie1 may have both angiopoietin-dependent and ligand-independent functions. Changes in the expression of Tie receptors and angiopoietins occur in many pathological conditions, and mutations in the Tie2 gene are found in familial cases of vascular disease.", "title": "" }, { "docid": "228d7fa684e1caf43769fa13818b938f", "text": "Optimal tuning of proportional-integral-derivative (PID) controller parameters is necessary for the satisfactory operation of automatic voltage regulator (AVR) system. This study presents a tuning fuzzy logic approach to determine the optimal PID controller parameters in AVR system. The developed fuzzy system can give the PID parameters on-line for different operating conditions. The suitability of the proposed approach for PID controller tuning has been demonstrated through computer simulations in an AVR system.", "title": "" }, { "docid": "490fe197e7ed6c658160c8a04ee1fc82", "text": "Automatic concept learning from large scale imbalanced data sets is a key issue in video semantic analysis and retrieval, which means the number of negative examples is far more than that of positive examples for each concept in the training data. The existing methods adopt generally under-sampling for the majority negative examples or over-sampling for the minority positive examples to balance the class distribution on training data. The main drawbacks of these methods are: (1) As a key factor that affects greatly the performance, in most existing methods, the degree of re-sampling needs to be pre-fixed, which is not generally the optimal choice; (2) Many useful negative samples may be discarded in under-sampling. In addition, some works only focus on the improvement of the computational speed, rather than the accuracy. To address the above issues, we propose a new approach and algorithm named AdaOUBoost (Adaptive Over-sampling and Under-sampling Boost). 
The novelty of AdaOUBoost mainly lies in: adaptively over-sample the minority positive examples and under-sample the majority negative examples to form different sub-classifiers. And combine these sub-classifiers according to their accuracy to create a strong classifier, which aims to use fully the whole training data and improve the performance of the class-imbalance learning classifier. In AdaOUBoost, first, our clustering-based under-sampling method is employed to divide the majority negative examples into some disjoint subsets. Then, for each subset of negative examples, we utilize the borderline-SMOTE (synthetic minority over-sampling technique) algorithm to over-sample the positive examples with different size, train each sub-classifier using each of them, and get the classifier by fusing these sub-classifiers with different weights. Finally, we combine these classifiers in each subset of negative examples to create a strong classifier. We compare the performance between AdaOUBoost and the state-of-the-art methods on TRECVID 2008 benchmark with all 20 concepts, and the results show the AdaOUBoost can achieve the superior performance in large scale imbalanced data sets.", "title": "" }, { "docid": "1a9d276c4571419e0d1b297f248d874d", "text": "Organizational culture plays a critical role in the acceptance and adoption of agile principles by a traditional software development organization (Chan & Thong, 2008). Organizations must understand the differences that exist between traditional software development principles and agile principles. Based on an analysis of the literature published between 2003 and 2010, this study examines nine distinct organizational cultural factors that require change, including management style, communication, development team practices, knowledge management, and customer interactions.", "title": "" }, { "docid": "d723903b45554c7a6c2fb4f32aa5dc48", "text": "Harvard architecture CPU design is common in the embedded world. Examples of Harvard-based architecture devices are the Mica family of wireless sensors. Mica motes have limited memory and can process only very small packets. Stack-based buffer overflow techniques that inject code into the stack and then execute it are therefore not applicable. It has been a common belief that code injection is impossible on Harvard architectures. This paper presents a remote code injection attack for Mica sensors. We show how to exploit program vulnerabilities to permanently inject any piece of code into the program memory of an Atmel AVR-based sensor. To our knowledge, this is the first result that presents a code injection technique for such devices. Previous work only succeeded in injecting data or performing transient attacks. Injecting permanent code is more powerful since the attacker can gain full control of the target sensor. We also show that this attack can be used to inject a worm that can propagate through the wireless sensor network and possibly create a sensor botnet. Our attack combines different techniques such as return oriented programming and fake stack injection. We present implementation details and suggest some counter-measures.", "title": "" }, { "docid": "94fc516df0c0a5f0ebaf671befe10982", "text": "In this paper, an 8th-order cavity filter with two symmetrical transmission zeros in stopband is designedwith the method of generalized Chebyshev synthesis so as to satisfy the IMT-Advanced system demands. 
To shorten the development cycle of the filter from two or three days to several hours, a co-simulation with Ansoft HFSS and Designer is presented. The effectiveness of the co-simulation method is validated by the excellent consistency between the simulation and the experiment results.", "title": "" }, { "docid": "5f3b787993ae1ebae34d8cee3ba1a975", "text": "Neisseria meningitidis remains an important cause of severe sepsis and meningitis worldwide. The bacterium is only found in human hosts, and so must continually coexist with the immune system. Consequently, N meningitidis uses multiple mechanisms to avoid being killed by antimicrobial proteins, phagocytes, and, crucially, the complement system. Much remains to be learnt about the strategies N meningitidis employs to evade aspects of immune killing, including mimicry of host molecules by bacterial structures such as capsule and lipopolysaccharide, which poses substantial problems for vaccine design. To date, available vaccines only protect individuals against subsets of meningococcal strains. However, two promising vaccines are currently being assessed in clinical trials and appear to offer good prospects for an effective means of protecting individuals against endemic serogroup B disease, which has proven to be a major challenge in vaccine research.", "title": "" }, { "docid": "693dd8eb0370259c4ee5f8553de58443", "text": "Most research in Interactive Storytelling (IS) has sought inspiration in narrative theories issued from contemporary narratology to either identify fundamental concepts or derive formalisms for their implementation. In the former case, the theoretical approach gives raise to empirical solutions, while the latter develops Interactive Storytelling as some form of “computational narratology”, modeled on computational linguistics. In this paper, we review the most frequently cited theories from the perspective of IS research. We discuss in particular the extent to which they can actually inspire IS technologies and highlight key issues for the effective use of narratology in IS.", "title": "" }, { "docid": "49bc648b7588e3d6d512a65688ce23aa", "text": "Many Chinese websites (relying parties) use OAuth 2.0 as the basis of a single sign-on service to ease password management for users. Many sites support five or more different OAuth 2.0 identity providers, giving users choice in their trust point. However, although OAuth 2.0 has been widely implemented (particularly in China), little attention has been paid to security in practice. In this paper we report on a detailed study of OAuth 2.0 implementation security for ten major identity providers and 60 relying parties, all based in China. This study reveals two critical vulnerabilities present in many implementations, both allowing an attacker to control a victim user’s accounts at a relying party without knowing the user’s account name or password. We provide simple, practical recommendations for identity providers and relying parties to enable them to mitigate these vulnerabilities. The vulnerabilities have been reported to the parties concerned.", "title": "" }, { "docid": "9c00313926a8c625fd15da8708aa941e", "text": "OBJECTIVE\nThe objective of this study was to evaluate the effect of a dental water jet on plaque biofilm removal using scanning electron microscopy (SEM).\n\n\nMETHODOLOGY\nEight teeth with advanced aggressive periodontal disease were extracted. Ten thin slices were cut from four teeth. Two slices were used as the control. 
Eight were inoculated with saliva and incubated for 4 days. Four slices were treated using a standard jet tip, and four slices were treated using an orthodontic jet tip. The remaining four teeth were treated with the orthodontic jet tip but were not inoculated with saliva to grow new plaque biofilm. All experimental teeth were treated using a dental water jet for 3 seconds on medium pressure.\n\n\nRESULTS\nThe standard jet tip removed 99.99% of the salivary (ex vivo) biofilm, and the orthodontic jet tip removed 99.84% of the salivary biofilm. Observation of the remaining four teeth by the naked eye indicated that the orthodontic jet tip removed significant amounts of calcified (in vivo) plaque biofilm. This was confirmed by SEM evaluations.\n\n\nCONCLUSION\nThe Waterpik dental water jet (Water Pik, Inc, Fort Collins, CO) can remove both ex vivo and in vivo plaque biofilm significantly.", "title": "" }, { "docid": "f9dc4cfb42a5ec893f5819e03c64d4bc", "text": "For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors.,,To better capture the structure dependency of human body joints, the generator G is designed in a stacked multi-task manner to predict poses as well as occlusion heatmaps. Then, the pose and occlusion heatmaps are sent to the discriminators to predict the likelihood of the pose being real. Training of the network follows the strategy of conditional Generative Adversarial Networks (GANs). The effectiveness of the proposed network is evaluated on two widely used human pose estimation benchmark datasets. Our approach significantly outperforms the state-of-the-art methods and almost always generates plausible human pose predictions.", "title": "" }, { "docid": "b0bb9c4bcf666dca927d4f747bfb1ca1", "text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). 
We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.", "title": "" }, { "docid": "63063c0a2b08f068c11da6d80236fa87", "text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.", "title": "" } ]
scidocsrr
a5e7d0f60233e6d724219483252f5ad2
SpanFS: A Scalable File System on Fast Storage Devices
[ { "docid": "bb8604e0446fd1d3b01f426a8aa8c7e5", "text": "Commodity computer systems contain more and more processor cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and variants, and IO configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware variants pose serious challenges for operating system structures.\n We argue that the challenge of future multicore hardware is best met by embracing the networked nature of the machine, rethinking OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing.\n We have implemented a multikernel OS to show that the approach is promising, and we describe how traditional scalability problems for operating systems (such as memory management) can be effectively recast using messages and can exploit insights from distributed systems and networking. An evaluation of our prototype on multicore systems shows that, even on present-day machines, the performance of a multikernel is comparable with a conventional OS, and can scale better to support future hardware.", "title": "" }, { "docid": "6601696c4871c54be9872058aafc02e8", "text": "We introduce optimistic crash consistency, a new approach to crash consistency in journaling file systems. Using an array of novel techniques, we demonstrate how to build an optimistic commit protocol that correctly recovers from crashes and delivers high performance. We implement this optimistic approach within a Linux ext4 variant which we call OptFS. We introduce two new file-system primitives, osync() and dsync(), that decouple ordering of writes from their durability. We show through experiments that OptFS improves performance for many workloads, sometimes by an order of magnitude; we confirm its correctness through a series of robustness tests, showing it recovers to a consistent state after crashes. Finally, we show that osync() and dsync() are useful in atomic file system and database update scenarios, both improving performance and meeting application-level consistency demands.", "title": "" } ]
[ { "docid": "a65d1881f5869f35844064d38b684ac8", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "792df318ee62c4e5409f53829c3de05c", "text": "In this paper we present a novel technique to calibrate multiple casually aligned projectors on a fiducial-free cylindrical curved surface using a single camera. We impose two priors to the cylindrical display: (a) cylinder is a vertically extruded surface; and (b) the aspect ratio of the rectangle formed by the four corners of the screen is known. Using these priors, we can estimate the display's 3D surface geometry and camera extrinsic parameters using a single image without any explicit display to camera correspondences. Using the estimated camera and display properties, we design a novel deterministic algorithm to recover the intrinsic and extrinsic parameters of each projector using a single projected pattern seen by the camera which is then used to register the images on the display from any arbitrary viewpoint making it appropriate for virtual reality systems. Finally, our method can be extended easily to handle sharp corners — making it suitable for the common CAVE like VR setup. To the best of our knowledge, this is the first method that can achieve accurate geometric auto-calibration of multiple projectors on a cylindrical display without performing an extensive stereo reconstruction.", "title": "" }, { "docid": "58b57bb1472817f8ab13a28a3b2da908", "text": "This investigation combined behavioral and functional neuroimaging measures to explore whether perception of pain is modulated by the target's stigmatized status and whether the target bore responsibility for that stigma. During fMRI scanning, participants were exposed to a series of short video clips featuring age-matched individuals experiencing pain who were (a) similar to the participant (healthy), (b) stigmatized but not responsible for their stigmatized condition (infected with AIDS as a result of an infected blood transfusion), or (c) stigmatized and responsible for their stigmatized condition (infected with AIDS as a result of intravenous drug use). Explicit pain and empathy ratings for the targets were obtained outside of the MRI environment, along with a variety of implicit and explicit measures of AIDS bias. Results showed that participants were significantly more sensitive to the pain of AIDS transfusion targets as compared with healthy and AIDS drug targets, as evidenced by significantly higher pain and empathy ratings during video evaluation and significantly greater hemodynamic activity in areas associated with pain processing (i.e., right anterior insula, anterior midcingulate cortex, periaqueductal gray). In contrast, significantly less activity was observed in the anterior midcingulate cortex for AIDS drug targets as compared with healthy controls. 
Further, behavioral differences between healthy and AIDS drug targets were moderated by the extent to which participants blamed AIDS drug individuals for their condition. Controlling for both explicit and implicit AIDS bias, the more participants blamed these targets, the less pain they attributed to them as compared with healthy controls. The present study reveals that empathic resonance is moderated early in information processing by a priori attitudes toward the target group.", "title": "" }, { "docid": "6c10d03fa49109182c95c36debaf06cc", "text": "Visual versus near infrared (VIS-NIR) face image matching uses an NIR face image as the probe and conventional VIS face images as enrollment. It takes advantage of the NIR face technology in tackling illumination changes and low-light condition and can cater for more applications where the enrollment is done using VIS face images such as ID card photos. Existing VIS-NIR techniques assume that during classifier learning, the VIS images of each target people have their NIR counterparts. However, since corresponding VIS-NIR image pairs of the same people are not always available, which is often the case, so those methods cannot be applied. To address this problem, we propose a transductive method named transductive heterogeneous face matching (THFM) to adapt the VIS-NIR matching learned from training with available image pairs to all people in the target set. In addition, we propose a simple feature representation for effective VIS-NIR matching, which can be computed in three steps, namely Log-DoG filtering, local encoding, and uniform feature normalization, to reduce heterogeneities between VIS and NIR images. The transduction approach can reduce the domain difference due to heterogeneous data and learn the discriminative model for target people simultaneously. To the best of our knowledge, it is the first attempt to formulate the VIS-NIR matching using transduction to address the generalization problem for matching. Experimental results validate the effectiveness of our proposed method on the heterogeneous face biometric databases.", "title": "" }, { "docid": "36bb2a1f2e8942dead6aa0a4192c7a6c", "text": "This paper reports the completion of four fundamental fluidic operations considered essential to build digital microfluidic circuits, which can be used for lab-on-a-chip or micro total analysis system ( TAS): 1) creating, 2) transporting, 3) cutting, and 4) merging liquid droplets, all by electrowetting, i.e., controlling the wetting property of the surface through electric potential. The surface used in this report is, more specifically, an electrode covered with dielectrics, hence, called electrowetting-on-dielectric (EWOD). All the fluidic movement is confined between two plates, which we call parallel-plate channel, rather than through closed channels or on open surfaces. While transporting and merging droplets are easily verified, we discover that there exists a design criterion for a given set of materials beyond which the droplet simply cannot be cut by EWOD mechanism. The condition for successful cutting is theoretically analyzed by examining the channel gap, the droplet size and the degree of contact angle change by electrowetting on dielectric (EWOD). A series of experiments is run and verifies the criterion. A smaller channel gap, a larger droplet size and a larger change in the contact angle enhance the necking of the droplet, helping the completion of the cutting process. 
Creating droplets from a pool of liquid is highly related to cutting, but much more challenging. Although droplets may be created by simply pulling liquid out of a reservoir, the location of cutting is sensitive to initial conditions and turns out unpredictable. This problem of an inconsistent cutting location is overcome by introducing side electrodes, which pull the liquid perpendicularly to the main fluid path before activating the cutting. All four operations are carried out in air environment at 25 Vdc applied voltage. [862]", "title": "" }, { "docid": "06df4096b54d72eb415f9ad9c18cdf68", "text": "This paper concerns automated cell counting and detection in microscopy images. The approach we take is to use convolutional neural networks (CNNs) to regress a cell spatial density map across the image. This is applicable to situations where traditional single-cell segmentation-based methods do not work well due to cell clumping or overlaps. We make the following contributions: (i) we develop and compare architectures for two fully convolutional regression networks (FCRNs) for this task; (ii) since the networks are fully convolutional, they can predict a density map for an input image of arbitrary size, and we exploit this to improve efficiency by end-to-end training on image patches; (iii) we show that FCRNs trained entirely on synthetic data are able to give excellent predictions on microscopy images from real biological experiments without fine-tuning, and that the performance can be further improved by fine-tuning on these real images. Finally, (iv) by inverting feature representations, we show to what extent the information from an input image has been encodedby feature responses in different layers.We set a new state-of-the-art performance for cell counting on standard synthetic image benchmarks and show that the FCRNs trained entirely with synthetic data can generalise well to real microscopy images both for cell counting and detections for the case of overlapping cells. ARTICLE HISTORY Received 15 Nov 2015 Accepted 28 Jan 2016", "title": "" }, { "docid": "3c4e3d86df819aea592282b171191d0d", "text": "Memory forensic analysis collects evidence for digital crimes and malware attacks from the memory of a live system. It is increasingly valuable, especially in cloud computing. However, memory analysis on on commodity operating systems (such as Microsoft Windows) faces the following key challenges: (1) a partial knowledge of kernel data structures; (2) difficulty in handling ambiguous pointers; and (3) lack of robustness by relying on soft constraints that can be easily violated by kernel attacks. To address these challenges, we present MACE, a memory analysis system that can extract a more complete view of the kernel data structures for closed-source operating systems and significantly improve the robustness by only leveraging pointer constraints (which are hard to manipulate) and evaluating these constraint globally (to even tolerate certain amount of pointer attacks). We have evaluated MACE on 100 memory images for Windows XP SP3 and Windows 7 SP0. Overall, MACE can construct a kernel object graph from a memory image in just a few minutes, and achieves over 95% recall and over 96% precision. 
Our experiments on real-world rootkit samples and synthetic attacks further demonstrate that MACE outperforms other external memory analysis tools with respect to wider coverage and better robustness.", "title": "" }, { "docid": "910a416dc736ec3566583c57123ac87c", "text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman husnu@ou.edu 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.", "title": "" }, { "docid": "9649df6c5aab87091244f9271f46df5c", "text": "With about 2.2 million Americans currently using wheeled mobility devices, wheelchairs are frequently provided to people with impaired mobility to provide accessibility to the community. Individuals with spinal cord injuries, arthritis, balance disorders, and other conditions or diseases are typical users of wheelchairs. However, secondary injuries and wheelchair-related accidents are risks introduced by wheelchairs. Research is underway to advance wheelchair design to prevent or accommodate secondary injuries related to propulsion and transfer biomechanics, while improving safe, functional performance and accessibility to the community. This paper summarizes research and development underway aimed at enhancing safety and optimizing wheelchair design", "title": "" }, { "docid": "512e10c3d8078a0e069cc35f861b4c27", "text": "Android leverages a set of system permissions to protect platform resources. At the same time, it allows untrusted third-party applications to declare their own custom permissions to regulate access to app components. 
However, Android treats custom permissions the same way as system permissions even though they are declared by entities of different trust levels. In this work, we describe two new classes of vulnerabilities that arise from the ‘predicament’ created by mixing system and custom permissions in Android. These have been acknowledged as serious security flaws by Google and we demonstrate how they can be exploited in practice to gain unauthorized access to platform resources and to compromise popular Android apps. To address the shortcomings of the system, we propose a new modular design called Cusper for the Android permission model. Cusper separates the management of system and custom permissions and introduces a backward-compatible naming convention for custom permissions to prevent custom permission spoofing. We validate the correctness of Cusper by 1) introducing the first formal model of Android runtime permissions, 2) extending it to describe Cusper, and 3) formally showing that key security properties that can be violated in the current permission model are always satisfied in Cusper. To demonstrate Cusper’s practicality, we implemented it in the Android platform and showed that it is both effective and efficient.", "title": "" }, { "docid": "b69686c780d585d6b53fe7ec37e22b80", "text": "In written dialog, discourse participants need to justify claims they make, to convince the reader the claim is true and/or relevant to the discourse. This paper presents a new task (with an associated corpus), namely detecting such justifications. We investigate the nature of such justifications, and observe that the justifications themselves often contain discourse structure. We therefore develop a method to detect the existence of certain types of discourse relations, which helps us classify whether a segment is a justification or not. Our task is novel, and our work is novel in that it uses a large set of connectives (which we call indicators), and in that it uses a large set of discourse relations, without choosing among them.", "title": "" }, { "docid": "b97208934c9475bc9d9bb3a095826a15", "text": "Article history: Received 12 February 2014 Received in revised form 13 August 2014 Accepted 29 August 2014 Available online 8 September 2014", "title": "" }, { "docid": "d3444b0cee83da2a94f4782c79e0ce48", "text": "Predicting student academic performance plays an important role in academics. Classifying students using conventional techniques cannot give the desired level of accuracy, while doing it with the use of soft computing techniques may prove to be beneficial. A student can be classified into one of the available categories based on his behavioral and qualitative features. The paper presents a Neural Network model fused with Fuzzy Logic to model academic profile of students. The model mimics teacher’s ability to deal with imprecise information representing student’s characteristics in linguistic form. The suggested model is developed in MATLAB which takes into consideration various features of students under study. The input to the model consists of data of students studying in any faculty. A combination of Fuzzy Logic ARTMAP Neural Network results into a model useful for management of educational institutes for improving the quality of education. A good prediction of student’s success is one way to be in the competition in education system.
The use of Soft Computing methodology is justified for its real-time applicability in education system.", "title": "" }, { "docid": "7c9ded948f76bba73cb05e009d81cc89", "text": "This paper proposes a two-phase resource allocation framework (RAF) for a parallel cooperative joint multi-bitrate video caching and transcoding (CVCT) in heterogeneous virtualized mobileedge computing (HV-MEC) networks. In the cache placement phase, we propose delivery-aware cache placement strategies (DACPSs) based on the available video popularity distribution (VPD) and channel distribution information (CDI) to exploit the flexible delivery opportunities, i.e., video transmission and transcoding capabilities. Then, for the delivery phase, we propose a delivery policy for given caching status, instantaneous requests of users, and channel state information (CSI). The optimization problems corresponding to both phases aim to maximize the total revenue of slices subject to the quality of services contracted between slices and end-users and the system constraints based on their own assumptions. Both problems are non-convex and suffer from their high-computational complexities. For each phase, we show how these two problems can be solved efficiently. We also design a low-complexity RAF (LCRAF) in which the complexity of the delivery algorithm is significantly reduced. Extensive numerical assessments demonstrate up to 30% performance improvement of our proposed DACPSs over traditional approaches.", "title": "" }, { "docid": "7cd4d0effcac8806a1aed53987aad9b0", "text": "In this paper, a slider-crank based pole climbing robot will be discussed. The robot is designed to climb a pole of perimeter 35cm upward and downward, and its weight is limited to 3kg. It utilizes the combination of slider-crank mechanism for its climbing module and modular mechanism for its gripping module. In this work, the overall climbing motion can be completed in 6 steps. The benefits of this robot are low cost, simple mechanism, easy to control and fabricate. The robot has been built and successfully tested on a PVC tube of diameter 35cm. The average upward and downward climbing speed of the developed robot is 0.375cm/s and 0377cm/s respectively.", "title": "" }, { "docid": "4318041c3cf82ce72da5983f20c6d6c4", "text": "In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. 
The paper concludes with a summary of avenues for further research.", "title": "" }, { "docid": "e777794833a060f99e11675952cd3342", "text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.", "title": "" }, { "docid": "b588d2eea79668ef678ad87121be1c15", "text": "Discrete Exterior Calculus (DEC) is a discrete version of the smooth exterior calculus. Exterior calculus is calculus on smooth manifolds, and DEC is a calculus for discrete manifolds. It has applications in computational mechanics, computer graphics and other fields. This project has two parts. In the first part, we build a C++ class library to implement some of the objects and operators of DEC. The objects implemented are discrete forms, and the operators implemented are the exterior derivative, Hodge star and wedge product. These objects and operators are implemented for 2D meshes embedded in R. The second part of this project is to extend DEC to include general discrete tensors. As a very preliminary first step, we do this in the context of an application, by proposing the definition of a discrete stress tensor for elasticity. We show how to compute the discrete stress tensor in planar elasticity and propose a discretization for Cauchy’s equation of motion based on that.", "title": "" }, { "docid": "4f23672b9178a1ae13ad7e5b305614e9", "text": "Sensor-based driver assistance systems often have a safety-related role in modern automotive designs. In this paper we argue that the current generation of “Hardware in the Loop” (HIL) simulators have limitations which restrict the extent to which testing of such systems can be carried out, with the consequence that it is more difficult to make informed decisions regarding the impact of new technologies and control methods on vehicle safety and performance prior to system deployment. In order to begin to address this problem, this paper presents a novel, low-cost and flexible HIL simulator. An overview of the simulator is provided, followed by detailed descriptions of the models that are employed. The effectiveness of the simulator is then illustrated using a case study, in which we examine the performance and safety integrity of eight different designs of a representative distributed embedded control system (a throttleand brake-by-wire system with adaptive cruise control capability). It is concluded that the proposed HIL simulator provides a highly effective and low-cost test environment for assessing and comparing new automotive control system implementations.", "title": "" }, { "docid": "7e5b18a0356a89a0285f80a2224d8b12", "text": "Machine recognition of a handwritten mathematical expression (HME) is challenging due to the ambiguities of handwritten symbols and the two-dimensional structure of mathematical expressions. Inspired by recent work in deep learning, we present Watch, Attend and Parse (WAP), a novel end-to-end approach based on neural network that learns to recognize HMEs in a two-dimensional layout and outputs them as one-dimensional character sequences in LaTeX format. Inherently unlike traditional methods, our proposed model avoids problems that stem from symbol segmentation, and it does not require a predefined expression grammar. Meanwhile, the problems of symbol recognition and structural analysis are handled, respectively, using a watcher and a parser. 
We employ a convolutional neural network encoder that takes HME images as input as the watcher and employ a recurrent neural network decoder equipped with an attention mechanism as the parser to generate LaTeX sequences. Moreover, the correspondence between the input expressions and the output LaTeX sequences is learned automatically by the attention mechanism. We validate the proposed approach on a benchmark published by the CROHME international competition. Using the official training dataset, WAP significantly outperformed the state-of-the-art method with an expression recognition accuracy of 46.55% on CROHME 2014 and 44.55% on CROHME 2016.", "title": "" } ]
scidocsrr
dc65ec270ed0c5bf2c826619dc8ba4b9
Scalable Database Logging for Multicores
[ { "docid": "f10660b168700e38e24110a575b5aafa", "text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.", "title": "" } ]
[ { "docid": "f0e1d67a4974ca30835de88276b2de4d", "text": "This paper presents a design of power efficient and high speed transmitter implemented in CMOS 180nm UMC technology, fully compatible with low voltage differential signaling (LVDS) standard specified by IEEE. The main stand-alone driver's functional blocks: LVDS core, common mode feedback (CMFB), control buffer and band-gap reference source are described in detail. Three architectures of LVDS transmitter cores: bridged-switched current source (BSCS), double current source (DCS) and open drain (OD) are compared and finally the BSCS architecture is selected due to project's requirements. The designed LVDS driver characterizes a very low level of static and dynamic power dissipation, Pstat = 7.5mW and Pdyn = 11.6mW respectively, at data speed transmission 1.8Gb/s with receiver's input capacitance CR = 1pF and Pdyn = 8.5mW, data rates equals 400Mb/s at CR = 5pF.", "title": "" }, { "docid": "dfdd857de86c75e769492b56a092b242", "text": "Understanding the anatomy of the ankle ligaments is important for correct diagnosis and treatment. Ankle ligament injury is the most frequent cause of acute ankle pain. Chronic ankle pain often finds its cause in laxity of one of the ankle ligaments. In this pictorial essay, the ligaments around the ankle are grouped, depending on their anatomic orientation, and each of the ankle ligaments is discussed in detail.", "title": "" }, { "docid": "dd956cadc4158b6529cca0966c446845", "text": "One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.", "title": "" }, { "docid": "d3257b09b8646cf88d41e018d919a190", "text": "Blockchain innovation was initially presented as the innovation behind the Bitcoin decentralized virtual currency, yet there is the desire that its qualities of precise and irreversible information move in a decentralized P2P system could make different applications conceivable. Blockchain an apparently unassuming information structure, and a suite of related conventions, have as of late taken the universes of Finance and Technology by tempest through its earth shattering application in the present day crypto-currency Bitcoin, and all the more so due to the problematic advancements it guarantees. 
Keywords—blockchain, bitcoin, security, public ledger.", "title": "" }, { "docid": "ccb6067614bebf844d96e9a337a4c0d4", "text": "BACKGROUND\nJoint pain is thought to be an early sign of injury to a pitcher.\n\n\nOBJECTIVE\nTo evaluate the association between pitch counts, pitch types, and pitching mechanics and shoulder and elbow pain in young pitchers.\n\n\nSTUDY DESIGN\nProspective cohort study.\n\n\nMETHODS\nFour hundred and seventy-six young (ages 9 to 14 years) baseball pitchers were followed for one season. Data were collected from pre- and postseason questionnaires, injury and performance interviews after each game, pitch count logs, and video analysis of pitching mechanics. Generalized estimating equations and logistic regression analysis were used.\n\n\nRESULTS\nHalf of the subjects experienced elbow or shoulder pain during the season. The curveball was associated with a 52% increased risk of shoulder pain and the slider was associated with an 86% increased risk of elbow pain. There was a significant association between the number of pitches thrown in a game and during the season and the rate of elbow pain and shoulder pain.\n\n\nCONCLUSIONS\nPitchers in this age group should be cautioned about throwing breaking pitches (curveballs and sliders) because of the increased risk of elbow and shoulder pain. Limitations on pitches thrown in a game and in a season can also reduce the risk of pain. Further evaluation of pain and pitching mechanics is necessary.", "title": "" }, { "docid": "77f40fa3df43c8dbf6e483f106ee1d8d", "text": "We performed a prospective study to document, by intra-operative manipulation under anaesthesia (MUA) of the pelvic ring, the stability of lateral compression type 1 injuries that were managed in a Level-I Trauma Centre. The documentation of the short-term outcome of the management of these injuries was our secondary aim. A total of 63 patients were included in the study. Thirty-five patients (group A) were treated surgically whereas 28 (group B) were managed nonoperatively. Intraoperative rotational instability, evident by more than two centimetres of translation during the manipulation manoeuvre, was combined with a complete sacral fracture in all cases. A statistically significant difference was present between the length of hospital stay, the time to independent pain-free mobilisation, post-manipulation pain levels and opioid requirements between the two groups, with group A demonstrating significantly decreased values in all these four variables (p < 0.05). There was also a significant difference between the pre- and 72-hour post-manipulation visual analogue scale and analgesic requirements of the group A patients, whereas the patients in group B did not demonstrate such a difference. LC-1 injuries with a complete posterior sacral injury are inheritably rotationally unstable and patients presenting with these fracture patterns definitely gain benefit from surgical stabilisation.", "title": "" }, { "docid": "38a1ffa058a31d8513b8859284472daf", "text": "We describe a case of reflex seizures induced by abstract reasoning but not other cognitive processes. The patient, a 46-year-old man, experienced myoclonic seizures whenever he played shogi (Japanese chess). 
To identify the critical thought processes responsible for inducing his seizures, we monitored his clinical seizures and epileptiform discharges while he performed comprehensive neuropsychological tests, including the Wechsler Adult Intelligence Scale-Revised (WAIS-R), spatial working memory, mental rotation, and Wisconsin Card Sorting Test (WCST) tasks. A myoclonic seizure occurred only during the WCST. Generalized 3- to 5-Hz spike-and-slow-wave bursts occurred repeatedly during the Block Design subtest of the WAIS-R and the WCST, whereas no discharges occurred during other subtests of the WAIS-R including the calculation, spatial working memory, and mental rotation tasks. These results indicate that abstract reasoning, independent of other cognitive processes, could induce the patient's epileptiform discharges, suggesting that his reflex seizures might be a distinct subtype of nonverbal thinking-induced seizures.", "title": "" }, { "docid": "7ef14aed74249f10adffe2cc49475229", "text": "We prove that idealised discriminative Bayesian neural networks, capturing perfect epistemic uncertainty, cannot have adversarial examples: Techniques for crafting adversarial examples will necessarily fail to generate perturbed images which fool the classifier. This suggests why MC dropout-based techniques have been observed to be fairly effective against adversarial examples. We support our claims mathematically and empirically. We experiment with HMC on synthetic data derived from MNIST for which we know the ground truth image density, showing that near-perfect epistemic uncertainty correlates to density under image manifold, and that adversarial images lie off the manifold. Using our new-found insights we suggest a new attack for MC dropout-based models by looking for imperfections in uncertainty estimation, and also suggest a mitigation. Lastly, we demonstrate our mitigation on a cats-vs-dogs image classification task with a VGG13 variant.", "title": "" }, { "docid": "05696249c57c4b0a52ddfd5598a34f00", "text": "The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.", "title": "" }, { "docid": "4725b14e7c336c720ce4eb7747fa3ad9", "text": "The support vector machine (SVM) has provided higher performance than traditional learning machines and has been widely applied in real-world classification problems and nonlinear function estimation problems. Unfortunately, the training process of the SVM is sensitive to the outliers or noises in the training set. In this paper, a common misunderstanding of Gaussian-function-based kernel fuzzy clustering is corrected, and a kernel fuzzy c-means clustering-based fuzzy SVM algorithm (KFCM-FSVM) is developed to deal with the classification problems with outliers or noises. 
In the KFCM-FSVM algorithm, we first use the FCM clustering to cluster each of two classes from the training set in the high-dimensional feature space. The farthest pair of clusters, where one cluster comes from the positive class and the other from the negative class, is then searched and forms one new training set with membership degrees. Finally, we adopt FSVM to induce the final classification results on this new training set. The computational complexity of the KFCM-FSVM algorithm is analyzed. A set of experiments is conducted on six benchmarking datasets and four artificial datasets for testing the generalization performance of the KFCM-FSVM algorithm. The results indicate that the KFCM-FSVM algorithm is robust for classification problems with outliers or noises.", "title": "" }, { "docid": "d7e37e65f575381e749dbf679ce651bd", "text": "BACKGROUND\nStudies in mice indicate that the gut microbiome influences both sides of the energy-balance equation by contributing to nutrient absorption and regulating host genes that affect adiposity. However, it remains uncertain as to what extent gut microbiota are an important regulator of nutrient absorption in humans.\n\n\nOBJECTIVE\nWith the use of a carefully monitored inpatient study cohort, we tested how gut bacterial community structure is affected by altering the nutrient load in lean and obese individuals and whether their microbiota are correlated with the efficiency of dietary energy harvest.\n\n\nDESIGN\nWe investigated dynamic changes of gut microbiota during diets that varied in caloric content (2400 compared with 3400 kcal/d) by pyrosequencing bacterial 16S ribosomal RNA (rRNA) genes present in the feces of 12 lean and 9 obese individuals and by measuring ingested and stool calories with the use of bomb calorimetry.\n\n\nRESULTS\nThe alteration of the nutrient load induced rapid changes in the gut microbiota. These changes were directly correlated with stool energy loss in lean individuals such that a 20% increase in Firmicutes and a corresponding decrease in Bacteroidetes were associated with an increased energy harvest of ≈150 kcal. A high degree of overfeeding in lean individuals was accompanied by a greater fractional decrease in stool energy loss.\n\n\nCONCLUSIONS\nThese results show that the nutrient load is a key variable that can influence the gut (fecal) bacterial community structure over short time scales. Furthermore, the observed associations between gut microbes and nutrient absorption indicate a possible role of the human gut microbiota in the regulation of the nutrient harvest. This trial was registered at clinicaltrials.gov as NCT00414063.", "title": "" }, { "docid": "68649624bbd2aa73acd98df12f06fd28", "text": "Grey wolf optimizer (GWO) is one of recent metaheuristics swarm intelligence methods. It has been widely tailored for a wide variety of optimization problems due to its impressive characteristics over other swarm intelligence methods: it has very few parameters, and no derivation information is required in the initial search. Also it is simple, easy to use, flexible, scalable, and has a special capability to strike the right balance between the exploration and exploitation during the search which leads to favourable convergence. Therefore, the GWO has recently gained a very big research interest with tremendous audiences from several domains in a very short time. Thus, in this review paper, several research publications using GWO have been overviewed and summarized. 
Initially, an introductory information about GWO is provided which illustrates the natural foundation context and its related optimization conceptual framework. The main operations of GWO are procedurally discussed, and the theoretical foundation is described. Furthermore, the recent versions of GWO are discussed in detail which are categorized into modified, hybridized and paralleled versions. The main applications of GWO are also thoroughly described. The applications belong to the domains of global optimization, power engineering, bioinformatics, environmental applications, machine learning, networking and image processing, etc. The open source software of GWO is also provided. The review paper is ended by providing a summary conclusion of the main foundation of GWO and suggests several possible future directions that can be further investigated.", "title": "" }, { "docid": "2ff3238a25fd7055517a2596e5e0cd7c", "text": "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.", "title": "" }, { "docid": "b2db53f203f2b168ec99bd8e544ff533", "text": "BACKGROUND\nThis study aimed to analyze the scientific outputs of esophageal and esophagogastric junction (EGJ) cancer and construct a model to quantitatively and qualitatively evaluate pertinent publications from the past decade.\n\n\nMETHODS\nPublications from 2007 to 2016 were retrieved from the Web of Science Core Collection database. Microsoft Excel 2016 (Redmond, WA) and the CiteSpace (Drexel University, Philadelphia, PA) software were used to analyze publication outcomes, journals, countries, institutions, authors, research areas, and research frontiers.\n\n\nRESULTS\nA total of 12,978 publications on esophageal and EGJ cancer were identified published until March 23, 2017. The Journal of Clinical Oncology had the largest number of publications, the USA was the leading country, and the University of Texas MD Anderson Cancer Center was the leading institution. Ajani JA published the most papers, and Jemal A had the highest co-citation counts. Esophageal squamous cell carcinoma ranked the first in research hotspots, and preoperative chemotherapy/chemoradiotherapy ranked the first in research frontiers.\n\n\nCONCLUSION\nThe annual number of publications steadily increased in the past decade. A considerable number of papers were published in journals with high impact factor. 
Many Chinese institutions engaged in esophageal and EGJ cancer research but significant collaborations among them were not noted. Jemal A, Van Hagen P, Cunningham D, and Enzinger PC were identified as good candidates for research collaboration. Neoadjuvant therapy and genome-wide association study in esophageal and EGJ cancer research should be closely observed.", "title": "" }, { "docid": "138e0d07a9c22224c04bf4b983819f01", "text": "The olfactory receptor gene family is the largest in the mammalian genome (and larger than any other gene family in any other species), comprising 1% of genes. Beginning with a genetic radiation in reptiles roughly 200 million years ago, terrestrial vertebrates can detect millions of odorants. Each species has an olfactory repertoire unique to the genetic makeup of that species. The human olfactory repertoire is quite diverse. Contrary to erroneously reported estimates, humans can detect millions of airborne odorants (volatiles) in quite small concentrations. We exhibit tremendous variation in our genes that control the receptors in our olfactory epithelium, and this may relate to variation in cross-cultural perception of and preference for odors. With age, humans experience differential olfactory dysfunction, with some odors remaining strong and others becoming increasingly faint. Olfactory dysfunction has been pathologically linked to depression and quality of life issues, neurodegenerative disorders, adult and childhood obesity, and decreased nutrition in elderly females. Human pheromones, a controversial subject, seem to be a natural phenomenon, with a small number identified in clinical studies. The consumer product industry (perfumes, food and beverage, and pesticides) devotes billions of dollars each year supporting olfactory research in an effort to enhance product design and marketing. With so many intersecting areas of research, anthropology has a tremendous contribution to make to this growing body of work that crosses traditional disciplinary lines and has a clear applied component. Also, anthropology could benefit from considering the power of the olfactory system in memory, behavioral and social cues, evolutionary history, mate choice, food decisions, and overall health.", "title": "" }, { "docid": "952651d9d93496e04baa97f03e446b98", "text": "We present a state-of-the-art system for performing spoken term detection on continuous telephone speech in multiple languages. The system compiles a search index from deep word lattices generated by a large-vocabulary HMM speech recognizer. It estimates word posteriors from the lattices and uses them to compute a detection threshold that minimizes the expected value of a user-specified cost function. The system accommodates search terms outside the vocabulary of the speechto-text engine by using approximate string matching on induced phonetic transcripts. Its search index occupies less than 1Mb per hour of processed speech and it supports sub-second search times for a corpus of hundreds of hours of audio. This system had the highest reported accuracy on the telephone speech portion of the 2006 NIST Spoken Term Detection evaluation, achieving 83% of the maximum possible accuracy score in English.", "title": "" }, { "docid": "fefa533d5abb4be0afe76d9a7bbd9435", "text": "Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. 
The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. A limitation of previous keyphrase extraction algorithms is that the selected keyphrases are occasionally incoherent. That is, the majority of the output keyphrases may fit together well, but there may be a minority that appear to be outliers, with no clear semantic relation to the majority or to each other. This paper presents enhancements to the Kea keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. The approach is to use the degree of statistical association among candidate keyphrases as evidence that they may be semantically related. The statistical association is measured using web mining. Experiments demonstrate that the enhancements improve the quality of the extracted keyphrases. Furthermore, the enhancements are not domain-specific: the algorithm generalizes well when it is trained on one domain (computer science documents) and tested on another (physics documents).", "title": "" }, { "docid": "7775c00550a6042c38f38bac257ec334", "text": "Real-world face recognition datasets exhibit long-tail characteristics, which results in biased classifiers in conventionally-trained deep neural networks, or insufficient data when long-tail classes are ignored. In this paper, we propose to handle long-tail classes in the training of a face recognition engine by augmenting their feature space under a center-based feature transfer framework. A Gaussian prior is assumed across all the head (regular) classes and the variance from regular classes are transferred to the long-tail class representation. This encourages the long-tail distribution to be closer to the regular distribution, while enriching and balancing the limited training data. Further, an alternating training regimen is proposed to simultaneously achieve less biased decision boundaries and a more discriminative feature representation. We conduct empirical studies that mimic long-tail datasets by limiting the number of samples and the proportion of long-tail classes on the MS-Celeb-1M dataset. We compare our method with baselines not designed to handle long-tail classes and also with state-of-the-art methods on face recognition benchmarks. State-of-the-art results on LFW, IJB-A and MS-Celeb-1M datasets demonstrate the effectiveness of our feature transfer approach and training strategy. Finally, our feature transfer allows smooth visual interpolation, which demonstrates disentanglement to preserve identity of a class while augmenting its feature space with non-identity variations.", "title": "" }, { "docid": "a261f7df775cbcc1f2b3a5f68fba6029", "text": "As the role of virtual teams in organizations becomes increasingly important, it is crucial that companies identify and leverage team members’ knowledge. Yet, little is known of how virtual team members come to recognize one another’s knowledge, trust one another’s expertise, and coordinate their knowledge effectively. In this study, we develop a model of how three behavioral dimensions associated with transactive memory systems (TMS) in virtual teams—expertise location, Ritu Agarwal was the accepting senior editor for this paper. Alberto Espinosa and Susan Gasson served as reviewers. The associate editor and a third reviewer chose to remain anonymous. 
Authors are listed alphabetically. Each contributed equally to the paper. task–knowledge coordination, and cognition-based trust—and their impacts on team performance change over time. Drawing on the data from a study that involves 38 virtual teams of MBA students performing a complex web-based business simulation game over an 8-week period, we found that in the early stage of the project, the frequency and volume of task-oriented communications among team members played an important role in forming expertise location and cognition-based trust. Once TMS were established, however, task-oriented communication became less important. Instead, toward the end of the project, task–knowledge coordination emerges as a key construct that influences team performance, mediating the impact of all other constructs. Our study demonstrates that TMS can be formed even in virtual team environments where interactions take place solely through electronic media, although they take a relatively long time to develop. Furthermore, our findings show that, once developed, TMS become essential to performing tasks effectively in virtual teams.", "title": "" }, { "docid": "2f43e72d8a202cd124044af54b5f433a", "text": "A knowledge base (KB) contains a set of concepts, instances, and relationships. Over the past decade, numerous KBs have been built, and used to power a growing array of applications. Despite this flurry of activities, however, surprisingly little has been published about the end-to-end process of building, maintaining, and using such KBs in industry. In this paper we describe such a process. In particular, we describe how we build, update, and curate a large KB at Kosmix, a Bay Area startup, and later at WalmartLabs, a development and research lab of Walmart. We discuss how we use this KB to power a range of applications, including query understanding, Deep Web search, in-context advertising, event monitoring in social media, product search, social gifting, and social mining. Finally, we discuss how the KB team is organized, and the lessons learned. Our goal with this paper is to provide a real-world case study, and to contribute to the emerging direction of building, maintaining, and using knowledge bases for data management applications.", "title": "" } ]
scidocsrr
8ea55164cabccfab554e3e6a0bc34ea0
Interactive virtual try-on clothing design systems
[ { "docid": "f3abf5a6c20b6fff4970e1e63c0e836b", "text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.", "title": "" } ]
[ { "docid": "623f303fd7fbcd88bfdb6f55855dce3c", "text": "Causation relations are a pervasive feature of human language. Despite this, the automatic acquisition of causal information in text has proved to be a difficult task in NLP. This paper provides a method for the automatic detection and extraction of causal relations. We also present an inductive learning approach to the automatic discovery of lexical and semantic constraints necessary in the disambiguation of causal relations that are then used in question answering. We devised a classification of causal questions and tested the procedure on a QA system.", "title": "" }, { "docid": "9b55e6dc69517848ae5e5040cd9d0d55", "text": "In this paper, we utilize distributed word representations (i.e., word embeddings) to analyse the representation of semantics in brain activity. The brain activity data were recorded using functional magnetic resonance imaging (fMRI) when subjects were viewing words. First, we analysed the functional selectivity of different cortex areas by calculating the correlations between neural responses and several types of word representations, including skipgram word embeddings, visual semantic vectors, and primary visual features. The results demonstrated consistency with existing neuroscientific knowledge. Second, we utilized behavioural data as the semantic ground truth to measure their relevance with brain activity. A method to estimate word embeddings under the constraints of brain activity similarities is further proposed based on the semantic word embedding (SWE) model. The experimental results show that the brain activity data are significantly correlated with the behavioural data of human judgements on semantic similarity. The correlations between the estimated word embeddings and the semantic ground truth can be effectively improved after integrating the brain activity data for learning, which implies that semantic patterns in neural representations may exist that have not been fully captured by state-of-the-art word embeddings derived from text corpora.", "title": "" }, { "docid": "65f4e93ac371d72b93c40f4fe9215805", "text": "Trie memory is a way of storing and retrieving information. ~ It is applicable to information that consists of function-argument (or item-term) pairs--information conventionally stored in unordered lists, ordered lists, or pigeonholes. The main advantages of trie memory over the other memoIw plans just mentioned are shorter access time, greater ease of addition or up-dating, greater convenience in handling arguments of diverse lengths, and the ability to take advantage of redundancies in the information stored. The main disadvantage is relative inefficiency in using storage space, but this inefficiency is not great when the store is large. In this paper several paradigms of trie memory are described and compared with other memory paradigms, their advantages and disadvantages are examined in detail, and applications are discussed. Many essential features of trie memory were mentioned by de la Briandais [1] in a paper presented to the Western Joint Computer Conference in 1959. 
The present development is essentially independent of his, having been described in memorandum form in January 1959 [2], and it is fuller in that it considers additional paradigms (finitedimensional trie memories) and includes experimental results bearing on the efficiency of utilization of storage space.", "title": "" }, { "docid": "7fc92ce3f51a0ad3e300474e23cf7401", "text": "Dependency parsers are critical components within many NLP systems. However, currently available dependency parsers each exhibit at least one of several weaknesses, including high running time, limited accuracy, vague dependency labels, and lack of nonprojectivity support. Furthermore, no commonly used parser provides additional shallow semantic interpretation, such as preposition sense disambiguation and noun compound interpretation. In this paper, we present a new dependency-tree conversion of the Penn Treebank along with its associated fine-grain dependency labels and a fast, accurate parser trained on it. We explain how a non-projective extension to shift-reduce parsing can be incorporated into non-directional easy-first parsing. The parser performs well when evaluated on the standard test section of the Penn Treebank, outperforming several popular open source dependency parsers; it is, to the best of our knowledge, the first dependency parser capable of parsing more than 75 sentences per second at over 93% accuracy.", "title": "" }, { "docid": "95fbf262f9e673bd646ad7e02c5cbd53", "text": "Department of Finance Stern School of Business and NBER, New York University, 44 W. 4th Street, New York, NY 10012; mkacperc@stern.nyu.edu; http://www.stern.nyu.edu/∼mkacperc. Department of Finance Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; svnieuwe@stern.nyu.edu; http://www.stern.nyu.edu/∼svnieuwe. Department of Economics Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; lveldkam@stern.nyu.edu; http://www.stern.nyu.edu/∼lveldkam. We thank John Campbell, Joseph Chen, Xavier Gabaix, Vincent Glode, Ralph Koijen, Jeremy Stein, Matthijs van Dijk, and seminar participants at NYU Stern (economics and finance), Harvard Business School, Chicago Booth, MIT Sloan, Yale SOM, Stanford University (economics and finance), University of California at Berkeley (economics and finance), UCLA economics, Duke economics, University of Toulouse, University of Vienna, Australian National University, University of Melbourne, University of New South Wales, University of Sydney, University of Technology Sydney, Erasmus University, University of Mannheim, University of Alberta, Concordia, Lugano, the Amsterdam Asset Pricing Retreat, the Society for Economic Dynamics meetings in Istanbul, CEPR Financial Markets conference in Gerzensee, UBC Summer Finance conference, and Econometric Society meetings in Atlanta for useful comments and suggestions. Finally, we thank the Q-group for their generous financial support.", "title": "" }, { "docid": "80541e2df85384fa15074d4178cfa4ae", "text": "For the first time, we demonstrate the possibility of realizing low-cost mm-Wave antennas using inkjet printing of silver nano-particles. It is widely spread that fabrication of mm-Wave antennas and microwave circuits using the typical (deposit/pattern/etch) scheme is a challenging and costly process, due to the strict limitations on permissible tolerances. 
Such fabrication technique becomes even more challenging when dealing with flexible substrate materials, such as liquid crystal polymers. On the other hand, inkjet printing of conductive inks managed to form an emerging fabrication technology that has gained lots of attention over the last few years. Such process allows the deposition of conductive particles directly at the desired location on a substrate of interest, without need for mask productions, alignments, or etching. This means the inkjet printing of conductive materials could present the future of environment-friendly low-cost rapid manufacturing of RF circuits and antennas.", "title": "" }, { "docid": "340cceb987594709e207de5bd14965e7", "text": "Objective: Neuromuscular injury prevention programs (IPP) can reduce injury rate by about 40% in youth sport. Multimodal IPP include, for instance, balance, strength, power, and agility exercises. Our systematic review and meta-analysis aimed to evaluate the effects of multimodal IPP on neuromuscular performance in youth sports. Methods: We conducted a systematic literature search including selected search terms related to youth sports, injury prevention, and neuromuscular performance. Inclusion criteria were: (i) the study was a (cluster-)randomized controlled trial (RCT), and (ii) investigated healthy participants, up to 20 years of age and involved in organized sport, (iii) an intervention arm performing a multimodal IPP was compared to a control arm following a common training regime, and (iv) neuromuscular performance parameters (e.g., balance, power, strength, sprint) were assessed. Furthermore, we evaluated IPP effects on sport-specific skills. Results: Fourteen RCTs (comprising 704 participants) were analyzed. Eight studies included only males, and five only females. Seventy-one percent of all studies investigated soccer players with basketball, field hockey, futsal, Gaelic football, and hurling being the remaining sports. The average age of the participants ranged from 10 years up to 19 years and the level of play from recreational to professional. Intervention durations ranged from 4 weeks to 4.5 months with a total of 12 to 57 training sessions. We observed a small overall effect in favor of IPP for balance/stability (Hedges' g = 0.37; 95%CI 0.17, 0.58), leg power (g = 0.22; 95%CI 0.07, 0.38), and isokinetic hamstring and quadriceps strength as well as hamstrings-to-quadriceps ratio (g = 0.38; 95%CI 0.21, 0.55). We found a large overall effect for sprint abilities (g = 0.80; 95%CI 0.50, 1.09) and sport-specific skills (g = 0.83; 95%CI 0.34, 1.32). Subgroup analyses revealed larger effects in high-level (g = 0.34-1.18) compared to low-level athletes (g = 0.22-0.75), in boys (g = 0.27-1.02) compared to girls (g = 0.09-0.38), in older (g = 0.32-1.16) compared to younger athletes (g = 0.18-0.51), and in studies with high (g = 0.35-1.16) compared to low (g = 0.12-0.38) overall number of training sessions. Conclusion: Multimodal IPP beneficially affect neuromuscular performance. These improvements may substantiate the preventative efficacy of IPP and may support the wide-spread implementation and dissemination of IPP. The study has been a priori registered in PROSPERO (CRD42016053407).", "title": "" }, { "docid": "d2ce4df3be70141a3ab55aa0750f19ca", "text": "Agile methods have become popular in recent years because the success rate of project development using Agile methods is better than structured design methods. 
Nevertheless, less than 50 percent of projects implemented using Agile methods are considered successful, and selecting the wrong Agile method is one of the reasons for project failure. Selecting the most appropriate Agile method is a challenging task because there are so many to choose from. In addition, potential adopters believe that migrating to an Agile method involves taking a drastic risk. Therefore, to assist project managers and other decision makers, this study aims to identify the key factors that should be considered when selecting an appropriate Agile method. A systematic literature review was performed to elicit these factors in an unbiased manner, and then content analysis was used to analyze the resultant data. It was found that the nature of the project, development team skills, project constraints, customer involvement and organizational culture are the key factors that should guide decision makers in the selection of an appropriate Agile method based on the value these factors have for different organizations and/or different projects. Keywords— Agile method selection; factors of selecting Agile methods; SLR", "title": "" }, { "docid": "debf183822616eabc57b95f5e6037d4f", "text": "A new algorithm is proposed which accelerates the mini-batch k-means algorithm of Sculley (2010) by using the distance bounding approach of Elkan (2003). We argue that, when incorporating distance bounds into a mini-batch algorithm, already used data should preferentially be reused. To this end we propose using nested mini-batches, whereby data in a mini-batch at iteration t is automatically reused at iteration t+ 1. Using nested mini-batches presents two difficulties. The first is that unbalanced use of data can bias estimates, which we resolve by ensuring that each data sample contributes exactly once to centroids. The second is in choosing mini-batch sizes, which we address by balancing premature fine-tuning of centroids with redundancy induced slow-down. Experiments show that the resulting nmbatch algorithm is very effective, often arriving within 1% of the empirical minimum 100× earlier than the standard mini-batch algorithm.", "title": "" }, { "docid": "3a29bbe76a53c8284123019eba7e0342", "text": "Although von Ammon' first used the term blepharphimosis in 1841, it was Vignes2 in 1889 who first associated blepharophimosis with ptosis and epicanthus inversus. In 1921, Dimitry3 reported a family in which there were 21 affected subjects in five generations. He described them as having ptosis alone and did not specify any other features, although photographs in the report show that they probably had the full syndrome. Dimitry's pedigree was updated by Owens et a/ in 1960. The syndrome appeared in both sexes and was transmitted as a Mendelian dominant. In 1935, Usher5 reviewed the reported cases. By then, 26 pedigrees had been published with a total of 175 affected persons with transmission mainly through affected males. There was no consanguinity in any pedigree. In three pedigrees, parents who obviously carried the gene were unaffected. Well over 150 families have now been reported and there is no doubt about the autosomal dominant pattern of inheritance. However, like Usher,5 several authors have noted that transmission is mainly through affected males and less commonly through affected females.4 6 Reports by Moraine et al7 and Townes and Muechler8 have described families where all affected females were either infertile with primary or secondary amenorrhoea or had menstrual irregularity. 
Zlotogora et a/9 described one family and analysed 38 families reported previously. They proposed the existence of two types: type I, the more common type, in which the syndrome is transmitted by males only and affected females are infertile, and type II, which is transmitted by both affected females and males. There is male to male transmission in both types and both are inherited as an autosomal dominant trait. They found complete penetrance in type I and slightly reduced penetrance in type II.", "title": "" }, { "docid": "64306a76b61bbc754e124da7f61a4fbe", "text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.", "title": "" }, { "docid": "f82135fc9034ce8308d3d1da156f65e3", "text": "Digital processing of electroencephalography (EEG) signals has now been popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most of channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.", "title": "" }, { "docid": "65118dccb8d5d9be4e21c46e7dde315c", "text": "In this paper, we will present a novel framework of utilizing periocular region for age invariant face recognition. To obtain age invariant features, we first perform preprocessing schemes, such as pose correction, illumination and periocular region normalization. And then we apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) on preprocessed periocular region only. We find the WLBP feature on periocular region maintains consistency of the same individual across ages. 
Finally, we use unsupervised discriminant projection (UDP) to build subspaces on WLBP featured periocular images and gain 100% rank-1 identification rate and 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results.", "title": "" }, { "docid": "ffa5ae359807884c2218b92d2db2a584", "text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.", "title": "" }, { "docid": "0c33a3eeaffb9afb76851a97d28cbdcc", "text": "We consider the cell-free massive multiple-input multiple-output (MIMO) downlink, where a very large number of distributed multiple-antenna access points (APs) serve many single-antenna users in the same time-frequency resource. A simple (distributed) conjugate beamforming scheme is applied at each AP via the use of local channel state information (CSI). This CSI is acquired through time-division duplex operation and the reception of uplink training signals transmitted by the users. We derive a closed-form expression for the spectral efficiency taking into account the effects of channel estimation errors and power control. This closed-form result enables us to analyze the effects of backhaul power consumption, the number of APs, and the number of antennas per AP on the total energy efficiency, as well as, to design an optimal power allocation algorithm. The optimal power allocation algorithm aims at maximizing the total energy efficiency, subject to a per-user spectral efficiency constraint and a per-AP power constraint. Compared with the equal power control, our proposed power allocation scheme can double the total energy efficiency. Furthermore, we propose AP selections schemes, in which each user chooses a subset of APs, to reduce the power consumption caused by the backhaul links. With our proposed AP selection schemes, the total energy efficiency increases significantly, especially for large numbers of APs. Moreover, under a requirement of good quality-of-service for all users, cell-free massive MIMO outperforms the colocated counterpart in terms of energy efficiency.", "title": "" }, { "docid": "37b1f275438471b89a226877a1783a6b", "text": "This paper presents the implementation of a wearable wireless sensor network aimed at monitoring harmful gases in industrial environments. The proposed solution is based on a customized wearable sensor node using a low-power low-rate wireless personal area network (LR-WPAN) communications protocol, which as a first approach measures CO₂ concentration, and employs different low power strategies for appropriate energy handling which is essential to achieving long battery life. 
These wearables nodes are connected to a deployed static network and a web-based application allows data storage, remote control and monitoring of the complete network. Therefore, a complete and versatile remote web application with a locally implemented decision-making system is accomplished, which allows early detection of hazardous situations for exposed workers.", "title": "" }, { "docid": "ef598ba4f9a4df1f42debc0eabd1ead8", "text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.", "title": "" }, { "docid": "34a7ae3283c4f3bcb3e9afff2383de72", "text": "Latent variable models have been a preferred choice in conversational modeling compared to sequence-to-sequence (seq2seq) models which tend to generate generic and repetitive responses. Despite so, training latent variable models remains to be difficult. In this paper, we propose Latent Topic Conversational Model (LTCM) which augments seq2seq with a neural latent topic component to better guide response generation and make training easier. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model at each generation step. We study in details how the latent representation is learnt in both the vanilla model and LTCM. Our extensive experiments contribute to better understanding and training of conditional latent models for languages. Our results show that by sampling from the learnt latent representations, LTCM can generate diverse and interesting responses. In a subjective human evaluation, the judges also confirm that LTCM is the overall preferred option.", "title": "" }, { "docid": "17e1c5d4c8ff360cae2ec7ff2e8e7b4b", "text": "The mutual information term I(c;G(z, c)) requires the posterior P (c|G(z, c)), thus, it is hard to maximize directly. ST-GAN uses a technique called Variational Information Maximization Barber and Agakov (2003) by defining an auxiliary distribution Q(c|x) to approximate P (c|x) as InfoGAN Chen et al. (2016) does. The variational lower bound, LI(G,Q), of the local mutual information I(c;G(z, c)) is defined as:", "title": "" }, { "docid": "4d51e2a6f1ddfb15753117b0f22e0fad", "text": "We describe distributed algorithms for two widely-used topic models, namely the Latent Dirichlet Allocation (LDA) model, and the Hierarchical Dirichet Process (HDP) model. In our distributed algorithms the data is partitioned across separate processors and inference is done in a parallel, distributed fashion. We propose two distributed algorithms for LDA. 
The first algorithm is a straightforward mapping of LDA to a distributed processor setting. In this algorithm processors concurrently perform Gibbs sampling over local data followed by a global update of topic counts. The algorithm is simple to implement and can be viewed as an approximation to Gibbs-sampled LDA. The second version is a model that uses a hierarchical Bayesian extension of LDA to directly account for distributed data. This model has a theoretical guarantee of convergence but is more complex to implement than the first algorithm. Our distributed algorithm for HDP takes the straightforward mapping approach, and merges newly-created topics either by matching or by topic-id. Using five real-world text corpora we show that distributed learning works well in practice. For both LDA and HDP, we show that the converged test-data log probability for distributed learning is indistinguishable from that obtained with single-processor learning. Our extensive experimental results include learning topic models for two multi-million document collections using a 1024-processor parallel computer.", "title": "" } ]
scidocsrr
5c6a745ed4268e1fc763fcacf1403dc7
Learning to Integrate Occlusion-Specific Detectors for Heavily Occluded Pedestrian Detection
[ { "docid": "330329a7ce02b89373b935c99e4f1471", "text": "Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset.", "title": "" } ]
[ { "docid": "f267f44fe9463ac0114335959f9739fa", "text": "HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. A client selects and retrieves per segment the most suited quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from freezes in the playout due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, IETF has standardized HTTP/2, a new version of the HTTP protocol that provides new features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.", "title": "" }, { "docid": "5fd10b2277918255133f2e37a55e1103", "text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.", "title": "" }, { "docid": "1683cf711705b78b9465d8053a94b473", "text": "In this paper, we investigate the problem of counting rosette leaves from an RGB image, an important task in plant phenotyping. We propose a data-driven approach for this task generalized over different plant species and imaging setups. 
To accomplish this task, we use state-of-the-art deep learning architectures: a deconvolutional network for initial segmentation and a convolutional network for leaf counting. Evaluation is performed on the leaf counting challenge dataset at CVPPP-2017. Despite the small number of training samples in this dataset, as compared to typical deep learning image sets, we obtain satisfactory performance on segmenting leaves from the background as a whole and counting the number of leaves using simple data augmentation strategies. Comparative analysis is provided against methods evaluated on the previous competition datasets. Our framework achieves mean and standard deviation of absolute count difference of 1.62 and 2.30 averaged over all five test datasets.", "title": "" }, { "docid": "bf8216ad7caf73cf63b988993b439412", "text": "Clothing retrieval and clothing style recognition are important and practical problems. They have drawn a lot of attention in recent years. However, the clothing photos collected in existing datasets are mostly of front- or near-front view. There are no datasets designed to study the influences of different viewing angles on clothing retrieval performance. To address view-invariant clothing retrieval problem properly, we construct a challenge clothing dataset, called Multi-View Clothing dataset. This dataset not only has four different views for each clothing item, but also provides 264 attributes for describing clothing appearance. We adopt a state-of-the-art deep learning method to present baseline results for the attribute prediction and clothing retrieval performance. We also evaluate the method on a more difficult setting, cross-view exact clothing item retrieval. Our dataset will be made publicly available for further studies towards view-invariant clothing retrieval.", "title": "" }, { "docid": "73af8236cc76e386aa76c6d20378d774", "text": "Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase1. The constructed gazetteers contains approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types, person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).", "title": "" }, { "docid": "98533f4c358f7999ab37bda31575e68e", "text": "Predicting query execution time is useful in many database management issues including admission control, query scheduling, progress monitoring, and system sizing. Recently the research community has been exploring the use of statistical machine learning approaches to build predictive models for this task. An implicit assumption behind this work is that the cost models used by query optimizers are insufficient for query execution time prediction. 
In this paper we challenge this assumption and show while the simple approach of scaling the optimizer's estimated cost indeed fails, a properly calibrated optimizer cost model is surprisingly effective. However, even a well-tuned optimizer cost model will fail in the presence of errors in cardinality estimates. Accordingly we investigate the novel idea of spending extra resources to refine estimates for the query plan after it has been chosen by the optimizer but before execution. In our experiments we find that a well calibrated query optimizer model along with cardinality estimation refinement provides a low overhead way to provide estimates that are always competitive and often much better than the best reported numbers from the machine learning approaches.", "title": "" }, { "docid": "493c45304bd5b7dd1142ace56e94e421", "text": "While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.", "title": "" }, { "docid": "9a454ccc77edb739a327192dafd5d974", "text": "In the present time, due to attractive features of cloud computing, the massive amount of data has been stored in the cloud. Though cloud-based services offer many benefits but privacy and security of the sensitive data is a big issue. These issues are resolved by storing sensitive data in encrypted form. Encrypted storage protects the data against unauthorized access, but it weakens some basic and important functionality like search operation on the data, i.e. searching the required data by the user on the encrypted data requires data to be decrypted first and then search, so this eventually, slows down the process of searching. To achieve this many encryption schemes have been proposed, however, all of the schemes handle exact Query matching but not Similarity matching. While user uploads the file, features are extracted from each document. When the user fires a query, trapdoor of that query is generated and search is performed by finding the correlation among documents stored on cloud and query keyword, using Locality Sensitive Hashing.", "title": "" }, { "docid": "520e87ff9133c15f534b3e8eccb048a3", "text": "The greater trochanter of the femur is a bony protuberance arising at the femoral neck and shaft interface. The greater trochanter has 4 distinct facets (anterior, superoposterior, lateral, and posterior) that serve for attachments of the abductor tendons and/or sites for bursae [1] (Figures 1 and 2). 
The gluteus minimus and medius muscles arise from the external iliac fossa and their corresponding tendons insert onto the greater trochanter (Figures 1-3). The gluteus medius muscle almost completely covers the gluteus minimus muscle. The gluteus minimus tendon attaches to the anterior facet (main insertion) (Figures 1-3) and to the anterior and superior hip joint capsule. From posterior to anterior, the gluteus medius tendon attaches to the superoposterior facet (main tendinous attachment), the inferior aspect of the lateral facet, and more anteriorly to the gluteus minimus tendon [2]. The posterior facet is devoid of tendon attachments (Figures 1-3). A variety of bursae have been described in the vicinity of the greater trochanter [3]. The 3 most consistently identified bursae are the subgluteus minimus, subgluteus medius, and subgluteus maximus bursae. The subgluteus minimus bursa lies deep to the gluteus minimus tendon. The subgluteus medius bursa is located between the lateral insertion of the gluteus medius tendon and the superior part of the lateral facet (this portion of the lateral facet is devoid of tendon insertion and is known as the trochanteric bald spot) [4] (Figure 1). The largest bursa is the subgluteus maximus. This bursa covers the posterior facet and lies deep to the gluteus maximus muscle (Figure 4).", "title": "" }, { "docid": "6825c5294da2dfe7a26b6ac89ba8f515", "text": "Restoring natural walking for amputees has been increasingly investigated because of demographic evolution, leading to increased number of amputations, and increasing demand for independence. The energetic disadvantages of passive pros-theses are clear, and active prostheses are limited in autonomy. This paper presents the simulation, design and development of an actuated knee-ankle prosthesis based on a variable stiffness actuator with energy transfer from the knee to the ankle. This approach allows a good approximation of the joint torques and the kinematics of the human gait cycle while maintaining compliant joints and reducing energy consumption during level walking. This first prototype consists of a passive knee and an active ankle, which are energetically coupled to reduce the power consumption.", "title": "" }, { "docid": "3a4da0cf9f4fdcc1356d25ea1ca38ca4", "text": "Almost all of the existing work on Named Entity Recognition (NER) consists of the following pipeline stages – part-of-speech tagging, segmentation, and named entity type classification. The requirement of hand-labeled training data on these stages makes it very expensive to extend to different domains and entity classes. Even with a large amount of hand-labeled data, existing techniques for NER on informal text, such as social media, perform poorly due to a lack of reliable capitalization, irregular sentence structure and a wide range of vocabulary. In this paper, we address the lack of hand-labeled training data by taking advantage of weak super vision signals. We present our approach in two parts. First, we propose a novel generative model that combines the ideas from Hidden Markov Model (HMM) and n-gram language models into what we call an N-gram Language Markov Model (NLMM). Second, we utilize large-scale weak supervision signals from sources such as Wikipedia titles and the corresponding click counts to estimate parameters in NLMM. Our model is simple and can be implemented without the use of Expectation Maximization or other expensive iterative training techniques. 
Even with this simple model, our approach to NER on informal text outperforms existing systems trained on formal English and matches state-of-the-art NER systems trained on hand-labeled Twitter messages. Because our model does not require hand-labeled data, we can adapt our system to other domains and named entity classes very easily. We demonstrate the flexibility of our approach by successfully applying it to the different domain of extracting food dishes from restaurant reviews with very little extra work.", "title": "" }, { "docid": "11d3a9ab56e27873413a8ce5519b5a5c", "text": "In this work we propose a novel approach to perform segmentation by leveraging the abstraction capabilities of convolutional neural networks (CNNs). Our method is based on Hough voting, a strategy that allows for fully automatic localisation and segmentation of the anatomies of interest. This approach does not only use the CNN classification outcomes, but it also implements voting by exploiting the features produced by the deepest portion of the network. We show that this learning-based segmentation method is robust, multi-region, flexible and can be easily adapted to different modalities. In the attempt to show the capabilities and the behaviour of CNNs when they are applied to medical image analysis, we perform a systematic study of the performances of six different network architectures, conceived according to state-of-the-art criteria, in various situations. We evaluate the impact of both different amount of training data and different data dimensionality (2D, 2.5D and 3D) on the final results. We show results on both MRI and transcranial US volumes depicting respectively 26 regions of the basal ganglia and the midbrain. © 2017 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "4a6d48bd0f214a94f2137f424dd401eb", "text": "During the past decade, scientific research has provided new insight into the development from an acute, localised musculoskeletal disorder towards chronic widespread pain/fibromyalgia (FM). Chronic widespread pain/FM is characterised by sensitisation of central pain pathways. An in-depth review of basic and clinical research was performed to design a theoretical framework for manual therapy in these patients. It is explained that manual therapy might be able to influence the process of chronicity in three different ways. (I) In order to prevent chronicity in (sub)acute musculoskeletal disorders, it seems crucial to limit the time course of afferent stimulation of peripheral nociceptors. (II) In the case of chronic widespread pain and established sensitisation of central pain pathways, relatively minor injuries/trauma at any locations are likely to sustain the process of central sensitisation and should be treated appropriately with manual therapy accounting for the decreased sensory threshold. Inappropriate pain beliefs should be addressed and exercise interventions should account for the process of central sensitisation. (III) However, manual therapists ignoring the processes involved in the development and maintenance of chronic widespread pain/FM may cause more harm then benefit to the patient by triggering or sustaining central sensitisation.", "title": "" }, { "docid": "2eb542371eac4ce8fabed599bd5cd2c6", "text": "The purpose of this study was to examine the influence of past travel experience (i.e., number of trips and number of days away from home in last year), and on mature travelers’ quality of life (i.e., self-perceived health and global life satisfaction). 
A total number of 217 respondents (50+) in a southern state were used in this study. Path analysis (PROC CALIS in SAS) was performed to test the proposed model. An estimation of the proposed theoretical model revealed that the model fit the data. However, the model should be further examined and applied with caution.", "title": "" }, { "docid": "beb59e93d6e9e4d27cba95b428faec19", "text": "Landslides cause lots of damage to life and property world over. There has been research in machine-learning that aims to predict landslides based on the statistical analysis of historical landslide events and its triggering factors. However, prediction of landslides suffers from a class-imbalance problem as landslides and land-movement are very rare events. In this paper, we apply state-of-the-art techniques to correct the class imbalance in landslide datasets. More specifically, to overcome the class-imbalance problem, we use different synthetic and oversampling techniques to a real-world landslide data collected from the Chandigarh - Manali highway. Also, we apply several machine-learning algorithms to the landslide data set for predicting landslides and evaluating our algorithms. Different algorithms have been assessed using techniques like the area under the ROC curve (AUC) and sensitivity index (d'). Results suggested that random forest algorithm performed better compared to other classification techniques like neural networks, logistic regression, support vector machines, and decision trees. Furthermore, among class-imbalance methods, the Synthetic Minority Oversampling Technique with iterative partitioning filter (SMOTE-IPF) performed better than other techniques. We highlight the implications of our results and methods for predicting landslides in the real world.", "title": "" }, { "docid": "56ccaaf0acd4b1f654a86fff4c2ebdb2", "text": "Chemically converted graphene aerogels with ultralight density and high compressibility are prepared by diamine-mediated functionalization and assembly, followed by microwave irradiation. The resulting graphene aerogels with density as low as 3 mg cm(-3) show excellent resilience and can completely recover after more than 90% compression. The ultralight graphene aerogels possessing high elasticity are promising as compliant and energy-absorbing materials.", "title": "" }, { "docid": "64f4a275dce1963b281cd0143f5eacdc", "text": "Camera shake during exposure time often results in spatially variant blur effect of the image. The non-uniform blur effect is not only caused by the camera motion, but also the depth variation of the scene. The objects close to the camera sensors are likely to appear more blurry than those at a distance in such cases. However, recent non-uniform deblurring methods do not explicitly consider the depth factor or assume fronto-parallel scenes with constant depth for simplicity. While single image non-uniform deblurring is a challenging problem, the blurry results in fact contain depth information which can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution using matting to partition the layers and an expectation-maximization scheme to solve this problem. This approach largely reduces the number of unknowns and makes the problem tractable. 
Experiments on challenging examples demonstrate that both depth and camera shake removal can be well addressed within the unified framework.", "title": "" }, { "docid": "5491c265a1eb7166bb174097b49d258e", "text": "The importance of service quality for business performance has been recognized in the literature through the direct effect on customer satisfaction and the indirect effect on customer loyalty. The main objective of the study was to measure hotels' service quality performance from the customer perspective. To do so, a performance-only measurement scale (SERVPERF) was administered to customers stayed in three, four and five star hotels in Aqaba and Petra. Although the importance of service quality and service quality measurement has been recognized, there has been limited research that has addressed the structure and antecedents of the concept for the hotel industry. The clarification of the dimensions is important for managers in the hotel industry as it identifies the bundles of service attributes consumers find important. The results of the study demonstrate that SERVPERF is a reliable and valid tool to measure service quality in the hotel industry. The instrument consists of five dimensions, namely \"tangibles\", \"responsiveness\", \"empathy\", \"assurance\" and \"reliability\". Hotel customers are expecting more improved services from the hotels in all service quality dimensions. However, hotel customers have the lowest perception scores on empathy and tangibles. In the light of the results, possible managerial implications are discussed and future research subjects are recommended.", "title": "" }, { "docid": "4e824eed79769b706655b6e6a7d39019", "text": "Over the years, researchers have developed many efficient techniques, such as the planners FF (Hoffmann and Nebel 2001), LPG (Gerevini, Saetti, and Serina 2003), SATPLAN (Kautz, Selman, and Hoffmann 2006), SGPLAN (Hsu et al. 2006), and others, for planning in classical (i.e., deterministic) domains. Some of these planning techniques have been adapted for planning under uncertainty and provide some impressive performance results. For example, the FF-REPLAN (Yoon, Fern, and Givan 2007) is a reactive online planning algorithm that has been demonstrated to be very effective for many MDP planning problems. As another example, planning-graph techniques inspired by FF has also been generalized to planning under nondeterminism and partial observability (Hoffmann and Brafman 2005; Bryce, Kambhampati, and Smith 2006). Finally, the conformantplanning approach of (Palacios and Geffner 2006; 2007) describes how to translate any conformant planning problem into a classical problem for a classical planner. Their approach generates a single translation, and can only find conformant solutions. In this paper, we describe our work that investigates a somewhat middle-ground between the previous approaches described above. In particular, we present an MDP planning algorithm, called RFF, for generating offline robust policies in probabilistic domains. Like both of the above approaches, RFF first creates a relaxation of an MDP planning problem by translating it into a deterministic planning problem in which each action corresponds to an effect of a probabilistic action in the original MDP and for every such effect in the original MDP there is a deterministic action, and there are no probabilities, costs, and rewards. 
In this relaxed planning problem, RFF computes a policy by generating successive execution paths leading to the goal from the initial states by using FF. The policy returned by RFF has a low probability of failing. In our approach, we interpret this not as the probability of reaching a goal but as the probability of causing any replanning during execution. In this work, we use a Monte-Carlo simulation in order to compute the probability that a partial policy would fail during execution (i.e., the probability that the execution of", "title": "" }, { "docid": "f3fb98614d1d8ff31ca977cbf6a15a9c", "text": "Paraphrase Identification and Semantic Similarity are two different yet well related tasks in NLP. There are many studies on these two tasks extensively on structured texts in the past. However, with the strong rise of social media data, studying these tasks on unstructured texts, particularly, social texts in Twitter is very interesting as it could be more complicated problems to deal with. We investigate and find a set of simple features which enables us to achieve very competitive performance on both tasks in Twitter data. Interestingly, we also confirm the significance of using word alignment techniques from evaluation metrics in machine translation in the overall performance of these tasks.", "title": "" } ]
scidocsrr
a5449215413c4d7dc4400bafe7d556b8
MuJoCo: A physics engine for model-based control
[ { "docid": "473dc6c3b4e5d34b469be6b8e2dcea5f", "text": "Trajectory optimization is done most efficiently when an inverse dynamics model is available. Here we develop the first model of contact dynamics defined in both the forward and inverse directions. The contact impulse is the solution to a convex optimization problem: minimize kinetic energy in contact space subject to non-penetration and friction-cone constraints. We use a custom interior-point method to make the optimization problem unconstrained; this is key to defining the forward and inverse dynamics in a consistent way. The resulting model has a parameter which sets the amount of contact smoothing, facilitating continuation methods for optimization. We implemented the proposed contact solver in our new physics engine (MuJoCo). A full Newton step of trajectory optimization for a 3D walking gait takes only 160 msec, on a 12-core PC.", "title": "" } ]
[ { "docid": "b5f9c4a71f5e752211f8467987a0f214", "text": "In fine-grained action (object manipulation) recognition, it is important to encode object semantic (contextual) information, i.e., which object is being manipulated and how it is being operated. However, previous methods for action recognition often represent the semantic information in a global and coarse way and therefore cannot cope with fine-grained actions. In this work, we propose a representation and classification pipeline which seamlessly incorporates localized semantic information into every processing step for fine-grained action recognition. In the feature extraction stage, we explore the geometric information between local motion features and the surrounding objects. In the feature encoding stage, we develop a semantic-grouped locality-constrained linear coding (SG-LLC) method that captures the joint distributions between motion and object-in-use information. Finally, we propose a semantic-aware multiple kernel learning framework (SA-MKL) by utilizing the empirical joint distribution between action and object type for more discriminative action classification. Extensive experiments are performed on the large-scale and difficult fine-grained MPII cooking action dataset. The results show that by effectively accumulating localized semantic information into the action representation and classification pipeline, we significantly improve the fine-grained action classification performance over the existing methods.", "title": "" }, { "docid": "3c3ae987e018322ca45b280c3d01eba8", "text": "Boundary prediction in images as well as video has been a very active topic of research and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on established realworld video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiard scenario without any assumptions of a strong parametric model or any object notion. We argue that our model has with minimalistic model assumptions derived a notion of “intuitive physics” that can be applied to novel scenes.", "title": "" }, { "docid": "7e8aa83749e09fd4838e9b345fc61de3", "text": "Though neuropsychological data indicate that the right hemisphere (RH) plays a major role in metaphor processing, other studies suggest that, at least during some phases of this processing, a RH advantage may not exist. The present study explores, through a temporally agile neural signal--the event-related potentials (ERPs)--, and through source-localization algorithms applied to ERP recordings, whether the crucial phase of metaphor comprehension presents or not a RH advantage. Participants (n=24) were submitted to a S1-S2 experimental paradigm. S1 consisted of visually presented metaphoric sentences (e.g., \"Green lung of the city\"), followed by S2, which consisted of words that could (i.e., \"Park\") or could not (i.e., \"Semaphore\") be defined by S1. 
ERPs elicited by S2 were analyzed using temporal principal component analysis (tPCA) and source-localization algorithms. These analyses revealed that metaphorically related S2 words showed significantly higher N400 amplitudes than non-related S2 words. Source-localization algorithms showed differential activity between the two S2 conditions in the right middle/superior temporal areas. These results support the existence of an important RH contribution to (at least) one phase of metaphor processing and, furthermore, implicate the temporal cortex with respect to that contribution.", "title": "" }, { "docid": "70df369be2c95afd04467cd291e60175", "text": "In this paper, we introduce two novel metric learning algorithms, χ-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ-LMNN, obtain best results in 19 out of 20 learning settings.", "title": "" }, { "docid": "d820fa9b47c51d4ab35fc8ebbe4a5ba7", "text": "In current era of modernization and globalization it is observed that individuals are in the quest for mental peace and spiritual comfort even though they have achieved many scientific advancements. The major reason behind this uncomfortable condition is that they have indulged in exterior world and became careless about the religion and its practices. The most alarming fact regarding faith inculcation is the transformation of wrong, vague and ambiguous concepts to children at early ages which prolongs for whole life. Childhood is the period when concepts of right and wrong are strongly developed and most important agent that contributes to this concept making is parents. Keeping in mind this fact, the present study has highlighted the role of family in providing religious and moral values to their children. The qualitative approach has been used by adopting purposive sampling method. The focus group discussion has been conducted with families having an urban background. Some surprising facts has led the researcher to the conclusion of deterioration in the family as the social institution as a major cause which is resulted The Role of Family in Teaching Religious Moral Values to their Children 259 into not only a moral decay in the society but also the reason of socio-economic problems in country.", "title": "" }, { "docid": "4b5336c5f2352fb7cd79b19d2538049b", "text": "Energy-efficient computation is critical if we are going to continue to scale performance in power-limited systems. For floating-point applications that have large amounts of data parallelism, one should optimize the throughput/mm2 given a power density constraint. We present a method for creating a trade-off curve that can be used to estimate the maximum floating-point performance given a set of area and power constraints. 
Looking at FP multiply-add units and ignoring register and memory overheads, we find that in a 90 nm CMOS technology at 1 W/mm2, one can achieve a performance of 27 GFlops/mm2 single precision, and 7.5 GFlops/mm2 double precision. Adding register file overheads reduces the throughput by less than 50 percent if the compute intensity is high. Since the energy of the basic gates is no longer scaling rapidly, to maintain constant power density with scaling requires moving the overall FP architecture to a lower energy/performance point. A 1 W/mm2 design at 90 nm is a \"high-energy\" design, so scaling it to a lower energy design in 45 nm still yields a 7× performance gain, while a more balanced 0.1 W/mm2 design only speeds up by 3.5× when scaled to 45 nm. Performance scaling below 45 nm rapidly decreases, with a projected improvement of only ~3x for both power densities when scaling to a 22 nm technology.", "title": "" }, { "docid": "766dad31b4d67f4a4732384f8ca6b8a6", "text": "Ascites is a pathologic accumulation of peritoneal fluid commonly observed in decompensated cirrhotic states. Its causes are multi-factorial, but principally involve significant volume and hormonal dysregulation in the setting of portal hypertension. The diagnosis of ascites is considered in cirrhotic patients given a constellation of clinical and laboratory findings, and ultimately confirmed, with insight into etiology, by imaging and paracentesis procedures. Treatment for ascites is multi-modal including dietary sodium restriction, pharmacologic therapies, diagnostic and therapeutic paracentesis, and in certain cases transjugular intra-hepatic portosystemic shunt. Ascites is associated with numerous complications including spontaneous bacterial peritonitis, hepato-hydrothorax and hepatorenal syndrome. Given the complex nature of ascites and associated complications, it is not surprising that it heralds increased morbidity and mortality in cirrhotic patients and increased cost-utilization upon the health-care system. This review will detail the pathophysiology of cirrhotic ascites, common complications derived from it, and pertinent treatment modalities.", "title": "" }, { "docid": "c890c635dd0f2dcb6827f59707b5dcd4", "text": "We present two families of reflective surfaces that are capable of providing a wide field of view, and yet still approximate a perspective projection to a high degree. These surfaces are derived by considering a plane perpendicular to the axis of a surface of revolution and finding the equations governing the distortion of the image of the plane in this surface. We then view this relation as a differential equation and prescribe the distortion term to be linear. By choosing appropriate initial conditions for the differential equation and solving it numerically, we derive the surface shape and obtain a precise estimate as to what degree the resulting sensor can approximate a perspective projection. Thus these surfaces act as computational sensors, allowing for a wide-angle perspective view of a scene without processing the image in software. The applications of such a sensor should be numerous, including surveillance, robotics and traditional photography. Recently, many researchers in the robotics and vision community have begun to consider visual sensors that are able to obtain wide fields of view. Such devices are the natural solution to various difficulties encountered with conventional imaging systems.
The two most common means of obtaining wide fields of view are fish-eye lenses and reflective surfaces, also known as catoptrics. When catoptrics are combined with conventional lens systems, known as dioptrics, the resulting sensors are known as catadioptrics. The possible uses of these systems include applications such as robot control and surveillance. In this paper we will consider only catadioptric based sensors. Often such systems consist of a camera pointing at a convex mirror, as in figure (1). How to interpret and make use of the visual information obtained by such systems, e.g. how they should be used to control robots, is not at all obvious. There are infinitely many different shapes that a mirror can have, and at least two different camera models (perspective and orthographic projection) with which to combine each mirror.", "title": "" }, { "docid": "13a4dccde0ae401fc39b50469a0646b6", "text": "The stability theorem for persistent homology is a central result in topological data analysis. While the original formulation of the result concerns the persistence barcodes of R-valued functions, the result was later cast in a more general algebraic form, in the language of persistence modules and interleavings. In this paper, we establish an analogue of this algebraic stability theorem for zigzag persistence modules. To do so, we functorially extend each zigzag persistence module to a two-dimensional persistence module, and establish an algebraic stability theorem for these extensions. One part of our argument yields a stability result for free two-dimensional persistence modules. As an application of our main theorem, we strengthen a result of Bauer et al. on the stability of the persistent homology of Reeb graphs. Our main result also yields an alternative proof of the stability theorem for level set persistent homology of Carlsson et al.", "title": "" }, { "docid": "6bd63f7176788e31c704118c6070f0e2", "text": "In this paper, we present and analyze a simple and robust spectral algorithm for the stochastic block model with k blocks, for any k fixed. Our algorithm works with graphs having constant edge density, under an optimal condition on the gap between the density inside a block and the density between the blocks. As a co-product, we settle an open question posed by Abbe et al. concerning censor block models.", "title": "" }, { "docid": "be3aef7708c1d1ea8db33f3ec0021919", "text": "Tuberculosis [TB] has afflicted numerous nations in the world. As per a report by the World Health Organization [WHO], an estimated 1.4 million TB deaths in 2015 and an additional 0.4 million deaths resulting from TB disease among people living with HIV, were observed. Most of the TB deaths can be prevented if it is detected at an early stage. The existing processes of diagnosis like blood tests or sputum tests are not only tedious but also take a long time for analysis and cannot differentiate between different drug resistant stages of TB. The need to find newer prompt methods for disease detection has been aided by the latest Artificial Intelligence [AI] tools. Artificial Neural Network [ANN] is one of the important tools that is being used widely in diagnosis and evaluation of medical conditions. This review aims at providing brief introduction to various AI tools that are used in TB detection and gives a detailed description about the utilization of ANN as an efficient diagnostic technique. The paper also provides a critical assessment of ANN and the existing techniques for their diagnosis of TB.
Researchers and Practitioners in the field are looking forward to use ANN and other upcoming AI tools such as Fuzzy-logic, genetic algorithms and artificial intelligence simulation as a promising current and future technology tools towards tackling the global menace of Tuberculosis. Latest advancements in the diagnostic field include the combined use of ANN with various other AI tools like the Fuzzy-logic, which has led to an increase in the efficacy and specificity of the diagnostic techniques.", "title": "" }, { "docid": "5faef1f7afae4ccb3a701a11f60ac80b", "text": "State of the art deep learning models have made steady progress in the fields of computer vision and natural language processing, at the expense of growing model sizes and computational complexity. Deploying these models on low power and mobile devices poses a challenge due to their limited compute capabilities and strict energy budgets. One solution that has generated significant research interest is deploying highly quantized models that operate on low precision inputs and weights less than eight bits, trading off accuracy for performance. These models have a significantly reduced memory footprint (up to 32x reduction) and can replace multiply-accumulates with bitwise operations during compute intensive convolution and fully connected layers. Most deep learning frameworks rely on highly engineered linear algebra libraries such as ATLAS or Intel’s MKL to implement efficient deep learning operators. To date, none of the popular deep learning directly support low precision operators, partly due to a lack of optimized low precision libraries. In this paper we introduce a work flow to quickly generate high performance low precision deep learning operators for arbitrary precision that target multiple CPU architectures and include optimizations such as memory tiling and vectorization. We present an extensive case study on low power ARM Cortex-A53 CPU, and show how we can generate 1-bit, 2-bit convolutions with speedups up to 16x over an optimized 16-bit integer baseline and 2.3x better than handwritten implementations.", "title": "" }, { "docid": "e7b42688ce3936604aefa581802040a4", "text": "Identity management through biometrics offer potential advantages over knowledge and possession based methods. A wide variety of biometric modalities have been tested so far but several factors paralyse the accuracy of mono modal biometric systems. Usually, the analysis of multiple modalities offers better accuracy. An extensive review of biometric technology is presented here. Besides the mono modal systems, the article also discusses multi modal biometric systems along with their architecture and information fusion levels. The paper along with the exemplary evidences highlights the potential for biometric technology, market value and prospects. Keywords— Biometrics, Fingerprint, Face, Iris, Retina, Behavioral biometrics, Gait, Voice, Soft biometrics, Multi-modal biometrics.", "title": "" }, { "docid": "07038d108e0fe7bc32e3a88c749e6dfd", "text": "People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics. The prevalence of the belief and its unfortunate consequences for psychological research are illustrated by the responses of professional psychologists to a questionnaire concerning research decisions. 
", "title": "" }, { "docid": "2cc93a5ba7bfd29578b7fe183c7f2fe6", "text": "Erasure coding schemes provide higher durability at lower storage cost, and thus constitute an attractive alternative to replication in distributed storage systems, in particular for storing rarely accessed \"cold\" data. These schemes, however, require an order of magnitude higher recovery bandwidth for maintaining a constant level of durability in the face of node failures. In this paper we propose lazy recovery, a technique to reduce recovery bandwidth demands down to the level of replicated storage. The key insight is that a careful adjustment of recovery rate substantially reduces recovery bandwidth, while keeping the impact on read performance and data durability low. We demonstrate the benefits of lazy recovery via extensive simulation using a realistic distributed storage configuration and published component failure parameters. For example, when applied to the commonly used RS(14, 10) code, lazy recovery reduces repair bandwidth by up to 76% even below replication, while increasing the amount of degraded stripes by 0.1 percentage points. Lazy recovery works well with a variety of erasure coding schemes, including the recently introduced bandwidth efficient codes, achieving up to a factor of 2 additional bandwidth savings.", "title": "" }, { "docid": "839b6bd24c7e020b0feef197cd6d9f92", "text": "We consider training a deep neural network to generate samples from an unknown distribution given i.i.d. data. We frame learning as an optimization minimizing a two-sample test statistic—informally speaking, a good generator network produces samples that cause a two-sample test to fail to reject the null hypothesis. As our two-sample test statistic, we use an unbiased estimate of the maximum mean discrepancy, which is the centerpiece of the nonparametric kernel two-sample test proposed by Gretton et al. [2]. We compare to the adversarial nets framework introduced by Goodfellow et al. [1], in which learning is a two-player game between a generator network and an adversarial discriminator network, both trained to outwit the other. From this perspective, the MMD statistic plays the role of the discriminator. In addition to empirical comparisons, we prove bounds on the generalization error incurred by optimizing the empirical MMD.", "title": "" }, { "docid": "778cdfbae36117c9119099ab2a4b6fca", "text": "Smartphone notifications provide application-specific information in real-time, but could distract users from in-person social interactions when delivered at inopportune moments. We explore breakpoint-based notification management, in which the smartphone defers notifications until an opportune moment. With a video survey where participants selected appropriate moments for notifications from a video-recorded social interaction, we identify four breakpoint types: long silence, a user leaving the table, others using smartphones, and a user left alone. We introduce a Social Context-Aware smartphone Notification system, SCAN, that uses built-in sensors to detect social context and identifies breakpoints to defer smartphone notifications until a breakpoint. We conducted a controlled study with ten friend groups who had SCAN installed on their smartphones while dining at a restaurant.
Results show that SCAN accurately detects breakpoints (precision=92.0%, recall=82.5%), and reduces notification interruptions by 54.1%. Most participants reported that SCAN helped them to focus better on in-person social interaction and found selected breakpoints appropriate.", "title": "" }, { "docid": "753d840a62fc4f4b57f447afae07ba84", "text": "Feature selection has been proven to be effective and efficient in preparing high-dimensional data for data mining and machine learning problems. Since real-world data is usually unlabeled, unsupervised feature selection has received increasing attention in recent years. Without label information, unsupervised feature selection needs alternative criteria to define feature relevance. Recently, data reconstruction error emerged as a new criterion for unsupervised feature selection, which defines feature relevance as the capability of features to approximate original data via a reconstruction function. Most existing algorithms in this family assume predefined, linear reconstruction functions. However, the reconstruction function should be data dependent and may not always be linear especially when the original data is high-dimensional. In this paper, we investigate how to learn the reconstruction function from the data automatically for unsupervised feature selection, and propose a novel reconstruction-based unsupervised feature selection framework REFS, which embeds the reconstruction function learning process into feature selection. Experiments on various types of realworld datasets demonstrate the effectiveness of the proposed framework REFS.", "title": "" }, { "docid": "bbd5a204986f546b00dbcba8fbca75be", "text": "We present a novel keyword spotting (KWS) system that uses contextual automatic speech recognition (ASR). For voice-activated devices, it is common that a KWS system is run on the device in order to quickly detect a trigger phrase (e.g. “Ok Google”). After the trigger phrase is detected, the audio corresponding to the voice command that follows is streamed to the server. The audio is transcribed by the server-side ASR system and semantically processed to generate a response which is sent back to the device. Due to limited resources on the device, the device KWS system might introduce false accepts (FA) and false rejects (FR) that can cause an unsatisfactory user experience. We describe a system that uses server-side contextual ASR and trigger phrase non-terminals to improve overall KWS accuracy. We show that this approach can significantly reduce the FA rate (by 89%) while minimally increasing the FR rate (by 0.2%). Furthermore, we show that this system significantly improves the ASR quality, reducing Word Error Rate (WER) (by 10% to 50% relative), and allows the user to speak seamlessly, without pausing between the trigger phrase and the voice command.", "title": "" }, { "docid": "4bc7687ba89699a537329f37dda4e74d", "text": "At the same time as cities are growing, their share of older residents is increasing. To engage and assist cities to become more “age-friendly,” the World Health Organization (WHO) prepared the Global Age-Friendly Cities Guide and a companion “Checklist of Essential Features of Age-Friendly Cities”. 
In collaboration with partners in 35 cities from developed and developing countries, WHO determined the features of age-friendly cities in eight domains of urban life: outdoor spaces and buildings; transportation; housing; social participation; respect and social inclusion; civic participation and employment; communication and information; and community support and health services. In 33 cities, partners conducted 158 focus groups with persons aged 60 years and older from lower- and middle-income areas of a locally defined geographic area (n = 1,485). Additional focus groups were held in most sites with caregivers of older persons (n = 250 caregivers) and with service providers from the public, voluntary, and commercial sectors (n = 515). No systematic differences in focus group themes were noted between cities in developed and developing countries, although the positive, age-friendly features were more numerous in cities in developed countries. Physical accessibility, service proximity, security, affordability, and inclusiveness were important characteristics everywhere. Based on the recurring issues, a set of core features of an age-friendly city was identified. The Global Age-Friendly Cities Guide and companion “Checklist of Essential Features of Age-Friendly Cities” released by WHO serve as reference for other communities to assess their age readiness and plan change.", "title": "" } ]
scidocsrr
ab09694ee248b8430aab5e77271eddfd
Coarse-to-Fine Description for Fine-Grained Visual Categorization
[ { "docid": "892661d87138d49aab2a54b7557a7021", "text": "Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the CaltechUCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.", "title": "" } ]
[ { "docid": "4d1ea9da68cc3498b413371f12c90433", "text": "Transfer Learning (TL) plays a crucial role when a given dataset has insufficient labeled examples to train an accurate model. In such scenarios, the knowledge accumulated within a model pre-trained on a source dataset can be transferred to a target dataset, resulting in the improvement of the target model. Though TL is found to be successful in the realm of imagebased applications, its impact and practical use in Natural Language Processing (NLP) applications is still a subject of research. Due to their hierarchical architecture, Deep Neural Networks (DNN) provide flexibility and customization in adjusting their parameters and depth of layers, thereby forming an apt area for exploiting the use of TL. In this paper, we report the results and conclusions obtained from extensive empirical experiments using a Convolutional Neural Network (CNN) and try to uncover thumb rules to ensure a successful positive transfer. In addition, we also highlight the flawed means that could lead to a negative transfer. We explore the transferability of various layers and describe the effect of varying hyper-parameters on the transfer performance. Also, we present a comparison of accuracy value and model size against state-of-the-art methods. Finally, we derive inferences from the empirical results and provide best practices to achieve a successful positive transfer.", "title": "" }, { "docid": "c366303728d2a8ee47fe4cbfe67dec24", "text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.", "title": "" }, { "docid": "849280927a79cee3f6580ec837d89797", "text": "BACKGROUND\nGlenohumeral pain and rotator cuff tendinopathy (RCT) are common musculoskeletal complaints with high prevalence among working populations. The primary proposed pathophysiologic mechanisms are sub-acromial RC tendon impingement and reduced tendon blood flow. Some sleep postures may increase subacromial pressure, potentially contributing to these postulated mechanisms. This study uses a large population of workers to investigate whether there is an association between preferred sleeping position and prevalence of: (1) shoulder pain, and (2) rotator cuff tendinopathy.\n\n\nMETHODS\nA cross-sectional analysis was performed on baseline data from a multicenter prospective cohort study. Participants were 761 workers who were evaluated by questionnaire using a body diagram to determine the presence of glenohumeral pain within 30 days prior to enrollment. 
The questionnaire also assessed primary and secondary preferred sleep position(s) using 6 labeled diagrams. All workers underwent a structured physical examination to determine whether RCT was present. For this study, the case definition of RCT was glenohumeral pain plus at least one of a positive supraspinatus test, painful arc and/or Neer's test. Prevalence of glenohumeral pain and RCT were individually calculated for the primary and secondary sleep postures and odds ratios were calculated.\n\n\nRESULTS\nAge, sex, Framingham cardiovascular risk score and BMI had significant associations with glenohumeral pain. For rotator cuff tendinopathy, increasing age, Framingham risk score and Hand Activity Level (HAL) showed significant associations. The sleep position anticipated to have the highest risk of glenohumeral pain and RCT was paradoxically associated with a decreased prevalence of glenohumeral pain and also trended toward being protective for RCT. Multivariable logistic regression showed no further significant associations.\n\n\nCONCLUSION\nThis cross-sectional study unexpectedly found a reduced association between one sleep posture and glenohumeral pain. This cross-sectional study may be potentially confounded, by participants who are prone to glenohumeral pain and RCT may have learned to avoid sleeping in the predisposing position. Longitudinal studies are needed to further evaluate a possible association between glenohumeral pain or RCT and sleep posture as a potential risk factor.", "title": "" }, { "docid": "907de88b781d58610b0a09313014017f", "text": "This study was conducted to determine the seroprevalence of antibodies against Newcastle disease virus (NDV), Chicken infectious anemia virus (CIAV) and Avian influenza virus (AIV) in indigenous chickens in Grenada, West Indies. Indigenous chickens are kept for eggs and meat for either domestic consumption or local sale. These birds are usually kept in the backyard of the house with little or no shelter. The mean size of the flock per household was 14 birds (range 5-40 birds). Blood was collected from 368 birds from all the six parishes of Grenada and serum samples were tested for antibodies against NDV, CIAV and AIV using commercial enzyme-linked immunosorbent assay (ELISA) kits. The seroprevalence of antibodies against NDV, CIA and AI was 66.3% (95% CI; 61.5% to 71.1%), 59.5% (95% CI; 54.4% to 64.5%) and 10.3% (95% CI; 7.2% to 13.4%), respectively. Since indigenous chickens in Grenada are not vaccinated against poultry pathogens, these results indicate exposure of chickens to NDV, AIV and CIAV Indigenous chickens are thus among the risk factors acting as vectors of pathogens that can threaten commercial poultry and other avian species in Grenada", "title": "" }, { "docid": "10a0f370ad3e9c3d652e397860114f90", "text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. 
In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.", "title": "" }, { "docid": "3fdd81a3e2c86f43152f72e159735a42", "text": "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strength and weakness, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.", "title": "" }, { "docid": "71c6c714535ae1bfd749cbb8bbb34f5e", "text": "This paper tackles the problem of relative pose estimation between two monocular camera images in textureless scenes. Due to a lack of point matches, point-based approaches such as the 5-point algorithm often fail when used in these scenarios. Therefore we investigate relative pose estimation from line observations. We propose a new approach in which the relative pose estimation from lines is extended by a 3D line direction estimation step. The estimated line directions serve to improve the robustness and the efficiency of all processing phases: they enable us to guide the matching of line features and allow an efficient calculation of the relative pose. First, we describe in detail the novel 3D line direction estimation from a single image by clustering of parallel lines in the world. Secondly, we propose an innovative guided matching in which only clusters of lines with corresponding 3D line directions are considered. Thirdly, we introduce the new relative pose estimation based on 3D line directions. Finally, we combine all steps to a visual odometry system. 
We evaluate the different steps on synthetic and real sequences and demonstrate that in the targeted scenarios we outperform the state-of-the-art in both accuracy and computation time.", "title": "" }, { "docid": "da5ad61c492419515e8449b435b42e80", "text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.", "title": "" }, { "docid": "b08f67bc9b84088f8298b35e50d0b9c5", "text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.", "title": "" }, { "docid": "ce3cd1edffb0754e55658daaafe18df6", "text": "Fact finders in legal trials often need to evaluate a mass of weak, contradictory and ambiguous evidence. There are two general ways to accomplish this task: by holistically forming a coherent mental representation of the case, or by atomistically assessing the probative value of each item of evidence and integrating the values according to an algorithm. Parallel constraint satisfaction (PCS) models of cognitive coherence posit that a coherent mental representation is created by discounting contradicting evidence, inflating supporting evidence and interpreting ambivalent evidence in a way coherent with the emerging decision. 
This leads to inflated support for whichever hypothesis the fact finder accepts as true. Using a Bayesian network to model the direct dependencies between the evidence, the intermediate hypotheses and the main hypothesis, parameterised with (conditional) subjective probabilities elicited from the subjects, I demonstrate experimentally how an atomistic evaluation of evidence leads to a convergence of the computed posterior degrees of belief in the guilt of the defendant of those who convict and those who acquit. The atomistic evaluation preserves the inherent uncertainty that largely disappears in a holistic evaluation. Since the fact finders’ posterior degree of belief in the guilt of the defendant is the relevant standard of proof in many legal systems, this result implies that using an atomistic evaluation of evidence, the threshold level of posterior belief in guilt required for a conviction may often not be reached.", "title": "" }, { "docid": "be3e02812e35000b39e4608afc61f229", "text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.", "title": "" }, { "docid": "28d8be0cd581a9696c533b457ceb6628", "text": "Nowadays, people usually participate in multiple social networks simultaneously, e.g., Facebook and Twitter. Formally, the correspondences of the accounts that belong to the same user are defined as anchor links, and the networks aligned by anchor links can be denoted as aligned networks. In this paper, we study the problem of anchor link prediction (ALP) across a pair of aligned networks based on social network structure. First, three similarity metrics (CPS, CCS, and CPS+) are proposed. Different from the previous works, we focus on the theoretical guarantees of our metrics. We prove mathematically that the node pair with the maximum CPS or CPS+ should be an anchor link with high probability and a correctly predicted anchor link must have a high value of CCS. Second, using the CPS+ and CCS, we present a two-stage iterative algorithm CPCC to solve the problem of the ALP. More specifically, we present an early termination strategy to make a tradeoff between precision and recall. At last, a series of experiments are conducted on both synthetic and real-world social networks to demonstrate the effectiveness of the CPCC.", "title": "" }, { "docid": "80b3337b5a0161990358bd9da0119471", "text": "In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.
Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform stateof-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time.", "title": "" }, { "docid": "344db754658e580ea441c44987b09286", "text": "Online learning to rank for information retrieval (IR) holds promise for allowing the development of \"self-learning\" search engines that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.\n In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our pre-selection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.", "title": "" }, { "docid": "e50842fc8438af7fe6ce4b6d9a5439a7", "text": "OBJECTIVE\nTimely recognition and optimal management of atherogenic dyslipidemia (AD) and residual vascular risk (RVR) in family medicine.\n\n\nBACKGROUND\nThe global increase of the incidence of obesity is accompanied by an increase in the incidence of many metabolic and lipoprotein disorders, in particular AD, as an typical feature of obesity, metabolic syndrome, insulin resistance and diabetes type 2. AD is an important factor in cardio metabolic risk, and is characterized by a lipoprotein profile with low levels of high-density lipoprotein (HDL), high levels of triglycerides (TG) and high levels of low-density lipoprotein (LDL) cholesterol. Standard cardiometabolic risk assessment using the Framingham risk score and standard treatment with statins is usually sufficient, but not always that effective, because it does not reduce RVR that is attributed to elevated TG and reduced HDL cholesterol. RVR is subject to reduction through lifestyle changes or by pharmacological interventions. 
In some studies it was concluded that dietary interventions should aim to reduce the intake of calories, simple carbohydrates and saturated fats, with the goal of reaching cardiometabolic suitability, rather than weight reduction. Other studies have found that the reduction of carbohydrates in the diet or weight loss can alleviate AD changes, while changes in intake of total or saturated fat had no significant influence. In our presented case, a lifestyle change was advised as a suitable diet with reduced intake of carbohydrates and a moderate physical activity of walking for at least 180 minutes per week, with an recommendation for daily intake of calories alignment with the total daily (24-hour) energy expenditure (24-EE), depending on the degree of physical activity, type of food and the current health condition. Such lifestyle changes together with combined medical therapy with Statins, Fibrates and Omega-3 fatty acids, resulted in significant improvement in atherogenic lipid parameters.\n\n\nCONCLUSION\nUnsuitable atherogenic nutrition and insufficient physical activity are the new risk factors characteristic for AD. Nutritional interventions such as diet with reduced intake of carbohydrates and calories, moderate physical activity, combined with pharmacotherapy can improve atherogenic dyslipidemic profile and lead to loss of weight. Although one gram of fat release twice more kilo calories compared to carbohydrates, carbohydrates seems to have a greater atherogenic potential, which should be explored in future.", "title": "" }, { "docid": "bf338661988fd28c9bafe7ea1ca59f34", "text": "We propose a system for landing unmanned aerial vehicles (UAV), specifically an autonomous rotorcraft, in uncontrolled, arbitrary, terrains. We present plans for and progress on a vision-based system for the recovery of the geometry and material properties of local terrain from a mounted stereo rig for the purposes of finding an optimal landing site. A system is developed which integrates motion estimation from tracked features, and an algorithm for approximate estimation of a dense elevation map in a world coordinate system.", "title": "" }, { "docid": "00fa68c8e80e565c6fc4e0fdf053bac8", "text": "This work partially reports the results of a study aiming at the design and analysis of the performance of a multi-cab metropolitan transportation system. In our model we investigate a particular multi-vehicle many-to-many dynamic request dial-a-ride problem. We present a heuristic algorithm for this problem and some preliminary results. The algorithm is based on iteratively solving a singlevehicle subproblem at optimality: a pretty efficient dynamic programming routine has been devised for this purpose. This work has been carried out by researchers from both University of Rome “Tor Vergata” and Italian Energy Research Center ENEA as a line of a reasearch program, regarding urban mobility optimization, funded by ENEA and the Italian Ministry of Environment.", "title": "" }, { "docid": "7431ee071307189e58b5c7a9ce3a2189", "text": "Among tangible threats and vulnerabilities facing current biometric systems are spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access and advantages. Recently, an increasing attention has been given to this research problem. This can be attested by the growing number of articles and the various competitions that appear in major biometric forums. 
We have recently participated in a large consortium (TABULARASA) dealing with the vulnerabilities of existing biometric systems to spoofing attacks with the aim of assessing the impact of spoofing attacks, proposing new countermeasures, setting standards/protocols, and recording databases for the analysis of spoofing attacks to a wide range of biometrics including face, voice, gait, fingerprints, retina, iris, vein, electro-physiological signals (EEG and ECG). The goal of this position paper is to share the lessons learned about spoofing and anti-spoofing in face biometrics, and to highlight open issues and future directions.", "title": "" }, { "docid": "e19ca2e4f2dbf4bd808f2f7a1a4aba18", "text": "BACKGROUND\nCurrent ventricular assist devices (VADs) in the United States are designed primarily for adult use. Data on VADs as a bridge to transplantation in children are limited.\n\n\nMETHODS AND RESULTS\nA multi-institutional, prospectively maintained database of outcomes in children after listing for heart transplantation (n=2375) was used to analyze outcomes of VAD patients (n=99, 4%) listed between January 1993 and December 2003. Median age at VAD implantation was 13.3 years (range, 2 days to 17.9 years); diagnoses were cardiomyopathy (78%) and congenital heart disease (22%). Mean duration of support was 57 days (range, 1 to 465 days). Seventy-three percent were supported with a long-term device, with 39% requiring biventricular support. Seventy-seven patients (77%) survived to transplantation, 5 patients were successfully weaned from support and recovered, and 17 patients (17%) died on support. In the recent era (2000 to 2003), successful bridge to transplantation with VAD was achieved in 86% of patients. Peak hazard for death while waiting was the first 2 weeks after VAD placement. Risk factors for death while awaiting a transplant included earlier era of implantation (P=0.05), female gender (P=0.02), and congenital disease diagnosis (P=0.05). There was no difference in 5-year survival after transplantation for patients on VAD at time of transplantation as compared with those not requiring VAD.\n\n\nCONCLUSIONS\nVAD support in children successfully bridged 77% of patients to transplantation, with posttransplantation outcomes comparable to those not requiring VAD. These encouraging results emphasize the need to further understand patient selection and to delineate the impact of VAD technology for children.", "title": "" } ]
scidocsrr
809129805e63c6d179f5f0a40f1d7443
Differential Evolution Training Algorithm for Feed-Forward Neural Networks
[ { "docid": "3293e4e0d7dd2e29505db0af6fbb13d1", "text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.", "title": "" } ]
[ { "docid": "77b1507ce0e732b3ac93d83f1a5971b3", "text": "Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technology for high data rate communication system. The basic principle of OFDM i s to divide the available spectrum into parallel channel s in order to transmit data on these channels at a low rate. The O FDM concept is based on the fact that the channels refe rr d to as carriers are orthogonal to each other. Also, the fr equency responses of the parallel channels are overlapping. The aim of this paper is to simulate, using GNU Octave, an OFD M transmission under Additive White Gaussian Noise (AWGN) and/or Rayleigh fading and to analyze the effects o f these phenomena.", "title": "" }, { "docid": "fdcea57edbe935ec9949247fd47888e6", "text": "Maintenance of skeletal muscle mass is contingent upon the dynamic equilibrium (fasted losses-fed gains) in protein turnover. Of all nutrients, the single amino acid leucine (Leu) possesses the most marked anabolic characteristics in acting as a trigger element for the initiation of protein synthesis. While the mechanisms by which Leu is 'sensed' have been the subject of great scrutiny, as a branched-chain amino acid, Leu can be catabolized within muscle, thus posing the possibility that metabolites of Leu could be involved in mediating the anabolic effect(s) of Leu. Our objective was to measure muscle protein anabolism in response to Leu and its metabolite HMB. Using [1,2-(13)C2]Leu and [(2)H5]phenylalanine tracers, and GC-MS/GC-C-IRMS we studied the effect of HMB or Leu alone on MPS (by tracer incorporation into myofibrils), and for HMB we also measured muscle proteolysis (by arteriovenous (A-V) dilution). Orally consumed 3.42 g free-acid (FA-HMB) HMB (providing 2.42 g of pure HMB) exhibited rapid bioavailability in plasma and muscle and, similarly to 3.42 g Leu, stimulated muscle protein synthesis (MPS; HMB +70% vs. Leu +110%). While HMB and Leu both increased anabolic signalling (mechanistic target of rapamycin; mTOR), this was more pronounced with Leu (i.e. p70S6K1 signalling 90 min vs. 30 min for HMB). HMB consumption also attenuated muscle protein breakdown (MPB; -57%) in an insulin-independent manner. We conclude that exogenous HMB induces acute muscle anabolism (increased MPS and reduced MPB) albeit perhaps via distinct, and/or additional mechanism(s) to Leu.", "title": "" }, { "docid": "ca331150e60e24f038f9c440b8125ddc", "text": "Class imbalance is one of the challenges of machine learning and data mining fields. Imbalance data sets degrades the performance of data mining and machine learning techniques as the overall accuracy and decision making be biased to the majority class, which lead to misclassifying the minority class samples or furthermore treated them as noise. This paper proposes a general survey for class imbalance problem solutions and the most significant investigations recently introduced by researchers.", "title": "" }, { "docid": "b03d88449eaf4e393dc842340f6951ea", "text": "Use of mobile personal computers in open networked environment is revolutionalising the way we use computers. Mobile networked computing is raising important information security and privacy issues. This paper is concerned with the design of authentication protocols for a mobile computing environment. The paper rst analyses the authenti-cation initiator protocols proposed by Beller,Chang and Yacobi (BCY) and the modiications considered by Carlsen and points out some weaknesses. 
The paper then suggests improvements to these protocols. The paper proposes secure end-to-end protocols between mobile users using both symmetric and public key based systems. These protocols enable mutual authentication and establish a shared secret key between mobile users. Furthermore, these protocols provide a certain degree of anonymity of the communicating users to be achieved vis-à-vis other system users.", "title": "" }, { "docid": "f3b76c5ad1841a56e6950f254eda8b17", "text": "Due to the complexity of human languages, most of sentiment classification algorithms are suffered from a huge-scale dimension of vocabularies which are mostly noisy and redundant. Deep Belief Networks (DBN) tackle this problem by learning useful information in input corpus with their several hidden layers. Unfortunately, DBN is a time-consuming and computationally expensive process for large-scale applications. In this paper, a semi-supervised learning algorithm, called Deep Belief Networks with Feature Selection (DBNFS) is developed. Using our chi-squared based feature selection, the complexity of the vocabulary input is decreased since some irrelevant features are filtered which makes the learning phase of DBN more efficient. The experimental results of our proposed DBNFS shows that the proposed DBNFS can achieve higher classification accuracy and can speed up training time compared with others well-known semi-supervised learning algorithms.", "title": "" }, { "docid": "3c635de0cc71f3744b3496069633bdd2", "text": "Where malaria prospers most, human societies have prospered least. The global distribution of per-capita gross domestic product shows a striking correlation between malaria and poverty, and malaria-endemic countries also have lower rates of economic growth. There are multiple channels by which malaria impedes development, including effects on fertility, population growth, saving and investment, worker productivity, absenteeism, premature mortality and medical costs.", "title": "" }, { "docid": "04a15b226d2466ea03306e3f413b4bd0", "text": "More and more people express their opinions on social media such as Facebook and Twitter. Predictive analysis on social media time-series allows the stake-holders to leverage this immediate, accessible and vast reachable communication channel to react and proact against the public opinion. In particular, understanding and predicting the sentiment change of the public opinions will allow business and government agencies to react against negative sentiment and design strategies such as dispelling rumors and post balanced messages to revert the public opinion. In this paper, we present a strategy of building statistical models from the social media dynamics to predict collective sentiment dynamics. We model the collective sentiment change without delving into micro analysis of individual tweets or users and their corresponding low level network structures. Experiments on large-scale Twitter data show that the model can achieve above 85% accuracy on directional sentiment prediction.", "title": "" }, { "docid": "f657ec927e0cd39d06428dc3ee37e5e2", "text": "Muscle hernias of the lower leg involving the tibialis anterior, peroneus brevis, and lateral head of the gastrocnemius were found in three different patients. MRI findings allowed recognition of herniated muscle in all cases and identification of fascial defect in two of them.
MR imaging findings and the value of dynamic MR imaging is emphasized.", "title": "" }, { "docid": "12ee85d0fa899e4e864bc1c30dedcd22", "text": "An object-oriented simulation (OOS) consists of a set of objects that interact with each other over time. This paper provides a thorough introduction to OOS, addresses the important issue of composition versus inheritance, describes frames and frameworks for OOS, and presents an example of a network simulation language as an illustration of OOS.", "title": "" }, { "docid": "6836e08a29fa9aea26284a0ff799019a", "text": "Mastering the game of Go has remained a longstanding challenge to the field of AI. Modern computer Go programs rely on processing millions of possible future positions to play well, but intuitively a stronger and more ‘humanlike’ way to play the game would be to rely on pattern recognition rather than brute force computation. Following this sentiment, we train deep convolutional neural networks to play Go by training them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to ‘hard code’ symmetries that are expected to exist in the target function, and demonstrate in an ablation study they considerably improve performance. Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing previous state of the art on this task by significant margins. Additionally, while previous move prediction systems have not yielded strong Go playing programs, we show that the networks trained in this work acquired high levels of skill. Our convolutional neural networks can consistently defeat the well known Go program GNU Go and win some games against state of the art Go playing program Fuego while using a fraction of the play time.", "title": "" }, { "docid": "07fc4ce339369ecd744ab180c5b56b45", "text": "The main objective of this study was to identify successful factors in implementing an e-learning program. Existing literature has identified several successful factors in implementing an e-learning program. These factors include program content, web page accessibility, learners’ participation and involvement, web site security and support, institution commitment, interactive learning environment, instructor competency, and presentation and design. All these factors were tested together with other related criteria which are important for e-learning program implementation. The samples were collected based on quantitative methods, specifically, self-administrated questionnaires. All the criteria that were tested to see if they were important in an e-learning program implementation.", "title": "" }, { "docid": "cd7c2eee84942324c77b6acd2b3e3e86", "text": "Learning word embeddings has received a significant amount of attention recently. Often, word embeddings are learned in an unsupervised manner from a large collection of text. The genre of the text typically plays an important role in the effectiveness of the resulting embeddings. How to effectively train word embedding models using data from different domains remains a problem that is underexplored. In this paper, we present a simple yet effective method for learning word embeddings based on text from different domains. 
We demonstrate the effectiveness of our approach through extensive experiments on various down-stream NLP tasks.", "title": "" }, { "docid": "776cba62170ee8936629aabca314fd46", "text": "While the Global Positioning System (GPS) tends to be not useful anymore in terms of precise localization once one gets into a building, Low Energy beacons might come in handy instead. Navigating free of signal reception problems throughout a building when one has never visited that place before is a challenge tackled with indoors localization. Using Bluetooth Low Energy1 (BLE) beacons (either iBeacon or Eddystone formats) is the medium to accomplish that. Indeed, different purpose oriented applications can be designed, developed and shaped towards the needs of any person in the context of a certain building. This work presents a series of post-processing filters to enhance the outcome of the estimated position applying trilateration as the main and straightforward technique to locate someone within a building. A later evaluation tries to give enough evidence around the feasibility of this indoor localization technique. A mobile app should be everything a user would need to have within a building in order to navigate inside.", "title": "" }, { "docid": "59f64fc8452026f266e6a6d84297d921", "text": "OBJECTIVES\nTo report 2 cases of penile duplication and review the literature in an attempt to categorize the associated anomalies in relation to the degree of penile duplication. Embryologic considerations of this rare anomaly are also reviewed.\n\n\nMETHODS\nWe report 2 distinct cases of diphallia. In the first case, true complete penile duplication was associated with multiple malformations, including a cloacal anomaly, colon and bladder duplication, a horseshoe kidney, a bifid scrotum with undescended testes, a hypoplastic right leg, and a ventricular septum defect. The second patient presented with true, complete diphallia and bladder and urethral duplication but an absence of other anomalies. The patients were individually treated according to the concomitant malformations. A review of published reports allowed a classification of associated anomalies in 77 cases of diphallia, according to the degree of penile duplication.\n\n\nRESULTS\nThe first patient underwent a series of staged surgical repairs, including correction of the congenital heart anomaly, separation of the urogenital and gastrointestinal tract and resection of the duplicate terminal colon, excision of the smaller bladder and underdeveloped duplicate penis, bilateral orchiopexy, and hypospadias correction. The second patient underwent bladder fusion and excision of a urethrorectal fistula. Penile reconstruction was left for a later stage. An analysis of the cases available in published studies suggests that diphallia is often associated with a wide spectrum of anomalies that vary from severe malformations to less significant variations of human anatomy.\n\n\nCONCLUSIONS\nPenile duplication is a rare anomaly. Thorough investigations are mandatory in all cases to reveal underlying congenital malformations that are potentially life threatening and require immediate surgical correction. Treatment should always be individualized according to the degree of penile duplication and the extent of the concomitant anomalies.", "title": "" }, { "docid": "a62dc7e25b050addad1c27d92deee8b7", "text": "Potentially dangerous cryptography errors are well-documented in many applications. 
Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable, however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions, however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.", "title": "" }, { "docid": "68649624bbd2aa73acd98df12f06fd28", "text": "Grey wolf optimizer (GWO) is one of recent metaheuristics swarm intelligence methods. It has been widely tailored for a wide variety of optimization problems due to its impressive characteristics over other swarm intelligence methods: it has very few parameters, and no derivation information is required in the initial search. Also it is simple, easy to use, flexible, scalable, and has a special capability to strike the right balance between the exploration and exploitation during the search which leads to favourable convergence. Therefore, the GWO has recently gained a very big research interest with tremendous audiences from several domains in a very short time. Thus, in this review paper, several research publications using GWO have been overviewed and summarized. Initially, an introductory information about GWO is provided which illustrates the natural foundation context and its related optimization conceptual framework. The main operations of GWO are procedurally discussed, and the theoretical foundation is described. Furthermore, the recent versions of GWO are discussed in detail which are categorized into modified, hybridized and paralleled versions. The main applications of GWO are also thoroughly described. 
The applications belong to the domains of global optimization, power engineering, bioinformatics, environmental applications, machine learning, networking and image processing, etc. The open source software of GWO is also provided. The review paper is ended by providing a summary conclusion of the main foundation of GWO and suggests several possible future directions that can be further investigated.", "title": "" }, { "docid": "7ed1fabaa95eaa1afb52c2f73230b3b0", "text": "BACKGROUND\nAdult circumcision is an extremely common surgical operation. As such, we developed a simple model to teach junior doctors the various techniques of circumcision in a safe, reliable, and realistic manner.\n\n\nMATERIALS AND METHODS\nA commonly available simulated model penis (Pharmabotics, Limited, Winchester, United Kingdom) is used, which is then covered with a 30-mm diameter, 400-mm long, double-layered simulated bowel (Limbs & Things, Bristol, United Kingdom). The 2 layers of the prepuce are simulated by folding the simulated bowel on itself. The model has been officially adopted in the UroEmerge hands-on practical skills course and all participants were asked to provide feedback about their experience on a scale from 1 to 10 (1 = extremely unsatisfied and 10 = excellent).\n\n\nRESULTS\nThe model has been used successfully to demonstrate, teach, and practice adult circumcision as well as other penile procedures with rating by trainees ranged from 7 to 10 (median: 9), and 9 of 12 trainees commented on the model using expressions such as \"life like,\" \"excellent idea,\" or \"extremely beneficial.\"\n\n\nCONCLUSIONS\nThe model is particularly useful as it is life like, realistic, easy to set up, and can be used to repeatedly demonstrate circumcision, as well as other surgical procedures, such as dorsal slit and paraphimosis reduction.", "title": "" }, { "docid": "b5f2b13b5266c30ba02ff6d743e4b114", "text": "The increasing scale, technology advances and services of modern networks have dramatically complicated their management such that in the near future it will be almost impossible for human administrators to monitor them. To control this complexity, IBM has introduced a promising approach aiming to create self-managed systems. This approach, called Autonomic Computing, aims to design computing equipment able to self-adapt its configuration and to self-optimize its performance depending on its situation in order to fulfill high-level objectives defined by the human operator. In this paper, we present our autonomic network management architecture (ANEMA) that implements several policy forms to achieve autonomic behaviors in the network equipments. In ANEMA, the high-level objectives of the human administrators and the users are captured and expressed in terms of ‘Utility Function’ policies. The ‘Goal’ policies describe the high-level management directives needed to guide the network to achieve the previous utility functions. Finally, the ‘behavioral’ policies describe the behaviors that should be followed by network equipments to react to changes in their context and to achieve the given ‘Goal’ policies. In order to highlight the benefits of ANEMA architecture and the continuum of policies to introduce autonomic management in a multiservice IP network, a testbed has been implemented and several scenarios have been executed. 2008 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "e10319d1eb6dd93fe0d98b6d3303efe9", "text": "This paper presents a novel fast optical flow estimation algorithm and its application to real-time obstacle avoidance of a guide-dog robot. The function of the laboratory-developed robot is to help blind or visually impaired pedestrians to move safely among obstacles. The proposed algorithm features a combination of the conventional correlation-based principle and the differential-based method for optical flow estimation. Employing image intensity gradients as features for pattern matching, we set up a brightness constraint to configure the search area. The merit of this scheme is that the computation load can be greatly reduced and in the mean time the possibility of estimation error is decreased. The vision system has been established on board the robot to provide depth information of the immediate environment. The depth data are transformed to a safety distribution histogram and used for real-time obstacle avoidance. Experimental results demonstrate that the proposed method is effective for a guidance robot in a dynamic environment.", "title": "" }, { "docid": "9c9e3bcd8213739d2fab740b7010a1cd", "text": "Data anonymization techniques have been the subject of intense investigation in recent years, for many kinds of structured data, including tabular, graph and item set data. They enable publication of detailed information, which permits ad hoc queries and analyses, while guaranteeing the privacy of sensitive information in the data against a variety of attacks. In this tutorial, we aim to present a unified framework of data anonymization techniques, viewed through the lens of uncertainty. Essentially, anonymized data describes a set of possible worlds, one of which corresponds to the original data. We show that anonymization approaches such as suppression, generalization, perturbation and permutation generate different working models of uncertain data, some of which have been well studied, while others open new directions for research. We demonstrate that the privacy guarantees offered by methods such as k-anonymization and l-diversity can be naturally understood in terms of similarities and differences in the sets of possible worlds that correspond to the anonymized data. We describe how the body of work in query evaluation over uncertain databases can be used for answering ad hoc queries over anonymized data in a principled manner. A key benefit of the unified approach is the identification of a rich set of new problems for both the Data Anonymization and the Uncertain Data communities.", "title": "" } ]
scidocsrr
b384cee62a8454cf87dd629f010d7dc5
Deep Active Learning for Civil Infrastructure Defect Detection and Classification
[ { "docid": "44ffac24ef4d30a8104a2603bb1cdcb1", "text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.", "title": "" } ]
[ { "docid": "43919b011f7d65d82d03bb01a5e85435", "text": "Self-inflicted burns are regularly admitted to burns units worldwide. Most of these patients are referred to psychiatric services and are successfully treated however some return to hospital with recurrent self-inflicted burns. The aim of this study is to explore the characteristics of the recurrent self-inflicted burn patients admitted to the Royal North Shore Hospital during 2004-2011. Burn patients were drawn from a computerized database and recurrent self-inflicted burn patients were identified. Of the total of 1442 burn patients, 40 (2.8%) were identified as self-inflicted burns. Of these patients, 5 (0.4%) were identified to have sustained previous self-inflicted burns and were interviewed by a psychiatrist. Each patient had been diagnosed with a borderline personality disorder and had suffered other forms of deliberate self-harm. Self-inflicted burns were utilized to relieve or help regulate psychological distress, rather than to commit suicide. Most patients had a history of emotional neglect, physical and/or sexual abuse during their early life experience. Following discharge from hospital, the patients described varying levels of psychiatric follow-up, from a post-discharge review at a local community mental health centre to twice-weekly psychotherapy. The patients who engaged in regular psychotherapy described feeling more in control of their emotions and reported having a longer period of abstinence from self-inflicted burn. Although these patients represent a small proportion of all burns, the repeat nature of their injuries led to a significant use of clinical resources. A coordinated and consistent treatment pathway involving surgical and psychiatric services for recurrent self-inflicted burns may assist in the management of these challenging patients.", "title": "" }, { "docid": "186ba2180a44b8a4a52ffba6f46751c4", "text": "Affective characteristics are crucial factors that influence human behavior, and often, the prevalence of either emotions or reason varies on each individual. We aim to facilitate the development of agents’ reasoning considering their affective characteristics. We first identify core processes in an affective BDI agent, and we integrate them into an affective agent architecture (GenIA3). These tasks include the extension of the BDI agent reasoning cycle to be compliant with the architecture, the extension of the agent language (Jason) to support affect-based reasoning, and the adjustment of the equilibrium between the agent’s affective and rational sides.", "title": "" }, { "docid": "a1530b82b61fc6fc8eceb083fc394e9b", "text": "The performance of any algorithm will largely depend on the setting of its algorithm-dependent parameters. The optimal setting should allow the algorithm to achieve the best performance for solving a range of optimization problems. However, such parameter tuning itself is a tough optimization problem. In this paper, we present a framework for self-tuning algorithms so that an algorithm to be tuned can be used to tune the algorithm itself. Using the firefly algorithm as an example, we show that this framework works well. It is also found that different parameters may have different sensitivities and thus require different degrees of tuning. Parameters with high sensitivities require fine-tuning to achieve optimality.", "title": "" }, { "docid": "2e9d0bf42b8bb6eb8752e89eb46f2fc5", "text": "What is the growth pattern of social networks, like Facebook and WeChat? 
Does it truly exhibit exponential early growth, as predicted by textbook models like the Bass model, SI, or the Branching Process? How about the count of links, over time, for which there are few published models?\n We examine the growth of several real networks, including one of the world's largest online social network, ``WeChat'', with 300 million nodes and 4.75 billion links by 2013; and we observe power law growth for both nodes and links, a fact that completely breaks the sigmoid models (like SI, and Bass). In its place, we propose NETTIDE, along with differential equations for the growth of the count of nodes, as well as links. Our model accurately fits the growth patterns of real graphs; it is general, encompassing as special cases all the known, traditional models (including Bass, SI, log-logistic growth); while still remaining parsimonious, requiring only a handful of parameters. Moreover, our NETTIDE for link growth is the first one of its kind, accurately fitting real data, and naturally leading to the densification phenomenon. We validate our model with four real, time-evolving social networks, where NETTIDE gives good fitting accuracy, and, more importantly, applied on the WeChat data, our NETTIDE forecasted more than 730 days into the future, with 3% error.", "title": "" }, { "docid": "7fcd8eee5f2dccffd3431114e2b0ed5a", "text": "Crowdsourcing is becoming more and more important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, this paper we analyze two widely used crowd-based approaches to validate the submitted work. Both approaches are evaluated with regard to their detection quality, their costs and their applicability to different types of typical crowdsourcing tasks.", "title": "" }, { "docid": "7499f88de9d2f76008dc38e96b08ca0a", "text": "Refractory and super-refractory status epilepticus (SE) are serious illnesses with a high risk of morbidity and even fatality. In the setting of refractory generalized convulsive SE (GCSE), there is ample justification to use continuous infusions of highly sedating medications—usually midazolam, pentobarbital, or propofol. Each of these medications has advantages and disadvantages, and the particulars of their use remain controversial. Continuous EEG monitoring is crucial in guiding the management of these critically ill patients: in diagnosis, in detecting relapse, and in adjusting medications. Forms of SE other than GCSE (and its continuation in a “subtle” or nonconvulsive form) should usually be treated far less aggressively, often with nonsedating anti-seizure drugs (ASDs). Management of “non-classic” NCSE in ICUs is very complicated and controversial, and some cases may require aggressive treatment. One of the largest problems in refractory SE (RSE) treatment is withdrawing coma-inducing drugs, as the prolonged ICU courses they prompt often lead to additional complications. In drug withdrawal after control of convulsive SE, nonsedating ASDs can assist; medical management is crucial; and some brief seizures may have to be tolerated. For the most refractory of cases, immunotherapy, ketamine, ketogenic diet, and focal surgery are among several newer or less standard treatments that can be considered. 
The morbidity and mortality of RSE is substantial, but many patients survive and even return to normal function, so RSE should be treated promptly and as aggressively as the individual patient and type of SE indicate.", "title": "" }, { "docid": "257c9fda9808cb173e3b22f927864c21", "text": "Salesforce.com has recently completed an agile transformation of a two hundred person team within a three month window. This is one of the largest and fastest \"big-bang\" agile rollouts. This experience report discusses why we chose to move to an agile process, how we accomplished the transformation and what we learned from applying agile at scale.", "title": "" }, { "docid": "3bdd30d2c6e63f2e5540757f1db878b6", "text": "The spreading of unsubstantiated rumors on online social networks (OSN) either unintentionally or intentionally (e.g., for political reasons or even trolling) can have serious consequences such as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns might provide important insights about the virality of false claims. In particular, we address the driving forces behind the popularity of contents by analyzing a sample of 1.2M Facebook Italian users consuming different (and opposite) types of information (science and conspiracy news). We show that users’ engagement across different contents correlates with the number of friends having similar consumption patterns (homophily), indicating the area in the social network where certain types of contents are more likely to spread. Then, we test diffusion patterns on an external sample of 4,709 intentional satirical false claims showing that neither the presence of hubs (structural properties) nor the most active users (influencers) are prevalent in viral phenomena. Instead, we found out that in an environment where misinformation is pervasive, users’ aggregation around shared beliefs may make the usual exposure to conspiracy stories (polarization) a determinant for the virality of false information. ∗Corresponding author General Terms Misinformation, Virality, Attention Patterns", "title": "" }, { "docid": "78cf38ee62d5501c3119552cb70b0997", "text": "This document discusses the status of research on detection and prevention of financial fraud undertaken as part of the IST European Commission funded FF POIROT (Financial Fraud Prevention Oriented Information Resources Using Ontology Technology) project. A first task has been the specification of the user requirements that define the functionality of the financial fraud ontology to be designed by the FF POIROT partners. It is claimed here that modeling fraudulent activity involves a mixture of law and facts as well as inferences about facts present, facts presumed or facts missing. The purpose of this paper is to explain this abstract model and to specify the set of user requirements.", "title": "" }, { "docid": "512ecda05fae6cb333c89833c489dbff", "text": "This review examines protein complexes in the Brookhaven Protein Databank to gain a better understanding of the principles governing the interactions involved in protein-protein recognition. The factors that influence the formation of protein-protein complexes are explored in four different types of protein-protein complexes--homodimeric proteins, heterodimeric proteins, enzyme-inhibitor complexes, and antibody-protein complexes. 
The comparison between the complexes highlights differences that reflect their biological roles.", "title": "" }, { "docid": "0fd147227c10a243f4209ffc1295d279", "text": "Increases in server power dissipation time placed significant pressure on traditional data center thermal management systems. Traditional systems utilize computer room air conditioning (CRAC) units to pressurize a raised floor plenum with cool air that is passed to equipment racks via ventilation tiles distributed throughout the raised floor. Temperature is typically controlled at the hot air return of the CRAC units away from the equipment racks. Due primarily to a lack of distributed environmental sensing, these CRAC systems are often operated conservatively resulting in reduced computational density and added operational expense. This paper introduces a data center environmental control system that utilizes a distributed sensor network to manipulate conventional CRAC units within an air-cooled environment. The sensor network is attached to standard racks and provides a direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense. The combined controller and sensor network has been deployed in a production data center environment. Results from the algorithm will be presented that demonstrate the performance of the system and evaluate the energy savings compared with conventional data center environmental control architecture", "title": "" }, { "docid": "08fdb69b893ee37285a98fc447b9748e", "text": "We introduce a novel robust hybrid 3D face tracking framework from RGBD video streams, which is capable of tracking head pose and facial actions without pre-calibration or intervention from a user. In particular, we emphasize on improving the tracking performance in instances where the tracked subject is at a large distance from the cameras, and the quality of point cloud deteriorates severely. This is accomplished by the combination of a flexible 3D shape regressor and the joint 2D+3D optimization on shape parameters. Our approach fits facial blendshapes to the point cloud of the human head, while being driven by an efficient and rapid 3D shape regressor trained on generic RGB datasets. As an on-line tracking system, the identity of the unknown user is adapted on-the-fly resulting in improved 3D model reconstruction and consequently better tracking performance. The result is a robust RGBD face tracker capable of handling a wide range of target scene depths, whose performances are demonstrated in our extensive experiments better than those of the state-of-the-arts.", "title": "" }, { "docid": "9cb832657be4d4d80682c1a49249a319", "text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.08.023 ⇑ Corresponding author. Tel.: +47 73593602; fax: + E-mail address: Marielle.Christiansen@iot.ntnu.no This paper considers a maritime inventory routing problem faced by a major cement producer. A heterogeneous fleet of bulk ships transport multiple non-mixable cement products from producing factories to regional silo stations along the coast of Norway. 
Inventory constraints are present both at the factories and the silos, and there are upper and lower limits for all inventories. The ship fleet capacity is limited, and in peak periods the demand for cement products at the silos exceeds the fleet capacity. In addition, constraints regarding the capacity of the ships’ cargo holds, the depth of the ports and the fact that different cement products cannot be mixed must be taken into consideration. A construction heuristic embedded in a genetic algorithmic framework is developed. The approach adopted is used to solve real instances of the problem within reasonable solution time and with good quality solutions. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ba4d30e7ea09d84f8f7d96c426e50f34", "text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.", "title": "" }, { "docid": "bee25514d15321f4f0bdcf867bb07235", "text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.", "title": "" }, { "docid": "046207a87b7b01f6bc12f08a195670b9", "text": "Text normalization is the task of transforming lexical variants to their canonical forms. We model the problem of text normalization as a character-level sequence to sequence learning problem and present a neural encoder-decoder model for solving it. To train the encoder-decoder model, many sentences pairs are generally required. However, Japanese non-standard canonical pairs are scarce in the form of parallel corpora. 
To address this issue, we propose a method of data augmentation to increase data size by converting existing resources into synthesized non-standard forms using handcrafted rules. We conducted an experiment to demonstrate that the synthesized corpus contributes to stably train an encoder-decoder model and improve the performance of Japanese text normalization.", "title": "" }, { "docid": "54bf44e04920bdaa7388dbbbbd34a1a8", "text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.", "title": "" }, { "docid": "9ba6656cb67dcb72d4ebadcaf9450f40", "text": "OBJECTIVE\nThe Japan Ankylosing Spondylitis Society conducted a nationwide questionnaire survey of spondyloarthropathies (SpA) in 1990 and 1997, (1) to estimate the prevalence and incidence, and (2) to validate the criteria of Amor and the European Spondylarthropathy Study Group (ESSG) in Japan.\n\n\nMETHODS\nJapan was divided into 9 districts, to each of which a survey supervisor was assigned. According to unified criteria, each supervisor selected all the clinics and hospitals with potential for SpA patients in the district. The study population consisted of all patients with SpA seen at these institutes during a 5 year period (1985-89) for the 1st survey and a 7 year period (1990-96) for the 2nd survey.\n\n\nRESULTS\nThe 1st survey recruited 426 and the 2nd survey 638 cases, 74 of which were registered in both studies. 
The total number of patients with SpA identified 1985-96 was 990 (760 men, 227 women). They consisted of patients with ankylosing spondylitis (68.3%), psoriatic arthritis (12.7%), reactive arthritis (4.0%), undifferentiated SpA (5.4%), inflammatory bowel disease (2.2%), pustulosis palmaris et plantaris (4.7%), and others (polyenthesitis, etc.) (0.8%). The maximum onset number per year was 49. With the assumption that at least one-tenth of the Japanese population with SpA was recruited, incidence and prevalence were estimated not to exceed 0.48/100,000 and 9.5/100,000 person-years, respectively. The sensitivity was 84.0% for Amor criteria and 84.6 for ESSG criteria.\n\n\nCONCLUSION\nThe incidence and prevalence of SpA in Japanese were estimated to be less than 1/10 and 1/200, respectively, of those among Caucasians. The adaptability of the Amor and ESSG criteria was validated for the Japanese population.", "title": "" }, { "docid": "516ef94fad7f7e5801bf1ef637ffb136", "text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1", "title": "" }, { "docid": "00e315b8baf0ce6548ec7139c8ce105c", "text": "We revisit the well-known problem of boolean group testing which attempts to discover a sparse subset of faulty items in a large set of mostly good items using a small number of pooled (or grouped) tests. This problem originated during the second WorldWar, and has been the subject of active research during the 70's, and 80's. Recently, there has been a resurgence of interest due to the striking parallels between group testing and the now highly popular field of compressed sensing. In fact, boolean group testing is nothing but compressed sensing in a different algebra - with boolean `AND' and `OR' operations replacing vector space multiplication and addition. In this paper we review existing solutions for non-adaptive (batch) group testing and propose a linear programming relaxation solution, which has a resemblance to the basis pursuit algorithm for sparse recovery in linear models. We compare its performance to alternative methods for group testing.", "title": "" } ]
scidocsrr
5fba261922e35c65c366971b921e9fce
Yes, we GAN: Applying Adversarial Techniques for Autonomous Driving
[ { "docid": "e584e7e0c96bc78bc2b2166d1af272a6", "text": "In this paper we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints. Our approach called \"projective generative adversarial networks\" (PrGANs) trains a deep generative model of 3D shapes whose projections match the distributions of the input 2D views. The addition of a projection module allows us to infer the underlying 3D shape distribution without using any 3D, viewpoint information, or annotation during the learning phase. We show that our approach produces 3D shapes of comparable quality to GANs trained on 3D data for a number of shape categories including chairs, airplanes, and cars. Experiments also show that the disentangled representation of 2D shapes into geometry and viewpoint leads to a good generative model of 2D shapes. The key advantage is that our model allows us to predict 3D, viewpoint, and generate novel views from an input image in a completely unsupervised manner.", "title": "" }, { "docid": "6ad90319d07abce021eda6f3a1d3886e", "text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.", "title": "" }, { "docid": "11a69c06f21e505b3e05384536108325", "text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "title": "" } ]
[ { "docid": "fce21a54f6319bcc798914a6fc4a8125", "text": "CRISPR-Cas systems have rapidly transitioned from intriguing prokaryotic defense systems to powerful and versatile biomolecular tools. This article reviews how these systems have been translated into technologies to manipulate bacterial genetics, physiology, and communities. Recent applications in bacteria have centered on multiplexed genome editing, programmable gene regulation, and sequence-specific antimicrobials, while future applications can build on advances in eukaryotes, the rich natural diversity of CRISPR-Cas systems, and the untapped potential of CRISPR-based DNA acquisition. Overall, these systems have formed the basis of an ever-expanding genetic toolbox and hold tremendous potential for our future understanding and engineering of the bacterial world.", "title": "" }, { "docid": "27582287aeb1abccda7c7582d75de676", "text": "Affect Control Theory is a mathematical representation of the interactions between two persons, in which it is posited that people behave in a way so as to minimize the amount of deflection between their cultural emotional sentiments and the transient emotional sentiments that are created by each situation. Affect Control Theory presents a maximum likelihood solution in which optimal behaviours or identities can be predicted based on past interactions. Here, we formulate a probabilistic and decision theoretic model of the same underlying principles, and show this to be a generalisation of the basic theory. The model is more expressive than the original theory, as it can maintain multiple hypotheses about behaviours and identities simultaneously as a probability distribution. This allows the model to generate affectively believable interactions with people by learning about their identity and predicting their behaviours. We demonstrate this generalisation with a set of simulations. We then show how our model can be used as an emotional \"plug-in\" for systems that interact with humans. We demonstrate human-interactive capability by building a simple intelligent tutoring application and pilot-testing it in an experiment with 20 participants.", "title": "" }, { "docid": "fb75e0c18c4852afac162b60554b67b1", "text": "OBJECTIVE\nTo evaluate the feasibility and safety of home rehabilitation of the hand using a robotic glove, and, in addition, its effectiveness, in hemiplegic patients after stroke.\n\n\nMETHODS\nIn this non-randomized pilot study, 21 hemiplegic stroke patients (Ashworth spasticity index ≤ 3) were prescribed, after in-hospital rehabilitation, a 2-month home-program of intensive hand training using the Gloreha Lite glove that provides computer-controlled passive mobilization of the fingers. Feasibility was measured by: number of patients who completed the home-program, minutes of exercise and number of sessions/patient performed. Safety was assessed by: hand pain with a visual analog scale (VAS), Ashworth spasticity index for finger flexors, opponents of the thumb and wrist flexors, and hand edema (circumference of forearm, wrist and fingers), measured at start (T0) and end (T1) of rehabilitation. Hand motor function (Motricity Index, MI), fine manual dexterity (Nine Hole Peg Test, NHPT) and strength (Grip test) were also measured at T0 and T1.\n\n\nRESULTS\nPatients performed, over a mean period 56 (49-63) days, a total of 1699 (1353-2045) min/patient of exercise with Gloreha Lite, 5.1 (4.3-5.8) days/week. Seventeen patients (81%) completed the full program. 
The mean VAS score of hand pain, Ashworth spasticity index and hand edema did not change significantly at T1 compared to T0. The MI, NHPT and Grip test improved significantly (p = 0.0020, 0.0156 and 0.0024, respectively) compared to baseline.\n\n\nCONCLUSION\nGloreha Lite is feasible and safe for use in home rehabilitation. The efficacy data show a therapeutic effect which need to be confirmed by a randomized controlled study.", "title": "" }, { "docid": "565ba6935c4fd6afdb4d393553a70d0b", "text": "This paper presents the problem definition and guidelines of the next generation stru control benchmark problem for seismically excited buildings. Focusing on a 20-story steel s ture representing a typical midto high-rise building designed for the Los Angeles, Califo region, the goal of this study is to provide a clear basis to evaluate the efficacy of various tural control strategies. An evaluationmodel has been developed that portrays the salient feat of the structural system. Control constraints and evaluation criteria are presented for the problem. The task of each participant in this benchmark study is to define (including devices sors and control algorithms), evaluate and report on their proposed control strategies. Thes egies may be either passive, active, semi-active or a combination thereof. A simulation pro has been developed and made available to facilitate direct comparison of the efficiency and of the various control strategies. To illustrate some of the design challenges a sample contr tem design is presented, although this sample is not intended to be viewed as a comp design. Introduction The protection of civil structures, including material content and human occupants, is out a doubt a world-wide priority. The extent of protection may range from reliable operation occupant comfort to human and structural survivability. Civil structures, including existing future buildings, towers and bridges, must be adequately protected from a variety of e including earthquakes, winds, waves and traffic. The protection of structures is now moving relying entirely on the inelastic deformation of the structure to dissipate the energy of s dynamic loadings, to the application of passive, active and semi-active structural control de to mitigate undesired responses to dynamic loads. In the last two decades, many control algorithms and devices have been proposed fo engineering applications (Soong 1990; Housner, et al. 1994; Soong and Constantinou 199 Fujino,et al. 1996; Spencer and Sain 1997), each of which has certain advantages, depend the specific application and the desired objectives. At the present time, structural control res is greatly diversified with regard to these specific applications and desired objectives. A com basis for comparison of the various algorithms and devices does not currently exist. Deter 1. Prof., Dept. of Civil Engrg. and Geo. Sci., Univ. of Notre Dame, Notre Dame, IN 46556-0767. 2. Doc. Cand., Dept. of Civil Engrg. and Geo. Sci., Univ. of Notre Dame, Notre Dame, IN 46556-0767. 3. Assist. Prof., Dept. of Civil Engrg., Washington Univ., St. Louis, MO 63130-4899. March 22, 1999 1 Spencer, et al.", "title": "" }, { "docid": "ae05afb899ac3a5bda26b20bde5af7ec", "text": "A compact microstrip rat-race hybrid with a 50% bandwidth employing space-filling curves is reported in this letter. The footprint of the proposed design occupies 31% of the area of the conventional similar design. 
Across the frequency bandwidth, the maximum amplitude unbalance is 0.5 dB, the phase variation is plusmn5deg , the isolation is better than 25 dB and the return loss is greater than 10 dB. Moreover, the circuit is planar, easy to design, and consists of only one layer without requiring plated thru holes, slots or bonding wires.", "title": "" }, { "docid": "4f239b400afd7299e6b3aebde38f4e36", "text": "Software architecture can be seen as a decision making process; it involves making the right decisions at the right time. Typically, these design decisions are not explicitly represented in the artifacts describing the design. They reside in the minds of the designers and are therefore easily lost. Rationale management is often proposed as a solution, but lacks a close relationship with software architecture artifacts. Explicit modeling of design decisions in the software architecture bridges this gap, as it allows for a close integration of rationale management with software architecture. This improves the understandability of the software architecture. Consequently, the software architecture becomes easier to communicate, maintain and evolve. Furthermore, it allows for analysis, improvement, and reuse of design decisions in the design process.", "title": "" }, { "docid": "b2d334cc7d79d2e3ebd573bbeaa2dfbe", "text": "Objectives\nTo measure the occurrence and levels of depression, anxiety and stress in undergraduate dental students using the Depression, Anxiety and Stress Scale (DASS-21).\n\n\nMethods\nThis cross-sectional study was conducted in November and December of 2014. A total of 289 dental students were invited to participate, and 277 responded, resulting in a response rate of 96%. The final sample included 247 participants. Eligible participants were surveyed via a self-reported questionnaire that included the validated DASS-21 scale as the assessment tool and questions about demographic characteristics and methods for managing stress.\n\n\nResults\nAbnormal levels of depression, anxiety and stress were identified in 55.9%, 66.8% and 54.7% of the study participants, respectively. A multiple linear regression analysis revealed multiple predictors: gender (for anxiety b=-3.589, p=.016 and stress b=-4.099, p=.008), satisfaction with faculty relationships (for depression b=-2.318, p=.007; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), satisfaction with peer relationships (for depression b=-3.527, p<.001; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), and dentistry as the first choice for field of study (for stress b=-2.648, p=.045). The standardized coefficients demonstrated the relationship and strength of the predictors for each subscale. To cope with stress, students engaged in various activities such as reading, watching television and seeking emotional support from others.\n\n\nConclusions\nThe high occurrence of depression, anxiety and stress among dental students highlights the importance of providing support programs and implementing preventive measures to help students, particularly those who are most susceptible to higher levels of these psychological conditions.", "title": "" }, { "docid": "c5d7d29f4001aca1fbfc6e605e62933d", "text": "A space efficient and simple circuit for ultra wideband (UWB) balanced pulse generation is presented. The pulse generator uses a single step recovery diode to provide a truly balanced output. The diode biasing is integrated with the switching circuitry to improve the compactness of the design. 
Two versions of the circuit with lumped and distributed pulse forming networks have been tested. The pulse parameters for distributed pulse shaping network were: rise/fall time (10-90%) 183 ps, pulse width (50-50%) 340 ps, pulse peak to peak voltage 896 mV (12.05 dBm peak power) and for the lumped case: rise time (10-90%) 272 ps, fall time (90-10%) 566 ps pulse width (50-50%) 511 ps, pulse amplitude /spl plusmn/1.6V (17 dBm peak power). In both cases excellent balance of the two pulses at the output ports can be observed. It should be noted that above parameters were obtained with typical inexpensive RF components. The circuit reduces the complexity of the design because of the lack of broadband baluns required for UWB balanced antennas. The circuit may be used as part of a UWB transmitter.", "title": "" }, { "docid": "eef7fcdcb53070709a231cb132c48004", "text": "Social networks have known an important development since the appea ranc of web 2.0 platforms. This leads to a growing need for social network mining and social network analysis (SN A) methods and tools in order to provide deeper analysis of the network but also to detect communities in view of various applications. For this reason, a lot of works have focused on graph characterization or clustering and several new SNA tools have be en developed over these last years. The purpose of this article is to compare some of these tools which implement algorithms dedicated to social network analysis.", "title": "" }, { "docid": "c851bad8a1f7c8526d144453b3f2aa4f", "text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.", "title": "" }, { "docid": "c438965615449efd728acec42be0b6d1", "text": "Human adults generally find fast tempos more arousing than slow tempos, with tempo frequently manipulated in music to alter tension and emotion. We used a previously published method [McDermott, J., & Hauser, M. (2004). Are consonant intervals music to their ears? Spontaneous acoustic preferences in a nonhuman primate. Cognition, 94(2), B11-B21] to test cotton-top tamarins and common marmosets, two new-World primates, for their spontaneous responses to stimuli that varied systematically with respect to tempo. 
Across several experiments, we found that both tamarins and marmosets preferred slow tempos to fast. It is possible that the observed preferences were due to arousal, and that this effect is homologous to the human response to tempo. In other respects, however, these two monkey species showed striking differences compared to humans. Specifically, when presented with a choice between slow tempo musical stimuli, including lullabies, and silence, tamarins and marmosets preferred silence whereas humans, when similarly tested, preferred music. Thus despite the possibility of homologous mechanisms for tempo perception in human and nonhuman primates, there appear to be motivational ties to music that are uniquely human.", "title": "" }, { "docid": "01875eeb7da3676f46dd9d3f8bf3ecac", "text": "It is shown that a certain tour of 49 cities, one in each of the 48 states and Washington, D C , has the shortest road distance T HE TRAVELING-SALESMAN PROBLEM might be described as follows: Find the shortest route (tour) for a salesman starting from a given city, visiting each of a specified group of cities, and then returning to the original point of departure. More generally, given an n by n symmetric matrix D={d,j), where du represents the 'distance' from / to J, arrange the points in a cyclic order in such a way that the sum of the du between consecutive points is minimal. Since there are only a finite number of possibilities (at most 3>' 2 (« —1)0 to consider, the problem is to devise a method of picking out the optimal arrangement which is reasonably efficient for fairly large values of n. Although algorithms have been devised for problems of similar nature, e.g., the optimal assignment problem,''** little is known about the traveling-salesman problem. We do not claim that this note alters the situation very much; what we shall do is outline a way of approaching the problem that sometimes, at least, enables one to find an optimal path and prove it so. In particular, it will be shown that a certain arrangement of 49 cities, one m each of the 48 states and Washington, D. C, is best, the du used representing road distances as taken from an atlas. * HISTORICAL NOTE-The origin of this problem is somewhat obscure. It appears to have been discussed informally among mathematicians at mathematics meetings for many years. Surprisingly little in the way of results has appeared in the mathematical literature.'\" It may be that the minimal-distance tour problem was stimulated by the so-called Hamiltonian game' which is concerned with finding the number of different tours possible over a specified network The latter problem is cited by some as the origin of group theory and has some connections with the famou8 Four-Color Conjecture ' Merrill Flood (Columbia Universitj') should be credited with stimulating interest in the traveling-salesman problem in many quarters. As early as 1937, he tried to obtain near optimal solutions in reference to routing of school buses. Both Flood and A W. Tucker (Princeton University) recall that they heard about the problem first in a seminar talk by Hassler Whitney at Princeton in 1934 (although Whitney, …", "title": "" }, { "docid": "881b8494bce595080f5831693af161ef", "text": "The emergence of several new computing applications, such as virtual reality and smart environments, has become possible due to availability of large pool of cloud resources and services. However, the delay-sensitive applications pose strict delay requirements that transforms euphoria into a problem. 
The cloud computing paradigm is unable to meet the requirements of low latency, location awareness, and mobility support. In this context, Mobile Edge Computing (MEC) was introduced to bring the cloud services and resources closer to the user proximity by leveraging the available resources in the edge networks. In this paper, we present the definitions of the MEC given by researchers. Further, motivation of the MEC is highlighted by discussing various applications. We also discuss the opportunities brought by the MEC and some of the important research challenges are highlighted in MEC environment. A brief overview of accepted papers in our Special Issue on MEC is presented. Finally we conclude this paper by highlighting the key points and summarizing the paper.", "title": "" }, { "docid": "c49dbeeeb1ce4d0d5a528caf8fd595ff", "text": "Interpretation of medical images for diagnosis and treatment of complex disease from highdimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning achieved promising results in the area of medical imaging and image analysis. Unlike supervised learning which is biased towards how it is being supervised and manual efforts to create class label for the algorithm, unsupervised learning derive insights directly from the data itself, group the data and help to make data driven decisions without any external bias. This review systematically presents various unsupervised models applied to medical image analysis, including autoencoders and its several variants, Restricted Boltzmann machines, Deep belief networks, Deep Boltzmann machine and Generative adversarial network. Future research opportunities and challenges of unsupervised techniques for medical image analysis have also been discussed.", "title": "" }, { "docid": "d1b20385d90fe1e98a07f9cf55af6adb", "text": "Cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome) is characterized by deficits in executive function, linguistic processing, spatial cognition, and affect regulation. Diagnosis currently relies on detailed neuropsychological testing. The aim of this study was to develop an office or bedside cognitive screen to help identify CCAS in cerebellar patients. Secondary objectives were to evaluate whether available brief tests of mental function detect cognitive impairment in cerebellar patients, whether cognitive performance is different in patients with isolated cerebellar lesions versus complex cerebrocerebellar pathology, and whether there are cognitive deficits that should raise red flags about extra-cerebellar pathology. Comprehensive standard neuropsychological tests, experimental measures and clinical rating scales were administered to 77 patients with cerebellar disease-36 isolated cerebellar degeneration or injury, and 41 complex cerebrocerebellar pathology-and to healthy matched controls. Tests that differentiated patients from controls were used to develop a screening instrument that includes the cardinal elements of CCAS. We validated this new scale in a new cohort of 39 cerebellar patients and 55 healthy controls. We confirm the defining features of CCAS using neuropsychological measures. Deficits in executive function were most pronounced for working memory, mental flexibility, and abstract reasoning. Language deficits included verb for noun generation and phonemic > semantic fluency. Visual spatial function was degraded in performance and interpretation of visual stimuli. 
Neuropsychiatric features included impairments in attentional control, emotional control, psychosis spectrum disorders and social skill set. From these results, we derived a 10-item scale providing total raw score, cut-offs for each test, and pass/fail criteria that determined 'possible' (one test failed), 'probable' (two tests failed), and 'definite' CCAS (three tests failed). When applied to the exploratory cohort, and administered to the validation cohort, the CCAS/Schmahmann scale identified sensitivity and selectivity, respectively as possible exploratory cohort: 85%/74%, validation cohort: 95%/78%; probable exploratory cohort: 58%/94%, validation cohort: 82%/93%; and definite exploratory cohort: 48%/100%, validation cohort: 46%/100%. In patients in the exploratory cohort, Mini-Mental State Examination and Montreal Cognitive Assessment scores were within normal range. Complex cerebrocerebellar disease patients were impaired on similarities in comparison to isolated cerebellar disease. Inability to recall words from multiple choice occurred only in patients with extra-cerebellar disease. The CCAS/Schmahmann syndrome scale is useful for expedited clinical assessment of CCAS in patients with cerebellar disorders.awx317media15678692096001.", "title": "" }, { "docid": "be5f0369897c1a3b8120f75232bb37fa", "text": "Probiotics are defined as live microorganisms that, when administered in adequate amounts, confer a health benefit on the host. There is now mounting evidence that selected probiotic strains can provide health benefits to their human hosts. Numerous clinical trials show that certain strains can improve the outcome of intestinal infections by reducing the duration of diarrhea. Further investigations have shown benefits in reducing the recurrence of urogenital infections in women, while promising studies in cancer and allergies require research into the mechanisms of activity for particular strains and better-designed trials. At present, only a small percentage of physicians either know of probiotics or understand their potential applicability to patient care. Thus, probiotics are not yet part of the clinical arsenal for prevention and treatment of disease or maintenance of health. The establishment of accepted standards and guidelines, proposed by the Food and Agriculture Organization of the United Nations and the World Health Organization, represents a key step in ensuring that reliable products with suitable, informative health claims become available. Based upon the evidence to date, future advances with single- and multiple-strain therapies are on the horizon for the management of a number of debilitating and even fatal conditions.", "title": "" }, { "docid": "d6dc54ea8db074c5337673e8de0b0982", "text": "In this study, the attitudes, expectations and views of 206 students in four high schools within the scope of the FAT_ IH project in Turkey were assessed regarding tablet PC technology after six months of a pilot plan that included the distribution of tablet PCs to students. The research questions of this study are whether there is a meaningful difference between tablet PC use by male and female students and the effect of computer and Internet by students on attitudes toward tablet PC use. Qualitative and quantitative data collection tools were used in the research. The Computer Attitude Measure for Young students (CAMYS) developed by Teo and Noyes (2008) was used in evaluating the students’ attitudes toward the tablet PC usage. 
Interviews were conducted with eight teachers at pilot schools concerning the integration of tablet PCs into their classes; the positive and negative dimensions of tablet PCs were analyzed. The findings indicate that students have a positive attitude toward tablet PCs. There was not a meaningful difference between the attitudes of male and female students toward tablet PCs. The length of computer and Internet by the students did not affect their attitudes toward tablet PCs. The ways that teachers used tablet PCs in classes, the positive and negative aspects of tablet PC usage and the students’ expectations of tablet PCs were discussed in the study. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2829448aabaa6149e1490eadf206abae", "text": "This paper presents an algorithm for detecting nudity in color images. A skin color distribution model based on the RGB, Normalized RGB, and HSV color spaces is constructed using correlation and linear regression. The skin color model is used to identify and locate skin regions in an image. These regions are analyzed for clues indicating nudity or nonnudity such as their sizes and relative distances from each other. Based on these clues and the percentage of skin in the image, an image is classified nude or non-nude. The skin color distribution model performs with 96.29% recall and 6.76% false positive rate on a test set consisting of 2,303,824 manually labeled skin pixels and 24,285,952 manually labeled non-skin pixels. The Nudity Detection Algorithm is able to detect nudity with a 94.77% recall and a false positive rate of 5.04% on a set of images consisting of 421 nude images and 635 non-nude images.", "title": "" }, { "docid": "b84ffcc2c642896f88b261d983d47021", "text": "Most successful works in simultaneous localization and mapping (SLAM) aim to build a metric map under a probabilistic viewpoint employing Bayesian filtering techniques. This work introduces a new hybrid metric-topological approach, where the aim is to reconstruct the path of the robot in a hybrid continuous-discrete state space which naturally combines metric and topological maps. Our fundamental contributions are: (i) the estimation of the topological path, an improvement similar to that of Rao-Blackwellized particle filters (RBPF) and FastSLAM in the field of metric map building; and (ii) the application of grounded methods to the abstraction of topology (including loop closure) from raw sensor readings. It is remarkable that our approach could be still represented as a Bayesian inference problem, becoming an extension of purely metric SLAM. Besides providing the formal definitions and the basics for our approach, we also describe a practical implementation aimed to real-time operation. Promising experimental results mapping large environments with multiple nested loops (~30.000 m2, ~2Km robot path) validate our work.", "title": "" }, { "docid": "be8eb6c72936af75c1e41f9e17ba2579", "text": "The use of unmanned aerial vehicles (UAVs) is growing rapidly across many civil application domains including realtime monitoring, providing wireless coverage, remote sensing, search and rescue, delivery of goods, security and surveillance, precision agriculture, and civil infrastructure inspection. Smart UAVs are the next big revolution in UAV technology promising to provide new opportunities in different applications, especially in civil infrastructure in terms of reduced risks and lower cost. 
Civil infrastructure is expected to dominate the more than $45 Billion market value of UAV usage. In this survey, we present UAV civil applications and their challenges. We also discuss current research trends and provide future insights for potential UAV uses. Furthermore, we present the key challenges for UAV civil applications, including: charging challenges, collision avoidance and swarming challenges, and networking and security-related challenges. Based on our review of the recent literature, we discuss open research challenges and draw high-level insights on how these challenges might be approached.", "title": "" } ]
scidocsrr
e7484c76a651be8cea9e9fbcff516623
Efficient coflow scheduling with Varys
[ { "docid": "b93022efa40379ca7cc410d8b10ba48e", "text": "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue.\n To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that the our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.", "title": "" }, { "docid": "2b0fa1c4dceb94a2d8c1395dae9fad99", "text": "Among the major problems facing technical management today are those involving the coordination of many diverse activities toward a common goal. In a large engineering project, for example, almost all the engineering and craft skills are involved as well as the functions represented by research, development, design, procurement, construction, vendors, fabricators and the customer. Management must devise plans which will tell with as much accuracy as possible how the efforts of the people representing these functions should be directed toward the project's completion. In order to devise such plans and implement them, management must be able to collect pertinent information to accomplish the following tasks:\n (1) To form a basis for prediction and planning\n (2) To evaluate alternative plans for accomplishing the objective\n (3) To check progress against current plans and objectives, and\n (4) To form a basis for obtaining the facts so that decisions can be made and the job can be done.", "title": "" } ]
[ { "docid": "1819f17297b526e69b345c0c723f4de4", "text": "Boosted by recent legislations, data anonymization is fast becoming a norm. However, as of yet no generic solution has been found to safely release data. As a consequence, data custodians often resort to ad-hoc means to anonymize datasets. Both past and current practices indicate that hashing is often believed to be an effective way to anonymize data. Unfortunately, in practice it is only rarely effective. This paper is a tutorial to explain the limits of cryptographic hash functions as an anonymization technique. Anonymity set is the best privacy model that can be achieved by hash functions. However, this model has several shortcomings. We provide three case studies to illustrate how hashing only yields a weakly anonymized data. The case studies include MAC and email address anonymization as well as the analysis of Google safe browsing.", "title": "" }, { "docid": "66a340007a66bcb3b890aad6c81ea3bc", "text": "[Purpose] The purpose of this study was to investigate the somatotype and physical characteristic differences among elite youth soccer players. [Subjects and Methods] In the present study, we evaluated twenty-two Korean youth soccer players in different playing positions. The playing positions were divided into forward (FW), midfielder (MF), defender (DF), and goalkeeper (GK). The participants' lean body mass (LBM), fat free mass (FFM), fat mass (FM), and basal metabolic rate (BMR) were measured and their somatotype determined according to the Heath-Carter method. [Results] The youth soccer players had twelve ectomorphic, eight mesomorphic, and two central predominant types. The DFs were taller than, but otherwise similar in physical characteristics to the FWs and MFs. The GKs were taller and heavier than the other players; however, their somatotype components were not significantly different. LBM, FFM, and BMR were significantly higher in GKs than in FWs and MFs. Although LBM, FFM, and BMR values between GKs and DFs showed large differences, they were not statistically significant. [Conclusion] The present study may contribute to our understanding of the differences in somatotype and body composition of Korean youth soccer players involved in sports physiotherapy research.", "title": "" }, { "docid": "f0c98e316755e53fab7cee1ba46841f9", "text": "We consider the task of suggesting related queries to users after they issue their initial query to a web search engine. We propose a machine learning approach to learn the probability that a user may find a follow-up query both useful and relevant, given his initial query. Our approach is based on a machine learning model which enables us to generalize to queries that have never occurred in the logs as well. The model is trained on co-occurrences mined from the search logs, with novel utility and relevance models, and the machine learning step is done without any labeled data by human judges. The learning step allows us to generalize from the past observations and generate query suggestions that are beyond the past co-occurred queries. This brings significant gains in coverage while yielding modest gains in relevance. Both offline (based on human judges) and online (based on millions of user interactions) evaluations demonstrate that our approach significantly outperforms strong baselines.", "title": "" }, { "docid": "ef0de93e98d08952d6321b8d2b4be22d", "text": "Gold deposits and occurrences located in the Nubian Shield have been known in Egypt since Predynastic times. 
Despite the fact that these deposits were long under exploitation and investigated many times, they are still insufficiently classified in harmony with the crustal evolution models suggested for the evolution of the Nubian Shield. Several plate tectonic models were proposed for the development of the Nubian Shield and the present classification relies heavily on the model that implies collision of arc-inferred continent through subduction and obduction of oceanic lithosphere. A three-fold classification of gold deposits of Egypt is offered here in harmony with this evolutionary model. These are stratabound deposits and non-stratabound deposits hosted in igneous and metamorphic rocks, as well as placer gold deposits. The stratabound deposits are hosted in island arc volcanic and volcaniclastic rocks of comparable composition formed in ensimatic island arcs. They are thought to have formed by exhalative hydrothermal processes during the waning phases of sub-marine volcanic activity. Stratabound deposits are sub-divided into three main types: gold-bearing Algoma-type Banded Iron Formation, gold-bearing tuffaceous sediments and gold-bearing volcanogenic massive sulphide deposits. Non-stratabound deposits occur in a wide range of igneous and metamorphic rocks. They were formed during orogenic and post-cratonization periods by mineralizing fluids of different sources. Non-stratabound deposits are divided into veintype mineralization, which constituted the main target for gold in Egypt since Pharaonic times, and disseminated-type mineralization hosted in hydrothermally altered rocks (alteration zones) which are taken recently into consideration as a new target for gold in Egypt. Placer gold deposits are divided into modern placers and lithified placers. The former are sub-divided into alluvial placers and beach placers. Conglomerates occurring on or near ancient eroded surfaces represent lithified placers. D 2003 Published by Elsevier B.V.", "title": "" }, { "docid": "33e03ac5663f72166e17d76861fb69c7", "text": "The critical-period hypothesis for second-language acquisition was tested on data from the 1990 U.S. Census using responses from 2.3 million immigrants with Spanish or Chinese language backgrounds. The analyses tested a key prediction of the hypothesis, namely, that the line regressing second-language attainment on age of immigration would be markedly different on either side of the critical-age point. Predictions tested were that there would be a difference in slope, a difference in the mean while controlling for slope, or both. The results showed large linear effects for level of education and for age of immigration, but a negligible amount of additional variance was accounted for when the parameters for difference in slope and difference in means were estimated. Thus, the pattern of decline in second-language acquisition failed to produce the discontinuity that is an essential hallmark of a critical period.", "title": "" }, { "docid": "08f9717de25d01f07b96b2c9bc851b31", "text": "This paper addresses the imaging of objects located under a forest cover using polarimetric synthetic aperture radar tomography (POLTOMSAR) at L-band. High-resolution spectral estimators, able to accurately discriminate multiple scattering centers in the vertical direction, are used to separate the response of objects and vehicles embedded in a volumetric background. 
A new polarimetric spectral analysis technique is introduced and is shown to improve the estimation accuracy of the vertical position of both artificial scatterers and natural environments. This approach provides optimal polarimetric features that may be used to further characterize the objects under analysis. The effectiveness of this novel technique for POLTOMSAR is demonstrated using fully polarimetric L-band airborne data sets acquired by the German Aerospace Center (DLR)'s E-SAR system over the test site in Dornstetten, Germany.", "title": "" }, { "docid": "060cf7fd8a97c1ddf852373b63fe8ae1", "text": "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.", "title": "" }, { "docid": "61d6400d7c9cb1979becffd2b8c3e8ec", "text": "Since its earliest days, harassment and abuse have plagued the Internet. Recent research has focused on in-domain methods to detect abusive content and faces several challenges, most notably the need to obtain large training corpora. In this paper, we introduce a novel computational approach to address this problem called Bag of Communities (BoC)---a technique that leverages large-scale, preexisting data from other Internet communities. We then apply BoC toward identifying abusive behavior within a major Internet community. Specifically, we compute a post's similarity to 9 other communities from 4chan, Reddit, Voat and MetaFilter. We show that a BoC model can be used on communities \"off the shelf\" with roughly 75% accuracy---no training examples are needed from the target community. A dynamic BoC model achieves 91.18% accuracy after seeing 100,000 human-moderated posts, and uniformly outperforms in-domain methods. Using this conceptual and empirical work, we argue that the BoC approach may allow communities to deal with a range of common problems, like abusive behavior, faster and with fewer engineering resources.", "title": "" }, { "docid": "1cd2270eb217e6233a60e002478c1ea0", "text": "We describe work on the visualization of bibliographic data and, to aid in this task, the application of numerical techniques for multidimensional scaling.\nMany areas of scientific research involve complex multivariate data. One example of this is Information Retrieval. Document comparisons may be done using a large number of variables. Such conditions do not favour the more well-known methods of visualization and graphical analysis, as it is rarely feasible to map each variable onto one aspect of even a three-dimensional, coloured and textured space.\nBead is a prototype system for the graphically-based exploration of information. In this system, articles in a bibliography are represented by particles in 3-space. 
By using physically-based modelling techniques to take advantage of fast methods for the approximation of potential fields, we represent the relationships between articles by their relative spatial positions. Inter-particle forces tend to make similar articles move closer to one another and dissimilar ones move apart. The result is a 3D scene which can be used to visualize patterns in the high-D information space.", "title": "" }, { "docid": "d09039cb99a4be3369fa4f049bfe0f11", "text": "OBJECTIVE\nFew interventions have combined life-style and psychosocial approaches in the context of Type 2 diabetes management. The purpose of this study was to determine the effect of a multicomponent behavioral intervention on weight, glycemic control, renal function, and depressive symptoms in a sample of overweight/obese adults with Type 2 diabetes and marked depressive symptoms.\n\n\nMETHODS\nA sample of 111 adults with Type 2 diabetes were randomly assigned to a 1-year intervention (n = 57) or usual care (n = 54) in a parallel groups design. Primary outcomes included weight, glycosylated hemoglobin, and Beck Depression Inventory II score. Estimated glomerular filtration rate served as a secondary outcome. All measures were assessed at baseline and 6 and 12 months after randomization by assessors blind to randomization. Latent growth modeling was used to examine intervention effects on each outcome.\n\n\nRESULTS\nThe intervention resulted in decreased weight (mean [M] = 0.322 kg, standard error [SE] = 0.124 kg, p = .010) and glycosylated hemoglobin (M = 0.066%, SE = 0.028%, p = .017), and Beck Depression Inventory II scores (M = 1.009, SE = 0.226, p < .001), and improved estimated glomerular filtration rate (M = 0.742 ml·min·1.73 m, SE = 0.318 ml·min·1.73 m, p = .020) each month during the first 6 months relative to usual care.\n\n\nCONCLUSIONS\nMulticomponent behavioral interventions targeting weight loss and depressive symptoms as well as diet and physical activity are efficacious in the management of Type 2 diabetes.\n\n\nTRIAL REGISTRATION\nThis study is registered at Clinicaltrials.gov ID: NCT01739205.", "title": "" }, { "docid": "f94ba438b2c5079069c25602c57ef705", "text": "Search with local intent is becoming increasingly useful due to the popularity of the mobile device. The creation and maintenance of accurate listings of local businesses world wide is time consuming and expensive. In this paper, we propose an approach to automatically discover businesses that are visible on street level imagery. Precise business store-front detection enables accurate geo-location of bu sinesses, and further provides input for business categoriza tion, listing generation,etc. The large variety of business categories in different countries makes this a very challen ging problem. Moreover, manual annotation is prohibitive due to the scale of this problem. We propose the use of a MultiBox [4] based approach that takes input image pixels and directly outputs store front bounding boxes. This end-to-end learning approach instead preempts the need for hand modelling either the proposal generation phase or the post-processing phase, leveraging large labelled trai ning datasets. We demonstrate our approach outperforms the state of the art detection techniques with a large margin in terms of performance and run-time efficiency. In the evaluation, we show this approach achieves human accuracy in the low-recall settings. 
We also provide an end-to-end evaluation of business discovery in the real world.", "title": "" }, { "docid": "2d2fbd74afd90843f7d604968d9915c2", "text": "Knowledge has played a significant role in human activities since its development. Data mining is the process of knowledge discovery in which knowledge is gained by analyzing data stored in very large repositories, which are analyzed from various perspectives, and the result is summarized into useful information. Due to the importance of extracting knowledge/information from the large data repositories, data mining has become a very important and guaranteed branch of engineering affecting human life in various spheres directly or indirectly. The purpose of this paper is to survey many of the future trends in the field of data mining, with a focus on those which are thought to have the most promise and applicability to future data mining applications.", "title": "" }, { "docid": "f3c7a2eb1f76a5c72ae8de2134f6a61d", "text": "The amyloid hypothesis has driven drug development strategies for Alzheimer's disease for over 20 years. We review why accumulation of amyloid-beta (Aβ) oligomers is generally considered causal for synaptic loss and neurodegeneration in AD. We elaborate on and update arguments for and against the amyloid hypothesis with new data and interpretations, and consider why the amyloid hypothesis may be failing therapeutically. We note several unresolved issues in the field including the presence of Aβ deposition in cognitively normal individuals, the weak correlation between plaque load and cognition, questions regarding the biochemical nature, presence and role of Aβ oligomeric assemblies in vivo, the bias of pre-clinical AD models toward the amyloid hypothesis and the poorly explained pathological heterogeneity and comorbidities associated with AD. We also illustrate how extensive data cited in support of the amyloid hypothesis, including genetic links to disease, can be interpreted independently of a role for Aβ in AD. We conclude it is essential to expand our view of pathogenesis beyond Aβ and tau pathology and suggest several future directions for AD research, which we argue will be critical to understanding AD pathogenesis.", "title": "" }, { "docid": "a6b2dd2f7aa481f20d314b060985b079", "text": "Bayesian Network has an advantage in dealing with uncertainty, but it is difficult to construct a scientific and rational Bayesian Network model in practical applications. In order to solve this problem, a novel method for constructing Bayesian Network by integrating Failure Mode and Effect Analysis (FMEA) with Fault Tree Analysis (FTA) was proposed. Firstly, the structure matrix representations of FMEA, FTA and Bayesian Network were shown and a structure matrix integration algorithm was explained. Then, an approach for constructing Bayesian Network by obtaining information on node, structure and parameter from FMEA and FTA based on structure matrix was put forward. Finally, in order to verify the feasibility of the method, an illustrative example was given. This method can simplify the modeling process and improve the modeling efficiency for constructing Bayesian Network and promote the application of Bayesian Network in system reliability and safety analysis.", "title": "" }, { "docid": "67ec0d34f235a20513d0384e6d55a9dc", "text": "We present a paper abstract writing system based on an attentive neural sequence-to-sequence model that can take a title as input and automatically generate an abstract.
We design a novel Writing-editing Network that can attend to both the title and the previously generated abstract drafts and then iteratively revise and polish the abstract. With two series of Turing tests, where the human judges are asked to distinguish the system-generated abstracts from human-written ones, our system passes Turing tests by junior domain experts at a rate up to 30% and by nonexpert at a rate up to 80%.1", "title": "" }, { "docid": "6d766690805f74495c5b29b889320908", "text": "With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data - while preserving identity privacy - remains to be an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third party auditor (TPA), who is still able to verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.", "title": "" }, { "docid": "bbb1e41d86ec2507f829febf22dc6c13", "text": "Chirp-sequence-based Frequency Modulation Continuous Wave (FMCW) radar is effective at detecting range and velocity of a target. However, the target detection algorithm is based on two-dimensional Fast Fourier Transform, which uses a great deal of data over several PRIs (Pulse Repetition Intervals). In particular, if the multiple-receive channel is employed to estimate the angle position of a target; even more computational complexity is required. In this paper, we report on how a newly developed signal processing module is implemented in the FPGA, and on its performance measured under test conditions. Moreover, we have presented results from analysis of the use of hardware resources and processing times.", "title": "" }, { "docid": "5e09b2302bc3dc9ca6ae8f4a3812ec1d", "text": "Learning to Reconstruct 3D Objects", "title": "" }, { "docid": "6cf9456d2fe55d2115fd40efbb1a8f96", "text": "We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiabilty of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.", "title": "" }, { "docid": "241f5a88f53c929cc11ce0edce191704", "text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. 
The secure and convenient sharing of personal health data is crucial to improving interaction and collaboration in the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using a channel formation scheme and to enhance identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and to synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from the cloud database and is anchored to the blockchain network. Moreover, for scalability and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.", "title": "" } ]
scidocsrr
c4b8f4b931be2fac3dec141e5d05690f
SGDLibrary: A MATLAB library for stochastic gradient descent algorithms
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "367268c67657a43d1b981347e8175153", "text": "In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.", "title": "" } ]
[ { "docid": "17cd41a64a845ba400ee5018eb899d15", "text": "Structured prediction requires searching over a combinatorial number of structures. To tackle it, we introduce SparseMAP: a new method for sparse structured inference, and its natural loss function. SparseMAP automatically selects only a few global structures: it is situated between MAP inference, which picks a single structure, and marginal inference, which assigns nonzero probability to all structures, including implausible ones. SparseMAP can be computed using only calls to a MAP oracle, making it applicable to problems with intractable marginal inference, e.g., linear assignment. Sparsity makes gradient backpropagation efficient regardless of the structure, enabling us to augment deep neural networks with generic and sparse structured hidden layers. Experiments in dependency parsing and natural language inference reveal competitive accuracy, improved interpretability, and the ability to capture natural language ambiguities, which is attractive for pipeline systems.", "title": "" }, { "docid": "023514bca28bf91e74ebcf8e473b4573", "text": "As a result of technological advances on robotic systems, electronic sensors, and communication techniques, the production of unmanned aerial vehicle (UAV) systems has become possible. Their easy installation and flexibility led these UAV systems to be used widely in both the military and civilian applications. Note that the capability of one UAV is however limited. Nowadays, a multi-UAV system is of special interest due to the ability of its associate UAV members either to coordinate simultaneous coverage of large areas or to cooperate to achieve common goals / targets. This kind of cooperation / coordination requires reliable communication network with a proper network model to ensure the exchange of both control and data packets among UAVs. Such network models should provide all-time connectivity to avoid the dangerous failures or unintended consequences. Thus, the multi-UAV system relies on communication to operate. In this paper, current literature about multi-UAV system regarding its concepts and challenges is presented. Also, both the merits and drawbacks of the available networking architectures and models in a multi-UAV system are presented. Flying Ad Hoc Network (FANET) is moreover considered as a sophisticated type of wireless ad hoc network among UAVs, which solved the communication problems into other network models. Along with the FANET unique features, challenges and open issues are also discussed.", "title": "" }, { "docid": "c94a9083fa847c72bdf11ac5f4689eae", "text": "Despite the great success of word embedding, sentence embedding remains a not-well-solved problem. In this paper, we present a supervised learning framework to exploit sentence embedding for the medical question answering task. The learning framework consists of two main parts: 1) a sentence embedding producing module, and 2) a scoring module. The former is developed with contextual self-attention and multi-scale techniques to encode a sentence into an embedding tensor. This module is shortly called Contextual self-Attention Multi-scale Sentence Embedding (CAMSE). The latter employs two scoring strategies: Semantic Matching Scoring (SMS) and Semantic Association Scoring (SAS). SMS measures similarity while SAS captures association between sentence pairs: a medical question concatenated with a candidate choice, and a piece of corresponding supportive evidence. 
The proposed framework is examined by two Medical Question Answering(MedicalQA) datasets which are collected from real-world applications: medical exam and clinical diagnosis based on electronic medical records (EMR). The comparison results show that our proposed framework achieved significant improvements compared to competitive baseline approaches. Additionally, a series of controlled experiments are also conducted to illustrate that the multi-scale strategy and the contextual self-attention layer play important roles for producing effective sentence embedding, and the two kinds of scoring strategies are highly complementary to each other for question answering problems.", "title": "" }, { "docid": "de3ff51b6344fae401f22f8ccc0c290a", "text": "Attention-based sequence-to-sequence models for automatic speech recognition jointly train an acoustic model, language model, and alignment mechanism. Thus, the language model component is only trained on transcribed audio-text pairs. This leads to the use of shallow fusion with an external language model at inference time. Shallow fusion refers to log-linear interpolation with a separately trained language model at each step of the beam search. In this work, we investigate the behavior of shallow fusion across a range of conditions: different types of language models, different decoding units, and different tasks. On Google Voice Search, we demonstrate that the use of shallow fusion with an neural LM with wordpieces yields a 9.1% relative word error rate reduction (WERR) over our competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.", "title": "" }, { "docid": "38540d5bd40dd3f606073100537b5a69", "text": "Fabric is a modular and extensible open-source system for deploying and operating permissioned blockchains and one of the Hyperledger projects hosted by the Linux Foundation (www.hyperledger.org).\n Fabric is the first truly extensible blockchain system for running distributed applications. It supports modular consensus protocols, which allows the system to be tailored to particular use cases and trust models. Fabric is also the first blockchain system that runs distributed applications written in standard, general-purpose programming languages, without systemic dependency on a native cryptocurrency. This stands in sharp contrast to existing block-chain platforms that require \"smart-contracts\" to be written in domain-specific languages or rely on a cryptocurrency. Fabric realizes the permissioned model using a portable notion of membership, which may be integrated with industry-standard identity management. To support such flexibility, Fabric introduces an entirely novel blockchain design and revamps the way blockchains cope with non-determinism, resource exhaustion, and performance attacks.\n This paper describes Fabric, its architecture, the rationale behind various design decisions, its most prominent implementation aspects, as well as its distributed application programming model. We further evaluate Fabric by implementing and benchmarking a Bitcoin-inspired digital currency. 
We show that Fabric achieves end-to-end throughput of more than 3500 transactions per second in certain popular deployment configurations, with sub-second latency, scaling well to over 100 peers.", "title": "" }, { "docid": "c773b7385362b5fe3dc9f91f80b0eba5", "text": "The purpose of this study was to test the effect of a jump-training program on landing mechanics and lower extremity strength in female athletes involved in jumping sports. These parameters were compared before and after training with those of male athletes. The program was designed to decrease landing forces by teaching neuromuscular control of the lower limb during landing and to increase vertical jump height. After training, peak landing forces from a volleyball block jump decreased 22%, and knee adduction and abduction moments (medially and laterally directed torques) decreased approximately 50%. Multiple regression analysis revealed that these moments were significant predictors of peak landing forces. Female athletes demonstrated lower landing forces than male athletes and lower adduction and abduction moments after training. External knee extension moments (hamstring muscle-dominant) of male athletes were threefold higher than those of female athletes. Hamstring-to-quadriceps muscle peak torque ratios increased 26% on the nondominant side and 13% on the dominant side, correcting side-to-side imbalances. Hamstring muscle power increased 44% with training on the dominant side and 21% on the nondominant. Peak torque ratios of male athletes were significantly greater than those of untrained female athletes, but similar to those of trained females. Mean vertical jump height increased approximately 10%. This training may have a significant effect on knee stabilization and prevention of serious knee injury among female athletes.", "title": "" }, { "docid": "eda242b58e5ed2a2736cb7cccc73220e", "text": "This paper presents an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources, such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit preferences from the end user. We present an evaluation that compares different interactive and non-interactive hybrid strategies for computing recommendations across diverse social and semantic web APIs. Results of the study indicate that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content.", "title": "" }, { "docid": "438094ef7913de0236b57a85e7d511c2", "text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). 
The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.", "title": "" }, { "docid": "f462676b5ef96aefc443a2e8b3d5ed36", "text": "In this brief paper, we honour the contributions of the late Prof. Manny Lehman to the study of software evolution. We do so by means of a kind of evolutionary case study: First, we discuss his background in engineering and explore how this helped to shape his views on software systems and their development; next, we discuss the laws of software evolution that he postulated based on his industrial experiences; and finally, we examine how the nature of software systems and their development are undergoing radical change, and we consider what this means for future evolutionary studies of software. I. LEHMAN’S INTELLECTUAL JOURNEY Meir “Manny” Lehman did not follow a traditional career path for an academic. His father died when he was young, and Lehman had to enter the workforce to help support his family instead of attending university.1 He got his first job in 1941 “performing maintenance on” (i.e., repairing) civilian radios; in England at the height of the second world war, this was a job of real importance. For the most part, his work involved replacing the components that the “tester” (or “debugger”, in software engineering parlance) had determined to be problematic. He found the work repetitive and dull; he decided that he really wanted to be a tester. One day, his foreman called in sick and Lehman was allowed to do testing, but when the foreman returned, Lehman was told to go back to maintenance. When Lehman protested that he thought he had been promoted, the foreman replied “Well, you’re not paid to think” [9]. Lehman would later dedicate a large portion of his life to demonstrate that those who do “maintenance” of software should be paid to think too. Like many other pioneers of computer science, Lehman lived the computer revolution from its conception. His early work was dedicated to building some of the first computers, and by the end of his life he was witnessing the ubiquity of mobile computing. It is likely that he would have spend his life working on hardware, had it not been for the few years he spent at IBM between 1964 and 1972. IBM had originally hired him to help build physical computers; but in 1968 in a radical change of direction, he was asked to investigate programming practices within the company.2 This project took him to study the development of the operating system IBM S/360 and its successor, IBM S/370. 1He would later attend Imperial College London, where he received a PhD in 1957. 2According to Laszlo Belady, “Lehman at the time ’was on the shelf’ [...] IBM never fired anyone. Instead, they put them ’on the shelf.” [1] To put it into context today: IBM S/360 was an operating system for the IBM 360, a computer with up to 8 MBytes of memory; its fastest models could not reach speeds of 0.2 MIPS.3 Lehman discovered that programmers were becoming increasingly interested in assessing their productivity, which they measured in terms of daily SLOCs and passing unittests. 
He noticed that productivity was indeed increasing, but at the same time the developers appeared to be losing sight of the overall product. In his words, “the gross productivity for the project as a whole, and particularly the gross productivity as measured over the lifetime of the software product had probably gone down” [9]. It was at this time that he developed a close friendship with Laszlo Belady. Together they would challenge the prevailing models and assumptions of software maintenance processes, and champion the study of software evolution as a field in its own right. In 1972, Lehman left IBM to join Imperial College London, where he would continue his work in software engineering research.4 While he did not consider himself a programmer, his work at IBM had allowed him to study and understand programmers and their products better than most. He had witnessed first-hand the challenges of producing industrial programs for a real-world environment. He observed that the processes involved in developing and maintaining software formed a kind of feedback system, where the environment provided a signal that had profound impact upon the continued evolution of the system. Lehman’s engineering-influenced views on software systems and their development were in stark contrast to other well known computer scientists, such as Edsger Dijkstra, who had a more formal view on what a programs is. For Dijkstra, a program was a mathematical entity that should be derived, iteratively, from a formal statement what it was supposed to do. In this model, you start with a precise specification, and then implement a series of formal “step-wise” refinements, gradually making it more concrete, and ultimately executable. He preferred “formal derivation of program from spec through a series of correctness-preserving transformations” over more informal and traditional views of software development, and he championed the view that teaching programming should 3The project manager was Fred Brooks, later to become a professor and win the Turing Award. 4He was also instrumental in creating, at Imperial College, one the first programs in software engineering. emphasize creating a specification and then progressively transforming it into a program satisfies it [5].5 Lehman recognized the value of Dijkstra’s position, but at the same time felt that it was not a practical model for the problem space of industrial software or for the style of development he had observed at IBM. So he postulated that programs could be divided into two main categories: S-type programs, which are derived from a rigorous specification that is stated up-front, and can be proven correct if required; and E-type (evolutionary) programs, which are strongly affected by their environment and must be adaptable to changing needs [8]. In his view, E-type systems are those that are embedded in the real world, implicitly suggesting that S-type programs were less common — and thus less important — outside of the research world.6 II. LEHMAN’S LAWS OF EVOLUTION In Lehman’s view, “The moment you install that program, the environment changes.” Hence, a program that is expected to operate in the real world cannot be fully specified for two reasons: it is impossible to anticipate all of the complexities of the real world environment in which it will run; and, equally importantly, the program will affect the environment the moment it starts being used. 
As time passes the environment in which the software system is embedded will inevitably evolve, often in unexpected directions; the environment of a program — including its users — thus becomes input to a feedback loop that drives further evolution. A program might, at some point, perfectly satisfy the requirements of its users, but as its environment changes, it will have to be adapted to continue doing so. In the words of Lehman, “Evolution is an essential property of real-world software” and “As your needs change, your criteria for satisfaction changes”. Over time requirements will change, and software must evolve to continue to satisfy these new requirements. If the environment is the one that drives the evolution, programmers are the ones who evolve the program. Lehman noted that evolving a software system was not an easy task. He summarized his observations in what we today call Lehman’s Laws of Software Evolution (adapted from [10], [4]): 1) Continuing change — An E-type software system7 that is used must be continually adapted, else it becomes progressively less satisfactory. 2) Increasing complexity — As an E-type software system evolves, its complexity tends to increase unless work is done to maintain or reduce it. 5There are symmetries between their positions and those of the disciplines of engineering and mathematics; this is reinforced by the fact that Lehman viewed software as a feedback system — a typical engineering model — while Dijkstra viewed it as a mathematical concept. 6Originally Lehman created a third category: P-type. P-type programs cannot be specified and their development is iterative. Later, he decided that Ptype programs were really a subset of E-type, and he reduced his classification to E-type and S-type programs only. 7Lehman used the term program, but we decided to update the descriptions to the more current term “software system”. 3) Self-regulation — The E-type software system’s evolution process is self regulating with close to normal distribution of measures of product and process attributes. 4) Conservation of organizational stability — The average effective global activity rate on an evolving E-type software system is invariant over the product lifetime. 5) Conservation of familiarity — During the active life of an evolving E-type software system, the content of successive releases is statistically invariant. 6) Continuing growth — Functional content of an E-type software system must be continually increased to maintain user satisfaction over its lifetime. 7) Declining quality — An E-type software system will be perceived as of declining quality unless rigorously maintained and adapted to a changing operational environment. 8) Feedback System — E-type software systems constitute multi-loop, multi-level feedback systems and must be treated as such to be successfully modified or improved. While Law 8 Feedback System was the last to be formulated, arguably it should have been the first to be stated, as its themes pervade the others. As discussed above, Lehman based his observations on the notion that real world software, once deployed, forms a feedback loop. 
The feedback comes from many different sources, including the stakeholders (e.g., they might want new features), the environment in which the system runs (technical — e.g., new versions of the operating system might render the software system unusable unless it is adapted — and non-technical — e.g., a system for tax management might need to be updated to changes in the taxation laws), and the system itself (e.g., its own defects might need to be fixed). Thi", "title": "" }, { "docid": "40043360644ded6950e1f46bd2caaf96", "text": "Recently, there has been a rapidly growing interest in deep learning research and their applications to real-world problems. In this paper, we aim at evaluating and comparing LSTM deep learning architectures for short-and long-term prediction of financial time series. This problem is often considered as one of the most challenging real-world applications for time-series prediction. Unlike traditional recurrent neural networks, LSTM supports time steps of arbitrary sizes and without the vanishing gradient problem. We consider both bidirectional and stacked LSTM predictive models in our experiments and also benchmark them with shallow neural networks and simple forms of LSTM networks. The evaluations are conducted using a publicly available dataset for stock market closing prices.", "title": "" }, { "docid": "cf056b44b0e93ad4fcbc529437cfbec3", "text": "Many advances in the treatment of cancer have been driven by the development of targeted therapies that inhibit oncogenic signaling pathways and tumor-associated angiogenesis, as well as by the recent development of therapies that activate a patient's immune system to unleash antitumor immunity. Some targeted therapies can have effects on host immune responses, in addition to their effects on tumor biology. These immune-modulating effects, such as increasing tumor antigenicity or promoting intratumoral T cell infiltration, provide a rationale for combining these targeted therapies with immunotherapies. Here, we discuss the immune-modulating effects of targeted therapies against the MAPK and VEGF signaling pathways, and how they may synergize with immunomodulatory antibodies that target PD1/PDL1 and CTLA4. We critically examine the rationale in support of these combinations in light of the current understanding of the underlying mechanisms of action of these therapies. We also discuss the available preclinical and clinical data for these combination approaches and their implications regarding mechanisms of action. Insights from these studies provide a framework for considering additional combinations of targeted therapies and immunotherapies for the treatment of cancer.", "title": "" }, { "docid": "823c0e181286d917a610f90d1c9db0c3", "text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. 
This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.", "title": "" }, { "docid": "be7cc41f9e8d3c9e08c5c5ff1ea79f59", "text": "A person’s emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: “The face is the portrait of the mind; the eyes, its informers.”. This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and", "title": "" }, { "docid": "32699147f4915dc4e2d7708ade19ea5b", "text": "Occlusions, complex backgrounds, scale variations and non-uniform distributions present great challenges for crowd counting in practical applications. In this paper, we propose a novel method using an attention model to exploit head locations which are the most important cue for crowd counting. The attention model estimates a probability map in which high probabilities indicate locations where heads are likely to be present. The estimated probability map is used to suppress nonhead regions in feature maps from several multi-scale feature extraction branches of a convolutional neural network for crowd density estimation, which makes our method robust to complex backgrounds, scale variations and non-uniform distributions. In addition, we introduce a relative deviation loss to compensate a commonly used training loss, Euclidean distance, to improve the accuracy of sparse crowd density estimation. Experiments on ShanghaiTech, UCF CC 50 and WorldExpo’10 datasets demonstrate the effectiveness of our method.", "title": "" }, { "docid": "9e84bd8c033bf04592b732e6c6a604c6", "text": "In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aids for real-time tissue characterization and can help to perform visual investigations aimed for example to discover epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images, still today have a low number of informative pixels which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) images patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. 
For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically-inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain where they are coming from, to transfer the quality of the HR images to the initial LR images. This property can be particularly useful in all situations where pairs of LR/HR are not available during the training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment.", "title": "" }, { "docid": "ba6709c1413a1c28c99e686e065ce564", "text": "Essential oils are complex mixtures of hydrocarbons and their oxygenated derivatives arising from two different isoprenoid pathways. Essential oils are produced by glandular trichomes and other secretory structures, specialized secretory tissues mainly diffused onto the surface of plant organs, particularly flowers and leaves, thus exerting a pivotal ecological role in plant. In addition, essential oils have been used, since ancient times, in many different traditional healing systems all over the world, because of their biological activities. Many preclinical studies have documented antimicrobial, antioxidant, anti-inflammatory and anticancer activities of essential oils in a number of cell and animal models, also elucidating their mechanism of action and pharmacological targets, though the paucity of in human studies limits the potential of essential oils as effective and safe phytotherapeutic agents. More well-designed clinical trials are needed in order to ascertain the real efficacy and safety of these plant products.", "title": "" }, { "docid": "548499e5588f95e45993049dfa03723b", "text": "We present the architecture of a deep learning pipeline for natural language processing. Based on this architecture we built a set of tools both for creating distributional vector representations and for performing specific NLP tasks. Three methods are available for creating embeddings: feedforward neural network, sentiment specific embeddings and embeddings based on counts and Hellinger PCA. Two methods are provided for training a network to perform sequence tagging, a window approach and a convolutional approach. The window approach is used for implementing a POS tagger and a NER tagger, the convolutional network is used for Semantic Role Labeling. The library is implemented in Python with core numerical processing written in C++ using parallel linear algebra library for efficiency and scalability.", "title": "" }, { "docid": "88bf67ec7ff0cfa3f1dc6af12140d33b", "text": "Cloud computing is set of resources and services offered through the Internet. Cloud services are delivered from data centers located throughout the world. Cloud computing facilitates its consumers by providing virtual resources via internet. General example of cloud services is Google apps, provided by Google and Microsoft SharePoint. The rapid growth in field of “cloud computing” also increases severe security concerns. Security has remained a constant issue for Open Systems and internet, when we are talking about security cloud really suffers. Lack of security is the only hurdle in wide adoption of cloud computing. 
Cloud computing is surrounded by many security issues like securing data, and examining the utilization of cloud by the cloud computing vendors. The wide acceptance www has raised security risks along with the uncountable benefits, so is the case with cloud computing. The boom in cloud computing has brought lots of security challenges for the consumers and service providers. How the end users of cloud computing know that their information is not having any availability and security issues? Every one poses, Is their information secure? This study aims to identify the most vulnerable security threats in cloud computing, which will enable both end users and vendors to know about the key security threats associated with cloud computing. Our work will enable researchers and security professionals to know about users and vendors concerns and critical analysis about the different security models and tools proposed.", "title": "" }, { "docid": "b7de7a1c14e3bc54cc7551ecba66e8ca", "text": "We present a new approach to capture video at high spatial and spectral resolutions using a hybrid camera system. Composed of an RGB video camera, a grayscale video camera and several optical elements, the hybrid camera system simultaneously records two video streams: an RGB video with high spatial resolution, and a multispectral video with low spatial resolution. After registration of the two video streams, our system propagates the multispectral information into the RGB video to produce a video with both high spectral and spatial resolution. This propagation between videos is guided by color similarity of pixels in the spectral domain, proximity in the spatial domain, and the consistent color of each scene point in the temporal domain. The propagation algorithm is designed for rapid computation to allow real-time video generation at the original frame rate, and can thus facilitate real-time video analysis tasks such as tracking and surveillance. Hardware implementation details and design tradeoffs are discussed. We evaluate the proposed system using both simulations with ground truth data and on real-world scenes. The utility of this high resolution multispectral video data is demonstrated in dynamic white balance adjustment and tracking.", "title": "" }, { "docid": "dfb0ff406407c5f3bdd0c50ffae2d5d8", "text": "The k-means clustering algorithm, a staple of data mining and unsupervised learning, is popular because it is simple to implement, fast, easily parallelized, and offers intuitive results. Lloyd’s algorithm is the standard batch, hill-climbing approach for minimizing the k-means optimization criterion. It spends a vast majority of its time computing distances between each of the k cluster centers and the n data points. It turns out that much of this work is unnecessary, because points usually stay in the same clusters after the first few iterations. In the last decade researchers have developed a number of optimizations to speed up Lloyd’s algorithm for both lowand high-dimensional data. In this chapter we survey some of these optimizations and present new ones. In particular we focus on those which avoid distance calculations by the triangle inequality. By caching known distances and updating them efficiently with the triangle inequality, these algorithms can provably avoid many unnecessary distance calculations. All the optimizations examined produce the same results as Lloyd’s algorithm given the same input and initialization, so are suitable as drop-in replacements. 
These new algorithms can run many times faster and compute far fewer distances than the standard unoptimized implementation. In our experiments, it is common to see speedups of over 30–50x compared to Lloyd’s algorithm. We examine the trade-offs for using these methods with respect to the number of examples n, dimensions d , clusters k, and structure of the data.", "title": "" } ]
scidocsrr
405c84811527f169ac8676b26636b0f7
Title On the psychology of self-prediction: Consideration of situational barriers to intended actions
[ { "docid": "b06dfe7836ce7340605d4b03618c8e8b", "text": "Numerous theories in social and health psychology assume that intentions cause behaviors. However, most tests of the intention- behavior relation involve correlational studies that preclude causal inferences. In order to determine whether changes in behavioral intention engender behavior change, participants should be assigned randomly to a treatment that significantly increases the strength of respective intentions relative to a control condition, and differences in subsequent behavior should be compared. The present research obtained 47 experimental tests of intention-behavior relations that satisfied these criteria. Meta-analysis showed that a medium-to-large change in intention (d = 0.66) leads to a small-to-medium change in behavior (d = 0.36). The review also identified several conceptual factors, methodological features, and intervention characteristics that moderate intention-behavior consistency.", "title": "" } ]
[ { "docid": "916767707946aaa4ade639a56e01d8be", "text": "Copyright © 2017 Massachusetts Medical Society. It is estimated that 470,000 patients receive radiotherapy each year in the United States.1 As many as half of patients with cancer will receive radiotherapy.2 Improvements in diagnosis, therapy, and supportive care have led to increasing numbers of cancer survivors.3 In response, the emphasis of radiation oncology has expanded beyond cure to include reducing side effects, particularly late effects, which may substantially affect a patient’s quality of life. Radiotherapy is used to treat benign and malignant diseases and can be used alone or in combination with chemotherapy, surgery, or both. For primary tumors or metastatic deposits, palliative radiotherapy is often used to reduce pain or mass effect (due to spinal cord compression, brain metastases, or airway obstruction). Therapeutic radiation can be delivered from outside the patient, known as external-beam radiation therapy, or EBRT (see the Glossary in the Supplementary Appendix, available with the full text of this article at NEJM.org), by implanting radioactive sources in cavities or tissues (brachytherapy), or through systemic administration of radiopharmaceutical agents. Multiple technological and biologic advances have fundamentally altered the field of radiation oncology since it was last reviewed in the Journal.4", "title": "" }, { "docid": "0a31ab53b887cf231d7ca1a286763e5f", "text": "Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottomup feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.", "title": "" }, { "docid": "6d262d30db4d6db112f40e5820393caf", "text": "This study sought to examine the effects of service quality and customer satisfaction on the repurchase intentions of customers of restaurants on University of Cape Coast Campus. The survey method was employed involving a convenient sample of 200 customers of 10 restaurants on the University of Cape Coast Campus. A modified DINESERV scale was used to measure customers’ perceived service quality. The results of the study indicate that four factors accounted for 50% of the variance in perceived service quality, namely; responsivenessassurance, empathy-equity, reliability and tangibles. Service quality was found to have a significant effect on customer satisfaction. Also, both service quality and customer satisfaction had significant effects on repurchase intention. However, customer satisfaction could not moderate the effect of service quality on repurchase intention. 
This paper adds to the debate on the dimensions of service quality and provides evidence on the effects of service quality and customer satisfaction on repurchase intention in a campus food service context.", "title": "" }, { "docid": "030f0d829b79593f375c97f9bbb1ee8a", "text": "The growing concern about the extent of joblessness in advanced Western economies is fuelled by the perception that the social costs of unemployment substantially exceed the costs of an economy operating below its potential. Rather, it is suspected that unemployment imposes an additional burden on the individual, a burden that might be referred to as the non-pecuniary cost of unemployment. Those costs arise primarily since employment is not only a source of income but also a provider of social relationships, identity in society and individual self-esteem. Darity and Goldsmith (1996) provide a summary of the psychological literature on the link between loss of employment and reduced wellbeing. Substantial efforts have been made in the past to quantify these nonpecuniary costs of unemployment. (See Junankar 1987; Björklund and Eriksson 1995 and Darity and Goldsmith 1996 for surveys of previous empirical studies.) To begin with, one can think of costs directly in terms of decreased psychological wellbeing. Beyond that, decreased wellbeing may express itself through adverse individual outcomes such as increased mortality, suicide risk and crime rates, or decreased marital stability. These possibilities have been explored by previous research. The general finding is that unemployment is associated with substantial negative non-pecuniary effects (see e.g. Jensen and Smith 1990; Junankar 1991). The case seems particularly strong for the direct negative association between unemployment and psychological wellbeing. For instance, Clark and Oswald (1994), using the first wave of the British Household Panel Survey, report estimates from ordered probit models in which a mental distress score is regressed on a set of individual characteristics, unemployment being one of them. They find that the effect of unemployment is both statistically significant and large: being unemployed increases mental distress by more than does suffering impaired health. Other researchers have used different measures of psychological wellbeing and yet obtained the same basic result, a large negative effect of unemployment on well being. Björklund (1985) and Korpi (1997) construct wellbeing indicators from symptoms of sleeplessness, stomach pain, depression and the like, while Goldsmith et al. (1995, 1996) measure", "title": "" }, { "docid": "9cbf4d0843196b1dcada6f60c0d0c2e8", "text": "In this paper we describe a novel method to integrate interactive visual analysis and machine learning to support the insight generation of the user. The suggested approach combines the vast search and processing power of the computer with the superior reasoning and pattern recognition capabilities of the human user. An evolutionary search algorithm has been adapted to assist in the fuzzy logic formalization of hypotheses that aim at explaining features inside multivariate, volumetric data. Up to now, users solely rely on their knowledge and expertise when looking for explanatory theories. However, it often remains unclear whether the selected attribute ranges represent the real explanation for the feature of interest. Other selections hidden in the large number of data variables could potentially lead to similar features. 
Moreover, as simulation complexity grows, users are confronted with huge multidimensional data sets making it almost impossible to find meaningful hypotheses at all. We propose an interactive cycle of knowledge-based analysis and automatic hypothesis generation. Starting from initial hypotheses, created with linking and brushing, the user steers a heuristic search algorithm to look for alternative or related hypotheses. The results are analyzed in information visualization views that are linked to the volume rendering. Individual properties as well as global aggregates are visually presented to provide insight into the most relevant aspects of the generated hypotheses. This novel approach becomes computationally feasible due to a GPU implementation of the time-critical parts in the algorithm. A thorough evaluation of search times and noise sensitivity as well as a case study on data from the automotive domain substantiate the usefulness of the suggested approach.", "title": "" }, { "docid": "f6914702ebadddc3b8bc54fd87f1c571", "text": "Energy crisis is one of the biggest problems in the third world developing country like Bangladesh. There is a big gap between generation and demand of Electric energy. Almost 50% population of our country is very far away from this blessings. Renewable energy is the only solution of this problem to be an energy efficient developed country. Solar energy is one of the great resources of the renewable energy which can play a crucial role in developing a power deficient country like Bangladesh. This paper provides a proposal of using dual axis solar tracker instead of solar panel. This encompasses a design of ideal solar house model using azimuth-altitude dual axis solar tracker on rooftop. It has been proved through mathematical calculation that the solar energy increases up to 50-60% where dual axis solar tracker is used. Apart from the mentioned design, this paper presents a structure and application of a microcontroller based azimuth-altitude dual axis solar tracker which tracks the solar panel according to the direction of the solar radiation. A built-in ADC converter is used to drive motor circuit. To minimize the power consumption by dual axis solar tracker, we are in favor of using two stepper motor especially during the seasonal change. The proposed model demonstrates that we require a very small amount of power from national grid if we can install dual axis solar tracker on rooftops of our residence; this is how increasing energy demand can effectively be met.", "title": "" }, { "docid": "07817eb2722fb434b1b8565d936197cf", "text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. 
We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.", "title": "" }, { "docid": "23e08b1f6886d8171fe2f46c88ea6ee2", "text": "In recent years, there has been a significant interest in integrating probability theory with first order logic and relational representations [see De Raedt and Kersting, 2003, for an overview]. Muggleton [1996] and Cussens [1999] have upgraded stochastic grammars towards Stochastic Logic Programs, Sato and Kameya [2001] have introduced Probabilistic Distributional Semantics for logic programs, and Domingos and Richardson [2004] have upgraded Markov networks towards Markov Logic Networks. Another research stream including Poole’s Independent Choice Logic [1993], Ngo and Haddawy’s Probabilistic-Logic Programs [1997], Jäger’s Relational Bayesian Networks [1997], and Pfeffer’s Probabilistic Relational Models [2000] concentrates on first order logical and relational extensions of Bayesian networks.", "title": "" }, { "docid": "02a958e650473161bd9978423258e526", "text": "The Linux kernel employs hash table data structures to store high-usage data objects such as pages, buffers, inodes, and others. In this report we find significant performance boosts with careful analysis and tuning of four critical kernel data structures.", "title": "" }, { "docid": "a90f36ff4783569fda60bde12ad9f1c8", "text": "In recent days for logistics as well as common peoples of developing and developed countries are in need of transportation policy including road safety, which can be achieved only when well-managed driving rule violation monitoring takes place to monitor the flow of increasing number of vehicles. Manual detection of driving rule breakers leads to overhead of today’s traffic monitoring body. Switching to an automated decision support driving rule violation monitoring system based on sensor network is one of the paradigms to solve this issue. Upcoming cloud researchers have already felt the demand of designing such a sensor-based driver’s driving rule violation monitoring cloud application that can release the overhead of the current traffic monitoring bodies. To accomplish the client’s expectation researchers have realized such cloud application based on sensor network or Internet of Things (IoT) is not so satisfactory. To solve these bottleneck conditions of the cloud computing, researchers have introduced the future of cloud computing called fog computing in this aspect, where computing services reside at the network edge. This paper proposed novel fog-based intelligent decision support system (DSS) for driver safety and traffic violation monitoring based on the IoT. Our conceptual framework could easily be adapted in current scenario and can also become a de facto decision support system model for future hassle-free driving rule violation monitoring system.", "title": "" }, { "docid": "af5f7910be8cbc67ac3aa0e81c8c2bd3", "text": "Manlio De Domenico, Albert Solé-Ribalta, Emanuele Cozzo, Mikko Kivelä, Yamir Moreno, Mason A. 
Porter, Sergio Gómez, and Alex Arenas Departament d’Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain Institute for Biocomputation and Physics of Complex Systems (BIFI), University of Zaragoza, Zaragoza 50018, Spain Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, Oxford OX1 3LB, United Kingdom Department of Theoretical Physics, University of Zaragoza, Zaragoza 50009, Spain Complex Networks and Systems Lagrange Lab, Institute for Scientific Interchange, Turin 10126, Italy Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute and CABDyN Complexity Centre, University of Oxford, Oxford OX1 3LB, United Kingdom (Received 23 July 2013; published 4 December 2013)", "title": "" }, { "docid": "53633432216e383297e401753332b00a", "text": "Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream. Results suggest that the lPCS in engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help to explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.", "title": "" }, { "docid": "c87a8ee5e968d2039b29f080f773af75", "text": "The Gartner's 2014 Hype Cycle released last August moves Big Data technology from the Peak of Inflated Expectations to the beginning of the Trough of Disillusionment when interest starts to wane as reality does not live up to previous promises. 
As the hype is starting to dissipate it is worth asking what Big Data (however defined) means from a scientific perspective: Did the emergence of gigantic corpora exposed the limits of classical information retrieval and data mining and led to new concepts and challenges, the way say, the study of electromagnetism showed the limits of Newtonian mechanics and led to Relativity Theory, or is it all just \"sound and fury, signifying nothing\", simply a matter of scaling up well understood technologies? To answer this question, we have assembled a distinguished panel of eminent scientists, from both Industry and Academia: Lada Adamic (Facebook), Michael Franklin (University of California at Berkeley), Maarten de Rijke (University of Amsterdam), Eric Xing (Carnegie Mellon University), and Kai Yu (Baidu) will share their point of view and take questions from the moderator and the audience.", "title": "" }, { "docid": "3016b95983fbcc1ab4a9aa2a69c08bdc", "text": "Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While being researched for decades, more recently, Augmented Reality moved out of the research labs and into the field. While most of the applications are used sporadically and for one particular task only, current and future scenarios will provide a continuous and multi-purpose user experience. Therefore, in this paper, we present the concept of Pervasive Augmented Reality, aiming to provide such an experience by sensing the user’s current context and adapting the AR system based on the changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based our taxonomy and survey, we identify challenges for future research directions in Pervasive Augmented Reality.", "title": "" }, { "docid": "c74c73965123e09bfbaef3e9793c38e0", "text": "We propose a one-class neural network (OC-NN) model to detect anomalies in complex data sets. OC-NN combines the ability of deep networks to extract progressively rich representation of data with the one-class objective of creating a tight envelope around normal data. The OC-NN approach breaks new ground for the following crucial reason: data representation in the hidden layer is driven by the OC-NN objective and is thus customized for anomaly detection. This is a departure from other approaches which use a hybrid approach of learning deep features using an autoencoder and then feeding the features into a separate anomaly detection method like one-class SVM (OC-SVM). The hybrid OC-SVM approach is sub-optimal because it is unable to influence representational learning in the hidden layers. A comprehensive set of experiments demonstrate that on complex data sets (like CIFAR and GTSRB), OC-NN performs on par with state-of-the-art methods and outperformed conventional shallow methods in some scenarios.", "title": "" }, { "docid": "c65cebec214fc6c45e266bfcce731676", "text": "Creativity is central to much human problem solving and innovation. Brainstorming processes attempt to leverage group creativity, but group dynamics sometimes limit their utility. 
We present IdeaExpander, a tool to support group brainstorming by intelligently selecting pictorial stimuli based on the group's conversation The design is based on theories of how perception, thinking, and communication interact; a pilot study (N=16) suggests that it increases individuals' idea production and that people value it.", "title": "" }, { "docid": "ad0ed4bca299c9961705cc40793ae697", "text": "An ultra-wideband mixer using standard complementery metal oxide semiconductor (CMOS) technology was first proposed in this paper. This broadband mixer achieves measured conversion gain of 11 1 5 dB with a bandwidth of 0.3 to 25 GHz. The mixer was fabricated in a commercial 0.18m CMOS technology and demonstrated the highest frequency and bandwidth of operation. It also presented better gain-bandwidth-product performance compared with that of GaAs-based HBT technologies. The chip area is 0.8 1 mm.", "title": "" }, { "docid": "a02fb872137fe7bc125af746ba814849", "text": "23% of the total global burden of disease is attributable to disorders in people aged 60 years and older. Although the proportion of the burden arising from older people (≥60 years) is highest in high-income regions, disability-adjusted life years (DALYs) per head are 40% higher in low-income and middle-income regions, accounted for by the increased burden per head of population arising from cardiovascular diseases, and sensory, respiratory, and infectious disorders. The leading contributors to disease burden in older people are cardiovascular diseases (30·3% of the total burden in people aged 60 years and older), malignant neoplasms (15·1%), chronic respiratory diseases (9·5%), musculoskeletal diseases (7·5%), and neurological and mental disorders (6·6%). A substantial and increased proportion of morbidity and mortality due to chronic disease occurs in older people. Primary prevention in adults aged younger than 60 years will improve health in successive cohorts of older people, but much of the potential to reduce disease burden will come from more effective primary, secondary, and tertiary prevention targeting older people. Obstacles include misplaced global health priorities, ageism, the poor preparedness of health systems to deliver age-appropriate care for chronic diseases, and the complexity of integrating care for complex multimorbidities. Although population ageing is driving the worldwide epidemic of chronic diseases, substantial untapped potential exists to modify the relation between chronological age and health. This objective is especially important for the most age-dependent disorders (ie, dementia, stroke, chronic obstructive pulmonary disease, and vision impairment), for which the burden of disease arises more from disability than from mortality, and for which long-term care costs outweigh health expenditure. The societal cost of these disorders is enormous.", "title": "" }, { "docid": "fb0fabb99d446e1edbb3fd581d16693b", "text": "Facial caricature is an art form of drawing faces in an exaggerated way to convey humor or sarcasm. In this paper, we propose the first Generative Adversarial Network (GAN) for unpaired photo-to-caricature translation, which we call \"CariGANs\". It explicitly models geometric exaggeration and appearance stylization using two components: CariGeoGAN, which only models the geometry-to-geometry transformation from face photos to caricatures, and CariStyGAN, which transfers the style appearance from caricatures to face photos without any geometry deformation. 
In this way, a difficult cross-domain translation problem is decoupled into two easier tasks. The perceptual study shows that caricatures generated by our CariGANs are closer to the hand-drawn ones, and at the same time better persevere the identity, compared to state-of-the-art methods. Moreover, our CariGANs allow users to control the shape exaggeration degree and change the color/texture style by tuning the parameters or giving an example caricature.", "title": "" } ]
scidocsrr
99bb3c92cbbc43f00a1be095270da6a0
Design Challenges and Misconceptions in Neural Sequence Labeling
[ { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" } ]
[ { "docid": "995a5523c131e09f8a8f04a3cf304045", "text": "Topic models are often applied in industrial settings to discover user profiles from activity logs where documents correspond to users and words to complex objects such as web sites and installed apps. Standard topic models ignore the content-based similarity structure between these objects largely because of the inability of the Dirichlet prior to capture such side information of word-word correlation. Several approaches were proposed to replace the Dirichlet prior with more expressive alternatives. However, this added expressivity comes with a heavy premium: inference becomes intractable and sparsity is lost which renders these alternatives not suitable for industrial scale applications. In this paper we take a radically different approach to incorporating word-word correlation in topic models by applying this side information at the posterior level rather than at the prior level. We show that this choice preserves sparsity and results in a graph-based sampler for LDA whose computational complexity is asymptotically on bar with the state of the art Alias base sampler for LDA \\cite{aliasLDA}. We illustrate the efficacy of our approach over real industrial datasets that span up to billion of users, tens of millions of words and thousands of topics. To the best of our knowledge, our approach provides the first practical and scalable solution to this important problem.", "title": "" }, { "docid": "1f714aea64a7d23743e507724e4d531b", "text": "At the mo ment, S upport Ve ctor Machine ( SVM) has been widely u sed i n t he study of stock investment related topics. Stock investment can be further divided into three s trategies such as: buy, sell and hold. Using data concerning China Steel Corporation, this article adopts genetic algorithm for the search of the best SVM parameter and the selection of the best SVM prediction variable, then it will be compared with Logistic Regression for the classification prediction capability of stock investment. From the classification prediction result and the result of AUC of the models presented in this article, it can be seen that the SVM after adjustment of input variables and parameters will have classification prediction capability relatively superior to that of the other three models.", "title": "" }, { "docid": "7655df3f32e6cf7a5545ae2231f71e7c", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. 
Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.", "title": "" }, { "docid": "37845c0912d9f1b355746f41c7880c3a", "text": "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5.", "title": "" }, { "docid": "4d2dad29f0f02d448c78b7beda529022", "text": "This paper proposes a novel diagnosis method for detection and discrimination of two typical mechanical failures in induction motors by stator current analysis: load torque oscillations and dynamic rotor eccentricity. A theoretical analysis shows that each fault modulates the stator current in a different way: torque oscillations lead to stator current phase modulation, whereas rotor eccentricities produce stator current amplitude modulation. The use of traditional current spectrum analysis involves identical frequency signatures with the two fault types. A time-frequency analysis of the stator current with the Wigner distribution leads to different fault signatures that can be used for a more accurate diagnosis. The theoretical considerations and the proposed diagnosis techniques are validated on experimental signals.", "title": "" }, { "docid": "318904a334dfa03a6cb4720c31673dda", "text": "Choosing the most appropriate dietary assessment tool for a study can be a challenge. Through a scoping review, we characterized self-report tools used to assess diet in Canada to identify patterns in tool use and to inform strategies to strengthen nutrition research. The research databases Medline, PubMed, PsycINFO, and CINAHL were used to identify Canadian studies published from 2009 to 2014 that included a self-report assessment of dietary intake. The search elicited 2358 records that were screened to identify those that reported on self-report dietary intake among nonclinical, non-Aboriginal adult populations. A pool of 189 articles (reflecting 92 studies) was examined in-depth to assess the dietary assessment tools used. Food-frequency questionnaires (FFQs) and screeners were used in 64% of studies, whereas food records and 24-h recalls were used in 18% and 14% of studies, respectively. Three studies (3%) used a single question to assess diet, and for 3 studies the tool used was not clear. 
A variety of distinct FFQs and screeners, including those developed and/or adapted for use in Canada and those developed elsewhere, were used. Some tools were reported to have been evaluated previously in terms of validity or reliability, but details of psychometric testing were often lacking. Energy and fat were the most commonly studied, reported by 42% and 39% of studies, respectively. For ∼20% of studies, dietary data were used to assess dietary quality or patterns, whereas close to half assessed ≤5 dietary components. A variety of dietary assessment tools are used in Canadian research. Strategies to improve the application of current evidence on best practices in dietary assessment have the potential to support a stronger and more cohesive literature on diet and health. Such strategies could benefit from national and global collaboration.", "title": "" }, { "docid": "5f7ea9c7398ddbb5062d029e307fcf22", "text": "This paper presents a low cost and flexible home control and monitoring system using an embedded micro-web server, with IP connectivity for accessing and controlling devices and appliances remotely using Android based Smart phone app. The proposed system does not require a dedicated server PC with respect to similar systems and offers a novel communication protocol to monitor and control the home environment with more than just the switching functionality.", "title": "" }, { "docid": "5487ee527ef2a9f3afe7f689156e7e4d", "text": "Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general “compare-aggregate” framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than standard neural network and neural tensor network.", "title": "" }, { "docid": "a9f6c0dfd884fb22e039b37e98f22fe0", "text": "Image semantic segmentation is a fundamental problem and plays an important role in computer vision and artificial intelligence. Recent deep neural networks have improved the accuracy of semantic segmentation significantly. Meanwhile, the number of network parameters and floating point operations have also increased notably. The realworld applications not only have high requirements on the segmentation accuracy, but also demand real-time processing. In this paper, we propose a pyramid pooling encoder-decoder network named PPEDNet for both better accuracy and faster processing speed. Our encoder network is based on VGG16 and discards the fully connected layers due to their huge amounts of parameters. To extract context feature efficiently, we design a pyramid pooling architecture. The decoder is a trainable convolutional network for upsampling the output of the encoder, and finetuning the segmentation details. Our method is evaluated on CamVid dataset, achieving 7.214% mIOU accuracy improvement while reducing 17.9% of the parameters compared with the state-of-the-art algorithm.", "title": "" }, { "docid": "be8864d6fb098c8a008bfeea02d4921a", "text": "Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. 
It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.", "title": "" }, { "docid": "14c278147defd19feb4e18d31a3fdcfb", "text": "Efficient provisioning of resources is a challenging problem in cloud computing environments due to its dynamic nature and the need for supporting heterogeneous applications with different performance requirements. Currently, cloud datacenter providers either do not offer any performance guarantee or prefer static VM allocation over dynamic, which lead to inefficient utilization of resources. Earlier solutions, concentrating on a single type of SLAs (Service Level Agreements) or resource usage patterns of applications, are not suitable for cloud computing environments. In this paper, we tackle the resource allocation problem within a datacenter that runs different type of application workloads, particularly non-interactive and transactional applications. We propose admission control and scheduling mechanism which not only maximizes the resource utilization and profit, but also ensures the SLA requirements of users. In our experimental study, the proposed mechanism has shown to provide substantial improvement over static server consolidation and reduces SLA Violations.", "title": "" }, { "docid": "ce8729f088aaf9f656c9206fc67ff4bd", "text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.", "title": "" }, { "docid": "331df0bd161470558dd5f5061d2b1743", "text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. 
However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.", "title": "" }, { "docid": "4ac8435b96c020231c775c4625b5ff0a", "text": "This article addresses the issue of student writing in higher education. It draws on the findings of an Economic and Social Research Council funded project which examined the contrasting expectations and interpretations of academic staff and students regarding undergraduate students' written assignments. It is suggested that the implicit models that have generally been used to understand student writing do not adequately take account of the importance of issues of identity and the institutional relationships of power and authority that surround, and are embedded within, diverse student writing practices across the university. A contrasting and therefore complementary perspective is used to present debates about 'good' and `poor' student writing. The article outlines an 'academic literacies' framework which can take account of the conflicting and contested nature of writing practices, and may therefore be more valuable for understanding student writing in today's higher education than traditional models and approaches.", "title": "" }, { "docid": "2aa298d65ad723f7c89597165c563587", "text": "Recommender systems are needed to find food items of one’s interest. We review recommender systems and recommendation methods. We propose a food personalization framework based on adaptive hypermedia. We extend Hermes framework with food recommendation functionality. We combine TF-IDF term extraction method with cosine similarity measure. Healthy heuristics and standard food database are incorporated into the knowledgebase. Based on the performed evaluation, we conclude that semantic recommender systems in general outperform traditional recommenders systems with respect to accuracy, precision, and recall, and that the proposed recommender has a better F-measure than existing semantic recommenders.", "title": "" }, { "docid": "6c6e4e776a3860d1df1ccd7af7f587d5", "text": "We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). 
Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.", "title": "" }, { "docid": "e07198de4fe8ea55f2c04ba5b6e9423a", "text": "Query expansion (QE) is a well known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem. The first stage is adaptive expansion decision, which determines if a query is suitable for SQE or not. For unsuitable queries, SQE is skipped and no term features are extracted at all, which reduces the most time cost. For those suitable queries, the second stage is cost constrained feature selection, which chooses a subset of effective yet inexpensive features for supervised learning. Extensive experiments on four corpora (including three academic and one industry corpus) show that our TFS framework can substantially reduce the time cost for SQE, while maintaining its effectiveness.", "title": "" }, { "docid": "826e01210bb9ce8171ed72043b4a304d", "text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.", "title": "" }, { "docid": "06113aca54d87ade86127f2844df6bfd", "text": "A growing number of people use social networking sites to foster social relationships among each other. While the advantages of the provided services are obvious, drawbacks on a users' privacy and arising implications are often neglected. In this paper we introduce a novel attack called automated social engineering which illustrates how social networking sites can be used for social engineering. Our approach takes classical social engineering one step further by automating tasks which formerly were very time-intensive. In order to evaluate our proposed attack cycle and our prototypical implementation (ASE bot), we conducted two experiments. Within the first experiment we examine the information gathering capabilities of our bot. The second evaluation of our prototype performs a Turing test. 
The promising results of the evaluation highlight the possibility to efficiently and effectively perform social engineering attacks by applying automated social engineering bots.", "title": "" }, { "docid": "71c34b48cd22a0a8bc9b507e05919301", "text": "Under the action of wind, tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. While the alongwind loads have been successfully treated using quasi-steady and strip theories in terms of gust loading factors, the acrosswind and torsional loads cannot be treated in this manner, since these loads cannot be related in a straightforward manner to the fluctuations in the approach flow. Accordingly, most current codes and standards provide little guidance for the acrosswind and torsional response. To fill this gap, a preliminary, interactive database of aerodynamic loads is presented, which can be accessed by any user with Microsoft Explorer at the URL address http://www.nd.edu/;nathaz/. The database is comprised of high-frequency base balance measurements on a host of isolated tall building models. Combined with the analysis procedure provided, the nondimensional aerodynamic loads can be used to compute the wind-induced response of tall buildings. The influence of key parameters, such as the side ratio, aspect ratio, and turbulence characteristics for rectangular sections, is also discussed. The database and analysis procedure are viable candidates for possible inclusion as a design guide in the next generation of codes and standards. DOI: 10.1061/~ASCE!0733-9445~2003!129:3~394! CE Database keywords: Aerodynamics; Wind loads; Wind tunnels; Databases; Random vibration; Buildings, high-rise; Turbulence. 394 / JOURNAL OF STRUCTURAL ENGINEERING © ASCE / MARCH 2003 tic model tests are presently used as routine tools in commercial design practice. However, considering the cost and lead time needed for wind tunnel testing, a simplified procedure would be desirable in the preliminary design stages, allowing early assessment of the structural resistance, evaluation of architectural or structural changes, or assessment of the need for detailed wind tunnel tests. Two kinds of wind tunnel-based procedures have been introduced in some of the existing codes and standards to treat the acrosswind and torsional response. The first is an empirical expression for the wind-induced acceleration, such as that found in the National Building Code of Canada ~NBCC! ~NRCC 1996!, while the second is an aerodynamic-load-based procedure such as those in Australian Standard ~AS 1989! and the Architectural Institute of Japan ~AIJ! Recommendations ~AIJ 1996!. The latter approach offers more flexibility as the aerodynamic load provided can be used to determine the response of any structure having generally the same architectural features and turbulence environment of the tested model, regardless of its structural characteristics. Such flexibility is made possible through the use of well-established wind-induced response analysis procedures. Meanwhile, there are some databases involving isolated, generic building shapes available in the literature ~e.g., Kareem 1988; Choi and Kanda 1993; Marukawa et al. 1992!, which can be expanded using HFBB tests. For example, a number of commercial wind tunnel facilities have accumulated data of actual buildings in their natural surroundings, which may be used to supplement the overall loading database. 
Though such HFBB data has been collected, it has not been assimilated and made accessible to the worldwide community, to fully realize its potential. Fortunately, the Internet now provides the opportunity to pool and archive the international stores of wind tunnel data. This paper takes the first step in that direction by introducing an interactive database of aerodynamic loads obtained from HFBB measurements on a host of isolated tall building models, accessible to the worldwide Internet community via Microsoft Explorer at the URL address http://www.nd.edu/;nathaz. Through the use of this interactive portal, users can select the Engineer, Malouf Engineering International, Inc., 275 W. Campbell Rd., Suite 611, Richardson, TX 75080; Fomerly, Research Associate, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: yzhou@nd.edu Graduate Student, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: tkijewsk@nd.edu Robert M. Moran Professor, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: kareem@nd.edu. Note. Associate Editor: Bogusz Bienkiewicz. Discussion open until August 1, 2003. Separate discussions must be submitted for individual papers. To extend the closing date by one month, a written request must be filed with the ASCE Managing Editor. The manuscript for this paper was submitted for review and possible publication on April 24, 2001; approved on December 11, 2001. This paper is part of the Journal of Structural Engineering, Vol. 129, No. 3, March 1, 2003. ©ASCE, ISSN 0733-9445/2003/3-394–404/$18.00. Introduction Under the action of wind, typical tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. It has been recognized that for many high-rise buildings the acrosswind and torsional response may exceed the alongwind response in terms of both serviceability and survivability designs ~e.g., Kareem 1985!. Nevertheless, most existing codes and standards provide only procedures for the alongwind response and provide little guidance for the critical acrosswind and torsional responses. This is partially attributed to the fact that the acrosswind and torsional responses, unlike the alongwind, result mainly from the aerodynamic pressure fluctuations in the separated shear layers and wake flow fields, which have prevented, to date, any acceptable direct analytical relation to the oncoming wind velocity fluctuations. Further, higher-order relationships may exist that are beyond the scope of the current discussion ~Gurley et al. 2001!. Wind tunnel measurements have thus served as an effective alternative for determining acrosswind and torsional loads. For example, the high-frequency base balance ~HFBB! and aeroelasgeometry and dimensions of a model building, from the available choices, and specify an urban or suburban condition. Upon doing so, the aerodynamic load spectra for the alongwind, acrosswind, or torsional response is displayed along with a Java interface that permits users to specify a reduced frequency of interest and automatically obtain the corresponding spectral value. When coupled with the concise analysis procedure, discussion, and example provided, the database provides a comprehensive tool for computation of the wind-induced response of tall buildings. 
Wind-Induced Response Analysis Procedure. Using the aerodynamic base bending moment or base torque as the input, the wind-induced response of a building can be computed using random vibration analysis by assuming idealized structural mode shapes, e.g., linear, and considering the special relationship between the aerodynamic moments and the generalized wind loads (e.g., Tschanz and Davenport 1983; Zhou et al. 2002). This conventional approach yields only approximate estimates of the mode-generalized torsional moments and potential inaccuracies in the lateral loads if the sway mode shapes of the structure deviate significantly from the linear assumption. As a result, this procedure often requires the additional step of mode shape corrections to adjust the measured spectrum weighted by a linear mode shape to the true mode shape (Vickery et al. 1985; Boggs and Peterka 1989; Zhou et al. 2002). However, instead of utilizing conventional generalized wind loads, a base-bending-moment-based procedure is suggested here for evaluating equivalent static wind loads and response. As discussed in Zhou et al. (2002), the influence of nonideal mode shapes is rather negligible for base bending moments, as opposed to other quantities like base shear or generalized wind loads. As a result, base bending moments can be used directly, presenting a computationally efficient scheme, averting the need for mode shape correction and directly accommodating nonideal mode shapes. Application of this procedure for the alongwind response has proven effective in recasting the traditional gust loading factor approach in a new format (Zhou et al. 1999; Zhou and Kareem 2001). The procedure can be conveniently adapted to the acrosswind and torsional response (Boggs and Peterka 1989; Kareem and Zhou 2003). It should be noted that the response estimation based on the aerodynamic database is not advocated for acrosswind response calculations in situations where the reduced frequency is equal to or slightly less than the Strouhal number (Simiu and Scanlan 1996; Kijewski et al. 2001). In such cases, the possibility of negative aerodynamic damping, a manifestation of motion-induced effects, may cause the computed results to be inaccurate (Kareem 1982). Assuming a stationary Gaussian process, the expected maximum base bending moment response in the alongwind or acrosswind directions or the base torque response can be expressed in the following form:", "title": "" } ]
scidocsrr
1d6f50e61ec8ba82fdffe2efce4c2f43
On the Near Impossibility of Measuring the Returns to Advertising
[ { "docid": "f7562e0540e65fdfdd5738d559b4aad1", "text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data open the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. We develop new econometric methods to imple0732-2399/96/1504/0321$01.25 Copyright C 1996, Institute for Operations Research and the Management Sciences ment a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boasts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models) MARKETING SCIENCE/Vol. 15, No. 4, 1996 pp. 321-340 THE VALUE OF PURCHASE HISTORY DATA IN TARGET MARKETING", "title": "" } ]
[ { "docid": "1d2485f8a4e2a5a9f983bfee3e036b92", "text": "Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances of technology enable us to collect and store massive amount of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge on the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work [1] where the original version of PDE-Net was proposed. PDE-Net 2.0 is a combination of numerical approximation of differential operators by convolutions and a symbolic multi-layer neural network for model recovery. Comparing with existing approaches, PDE-Net 2.0 has the most flexibility and expressive power by learning both differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that the PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.", "title": "" }, { "docid": "ccb4d786a29d70ccb09dee97daae5798", "text": "Liver and intestine are tightly linked through the venous system of the portal circulation. Consequently, the liver is the primary recipient of gut-derived products, most prominently dietary nutrients and microbial components. It functions as a secondary \"firewall\" and protects the body from intestinal pathogens and other microbial products that have crossed the primary barrier of the intestinal tract. Disruption of the intestinal barrier enhances microbial exposure of the liver, which can have detrimental or beneficial effects in the organ depending on the specific circumstances. Conversely, the liver also exerts influence over intestinal microbial communities via secretion of bile acids and IgA antibodies. This mini-review highlights key findings and concepts in the area of host-microbial interactions as pertinent to the bilateral communication between liver and gut and highlights the concept of the gut-liver axis.", "title": "" }, { "docid": "617a3f1ed0164a058932cd9e96a9d103", "text": "Conventional approaches to speaker diarization use short-term features such as Mel Frequency Cepstral Co-efficients (MFCC). Features such as i-vectors have been used on longer segments (minimum 2.5 seconds of speech). Using i-vectors for speaker diarization has been shown to be beneficial as it models speaker information explicitly. In this paper, the i-vector modelling technique is adapted to be used as short term features for diarization by estimating i-vectors over a short window of MFCCs. The Information Bottleneck (IB) approach provides a convenient platform to integrate multiple features together for fast and accurate diarization of speech. Speaker models are estimated over a window of 10 frames of speech and used as features in the IB system. Experiments on the NIST RT datasets show absolute improvements of 3.9% in the best case when ivectors are used as auxiliary features to MFCC. Further, discriminative training algorithms such as LDA and PLDA are applied on the i-vectors. 
A best case performance improvement of 5% in absolute terms is obtained on the RT datasets.", "title": "" }, { "docid": "4539b6dda3a8b85dfb1ba0f5da6e7c8c", "text": "3D Printing promises to produce complex biomedical devices according to computer design using patient-specific anatomical data. Since its initial use as pre-surgical visualization models and tooling molds, 3D Printing has slowly evolved to create one-of-a-kind devices, implants, scaffolds for tissue engineering, diagnostic platforms, and drug delivery systems. Fueled by the recent explosion in public interest and access to affordable printers, there is renewed interest to combine stem cells with custom 3D scaffolds for personalized regenerative medicine. Before 3D Printing can be used routinely for the regeneration of complex tissues (e.g. bone, cartilage, muscles, vessels, nerves in the craniomaxillofacial complex), and complex organs with intricate 3D microarchitecture (e.g. liver, lymphoid organs), several technological limitations must be addressed. In this review, the major materials and technology advances within the last five years for each of the common 3D Printing technologies (Three Dimensional Printing, Fused Deposition Modeling, Selective Laser Sintering, Stereolithography, and 3D Plotting/Direct-Write/Bioprinting) are described. Examples are highlighted to illustrate progress of each technology in tissue engineering, and key limitations are identified to motivate future research and advance this fascinating field of advanced manufacturing.", "title": "" }, { "docid": "e6953902f5fc0bb9f98d9c632b2ac26e", "text": "In high voltage (HV) flyback charging circuits, the importance of transformer parasitics holds a significant part in the overall system parasitics. The HV transformers have a larger number of turns on the secondary side that leads to higher self-capacitance which is inevitable. The conventional wire-wound transformer (CWT) has limitation over the design with larger self-capacitance including increased size and volume. For capacitive load in flyback charging circuit these self-capacitances on the secondary side gets added with device capacitances and dominates the load. For such applications the requirement is to have a transformer with minimum self-capacitances and low profile. In order to achieve the above requirements Planar Transformer (PT) design can be implemented with windings as tracks in Printed Circuit Boards (PCB) each layer is insulated by the FR4 material which aids better insulation. Finite Element Model (FEM) has been developed to obtain the self-capacitance in between the layers for larger turns on the secondary side. The modelled hardware prototype of the Planar Transformer has been characterised for open circuit and short circuit test using Frequency Response Analyser (FRA). The results obtained from FEM and FRA are compared and presented.", "title": "" }, { "docid": "876a14dd24d3fbe3fb3de782558009b1", "text": "Correspondence: Keith r Martin Nutrition Program, Health Lifestyles research Center, College of Nursing and Health innovation, Arizona State University, 6950 east williams Field road Mesa, AZ 85212, USA Tel +1 480 727-1925 Fax +1 480 727-1064 email keith.r.martin@asu.edu Abstract: Increased consumption of fruits and vegetables is associated with a lower risk of chronic disease such as cardiovascular disease, some forms of cancer, and neurodegeneration. 
Pro-oxidant-induced oxidative stress contributes to the pathogenesis of numerous chronic diseases and, as such, dietary antioxidants can quench and/or retard such processes. Dietary polyphenols, ie, phenolic acids and flavonoids, are a primary source of antioxidants for humans and are derived from plants including fruits, vegetables, spices, and herbs. Based on compelling evidence regarding the health effects of polyphenol-rich foods, new dietary supplements and polyphenol-rich foods are being developed for public use. Consumption of such products can increase dietary polyphenol intake and subsequently plasma concentrations beyond expected levels associated with dietary consumption and potentially confer additional health benefits. Furthermore, bioavailability can be modified to further increase absorption and ultimately plasma concentrations of polyphenols. However, the upper limit for plasma concentrations of polyphenols before the elaboration of adverse effects is unknown for many polyphenols. Moreover, a considerable amount of evidence is accumulating which supports the hypothesis that high-dose polyphenols can mechanistically cause adverse effects through pro-oxidative action. Thus, polyphenol-rich dietary supplements can potentially confer additional benefits but high-doses may elicit toxicity thereby establishing a double-edge sword in supplement use.", "title": "" },
{ "docid": "feca14524ff389c59a4d6f79954f26e3", "text": "Zero shot learning (ZSL) is about being able to recognize gesture classes that were never seen before. This type of recognition involves the understanding that the presented gesture is a new form of expression from those observed so far, and yet carries embedded information universal to all the other gestures (also referred as context). As part of the same problem, it is required to determine what action/command this new gesture conveys, in order to react to the command autonomously. Research in this area may shed light to areas where ZSL occurs, such as spontaneous gestures. People perform gestures that may be new to the observer. This occurs when the gesturer is learning, solving a problem or acquiring a new language. The ability of having a machine recognizing spontaneous gesturing, in the same manner as humans do, would enable more fluent human-machine interaction. In this paper, we describe a new paradigm for ZSL based on adaptive learning, where it is possible to determine the amount of transfer learning carried out by the algorithm and how much knowledge is acquired from a new gesture observation. Another contribution is a procedure to determine what are the best semantic descriptors for a given command and how to use those as part of the ZSL approach proposed.", "title": "" },
{ "docid": "c80b01048778e5863882868774e3e98d", "text": "A new liaison role between Information Systems (IS) and users, the relationship manager (RM), has recently emerged. According to the prescriptive literature, RMs add value by deep understanding of the businesses they serve and technology leadership. Little is known, however, about their actual work practices. Is the RM an intermediary, filtering information and sometimes misinformation, from clients to IS, or do they play more pivotal roles as entrepreneurs and change agents? This article addresses these questions by studying four RMs in four different industries. The RMs were studied using the structured observation methodology employed by Mintzberg (CEOs), Ives and Olson (MIS managers), and Stephens et al. (CIOs). The findings suggest that while RMs spend less time communicating with users than one would expect, they are leaders, often mavericks, in the entrepreneurial work practices necessary to build partnerships with clients and to make the IS infrastructure more responsive to client needs.", "title": "" },
{ "docid": "e94cc8dbf257878ea9b78eceb990cb3b", "text": "The past two decades have seen extensive growth of sexual selection research. Theoretical and empirical work has clarified many components of pre- and postcopulatory sexual selection, such as aggressive competition, mate choice, sperm utilization and sexual conflict. Genetic mechanisms of mate choice evolution have been less amenable to empirical testing, but molecular genetic analyses can now be used for incisive experimentation. Here, we highlight some of the currently debated areas in pre- and postcopulatory sexual selection. We identify where new techniques can help estimate the relative roles of the various selection mechanisms that might work together in the evolution of mating preferences and attractive traits, and in sperm-egg interactions.", "title": "" },
{ "docid": "4019beb9fa6ec59b4b19c790fe8ff832", "text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.", "title": "" },
{ "docid": "b80cd90ca314d9d48f708fc9b0ab2c3c", "text": "We give a survey of the equivariant Tamagawa number (a.k.a. Bloch-Kato) conjecture with particular emphasis on proven cases. The only new result is a proof of the 2-primary part of this conjecture for Tate-motives over abelian fields. This article is an expanded version of a survey talk given at the conference on Stark’s conjecture, Johns Hopkins University, Baltimore, August 5-9, 2002. We have tried to retain the succinctness of the talk when covering generalities but have considerably expanded the section on examples. Most of the following recapitulates well known material due to many people. Section 3 is joint work with D. Burns (for which [14], [15], [16], [17] are the main references). In section 5.1 we have given a detailed proof of the main result which also covers the prime l = 2 (unavailable in the literature so far).", "title": "" },
{ "docid": "9074416729e07ba4ec11ebd0021b41ed", "text": "The purpose of this study is to examine the relationships between internet addiction and depression, anxiety, and stress. Participants were 300 university students who were enrolled in mid-size state University, in Turkey. In this study, the Online Cognition Scale and the Depression Anxiety Stress Scale were used. In correlation analysis, internet addiction was found positively related to depression, anxiety, and stress.
According to path analysis results, depression, anxiety, and stress were predicted positively by internet addiction. This research shows that internet addiction has a direct impact on depression, anxiety, and stress.", "title": "" }, { "docid": "5d48cd6c8cc00aec5f7f299c346405c9", "text": ".................................................................................................................................... iii Acknowledgments..................................................................................................................... iv Table of", "title": "" }, { "docid": "9d75520f138bcf7c529488f29d01efbb", "text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle large number of packing instances.", "title": "" }, { "docid": "ae3d141e473f54fa37708d393e54aee0", "text": "We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu//spl sim/zwang/research/quality_index/demo.html.", "title": "" }, { "docid": "32020b0a5c5dd4832f74d24bdcc83f69", "text": "Companies today are expected to engage in corporate social responsibility (CSR) and they spend a lot of time, money, and other resources on these tasks. However, in relation to their investment, the gain for most companies is marginal, because their efforts are only perceived by a small number of people. In this paper, our goal is to improve on this situation by involving a greater number of people in CSR campaigns and increasing media attention, while reducing expenses. We propose a model that utilizes storytelling on alternate realities to link social media with CSR tasks. Consumers are engaged in a story through interactive storytelling interfaces, which allow them to contribute to the CSR campaign. 
The company is always able to monitor and control their running campaign and can profit from social media contributions that spread the campaign goals. We describe the capabilities of the model as well as the problems it faces with storytelling on huge numbers of automatically extracted stories. The main challenge of the kind of storytelling we report is to find an adequate storytelling structure for an automatically generated story.", "title": "" }, { "docid": "a9b5b2cde37cb2403660d435a305dad1", "text": "Recent development of large-scale question answering (QA) datasets triggered a substantial amount of research into end-toend neural architectures for QA. Increasingly complex systems have been conceived without comparison to a simpler neural baseline system that would justify their complexity. In this work, we propose a simple heuristic that guided the development of FastQA, an efficient endto-end neural model for question answering that is very competitive with existing models. We further demonstrate, that an extended version (FastQAExt) achieves state-of-the-art results on recent benchmark datasets, namely SQuAD, NewsQA and MsMARCO, outperforming most existing models. However, we show that increasing the complexity of FastQA to FastQAExt does not yield any systematic improvements. We argue that the same holds true for most existing systems that are similar to FastQAExt. A manual analysis reveals that our proposed heuristic explains most predictions of our model, which indicates that modeling a simple heuristic is enough to achieve strong performance on extractive QA datasets. The overall strong performance of FastQA puts results of existing, more complex models into perspective.", "title": "" }, { "docid": "f7f9bd286808d885b25c3403ffd2bc4d", "text": "For scatterplots with gaussian distributions of dots, the perception of Pearson correlation r can be described by two simple laws: a linear one for discrimination, and a logarithmic one for perceived magnitude (Rensink & Baldridge, 2010). The underlying perceptual mechanisms, however, remain poorly understood. To cast light on these, four different distributions of datapoints were examined. The first had 100 points with equal variance in both dimensions. Consistent with earlier results, just noticeable difference (JND) was a linear function of the distance away from r = 1, and the magnitude of perceived correlation a logarithmic function of this quantity. In addition, these laws were linked, with the intercept of the JND line being the inverse of the bias in perceived magnitude. Three other conditions were also examined: a dot cloud with 25 points, a horizontal compression of the cloud, and a cloud with a uniform distribution of dots. Performance was found to be similar in all conditions. The generality and form of these laws suggest that what underlies correlation perception is not a geometric structure such as the shape of the dot cloud, but the shape of the probability distribution of the dots, likely inferred via a form of ensemble coding. It is suggested that this reflects the ability of observers to perceive the information entropy in an image, with this quantity used as a proxy for Pearson correlation.", "title": "" }, { "docid": "bc30f1eb3c002e2cbae2c36cfbaa8550", "text": "We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. 
We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB). Beyond minimizing the memory required to store weights, as in a BNN, we show that it is essential to minimize the memory used for temporaries which hold intermediate results between layers in feedforward inference. To accomplish this, eBNN reorders the computation of inference while preserving the original BNN structure, and uses just a single floating-point temporary for the entire neural network. All intermediate results from a layer are stored as binary values, as opposed to floating-points used in current BNN implementations, leading to a 32x reduction in required temporary space. We provide empirical evidence that our proposed eBNN approach allows efficient inference (10s of ms) on devices with severely limited memory (10s of KB). For example, eBNN achieves 95% accuracy on the MNIST dataset running on an Intel Curie with only 15 KB of usable memory with an inference runtime of under 50 ms per sample. To ease the development of applications in embedded contexts, we make our source code available that allows users to train and discover eBNN models for a learning task at hand, which fit within the memory constraint of the target device.", "title": "" }, { "docid": "0f44ab1a2d93ce015778e9a41063ce7b", "text": "Bullying is a serious problem in schools, and school authorities need effective solutions to resolve this problem. There is growing interest in the wholeschool approach to bullying. Whole-school programs have multiple components that operate simultaneously at different levels in the school community. This article synthesizes the existing evaluation research on whole-school programs to determine the overall effectiveness of this approach. The majority of programs evaluated to date have yielded nonsignificant outcomes on measures of self-reported victimization and bullying, and only a small number have yielded positive outcomes. On the whole, programs in which implementation was systematically monitored tended to be more effective than programs without any monitoring. show little empathy for their victims (Roberts & Morotti, 2000). Bullying may be a means of increasing one’s social status and access to valued resources, such as the attention of opposite-sex peers (Pellegrini, 2001). Victims tend to be socially isolated, lack social skills, and have more anxiety and lower self-esteem than students in general (Olweus, 1997). They also tend to have a higher than normal risk for depression and suicide (e.g., Sourander, Helstelae, Helenius, & Piha, 2000). A subgroup of victims reacts aggressively to abuse and has a distinct pattern of psychosocial maladjustment encompassing both the antisocial behavior of bullies and the social and emotional difficulties of victims (Glover, Gough, Johnson, & Cartwright, 2000). Bullying is a relatively stable and long-term problem for those involved, particularly children fitting the profile Bullying is a particularly vicious kind of aggressive behavior distinguished by repeated acts against weaker victims who cannot easily defend themselves (Farrington, 1993; Smith & Brain, 2000). Its consequences are severe, especially for those victimized over long periods of time. Bullying is a complex psychosocial problem influenced by a myriad of variables. The repetition and imbalance of power involved may be due to physical strength, numbers, or psychological factors. 
Both bullies and victims evidence poorer psychological adjustment than individuals not involved in bullying (Kumpulainen, Raesaenen, & Henttonen, 1999; Nansel et al., 2001). Children who bully tend to be involved in alcohol consumption and smoking, have poorer academic records than noninvolved students, display a strong need for dominance, and", "title": "" } ]
scidocsrr
dfde60ed374d38d4a7a7815e60eeb29c
Enhancing Chinese Word Segmentation Using Unlabeled Data
[ { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/", "title": "" } ]
[ { "docid": "839de75206c99c88fbc10f9f322235be", "text": "This paper proposes a new fault-tolerant sensor network architecture for monitoring pipeline infrastructures. This architecture is an integrated wired and wireless network. The wired part of the network is considered the primary network while the wireless part is used as a backup among sensor nodes when there is any failure in the wired network. This architecture solves the current reliability issues of wired networks for pipelines monitoring and control. This includes the problem of disabling the network by disconnecting the network cables due to artificial or natural reasons. In addition, it solves the issues raised in recently proposed network architectures using wireless sensor networks for pipeline monitoring. These issues include the issues of power management and efficient routing for wireless sensor nodes to extend the life of the network. Detailed advantages of the proposed integrated network architecture are discussed under different application and fault scenarios.", "title": "" }, { "docid": "85eb1b34bf15c6b5dcd8778146bfcfca", "text": "A novel face recognition algorithm is presented in this paper. Histogram of Oriented Gradient features are extracted both for the test image and also for the training images and given to the Support Vector Machine classifier. The detailed steps of HOG feature extraction and the classification using SVM is presented. The algorithm is compared with the Eigen feature based face recognition algorithm. The proposed algorithm and PCA are verified using 8 different datasets. Results show that in all the face datasets the proposed algorithm shows higher face recognition rate when compared with the traditional Eigen feature based face recognition algorithm. There is an improvement of 8.75% face recognition rate when compared with PCA based face recognition algorithm. The experiment is conducted on ORL database with 2 face images for testing and 8 face images for training for each person. Three performance curves namely CMC, EPC and ROC are considered. The curves show that the proposed algorithm outperforms when compared with PCA algorithm. IndexTerms: Facial features, Histogram of Oriented Gradients, Support Vector Machine, Principle Component Analysis.", "title": "" }, { "docid": "bde516c748dcd4a9b16ec8228220fa90", "text": "BACKGROUND\nFew studies on foreskin development and the practice of circumcision have been done in Chinese boys. This study aimed to determine the natural development process of foreskin in children.\n\n\nMETHODS\nA total of 10 421 boys aged 0 to 18 years were studied. The condition of foreskin was classified into type I (phimosis), type II (partial phimosis), type III (adhesion of prepuce), type IV (normal), and type V (circumcised). Other abnormalities of the genitalia were also determined.\n\n\nRESULTS\nThe incidence of a completely retractile foreskin increased from 0% at birth to 42.26% in adolescence; however, the phimosis rate decreased with age from 99.7% to 6.81%. Other abnormalities included web penis, concealed penis, cryptorchidism, hydrocele, micropenis, inguinal hernia, and hypospadias.\n\n\nCONCLUSIONS\nIncomplete separation of foreskin is common in children. 
Since it is a natural phenomenon to approach the adult condition until puberty, circumcision should be performed with cautions in children.", "title": "" }, { "docid": "ed0b269f861775550edd83b1eb420190", "text": "The continuous innovation process of the Information and Communication Technology (ICT) sector shape the way businesses redefine their business models. Though, current drivers of innovation processes focus solely on a technical dimension, while disregarding social and environmental drivers. However, examples like Nokia, Yahoo or Hewlett-Packard show that even though a profitable business model exists, a sound strategic innovation process is needed to remain profitable in the long term. A sustainable business model innovation demands the incorporation of all dimensions of the triple bottom line. Nevertheless, current management processes do not take the responsible steps to remain sustainable and keep being in denial of the evolutionary direction in which the markets develop, because the effects are not visible in short term. The implications are of substantial effect and can bring the foundation of the company’s business model in danger. This work evaluates the decision process that lets businesses decide in favor of un-sustainable changes and points out the barriers that prevent the development towards a sustainable business model that takes the new balance of forces into account.", "title": "" }, { "docid": "8c3ecd27a695fef2d009bbf627820a0d", "text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.", "title": "" }, { "docid": "ca74dda60d449933ff72d14fe5c7493c", "text": "We introduce a novel training principle for generative probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework generalizes Denoising Auto-Encoders (DAE) and is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution is a conditional distribution that generally involves a small move, so it has fewer dominant modes and is unimodal in the limit of small moves. This simplifies the learning problem, making it less like density estimation and more akin to supervised function approximation, with gradients that can be obtained by backprop. 
The theorems provided here provide a probabilistic interpretation for denoising autoencoders and generalize them; seen in the context of this framework, auto-encoders that learn with injected noise are a special case of GSNs and can be interpreted as generative models. The theorems also provide an interesting justification for dependency networks and generalized pseudolikelihood and define an appropriate joint distribution and sampling mechanism, even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. Experiments validating these theoretical results are conducted on both synthetic datasets and image datasets. The experiments employ a particular architecture that mimics the Deep Boltzmann Machine Gibbs sampler but that allows training to proceed with backprop through a recurrent neural network with noise injected inside and without the need for layerwise pretraining.", "title": "" }, { "docid": "ffb65e7e1964b9741109c335f37ff607", "text": "To build a redundant medium-voltage converter, the semiconductors must be able to turn OFF different short circuits. The most challenging one is a hard turn OFF of a diode which is called short-circuit type IV. Without any protection measures this short circuit destroys the high-voltage diode. Therefore, a novel three-level converter with an increased short-circuit inductance is used. In this paper several short-circuit measurements on a 6.5 kV diode are presented which explain the effect of the protection measures. Moreover, the limits of the protection scheme are presented.", "title": "" }, { "docid": "6ea7ef18171ae0af018bf9b5f2ddd7f8", "text": "This paper presents DONet, a data-driven overlay network for live media streaming. The core operations in DONet are very simple: every node periodically exchanges data availability information with a set of partners, and retrieves unavailable data from one or more partners, or supplies available data to partners. We emphasize three salient features of this data-driven design: 1) easy to implement, as it does not have to construct and maintain a complex global structure; 2) efficient, as data forwarding is dynamically determined according to data availability while not restricted by specific directions; and 3) robust and resilient, as the partnerships enable adaptive and quick switching among multi-suppliers. We show through analysis that DONet is scalable with bounded delay. We also address a set of practical challenges for realizing DONet, and propose an efficient member and partnership management algorithm, together with an intelligent scheduling algorithm that achieves real-time and continuous distribution of streaming contents. We have extensively evaluated the performance of DONet over the PlanetLab. Our experiments, involving almost all the active PlanetLab nodes, demonstrate that DONet achieves quite good streaming quality even under formidable network conditions. Moreover, its control overhead and transmission delay are both kept at low levels. An Internet-based DONet implementation, called CoolStreaming v.0.9, was released on May 30, 2004, which has attracted over 30000 distinct users with more than 4000 simultaneously being online at some peak times. 
We discuss the key issues toward designing CoolStreaming in this paper, and present several interesting observations from these large-scale tests; in particular, the larger the overlay size, the better the streaming quality it can deliver.", "title": "" }, { "docid": "a8d6fe9d4670d1ccc4569aa322f665ee", "text": "Abstract Improved feedback on electricity consumption may provide a tool for customers to better control their consumption and ultimately save energy. This paper asks which kind of feedback is most successful. For this purpose, a psychological model is presented that illustrates how and why feedback works. Relevant features of feedback are identified that may determine its effectiveness: frequency, duration, content, breakdown, medium and way of presentation, comparisons, and combination with other instruments. The paper continues with an analysis of international experience in order to find empirical evidence for which kinds of feedback work best. In spite of considerable data restraints and research gaps, there is some indication that the most successful feedback combines the following features: it is given frequently and over a long time, provides an appliance-specific breakdown, is presented in a clear and appealing way, and uses computerized and interactive tools.", "title": "" }, { "docid": "0448b076548e9ada3529292741ac1a29", "text": "Evidence based medicine, whose philosophical origins extend back to mid-19th century Paris and earlier, remains a hot topic for clinicians, public health practitioners, purchasers, planners, and the public. There are now frequent workshops in how to practice and teach it (one sponsored by the BMJ will be held in London on 24 April); undergraduate and postgraduate training programmes are incorporating it (or pondering how to do so); British centres for evidence based practice have been established or planned in adult medicine, child health, surgery, pathology, pharmacotherapy, nursing, general practice, and dentistry; the Cochrane Collaboration and Britain's Centre for Review and Dissemination in York are providing systematic reviews of the effects of health care; new evidence based practice journals are being launched; and it has become a common topic in the lay media. But enthusiasm has been mixed with some negative reaction. 5 6 Criticism has ranged from evidence based medicine being old hat to it being a dangerous innovation, perpetrated by the arrogant to serve cost cutters and suppress clinical freedom. As evidence based medicine continues to evolve and adapt, now is a useful time to refine the discussion of what it is and what it is not.", "title": "" }, { "docid": "be9971903bf3d754ed18cc89cf254bd1", "text": "This paper presents a semi-supervised learning method for improving the performance of AUC-optimized classifiers by using both labeled and unlabeled samples. In actual binary classification tasks, there is often an imbalance between the numbers of positive and negative samples. For such imbalanced tasks, the area under the ROC curve (AUC) is an effective measure with which to evaluate binary classifiers. The proposed method utilizes generative models to assist the incorporation of unlabeled samples in AUC-optimized classifiers. The generative models provide prior knowledge that helps learn the distribution of unlabeled samples. To evaluate the proposed method in text classification, we employed naive Bayes models as the generative models. 
Our experimental results using three test collections confirmed that the proposed method provided better classifiers for imbalanced tasks than supervised AUC-optimized classifiers and semi-supervised classifiers trained to maximize the classification accuracy of labeled samples. Moreover, the proposed method improved the effect of using unlabeled samples for AUC optimization especially when we used appropriate generative models.", "title": "" }, { "docid": "9448a075257110d47c0fefa521aa34c1", "text": "We present a developmental perspective of robot learning that uses affordances as the link between sensory-motor coordination and imitation. The key concept is a general model for affordances able to learn the statistical relations between actions, object properties and the effects of actions on objects. Based on the learned affordances, it is possible to perform simple imitation games providing both task interpretation and planning capabilities. To evaluate the approach, we provide results of affordance learning with a real robot and simple imitation games with people.", "title": "" }, { "docid": "4b8ef592ca2c4bb36133483c93ee12ee", "text": "The recent decade has witnessed a mass proliferation of information systems enabled, community-based, social networking. Such proliferation has contributed to seismic social and political movements around the globe, but is yet to make a noticeable imprint in business organisations. While many researchers and practitioners have advocated the transition of social media to the organisational sphere, the actuality of this transition is still deficient, necessitating thorough investigation. Consequently, this study addresses this pressing issue by first, presenting a vantage point on the theoretical and practical underpinnings of social media and the revolutionising role they stand to play in organisations. An empirical case study is then presented highlighting the actual diffusion and utilisation of social media in a regional branch of a global consultancy and audit firm. The findings hold important implications as they identify key drivers contributing to the successful diffusion of social media in organisations, and their corresponding utilisation for enabling an inclusive and innovative environment in the workplace.", "title": "" }, { "docid": "88976f137ea43b1be8d133ddc4124af2", "text": "Real-time stereo vision is attractive in many areas such as outdoor mapping and navigation. As a popular accelerator in the image processing field, GPU is widely used for the studies of the stereo vision algorithms. Recently, many stereo vision systems on GPU have achieved low error rate, as a result of the development of deep learning. However, their processing speed is normally far from the real-time requirement. In this paper, we propose a real-time stereo vision system on GPU for the high-resolution images. This system also maintains a low error rate compared with other fast systems. In our approach, the image is resized to reduce the computational complexity and to realize the real-time processing. The low error rate is kept by using the cost aggregation with multiple blocks, secondary matching and sub-pixel estimation. Its processing speed is 41 fps for $2888\\times 1920$ pixels images when the maximum disparity is 760.", "title": "" }, { "docid": "19a0954fb21092853d9577e25019aaee", "text": "In this paper the design of a CMOS cascoded operational amplifier is described. 
Due to technology scaling the design of a former developed operational amplifier has now overcome its stability problems. A stable three stage operational amplifier is presented. A layout has been created automatically by using the ALADIN tool. With help of the extracted layout the performance data of the amplifier is simulated.", "title": "" }, { "docid": "107cad2d86935768e9401495d2241b20", "text": "A method is presented for using an extended Kalman filter with state noise compensation to estimate the trajectory, orientation, and slip variables for a small-scale robotic tracked vehicle. The principal goal of the method is to enable terrain property estimation. The methodology requires kinematic and dynamic models for skid-steering, as well as tractive force models parameterized by key soil parameters. Simulation studies initially used to verify the model basis are described, and results presented from application of the estimation method to both simulated and experimental study of a 60-kg robotic tracked vehicle. Preliminary results show the method can effectively estimate vehicle trajectory relying only on the model-based estimation and onboard sensor information. Estimates of slip on the left and right track as well as slip angle are essential for ongoing work in vehicle-based soil parameter estimation. The favorable comparison against motion capture data suggests this approach will be useful for laboratory and field-based application.", "title": "" }, { "docid": "e1c0fc53db69eb0cc8778fd03498aa64", "text": "An outlier is an observation that deviates so much from other observations that it seems to have been generated by a different mechanism. Outlier detection has many applications, such as data cleaning, fraud detection and network intrusion. The existence of outliers can indicate individuals or groups that exhibit a behavior that is very different from most of the individuals of the data set. Frequently, outliers are removed to improve accuracy of estimators, but sometimes, the presence of an outlier has a certain meaning, which explanation can be lost if the outlier is deleted. In this paper we study the effect of the presence of outliers on the performance of three well-known classifiers based on the results observed on four real world datasets. We use detection of outliers based on robust statistical estimators of the center and the covariance matrix for the Mahalanobis distance, detection of outliers based on clustering using the partitioning around medoids (PAM) algorithm, and two data mining techniques to detect outliers: Bay’s algorithm for distance-based outliers, and the LOF, a density-based local outlier algorithm.", "title": "" }, { "docid": "c8c97af8f1c2b539eb0bf833de483272", "text": "In this paper we discuss the genre of pervasive larp that seamlessly merges game and ordinary life, presenting Prosopopeia Bardo : Där vi föll, which was intended as a proof-of-concept for the genre. In addition to being a street larp staged in the cityscape, Prosopopeia aimed at blurring the border of game and ordinary life by spanning over a long duration of players’ lives and by forcing the players to larp with outsiders. Mixing the game content and non-game content turned out to produce a load of engaging experiences and emergent game content.", "title": "" }, { "docid": "e5a3119470420024b99df2d6eb14b966", "text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? 
You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?", "title": "" }, { "docid": "6f9a8fe3ee315fe8fd403e23d23c17e3", "text": "A common approach to clustering data is to view data objects as points in a metric space, and then to optimize a natural distance-based objective such as the k-median, k-means, or min-sum score. For applications such as clustering proteins by function or clustering images by subject, the implicit hope in taking this approach is that the optimal solution for the chosen objective will closely match the desired “target” clustering (e.g., a correct clustering of proteins by function or of images by who is in them). However, most distance-based objectives, including those mentioned here, are NP-hard to optimize. So, this assumption by itself is not sufficient, assuming P ≠ NP, to achieve clusterings of low-error via polynomial time algorithms.\n In this article, we show that we can bypass this barrier if we slightly extend this assumption to ask that for some small constant c, not only the optimal solution, but also all c-approximations to the optimal solution, differ from the target on at most some ε fraction of points—we call this (c,ε)-approximation-stability. We show that under this condition, it is possible to efficiently obtain low-error clusterings even if the property holds only for values c for which the objective is known to be NP-hard to approximate. Specifically, for any constant c > 1, (c,ε)-approximation-stability of k-median or k-means objectives can be used to efficiently produce a clustering of error O(ε) with respect to the target clustering, as can stability of the min-sum objective if the target clusters are sufficiently large. Thus, we can perform nearly as well in terms of agreement with the target clustering as if we could approximate these objectives to this NP-hard value.", "title": "" } ]
scidocsrr
a860f0b3712942dab25ac0147784d42b
An Empirical Analysis of Phishing Blacklists
[ { "docid": "40fbee18e4b0eca3f2b9ad69119fec5d", "text": "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.", "title": "" }, { "docid": "00410fcb0faa85d5423ccf0a7cc2f727", "text": "Phishing is form of identity theft that combines social engineering techniques and sophisticated attack vectors to harvest financial information from unsuspecting consumers. Often a phisher tries to lure her victim into clicking a URL pointing to a rogue page. In this paper, we focus on studying the structure of URLs employed in various phishing attacks. We find that it is often possible to tell whether or not a URL belongs to a phishing attack without requiring any knowledge of the corresponding page data. We describe several features that can be used to distinguish a phishing URL from a benign one. These features are used to model a logistic regression filter that is efficient and has a high accuracy. We use this filter to perform thorough measurements on several million URLs and quantify the prevalence of phishing on the Internet today", "title": "" }, { "docid": "2cf5552cca1c21986d2b4a00c4286941", "text": "There are many applications available for phishing detection. However, unlike predicting spam, there are only few studies that compare machine learning techniques in predicting phishing. The present study compares the predictive accuracy of several machine learning methods including Logistic Regression (LR), Classification and Regression Trees (CART), Bayesian Additive Regression Trees (BART), Support Vector Machines (SVM), Random Forests (RF), and Neural Networks (NNet) for predicting phishing emails. A data set of 2889 phishing and legitimate emails is used in the comparative study. In addition, 43 features are used to train and test the classifiers.", "title": "" }, { "docid": "9bbf2a9f5afeaaa0f6ca12e86aef8e88", "text": "Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users.We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. We describe the design of an extension to the Mozilla Firefox browser that implements this scheme.We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields.Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a \"skin\" that automatically customizes the browser window or the user interface elements in the content of a remote web page. 
Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match.We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.", "title": "" } ]
[ { "docid": "58d2f2b9abea7c83d3d9988bf356a8a1", "text": "In this study, the warpages of a chip-first and die face-up FOWLP (fan-out wafer-level packaging) with a very large silicon chip (10mmx10mmx0.15mm) and three RDLs (redistributed layers) are measured and characterized. Emphasis is placed on the measurement and 3D finite element simulation of the warpages during the FOWLP fabrication processes, especially for: (a) right after PMC (post mold cure), (b) right after backgrinding of the EMC (epoxy molding compound) to expose the Cu-contact pads, and (c) the individual package (right after the solder ball mounting and dicing) vs. SMT (surface mount technology) reflow temperatures. The simulation results are compared to the measurement results. Some recommendations on controlling the warpages are provided.", "title": "" }, { "docid": "65e297211555a88647eb23a65698531c", "text": "Game theoretical techniques have recently become prevalen t in many engineering applications, notably in communications. With the emergence of cooperation as a new communicat ion paradigm, and the need for self-organizing, decentrali zed, and autonomic networks, it has become imperative to seek sui table game theoretical tools that allow to analyze and study the behavior and interactions of the nodes in future communi cation networks. In this context, this tutorial introduces the concepts of cooperative game theory, namely coalitiona l games, and their potential applications in communication and wireless networks. For this purpose, we classify coalit i nal games into three categories: Canonical coalitional g ames, coalition formation games, and coalitional graph games. Th is new classification represents an application-oriented a pproach for understanding and analyzing coalitional games. For eac h class of coalitional games, we present the fundamental components, introduce the key properties, mathematical te hniques, and solution concepts, and describe the methodol ogies for applying these games in several applications drawn from the state-of-the-art research in communications. In a nuts hell, this article constitutes a unified treatment of coalitional g me theory tailored to the demands of communications and", "title": "" }, { "docid": "38a74fff83d3784c892230255943ee23", "text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. 
Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.", "title": "" }, { "docid": "0a45c122c6995df91f03f8615f4668d1", "text": "The advanced microgrid is envisioned to be a critical part of the future smart grid because of its local intelligence, automation, interoperability, and distributed energy resources (DER) hosting capability. The enabling technology of advanced microgrids is the microgrid management system (MGMS). In this article, we discuss and review the concept of the MGMS and state-of-the-art solutions regarding centralized and distributed MGMSs in the primary, secondary, and tertiary levels, from which we observe a general tendency toward decentralization.", "title": "" }, { "docid": "f00da39761c24d335777c86d7fad0c02", "text": "Between the extremes of real life and Virtual Reality lies the spectrum of Mixed Reality, in which views of the real world are combined in some proportion with views of a virtual environment. Combining direct view, stereoscopic video, and stereoscopic graphics, Augmented Reality describes that class of displays that consists primarily of a real environment, with graphic enhancements or augmentations. Augmented Virtuality describes that class of displays that enhance the virtual experience by adding elements of the real environment. All Mixed Reality systems are limited in their capability of accurately displaying and controlled all relevant depth cues, and as a result, perceptual biases can interfere with task performance. In this paper we identify and discuss eighteen issues that pertain to Mixed Reality in general, and Augmented Reality in particular.", "title": "" }, { "docid": "d10c1a0b7553953a8fddc87815911cfb", "text": "Until recently, information-flow analysis has been used primarily to verify that information transmission between program variables cannot violate security requirements. Here, the notion of information flow is explored as an aid to program development and validation.\nInformation-flow relations are presented for while-programs, which identify those program statements whose execution may cause information to be transmitted from or to particular input, internal, or output values. It is shown with examples how these flow relations can be helpful in writing, testing, and updating programs; they also usefully extend the class of errors which can be detected automatically in the “static analysis” of a program.", "title": "" }, { "docid": "5cd70dede0014f4a58c0dc8460ba8513", "text": "In this paper the Model Predictive Control (MPC) strategy is used to solve the mobile robot trajectory tracking problem, where controller must ensure that robot follows pre-calculated trajectory. The so-called explicit optimal controller design and implementation are described. The MPC solution is calculated off-line and expressed as a piecewise affine function of the current state of a mobile robot. A linearized kinematic model of a differential drive mobile robot is used for the controller design purpose. The optimal controller, which has a form of a look-up table, is tested in simulation and experimentally.", "title": "" }, { "docid": "6e13d2074fcacffe93608ff48b093c35", "text": "Interest in the construct of psychopathy as it applies to children and adolescents has become an area of considerable research interest in the past 5-10 years, in part due to the clinical utility of psychopathy as a predictor of violence among adult offenders. 
Despite interest in \"juvenile psychopathy\" in general and its relationship to violence in particular, relatively few studies specifically have examined whether operationalizations of this construct among children and adolescents predict various forms of aggression. This article critically reviews this literature, as well as controversies regarding the assessment of adult psychopathic \"traits\" among juveniles. Existing evidence indicates a moderate association between measures of psychopathy and various forms of aggression, suggesting that this construct may be relevant for purposes of short-term risk appraisal and management among juveniles. However, due to the enormous developmental changes that occur during adolescence and the absence of longitudinal research on the stability of this construct (and its association with violence), we conclude that reliance on psychopathy measures to make decisions regarding long-term placements for juveniles is contraindicated at this time.", "title": "" }, { "docid": "5350af2d42f9321338e63666dcd42343", "text": "Robot-aided physical therapy should encourage subject's voluntary participation to achieve rapid motor function recovery. In order to enhance subject's cooperation during training sessions, the robot should allow deviation in the prescribed path depending on the subject's modified limb motions subsequent to the disability. In the present work, an interactive training paradigm based on the impedance control was developed for a lightweight intrinsically compliant parallel ankle rehabilitation robot. The parallel ankle robot is powered by pneumatic muscle actuators (PMAs). The proposed training paradigm allows the patients to modify the robot imposed motions according to their own level of disability. The parallel robot was operated in four training modes namely position control, zero-impedance control, nonzero-impedance control with high compliance, and nonzero-impedance control with low compliance to evaluate the performance of proposed control scheme. The impedance control scheme was evaluated on 10 neurologically intact subjects. The experimental results show that an increase in robotic compliance encouraged subjects to participate more actively in the training process. This work advances the current state of the art in the compliant actuation of parallel ankle rehabilitation robots in the context of interactive training.", "title": "" }, { "docid": "1d195fb4df8375772674d0852a046548", "text": "All existing image enhancement methods, such as HDR tone mapping, cannot recover A/D quantization losses due to insufficient or excessive lighting, (underflow and overflow problems). The loss of image details due to A/D quantization is complete and it cannot be recovered by traditional image processing methods, but the modern data-driven machine learning approach offers a much needed cure to the problem. In this work we propose a novel approach to restore and enhance images acquired in low and uneven lighting. First, the ill illumination is algorithmically compensated by emulating the effects of artificial supplementary lighting. Then a DCNN trained using only synthetic data recovers the missing detail caused by quantization.", "title": "" }, { "docid": "f6c0cde5a1d44b5899761448faaf8a59", "text": "Information security and confidentiality has become of prime concern and importance as a result of the notifiable and explosive growth of the internet. 
Also, with the growth of information technology and communication techniques, unauthorized access for sensitive data increases daily. There are a lot of widely used techniques in the communication and computer security fields like; cryptography and steganography to protect sensitive data from attackers. New fields of DNA based cryptography and steganography are emerging to provide data security using DNA as a carrier by exploiting its bio-molecular computational abilities. In this paper, the authors compare various DNA based steganography by using important security parameters with explaining the main DNA based steganography strategies that each one is considered as a building block for most of the other steganographic schemes, which are: The substitution, insertion and complementary pair based algorithms. Finally, on this base some suggestions are given to help future researchers to design or improve the DNA storage techniques for secure data storage through more efficient, reliable, high capacity and biologically preserved DNA steganography techniques.", "title": "" }, { "docid": "ea6f873567e8c45200afb62723cf6e16", "text": "Database technology is one of the cornerstones for the new millennium’s IT landscape. However, database systems as a unit of code packaging and deployment are at a crossroad: commercial systems have been adding features for a long time and have now reached complexity that makes them a difficult choice, in terms of their \"gain/pain ratio\", as a central platform for value-added information services such as ERP or e-commerce. It is critical that database systems be easy to manage, predictable in their performance characteristics, and ultimately self-tuning. For this elusive goal, RISC-style simplification of server functionality and interfaces is absolutely crucial. We suggest a radical architectural departure in which database technology is packaged into much smaller RISC-style data managers with lean, specialized APIs, and with built-in self-assessment and auto-tuning capabilities 1. The Need for a New Departure Database technology has an extremely successful track record as a backbone of information technology (IT) throughout the last three decades. High-level declarative query languages like SQL and atomic transactions are key assets in the cost-effective development and maintenance of information systems. Furthermore, database technology continues to play a major role in the trends of our modern cyberspace society with applications ranging from webbased applications/services, and digital libraries to information mining on business as well as scientific data. Thus, database technology has impressively proven its benefits and seems to remain crucially relevant in the new millennium as well. Success is a lousy teacher (to paraphrase Bill Gates), and therefore we should not conclude that the database system, as the unit of engineering, deploying, and operating packaged database technology, is in good shape. A closer look at some important application areas and major trends in the software industry strongly indicates that database systems have an overly low “gain/pain ratio”. First, with the dramatic drop of hardware and software prices, the expenses due to human administration and tuning staff dominate the cost of ownership for a database system. The complexity and cost of these feed-and-care tasks is likely to prohibit database systems from further playing their traditionally prominent role in the future IT infrastructure. 
Next, database technology is more likely to be adopted in unbundled and dispersed form within higher-level application services. Both of the above problems stem from packaging all database technology into a single unit of development, maintenance, deployment, and operation. We argue that this architecture is no longer appropriate for the new age of cyberspace applications. The alternative approach that we envision and advocate in this paper is to provide RISC-style, functionally restricted, specialized data managers that have a narrow interface as well as a smaller footprint and are more amenable to automatic tuning. The rest of the paper is organized as follows. Section 2 puts together some important observations indicating that database systems in their traditional form are in crisis. Section 3 briefly reviews earlier attempts for a new architectural departure along the lines of the current paper, and discusses why they did not catch on. Section 4 outlines the envisioned architecture with emphasis on RISC-style simplification of data-management components and consequences for the viability of autotuning. Section 5 outlines a possible research agenda towards our vision. 2. Crisis Indicators To begin our analysis, let us put together a few important observations on how database systems are perceived by customers, vendors, and the research community. Observation 1: Featurism drives products beyond manageability. Database systems offer more and more features, leading to extremely broad and thus complex interfaces. Quite often novel features are more a marketing issue rather than a real application need or technological advance; for example, a database system vendor may decide to support a fancy type of join or spatial index in the next product release because the major competitors have already announced this feature. As a result, database systems become overloaded with functionality, increasing the complexity of maintaining the system’s code base as well as installing and managing the system. The irony of this trend lies in the fact that each individual customer (e.g., a small enterprise) only makes use of a tiny fraction of the system’s features and many high-end features are hardly ever exercised at all. Observation 2: SQL is painful. A big headache that comes with a database system is the SQL language. It is the union of all conceivable features (many of which are rarely used or should be discouraged to use anyway) and is way too complex for the typical application developer. Its core, say selection-projection-join queries and aggregation, is extremely useful, but we doubt that there is wide and wise use of all the bells and whistles. Understanding semantics of SQL (not even of SQL-92), covering all combinations of nested (and correlated) subqueries, null values, triggers, ADT functions, etc. is a nightmare. Teaching SQL typically focuses on the core, and leaves the featurism as a “learning-on-the-job” life experience. 
Some trade magazines occasionally pose SQL quizzes where the challenge is to express a complicated information request in a single SQL statement. Those statements run over several pages, and are hardly comprehensible. When programmers adopt this style in real applications and given the inherent difficulty of debugging a very high-level “declarative” statement, it is extremely hard if not impossible to gain high confidence that the query is correct in capturing the users’ information needs. In fact, good SQL programming in many cases decomposes complex requests into a sequence of simpler SQL statements. Observation 3: Performance is unpredictable. Commercial database engines are among the most sophisticated pieces of software that have ever been built in the history of computer technology. Furthermore, as product releases have been driven by the time-to-market pressure for quite a few years, these systems have little leeway for redesigning major components so that adding features and enhancements usually increases the code size and complexity and, ultimately, the general “software entropy” of the system. The scary consequence is that database systems become inherently unpredictable in their exact behavior and, especially, performance. Individual components like query optimizers may already have crossed the critical complexity barrier. There is probably no single person in the world who fully understands all subtleties of the complex interplay of rewrite rules, approximate cost models, and search-space traversal heuristics that underlie the optimization of complex queries. Contrast this dilemma with the emerging need for performance and service quality guarantees in ecommerce, digital libraries, and other Internet applications. The PTAC report has rightly emphasized: “our ability to analyze and predict the performance of the enormously complex software systems that lie at the core of our economy is painfully inadequate” [18]. Observation 4: Tuning is a nightmare and auto-tuning is wishful thinking at this stage. The wide diversity of applications for a given database system makes it impossible to provide universally good performance by solely having a well-engineered product. Rather all commercial database systems offer a variety of “tuning knobs” that allow the customer to adjust certain system parameters to the specific workload characteristics of the application. These knobs include index selection, data placement across parallel disks, and other aspects of physical database design, query optimizer hints, thresholds that govern the partitioning of memory or multiprogramming level in a multi-user environment. Reasonable settings for such critical parameters for a complex application often depend on the expertise and experience of highly skilled tuning gurus and/or timeconsuming trial-and-error experimentation; both ways are expensive and tend to dominate the cost of ownership for a database system. “Auto-tuning” capabilities and “zeroadmin” systems have been put on the research and development agenda as high priority topics for several years (see, e.g., [2]), but despite some advances on individual issues (e.g., [4,7,8,10,24]) progress on the big picture of self-tuning system architectures is slow and a breakthrough is not nearly in sight. 
Although commercial systems have admittedly improved on ease of use, many tuning knobs are merely disguised by introducing internal thresholds that still have to be carefully considered, e.g., at packaging or installation time to take into account the specific resources and the application environment. In our experience, robust, universally working default settings for complex tuning knobs are wishful thinking. Despite the common myth is that a few rules of thumb could be sufficient for most tuning concerns, with complex, highly diverse workloads whose characteristics evolve over time it is quite a nightmare to find appropriate settings for physical design and the various run-time parameters of a database server to ensure at least decent performance. Observation 5: We are not alone in the universe. Database systems are not (or no long", "title": "" }, { "docid": "ee6bcb714c118361a51db8f1f8f0e985", "text": "BACKGROUND\nWe propose the use of serious games to screen for abnormal cognitive status in situations where it may be too costly or impractical to use standard cognitive assessments (eg, emergency departments). If validated, serious games in health care could enable broader availability of efficient and engaging cognitive screening.\n\n\nOBJECTIVE\nThe objective of this work is to demonstrate the feasibility of a game-based cognitive assessment delivered on tablet technology to a clinical sample and to conduct preliminary validation against standard mental status tools commonly used in elderly populations.\n\n\nMETHODS\nWe carried out a feasibility study in a hospital emergency department to evaluate the use of a serious game by elderly adults (N=146; age: mean 80.59, SD 6.00, range 70-94 years). We correlated game performance against a number of standard assessments, including the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and the Confusion Assessment Method (CAM).\n\n\nRESULTS\nAfter a series of modifications, the game could be used by a wide range of elderly patients in the emergency department demonstrating its feasibility for use with these users. Of 146 patients, 141 (96.6%) consented to participate and played our serious game. Refusals to play the game were typically due to concerns of family members rather than unwillingness of the patient to play the game. Performance on the serious game correlated significantly with the MoCA (r=-.339, P <.001) and MMSE (r=-.558, P <.001), and correlated (point-biserial correlation) with the CAM (r=.565, P <.001) and with other cognitive assessments.\n\n\nCONCLUSIONS\nThis research demonstrates the feasibility of using serious games in a clinical setting. Further research is required to demonstrate the validity and reliability of game-based assessments for clinical decision making.", "title": "" }, { "docid": "7bc81d5c42266a75fe46d99a76b0861d", "text": "Stem cells continue to garner attention by the news media and play a role in public and policy discussions of emerging technologies. As new media platforms develop, it is important to understand how different news media represents emerging stem cell technologies and the role these play in public discussions. We conducted a comparative analysis of newspaper and sports websites coverage of one recent high profile case: Gordie Howe’s stem cell treatment in Mexico. Using qualitative coding methods, we analyzed news articles and readers’ comments from Canadian and US newspapers and sports websites. 
Results indicate that the efficacy of stem cell treatments is often assumed in news coverage and readers’ comments indicate a public with a wide array of beliefs and perspectives on stem cells and their clinical efficacy. Media coverage that presents uncritical perspectives on unproven stem cell therapies may create patient expectations, may have an affect on policy discussions, and help to feed the marketing of unproven therapies. However, news coverage that provides more balanced or critical coverage of unproven stem cell treatments may also inspire more critical discussion, as reflected in readers’ comments.", "title": "" }, { "docid": "f7b911eca27efc3b0535f8b48222f993", "text": "Numerous entity linking systems are addressing the entity recognition problem by using off-the-shelf NER systems. It is, however, a difficult task to select which specific model to use for these systems, since it requires to judge the level of similarity between the datasets which have been used to train models and the dataset at hand to be processed in which we aim to properly recognize entities. In this paper, we present the newest version of ADEL, our adaptive entity recognition and linking framework, where we experiment with an hybrid approach mixing a model combination method to improve the recognition level and to increase the efficiency of the linking step by applying a filter over the types. We obtain promising results when performing a 4-fold cross validation experiment on the OKE 2016 challenge training dataset. We also demonstrate that we achieve better results that in our previous participation on the OKE 2015 test set. We finally report the results of ADEL on the OKE 2016 test set and we present an error analysis highlighting the main difficulties of this challenge.", "title": "" }, { "docid": "6050bd9f60b92471866d2935d42fce2d", "text": "As one of the successful forms of using Wisdom of Crowd, crowdsourcing, has been widely used for many human intrinsic tasks, such as image labeling, natural language understanding, market predication and opinion mining. Meanwhile, with advances in pervasive technology, mobile devices, such as mobile phones and tablets, have become extremely popular. These mobile devices can work as sensors to collect multimedia data(audios, images and videos) and location information. This power makes it possible to implement the new crowdsourcing mode: spatial crowdsourcing. In spatial crowdsourcing, a requester can ask for resources related a specific location, the mobile users who would like to take the task will travel to that place and get the data. Due to the rapid growth of mobile device uses, spatial crowdsourcing is likely to become more popular than general crowdsourcing, such as Amazon Turk and Crowdflower. However, to implement such a platform, effective and efficient solutions for worker incentives, task assignment, result aggregation and data quality control must be developed. In this demo, we will introduce gMission, a general spatial crowdsourcing platform, which features with a collection of novel techniques, including geographic sensing, worker detection, and task recommendation. We introduce the sketch of system architecture and illustrate scenarios via several case analysis.", "title": "" }, { "docid": "54ebafe33f0e0cffe2431e9fb9a5bed5", "text": "The distributed query optimization is one of the hardest problems in the database area. 
The great commercial success of database systems is partly due to the development of sophisticated query optimization technology where users pose queries in a declarative way using SQL or OQL and the optimizer of the database system finds a good way (i. e. plan) to execute these queries. The optimizer, for example, determines which indices should be used to execute a query and in which order the operations of a query (e. g. joins, selects, and projects) should be executed. To this end, the optimizer enumerates alternative plans, estimates the cost of every plan using a cost model, and chooses the plan with lowest cost. There has been much research into this field. In this paper, we study the problem of distributed query optimization; we focus on the basic components of the distributed query optimizer, i. e. search space, search strategy, and cost model. A survey of the available work into this field is given. Finally, some future work is highlighted based on some recent work that uses mobile agent", "title": "" }, { "docid": "5948f08c1ca41b7024a4f7c0b2a99e5b", "text": "Nowadays, neural networks play an important role in the task of relation classification. By designing different neural architectures, researchers have improved the performance to a large extent, compared with traditional methods. However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolution neural networks or recurrent networks). They may fail to explore the potential representation space in different abstraction levels. In this paper, we propose deep recurrent neural networks (DRNNs) to tackle this challenge. Further, we propose a data augmentation method by leveraging the directionality of relations. We evaluate our DRNNs on the SemEval-2010 Task 8, and achieve an F1score of 85.81%, outperforming state-of-theart recorded results.", "title": "" }, { "docid": "ac15d2b4d14873235fe6e4d2dfa84061", "text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. 
We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.", "title": "" }, { "docid": "2ae69330b32aa485876e26ecc78ca66d", "text": "One of the promising usages of Physically Unclonable Functions (PUFs) is to generate cryptographic keys from PUFs for secure storage of key material. This usage has attractive properties such as physical unclonability and enhanced resistance against hardware attacks. In order to extract a reliable cryptographic key from a noisy PUF response a fuzzy extractor is used to convert non-uniform random PUF responses into nearly uniform randomness. Bösch et al. in 2008 proposed a fuzzy extractor suitable for efficient hardware implementation using two-stage concatenated codes, where the inner stage is a conventional error correcting code and the outer stage is a repetition code. In this paper we show that the combination of PUFs with repetition code approaches is not without risk and must be approached carefully. For example, PUFs with min-entropy lower than 66% may yield zero leftover entropy in the generated key for some repetition code configurations. In addition, we find that many of the fuzzy extractor designs in the literature are too optimistic with respect to entropy estimation. For high security applications, we recommend a conservative estimation of entropy loss based on the theoretical work of fuzzy extractors and present parameters for generating 128-bit keys from memory based PUFs.", "title": "" } ]
scidocsrr
ba4fd858ae6198a47a0ea3ce1f079232
Extracting semantics from audio-visual content: the final frontier in multimedia retrieval
[ { "docid": "4070072c5bd650d1ca0daf3015236b31", "text": "Automated classiication of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries, increases the eeciency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identiication of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full-decoding of selective frames is required only for text analysis. A decision tree classiier built using these features is able to identify sports clips with an accuracy of about 93%.", "title": "" }, { "docid": "662b1ec9e2481df760c19567ce635739", "text": "Semantic versus nonsemantic information icture yourself as a fashion designer needing images of fabrics with a particular mixture of colors, a museum cataloger looking P for artifacts of a particular shape and textured pattern, or a movie producer needing a video clip of a red car-like object moving from right to left with the camera zooming. How do you find these images? Even though today’s technology enables us to acquire, manipulate, transmit, and store vast on-line image and video collections, the search methodologies used to find pictorial information are still limited due to difficult research problems (see “Semantic versus nonsemantic” sidebar). Typically, these methodologies depend on file IDS, keywords, or text associated with the images. And, although powerful, they", "title": "" } ]
[ { "docid": "09e740b38d0232361c89f47fce6155b4", "text": "Nano-emulsions consist of fine oil-in-water dispersions, having droplets covering the size range of 100-600 nm. In the present work, nano-emulsions were prepared using the spontaneous emulsification mechanism which occurs when an organic phase and an aqueous phase are mixed. The organic phase is an homogeneous solution of oil, lipophilic surfactant and water-miscible solvent, the aqueous phase consists on hydrophilic surfactant and water. An experimental study of nano-emulsion process optimisation based on the required size distribution was performed in relation with the type of oil, surfactant and the water-miscible solvent. The results showed that the composition of the initial organic phase was of great importance for the spontaneous emulsification process, and so, for the physico-chemical properties of the obtained emulsions. First, oil viscosity and HLB surfactants were changed, alpha-tocopherol, the most viscous oil, gave the smallest droplets size (171 +/- 2 nm), HLB required for the resulting oil-in-water emulsion was superior to 8. Second, the effect of water-solvent miscibility on the emulsification process was studied by decreasing acetone proportion in the organic phase. The solvent-acetone proportion leading to a fine nano-emulsion was fixed at 15/85% (v/v) with EtAc-acetone and 30/70% (v/v) with MEK-acetone mixture. To strength the choice of solvents, physical characteristics were compared, in particular, the auto-inflammation temperature and the flash point. This phase of emulsion optimisation represents an important step in the process of polymeric nanocapsules preparation using nanoprecipitation or interfacial polycondensation combined with spontaneous emulsification technique.", "title": "" }, { "docid": "a95761b5a67a07d02547c542ddc7e677", "text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; RL9J@virginia.edu. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.", "title": "" }, { "docid": "170a1dba20901d88d7dc3988647e8a22", "text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. 
Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.", "title": "" }, { "docid": "0778eff54b2f48c9ed4554c617b2dcab", "text": "The diagnosis of heart disease is a significant and tedious task in medicine. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not “mined” to determine concealed information for effective decision making by healthcare practitioners. The term Heart disease encompasses the diverse diseases that affect the heart. Cardiomyopathy and Cardiovascular disease are some categories of heart diseases. The reduction of blood and oxygen supply to the heart leads to heart disease. In this paper the data classification is based on supervised machine learning algorithms which result in accuracy, time taken to build the algorithm. Tanagra tool is used to classify the data and the data is evaluated using 10-fold cross validation and the results are compared.", "title": "" }, { "docid": "037dc2916e4356c11039e9520369ca3b", "text": "Surmounting terrain elevations, such as terraces, is useful to increase the reach of mobile robots operating in disaster areas, construction sites, and natural environments. This paper proposes an autonomous climbing maneuver for tracked mobile manipulators with the help of the onboard arm. The solution includes a fast 3-D scan processing method to estimate a simple set of geometric features for the ascent: three lines that correspond to the low and high edges, and the maximum inclination axis. Furthermore, terraces are classified depending on whether they are reachable through a slope or an abrupt step. In the proposed maneuver, the arm is employed both for shifting the center of gravity of the robot and as an extra limb that can be pushed against the ground. Feedback during climbing can be obtained through an inertial measurement unit, joint absolute encoders, and pressure sensors. 
Experimental results are presented for terraces of both kinds on rough terrain with the hydraulic mobile manipulator Alacrane.", "title": "" }, { "docid": "cfb1e7710233ca9a8e91885801326c20", "text": "During the last ten years technological development has reshaped the banking industry, which has become one of the leading sectors in utilizing new technology on consumer markets. Today, mobile communication technologies offer vast additional value for consumers’ banking transactions due to their always-on functionality and the option to access banks anytime and anywhere. Various alternative approaches have used in analyzing customer’s acceptance of new technologies. In this paper, factors affect acceptance of Mobile Banking are explored and presented as a New Model.", "title": "" }, { "docid": "a0c37bb6608f51f7095d6e5392f3c2f9", "text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challange physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height Sorin C. Popescu and Randolph H. Wynne The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. 
Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of", "title": "" }, { "docid": "109c5caa55d785f9f186958f58746882", "text": "Apriori and Eclat are the best-known basic algorithms for mining frequent item sets in a set of transactions. In this paper I describe implementations of these two algorithms that use several optimizations to achieve maximum performance, w.r.t. both execution time and memory usage. The Apriori implementation is based on a prefix tree representation of the needed counters and uses a doubly recursive scheme to count the transactions. 
The Eclat implementation uses (sparse) bit matrices to represent transactions lists and to filter closed and maximal item sets.", "title": "" }, { "docid": "4f9b168efee2348f0f02f2480f9f449f", "text": "Transcutaneous neuromuscular electrical stimulation applied in clinical settings is currently characterized by a wide heterogeneity of stimulation protocols and modalities. Practitioners usually refer to anatomic charts (often provided with the user manuals of commercially available stimulators) for electrode positioning, which may lead to inconsistent outcomes, poor tolerance by the patients, and adverse reactions. Recent evidence has highlighted the crucial importance of stimulating over the muscle motor points to improve the effectiveness of neuromuscular electrical stimulation. Nevertheless, the correct electrophysiological definition of muscle motor point and its practical significance are not always fully comprehended by therapists and researchers in the field. The commentary describes a straightforward and quick electrophysiological procedure for muscle motor point identification. It consists in muscle surface mapping by using a stimulation pen-electrode and it is aimed at identifying the skin area above the muscle where the motor threshold is the lowest for a given electrical input, that is the skin area most responsive to electrical stimulation. After the motor point mapping procedure, a proper placement of the stimulation electrode(s) allows neuromuscular electrical stimulation to maximize the evoked tension, while minimizing the dose of the injected current and the level of discomfort. If routinely applied, we expect this procedure to improve both stimulation effectiveness and patient adherence to the treatment. The aims of this clinical commentary are to present an optimized procedure for the application of neuromuscular electrical stimulation and to highlight the clinical implications related to its use.", "title": "" }, { "docid": "619e3893a731ffd0ed78c9dd386a1dff", "text": "The introduction of new gesture interfaces has been expanding the possibilities of creating new Digital Musical Instruments (DMIs). Leap Motion Controller was recently launched promising fine-grained hand sensor capabilities. This paper proposes a preliminary study and evaluation of this new sensor for building new DMIs. Here, we list a series of gestures, recognized by the device, which could be theoretically used for playing a large number of musical instruments. Then, we present an analysis of precision and latency of these gestures as well as a first case study integrating Leap Motion with a virtual music keyboard.", "title": "" }, { "docid": "df0756ecff9f2ba84d6db342ee6574d3", "text": "Security is becoming a critical part of organizational information systems. Intrusion detection system (IDS) is an important detection that is used as a countermeasure to preserve data integrity and system availability from attacks. Data mining is being used to clean, classify, and examine large amount of network data to correlate common infringement for intrusion detection. The main reason for using data mining techniques for intrusion detection systems is due to the enormous volume of existing and newly appearing network data that require processing. The amount of data accumulated each day by a network is huge. Several data mining techniques such as clustering, classification, and association rules are proving to be useful for gathering different knowledge for intrusion detection. 
This paper presents the idea of applying data mining techniques to intrusion detection systems to maximize the effectiveness in identifying attacks, thereby helping the users to construct more secure information systems.", "title": "" }, { "docid": "058db5e1a8c58a9dc4b68f6f16847abc", "text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. The proposed self-attention method performs the best.", "title": "" }, { "docid": "619165e7f74baf2a09271da789e724df", "text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.", "title": "" }, { "docid": "05ab4fa15696ee8b47e017ebbbc83f2c", "text": "Vertically aligned rutile TiO2 nanowire arrays (NWAs) with lengths of ∼44 μm have been successfully synthesized on transparent, conductive fluorine-doped tin oxide (FTO) glass by a facile one-step solvothermal method. The length and wire-to-wire distance of NWAs can be controlled by adjusting the ethanol content in the reaction solution. By employing optimized rutile TiO2 NWAs for dye-sensitized solar cells (DSCs), a remarkable power conversion efficiency (PCE) of 8.9% is achieved. Moreover, in combination with a light-scattering layer, the performance of a rutile TiO2 NWAs based DSC can be further enhanced, reaching an impressive PCE of 9.6%, which is the highest efficiency for rutile TiO2 NWA based DSCs so far.", "title": "" }, { "docid": "0ccbc8579a1d6e39c92f8a7acea979bd", "text": "In mental health, the term ‘recovery’ is commonly used to refer to the lived experience of the person coming to terms with, and overcoming the challenges associated with, having a mental illness (Shepherd et al 2008). The term ‘recovery’ has evolved as having a special meaning for mental health service users (Andresen et al 2003) and consistently refers to their personal experiences and expectations for recovery (Slade et al 2008). 
On the other hand, mental health service providers often refer to a ‘recovery’ framework in order to promote their service (Meehan et al 2008). However, practitioners lean towards a different meaning-in-use, which is better described as ‘clinical recovery’ and is measured routinely in terms of symptom profiles, health service utilisation, health outcomes and global assessments of functioning. These very different meanings-in-use of the same term have the potential to cause considerable confusion to readers of the mental health literature. Researchers have recently identified an urgent need to clarify the recovery concept so that a common meaning can be established and the construct can be defined operationally (Meehan et al 2008, Slade et al 2008). This paper aims to delineate a construct of recovery that can be applied operationally and consistently in mental health. The criteria were twofold: 1. The dimensions need to have a parsimonious and near mutually exclusive internal structure 2. All stakeholder perspectives and interests, including those of the wider community, need to be accommodated. With these criteria in mind, the literature was revisited to identify possible domains. It was subsequently identified that the recovery literature can be reclassified into components that accommodate the views of service users, practitioners, rehabilitation providers, family and carers, and the wider community. The recovery dimensions identified were clinical recovery, personal recovery, social recovery and functional recovery. Recovery as a concept has gained increased attention in the field of mental health. There is an expectation that service providers use a recovery framework in their work. This raises the question of what recovery means, and how it is conceptualised and operationalised. It is proposed that service providers approach the application of recovery principles by considering systematically individual recovery goals in multiple domains, encompassing clinical recovery, personal recovery, social recovery and functional recovery. This approach enables practitioners to focus on service users’ personal recovery goals while considering parallel goals in the clinical, social, and role-functioning domains. Practitioners can reconceptualise recovery as involving more than symptom remission, and interventions can be tailored to aspects of recovery of importance to service users. In order to accomplish this shift, practitioners will require effective assessments, access to optimal treatment and care, and the capacity to conduct recovery planning in collaboration with service users and their families and carers. Mental health managers can help by fostering an organisational culture of service provision that supports a broader focus than that on clinical recovery alone, extending to client-centred recovery planning in multiple recovery domains.", "title": "" }, { "docid": "16a6c26d6e185be8383c062c6aa620f8", "text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. 
In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.", "title": "" }, { "docid": "74ce3b76d697d59df0c5d3f84719abb8", "text": "Existing Byzantine fault tolerance (BFT) protocols face significant challenges in the consortium blockchain scenario. On the one hand, we can make little assumptions about the reliability and security of the underlying Internet. On the other hand, the applications on consortium blockchains demand a system as scalable as the Bitcoin but providing much higher performance, as well as provable safety. We present a new BFT protocol, Gosig, that combines crypto-based secret leader selection and multi-round voting in the protocol layer with implementation layer optimizations such as gossip-based message propagation. In particular, Gosig guarantees safety even in a network fully controlled by adversaries, while providing provable liveness with easy-to-achieve network connectivity assumption. On a wide area testbed consisting of 140 Amazon EC2 servers spanning 14 cities on five continents, we show that Gosig can achieve over 4,000 transactions per second with less than 1 minute transaction confirmation time.", "title": "" }, { "docid": "9c3218ce94172fd534e2a70224ee564f", "text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.", "title": "" }, { "docid": "207bb3922ad45daa1023b70e1a18baf7", "text": "The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. 
Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The performance of the introduced forensic methods is briefly illustrated on examples to give the reader a sense of the performance.", "title": "" }, { "docid": "d80fc668073878c476bdf3997b108978", "text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system", "title": "" } ]
scidocsrr
259bf7197e7afefe9bfa1f4fd62ff545
Electrical simulations of series and parallel PV arc-faults
[ { "docid": "66474114bf431f3ee6973ad6469565b2", "text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damage and to eliminate risks of safety hazards. This paper focuses on line–line faults in PV arrays that may be caused by short-circuit faults or double ground faults. The effect on fault current from a maximum-power-point tracking of a PV inverter is discussed and shown to, at times, prevent overcurrent protection devices (OCPDs) to operate properly. Furthermore, fault behavior of PV arrays is highly related to the fault location, fault impedance, irradiance level, and use of blocking diodes. Particularly, this paper examines the challenges to OCPD in a PV array brought by unique faults: One is a fault that occurs under low-irradiance conditions, and the other is a fault that occurs at night and evolves during “night-to-day” transition. In both circumstances, the faults might remain hidden in the PV system, no matter how irradiance changes afterward. These unique faults may subsequently lead to unexpected safety hazards, reduced system efficiency, and reduced reliability. A small-scale experimental PV system has been developed to further validate the conclusions.", "title": "" }, { "docid": "1634b893909c900194f0f936d3dcdc10", "text": "The 2011 National Electrical Code® (NEC®) added Article 690.11 that requires photovoltaic (PV) systems on or penetrating a building to include a listed DC arc fault protection device. To fill this new market, manufacturers are developing new Arc Fault Circuit Interrupters (AFCIs). Comprehensive and challenging testing has been conducted using a wide range of PV technologies, system topologies, loads and noise sources. The Distributed Energy Technologies Laboratory (DETL) at Sandia National Laboratories (SNL) has used multiple reconfigurable arrays with a variety of module technologies, inverters, and balance of system (BOS) components to characterize new Photovoltaic (PV) DC AFCIs and Arc Fault Detectors (AFDs). The device's detection capabilities, characteristics and nuisance tripping avoidance were the primary purpose of the testing. SNL and Eaton Corporation collaborated to test an Eaton AFD prototype and quantify arc noise for a wide range of PV array configurations and the system responses. The tests were conducted by generating controlled, series PV arc faults between PV modules. Arc fault detection studies were performed on systems using aged modules, positive- and negative-grounded arrays, DC/DC converters, 3-phase inverters, and on strings with branch connectors. The tests were conducted to determine if nuisance trips would occur in systems using electrically noisy inverters, with series arc faults on parallel strings, and in systems with inverters performing anti-islanding and maximum power point tracking (MPPT) algorithms. The tests reported herein used the arc fault detection device to indicate when the trip signal was sent to the circuit interrupter. Results show significant noise is injected into the array from the inverter but AFCI functionality of the device was generally stable. The relative locations of the arc fault and detector had little influence on arc fault detection. Lastly, detection of certain frequency bands successfully differentiated normal operational noise from an arc fault signal.", "title": "" } ]
[ { "docid": "befd91b3e6874b91249d101f8373db01", "text": "Today's biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/", "title": "" }, { "docid": "2943f1d374a6a63ef1b140a83e5a8caf", "text": "Gill morphometric and gill plasticity of the air-breathing striped catfish (Pangasianodon hypophthalmus) exposed to different temperatures (present day 27°C and future 33°C) and different air saturation levels (92% and 35%) during 6weeks were investigated using vertical sections to estimate the respiratory lamellae surface areas, harmonic mean barrier thicknesses, and gill component volumes. Gill respiratory surface area (SA) and harmonic mean water - blood barrier thicknesses (HM) of the fish were strongly affected by both environmental temperature and oxygen level. Thus initial values for 27°C normoxic fish (12.4±0.8g) were 211.8±21.6mm2g-1 and 1.67±0.12μm for SA and HM respectively. After 5weeks in same conditions or in the combinations of 33°C and/or PO2 of 55mmHg, this initial surface area scaled allometrically with size for the 33°C hypoxic group, whereas branchial SA was almost eliminated in the 27°C normoxic group, with other groups intermediate. In addition, elevated temperature had an astounding effect on growth with the 33°C group growing nearly 8-fold faster than the 27°C fish.", "title": "" }, { "docid": "7d0020ff1a7500df1458ddfd568db7b4", "text": "In this position paper, we address the problems of automated road congestion detection and alerting systems and their security properties. We review different theoretical adaptive road traffic control approaches, and three widely deployed adaptive traffic control systems (ATCSs), namely, SCATS, SCOOT and InSync. We then discuss some related research questions, and the corresponding possible approaches, as well as the adversary model and potential attack scenarios. Two theoretical concepts of automated road congestion alarm systems (including system architecture, communication protocol, and algorithms) are proposed on top of ATCSs, such as SCATS, SCOOT and InSync, by incorporating secure wireless vehicle-to-infrastructure (V2I) communications. 
Finally, the security properties of the proposed system have been discussed and analysed using the ProVerif protocol verification tool.", "title": "" }, { "docid": "e8197d339037ada47ed6db5f8f427211", "text": "Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of precurved superelastic tubes and are capable of assuming complex 3-D curves. The family of 3-D curves that the robot can assume depends on the number, curvatures, lengths, and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery.", "title": "" }, { "docid": "b4d7a8b6b24c85af9f62105194087535", "text": "New technologies provide expanded opportunities for interaction design. The growing number of possible ways to interact, in turn, creates a new responsibility for designers: Besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature \"qualities\", \"dimensions\" or \"parameters\" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and its implications for future theory building.", "title": "" }, { "docid": "742528748a3103e145029539b0faeb90", "text": "This report describes the validity testing of a multi-item scale of global life satisfaction, namely the Satisfaction With Life Scale (SWLS). This scale has been proposed as an alternative to single-item life satisfaction measures. 
As expected, the scale has sufficient construct validity and consists of one underlying dimension. The SWLS is significantly related to an alternative global life satisfaction measure, which indicates convergent validity. However, this correlation is rather low, which raises the question whether the SWLS measures the same as the single-item life satisfaction measure. The correlation of the SWLS with an alternative well-being measure (i.e. mental health index) is lower than the correlation with global life satisfaction. This provides evidence of discriminant validity. Other aspects related to well-being, such as health and illness, also correlate significantly with the SWLS. This shows nomological validity. However, the SWLS also has a number of serious shortcomings. It suffers from data collection mode effects and the data show that a specific group of respondents misinterprets the scale. Furthermore, the SWLS and the single-item global life satisfaction are almost equally related to other aspects that predict well-being. Therefore, the SWLS has no clear added value as an alternative of the single-item life satisfaction. In conclusion, it is recommended that a single-item measure be used instead of the SWLS.", "title": "" }, { "docid": "2a057079c544b97dded598b6f0d750ed", "text": "Introduction Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions:", "title": "" }, { "docid": "0aa666e59fc645f8bbc16483581bf4c4", "text": "Wide Area Motion Imagery (WAMI) enables the surveillance of tens of square kilometers with one airborne sensor Each image can contain thousands of moving objects. Applications such as driver behavior analysis or traffic monitoring require precise multiple object tracking that is dependent on initial detections. However, low object resolution, dense traffic, and imprecise image alignment lead to split, merged, and missing detections. No systematic evaluation of moving object detection exists so far although many approaches have been presented in the literature. This paper provides a detailed overview of existing methods for moving object detection in WAMI data. Also we propose a novel combination of short-term background subtraction and suppression of image alignment errors by pixel neighborhood consideration. In total, eleven methods are systematically evaluated using more than 160,000 ground truth detections of the WPAFB 2009 dataset. Best performance with respect to precision and recall is achieved by the proposed one.", "title": "" }, { "docid": "875bba98f3b6dcdc851798c9eef2aa3e", "text": "This paper presents a DC−30 GHz single-polefour-throw (SP4T) CMOS switch using 0.13 μm CMOS process. The CMOS transistor layout is done to minimize the substrate network resistance. The on-chip matching inductors and routing are designed for a very small die area (250 × 180 μm), and modeled using full-wave EM simulations. The SP4T CMOS switch result in an insertion loss of 1.8 dB and 2.7 dB at 5 GHz and 24 GHz, respectively. 
The isolation is > 25 dB up to 30 GHz and achieved using a series-shunt switch configuration. The measured input P1dB and IIP3 of the SP4T switch are 9 dBm and 21 dBm, respectively. To our knowledge, this is the first ultra wideband CMOS SP4T switch and with a very small chip area.", "title": "" }, { "docid": "ea0952674e4fbf5e5c5d3738cc4a6ae1", "text": "Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively thought time, enabling the incremental development of ever more complex knowledge and skills. The lack of consensus in evaluating continual learning algorithms and the almost exclusive focus on forgetting motivate us to propose a more comprehensive set of implementation independent metrics accounting for several factors we believe have practical implications worth considering in the deployment of real AI systems that learn continually: accuracy or performance over time, backward and forward knowledge transfer, memory overhead as well as computational efficiency. Drawing inspiration from the standard Multi-Attribute Value Theory (MAVT) we further propose to fuse these metrics into a single score for ranking purposes and we evaluate our proposal with five continual learning strategies on the iCIFAR-100 continual learning benchmark.", "title": "" }, { "docid": "f891a454b463d130bbe6306d92d05587", "text": "We examine the employment of word embeddings for machine translation (MT) of phrasal verbs (PVs), a linguistic phenomenon with challenging semantics. Using word embeddings, we augment the translation model with two features: one modelling distributional semantic properties of the source and target phrase and another modelling the degree of compositionality of PVs. We also obtain paraphrases to increase the amount of relevant training data. Our method leads to improved translation quality for PVs in a case study with English to Bulgarian MT system.", "title": "" }, { "docid": "ba4860f970b966f482b6c68c63b4404d", "text": "Systems for assessing and tutoring reading skills place unique requirements on underlying ASR technologies. Most responses to a “read out loud” task can be handled with a low perplexity language model, but the educational setting of the task calls for diagnostic measures beyond plain accuracy. Pearson developed an automatic assessment of oral reading fluency that was administered in the field to a large, diverse sample of American adults. Traditional N-gram methods for language modeling are not optimal for the special domain of reading tests because N-grams need too much data and do not produce as accurate recognition. An efficient rule-based language model implemented a set of linguistic rules learned from an archival body of transcriptions, using only the text of the new passage and no passage-specific training data. Results from operational data indicate that this rule-based language model can improve the accuracy of test results and produce useful diagnostic information.", "title": "" }, { "docid": "6886b42b7624d2a47466d7356973f26c", "text": "Conventional on-off keyed signals, such as return-to-zero (RZ) and nonreturn-to-zero (NRZ) signals are susceptible to cross-gain modulation (XGM) in semiconductor optical amplifiers (SOAs) due to pattern effect. In this letter, XGM effect of Manchester-duobinary, RZ differential phase-shift keying (RZ-DPSK), NRZ-DPSK, RZ, and NRZ signals in SOAs were compared. 
The experimental results confirmed the reduction of crosstalk penalty in SOAs by using Manchester-duobinary signals", "title": "" }, { "docid": "200fd3c94e8b064833cfcbe7dfe0d39e", "text": "This article reviews the current opinion of the histopathological findings of common elbow, wrist, and hand tendinopathies. Implications for client management including examination, diagnosis, prognosis, intervention, and outcomes are addressed. Concepts for further research regarding common therapeutic interventions are discussed.", "title": "" }, { "docid": "7182c5b1fac4a4d0d43a15c1feb28be1", "text": "This paper provides an objective evaluation of the performance impacts of binary XML encodings, using a fast stream-based XQuery processor as our representative application. Instead of proposing one binary format and comparing it against standard XML parsers, we investigate the individual effects of several binary encoding techniques that are shared by many proposals. Our goal is to provide a deeper understanding of the performance impacts of binary XML encodings in order to clarify the ongoing and often contentious debate over their merits, particularly in the domain of high performance XML stream processing.", "title": "" }, { "docid": "21df2b20c9ecd6831788e00970b3ca79", "text": "Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.", "title": "" }, { "docid": "75e794b731685064820c79f4d68ed79b", "text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to implicitly indicate groups. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. We discuss results from evaluations of those techniques as well as main areas of application. 
Finally, we report future challenges based on interviews we conducted with leading researchers of the field.", "title": "" }, { "docid": "9d7e520928aa2fdeab7fbfe4fe2258ed", "text": "Psychomotor stimulants and neuroleptics exert multiple effects on dopaminergic signaling and produce the dopamine (DA)-related behaviors of motor activation and catalepsy, respectively. However, a clear relationship between dopaminergic activity and behavior has been very difficult to demonstrate in the awake animal, thus challenging existing notions about the mechanism of these drugs. The present study examined whether the drug-induced behaviors are linked to a presynaptic site of action, the DA transporter (DAT) for psychomotor stimulants and the DA autoreceptor for neuroleptics. Doses of nomifensine (7 mg/kg i.p.), a DA uptake inhibitor, and haloperidol (0.5 mg/kg i.p.), a dopaminergic antagonist, were selected to examine characteristic behavioral patterns for each drug: stimulant-induced motor activation in the case of nomifensine and neuroleptic-induced catalepsy in the case of haloperidol. Presynaptic mechanisms were quantified in situ from extracellular DA dynamics evoked by electrical stimulation and recorded by voltammetry in the freely moving animal. In the first experiment, the maximal concentration of electrically evoked DA ([DA](max)) measured in the caudate-putamen was found to reflect the local, instantaneous change in presynaptic DAT or DA autoreceptor activity according to the ascribed action of the drug injected. A positive temporal association was found between [DA](max) and motor activation following nomifensine (r=0.99) and a negative correlation was found between [DA](max) and catalepsy following haloperidol (r=-0.96) in the second experiment. Taken together, the results suggest that a dopaminergic presynaptic site is a target of systemically applied psychomotor stimulants and regulates the postsynaptic action of neuroleptics during behavior. This finding was made possible by a voltammetric microprobe with millisecond temporal resolution and its use in the awake animal to assess release and uptake, two key mechanisms of dopaminergic neurotransmission. Moreover, the results indicate that presynaptic mechanisms may play a more important role in DA-behavior relationships than is currently thought.", "title": "" }, { "docid": "879f675f7c8a25af3f0feb7bed09504b", "text": "Occlusions by sunglasses, scarf, hats, beard, shadow etc, can significantly reduce the performance of face recognition systems. Although there exists a rich literature of researches focusing on face recognition with illuminations, poses and facial expression variations, there is very limited work reported for occlusion robust face recognition. In this paper, we present a method to restore occluded facial regions using deep learning technique to improve face recognition performance. Inspired by SSDA for facial occlusion removal with known occlusion type and explicit occlusion location detection from a preprocessing step, this paper further introduces Double Channel SSDA (DC-SSDA) which requires no prior knowledge of the types and the locations of occlusions. Experimental results based on CMU-PIE face database have showed that, the proposed method is robust to a variety of occlusion types and locations, and the restored faces could yield significant recognition performance improvements over occluded ones.", "title": "" } ]
scidocsrr
9700d880ea946726f8aa8a0afe0f63d8
Wearable Monitoring Unit for Swimming Performance Analysis
[ { "docid": "8717a6e3c20164981131997efbe08a0d", "text": "The recent maturity of body sensor networks has enabled a wide range of applications in sports, well-being and healthcare. In this paper, we hypothesise that a single unobtrusive head-worn inertial sensor can be used to infer certain biomotion details of specific swimming techniques. The sensor, weighing only seven grams is mounted on the swimmer's goggles, limiting the disturbance to a minimum. Features extracted from the recorded acceleration such as the pitch and roll angles allow to recognise the type of stroke, as well as basic biomotion indices. The system proposed represents a non-intrusive, practical deployment of wearable sensors for swimming performance monitoring.", "title": "" }, { "docid": "4122375a509bf06cc7e8b89cb30357ff", "text": "Textile-based sensors offer an unobtrusive method of continually monitoring physiological parameters during daily activities. Chemical analysis of body fluids, noninvasively, is a novel and exciting area of personalized wearable healthcare systems. BIOTEX was an EU-funded project that aimed to develop textile sensors to measure physiological parameters and the chemical composition of body fluids, with a particular interest in sweat. A wearable sensing system has been developed that integrates a textile-based fluid handling system for sample collection and transport with a number of sensors including sodium, conductivity, and pH sensors. Sensors for sweat rate, ECG, respiration, and blood oxygenation were also developed. For the first time, it has been possible to monitor a number of physiological parameters together with sweat composition in real time. This has been carried out via a network of wearable sensors distributed around the body of a subject user. This has huge implications for the field of sports and human performance and opens a whole new field of research in the clinical setting.", "title": "" } ]
[ { "docid": "0886c323b86b4fac8de6217583841318", "text": "Data Mining is a technique used in various domains to give meaning to the available data Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques. Several major kinds of classification method including decision tree, Bayesian networks, k-nearest neighbour classifier, Neural Network, Support vector machine. The goal of this paper is to provide a review of different classification techniques in data mining. Keywords— Data mining, classification, Supper vector machine (SVM), K-nearest neighbour (KNN), Decision Tree.", "title": "" }, { "docid": "c112b88b7a5762050a54a15d066336b0", "text": "Before 2005, data broker ChoicePoint suffered fraudulent access to its databases that exposed thousands of customers' personal information. We examine Choice-Point's data breach, explore what went wrong from the perspective of consumers, executives, policy, and IT systems, and offer recommendations for the future.", "title": "" }, { "docid": "2923ea4e17567b06b9d8e0e9f1650e55", "text": "A new compact two-segments dielectric resonator antenna (TSDR) for ultrawideband (UWB) application is presented and studied. The design consists of a thin monopole printed antenna loaded with two dielectric resonators with different dielectric constant. By applying a combination of U-shaped feedline and modified TSDR, proper radiation characteristics are achieved. The proposed antenna provides an ultrawide impedance bandwidth, high radiation efficiency, and compact antenna with an overall size of 18 × 36 × 11 mm . From the measurement results, it is found that the realized dielectric resonator antenna with good radiation characteristics provides an ultrawide bandwidth of about 110%, covering a range from 3.14 to 10.9 GHz, which covers UWB application.", "title": "" }, { "docid": "24174e59a5550fbf733c1a93f1519cf7", "text": "Using social practice theory, this article reveals the process of collective value creation within brand communities. Moving beyond a single case study, the authors examine previously published research in conjunction with data collected in nine brand communities comprising a variety of product categories, and they identify a common set of value-creating practices. Practices have an “anatomy” consisting of (1) general procedural understandings and rules (explicit, discursive knowledge); (2) skills, abilities, and culturally appropriate consumption projects (tacit, embedded knowledge or how-to); and (3) emotional commitments expressed through actions and representations. The authors find that there are 12 common practices across brand communities, organized by four thematic aggregates, through which consumers realize value beyond that which the firm creates or anticipates. They also find that practices have a physiology, interact with one another, function like apprenticeships, endow participants with cultural capital, produce a repertoire for insider sharing, generate consumption opportunities, evince brand community vitality, and create value. Theoretical and managerial implications are offered with specific suggestions for building and nurturing brand community and enhancing collaborative value creation between and among consumers and firms.", "title": "" }, { "docid": "114affaf4e25819aafa1c11da26b931f", "text": "We propose a coherent mathematical model for human fingerprint images. 
Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.", "title": "" }, { "docid": "44a8b574a892bff722618d256aa4ba6c", "text": "In this article, we investigate the cross-media retrieval between images and text, that is, using image to search text (I2T) and using text to search images (T2I). Existing cross-media retrieval methods usually learn one couple of projections, by which the original features of images and text can be projected into a common latent space to measure the content similarity. However, using the same projections for the two different retrieval tasks (I2T and T2I) may lead to a tradeoff between their respective performances, rather than their best performances. Different from previous works, we propose a modality-dependent cross-media retrieval (MDCR) model, where two couples of projections are learned for different cross-media retrieval tasks instead of one couple of projections. Specifically, by jointly optimizing the correlation between images and text and the linear regression from one modal space (image or text) to the semantic space, two couples of mappings are learned to project images and text from their original feature spaces into two common latent subspaces (one for I2T and the other for T2I). Extensive experiments show the superiority of the proposed MDCR compared with other methods. In particular, based on the 4,096-dimensional convolutional neural network (CNN) visual feature and 100-dimensional Latent Dirichlet Allocation (LDA) textual feature, the mAP of the proposed method achieves the mAP score of 41.5%, which is a new state-of-the-art performance on the Wikipedia dataset.", "title": "" }, { "docid": "8ea0ac6401d648e359fc06efa59658e6", "text": "Different neural networks have exhibited excellent performance on various speech processing tasks, and they usually have specific advantages and disadvantages. We propose to use a recently developed deep learning model, recurrent convolutional neural network (RCNN), for speech processing, which inherits some merits of recurrent neural network (RNN) and convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependance in the spectrogram of the speech in an efficient way. The model is tested on speech corpus TIMIT for phoneme recognition and IEMOCAP for emotion recognition. Experimental results show that the model is competitive with previous methods in terms of accuracy and efficiency.", "title": "" }, { "docid": "474986186c068f8872f763288b0cabd7", "text": "Mobile ad hoc network researchers face the challenge of achieving full functionality with good performance while linking the new technology to the rest of the Internet. A strict layered design is not flexible enough to cope with the dynamics of manet environments, however, and will prevent performance optimizations. 
The MobileMan cross-layer architecture offers an alternative to the pure layered approach that promotes stricter local interaction among protocols in a manet node.", "title": "" }, { "docid": "c05f2a6df3d58c5a18e0087556c8067e", "text": "Child maltreatment is a major social problem. This paper focuses on measuring the relationship between child maltreatment and crime using data from the National Longitudinal Study of Adolescent Health (Add Health). We focus on crime because it is one of the most costly potential outcomes of maltreatment. Our work addresses two main limitations of the existing literature on child maltreatment. First, we use a large national sample, and investigate different types of maltreatment in a unified framework. Second, we pay careful attention to controlling for possible confounders using a variety of statistical methods that make differing assumptions. The results suggest that maltreatment greatly increases the probability of engaging in crime and that the probability increases with the experience of multiple forms of maltreatment.", "title": "" }, { "docid": "c9df206d8c0bc671f3109c1c7b12b149", "text": "Internet of Things (IoT) — a unified network of physical objects that can change the parameters of the environment or their own, gather information and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. The IoT is enabled by the latest developments, smart sensors, communication technologies, and Internet protocols. This article contains a description of lnternet of things (IoT) networks. Much attention is given to prospects for future of using IoT and it's development. Some problems of development IoT are were noted. The article also gives valuable information on building(construction) IoT systems based on PLC technology.", "title": "" }, { "docid": "638336dba1dd589b0f708a9426483827", "text": "Girard's linear logic can be used to model programming languages in which each bound variable name has exactly one \"occurrence\"---i.e., no variable can have implicit \"fan-out\"; multiple uses require explicit duplication. Among other nice properties, \"linear\" languages need no garbage collector, yet have no dangling reference problems. We show a natural equivalence between a \"linear\" programming language and a stack machine in which the top items can undergo arbitrary permutations. Such permutation stack machines can be considered combinator abstractions of Moore's Forth programming language.", "title": "" }, { "docid": "28552dfe20642145afa9f9fa00218e8e", "text": "Augmented Reality can be of immense benefit to the construction industry. The oft-cited benefits of AR in construction industry include real time visualization of projects, project monitoring by overlaying virtual models on actual built structures and onsite information retrieval. But this technology is restricted by the high cost and limited portability of the devices. Further, problems with real time and accurate tracking in a construction environment hinder its broader application. To enable utilization of augmented reality on a construction site, a low cost augmented reality framework based on the Google Cardboard visor is proposed. The current applications available for Google cardboard has several limitations in delivering an AR experience relevant to construction requirements. 
To overcome these limitations Unity game engine, with the help of Vuforia & Cardboard SDK, is used to develop an application environment which can be used for location and orientation specific visualization and planning of work at construction workface. The real world image is captured through the smart-phone camera input and blended with the stereo input of the 3D models to enable a full immersion experience. The application is currently limited to marker based tracking where the 3D models are triggered onto the user’s view upon scanning an image which is registered with a corresponding 3D model preloaded into the application. A gaze input user interface is proposed which enables the user to interact with the augmented models. Finally usage of AR app while traversing the construction site is illustrated.", "title": "" }, { "docid": "2271347e3b04eb5a73466aecbac4e849", "text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method", "title": "" }, { "docid": "c28dc261ddc770a6655eb1dbc528dd3b", "text": "Software applications are no longer stand-alone systems. They are increasingly the result of integrating heterogeneous collections of components, both executable and data, possibly dispersed over a computer network. Different components can be provided by different producers and they can be part of different systems at the same time. Moreover, components can change rapidly and independently, making it difficult to manage the whole system in a consistent way. Under these circumstances, a crucial step of the software life cycle is deployment—that is, the activities related to the release, installation, activation, deactivation, update, and removal of components, as well as whole systems. This paper presents a framework for characterizing technologies that are intended to support software deployment. The framework highlights four primary factors concerning the technologies: process coverage; process changeability; interprocess coordination; and site, product, and deployment policy abstraction. A variety of existing technologies are surveyed and assessed against the framework. Finally, we discuss promising research directions in software deployment. This work was supported in part by the Air Force Material Command, Rome Laboratory, and the Defense Advanced Research Projects Agency under Contract Number F30602-94-C-0253. The content of the information does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.", "title": "" }, { "docid": "ff002c483d22b4d961bbd2f1a18231fd", "text": "Dogs can be grouped into two distinct types of breed based on the predisposition to chondrodystrophy, namely, non-chondrodystrophic (NCD) and chondrodystrophic (CD). In addition to a different process of endochondral ossification, NCD and CD breeds have different characteristics of intravertebral disc (IVD) degeneration and IVD degenerative diseases. The anatomy, physiology, histopathology, and biochemical and biomechanical characteristics of the healthy and degenerated IVD are discussed in the first part of this two-part review. 
This second part describes the similarities and differences in the histopathological and biochemical characteristics of IVD degeneration in CD and NCD canine breeds and discusses relevant aetiological factors of IVD degeneration.", "title": "" }, { "docid": "58de521ab563333c2051b590592501a8", "text": "Prognostics and systems health management (PHM) is an enabling discipline that uses sensors to assess the health of systems, diagnoses anomalous behavior, and predicts the remaining useful performance over the life of the asset. The advent of the Internet of Things (IoT) enables PHM to be applied to all types of assets across all sectors, thereby creating a paradigm shift that is opening up significant new business opportunities. This paper introduces the concepts of PHM and discusses the opportunities provided by the IoT. Developments are illustrated with examples of innovations from manufacturing, consumer products, and infrastructure. From this review, a number of challenges that result from the rapid adoption of IoT-based PHM are identified. These include appropriate analytics, security, IoT platforms, sensor energy harvesting, IoT business models, and licensing approaches.", "title": "" }, { "docid": "011a9ac960aecc4a91968198ac6ded97", "text": "INTRODUCTION\nPsychological empowerment is really important and has remarkable effect on different organizational variables such as job satisfaction, organizational commitment, productivity, etc. So the aim of this study was to investigate the relationship between psychological empowerment and productivity of Librarians in Isfahan Medical University.\n\n\nMETHODS\nThis was correlational research. Data were collected through two questionnaires. Psychological empowerment questionnaire and the manpower productivity questionnaire of Gold. Smith Hersey which their content validity was confirmed by experts and their reliability was obtained by using Cronbach's Alpha coefficient, 0.89 and 0.9 respectively. Due to limited statistical population, did not used sampling and review was taken via census. So 76 number of librarians were evaluated. Information were reported on both descriptive and inferential statistics (correlation coefficient tests Pearson, Spearman, T-test, ANOVA), and analyzed by using the SPSS19 software.\n\n\nFINDINGS\nIn our study, the trust between partners and efficacy with productivity had the highest correlation. Also there was a direct relationship between psychological empowerment and the productivity of labor (r =0.204). In other words, with rising of mean score of psychological empowerment, the mean score of efficiency increase too.\n\n\nCONCLUSIONS\nThe results showed that if development programs of librarian's psychological empowerment increase in order to their productivity, librarians carry out their duties with better sense. Also with using the capabilities of librarians, the development of creativity with happen and organizational productivity will increase.", "title": "" }, { "docid": "a5090b67307b2efa1f8ae7d6a212a6ff", "text": "Providing highly flexible connectivity is a major architectural challenge for hardware implementation of reconfigurable neural networks. We perform an analytical evaluation and comparison of different configurable interconnect architectures (mesh NoC, tree, shared bus and point-to-point) emulating variants of two neural network topologies (having full and random configurable connectivity). 
We derive analytical expressions and asymptotic limits for performance (in terms of bandwidth) and cost (in terms of area and power) of the interconnect architectures considering three communication methods (unicast, multicast and broadcast). It is shown that multicast mesh NoC provides the highest performance/cost ratio and consequently it is the most suitable interconnect architecture for configurable neural network implementation. Routing table size requirements and their impact on scalability were analyzed. Modular hierarchical architecture based on multicast mesh NoC is proposed to allow large scale neural networks emulation. Simulation results successfully validate the analytical models and the asymptotic behavior of the network as a function of its size. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "78966bb154649f9f4abb87bd5f29b230", "text": "The objective of a news veracity detection system is to identify various types of potentially misleading or false information, typically in a digital platform. A critical challenge in this scenario is that there are large volumes of data available online. However, obtaining samples with annotations (i.e. ground-truth labels) is difficult and a known limiting factor for many data analytic tasks including the current problem of news veracity detection. In this paper, we propose a human-machine collaborative learning system to evaluate the veracity of a news content, with a limited amount of annotated data samples. In a semi-supervised scenario, an initial classifier is learnt on a small, limited amount of the annotated data followed by an interactive approach to gradually update the model by shortlisting only relevant samples from the large pool of unlabeled data that are most likely to improve the classifier performance. Our prioritized active learning solution achieves faster convergence in terms of the classification performance, while requiring about 1–2 orders of magnitude fewer annotated samples compared to fully supervised solutions to attain a reasonably acceptable accuracy of nearly 80%. Unlike traditional deep learning architecture, the proposed active learning based deep model designed with a smaller number of more localized filters per layer can efficiently learn from small relevant sample batches that can effectively improve performance in the weakly-supervised learning environment and thus is more suitable for several practical applications. An effective dynamic domain adaptive feature weighting scheme can adjust the relative importance of feature dimensions iteratively. Insightful initial feedback gathered from two independent learning modules (a NLP shallow feature based classifier and a deep classifier), modeled to capture complementary information about data characteristics are finally fused together to achieve an impressive 25% average gain in the detection performance.", "title": "" }, { "docid": "2faf7fedadfd8b24c4740f7100cf5fec", "text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. 
Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "title": "" } ]
scidocsrr
9b6f2e73000d63def62ea15e63691432
FSIM: A Feature Similarity Index for Image Quality Assessment
[ { "docid": "e42357ff2f957f6964bab00de4722d52", "text": "We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.", "title": "" } ]
[ { "docid": "8464635cbbef4361d56cc017da8d0317", "text": "In large-scale distributed learning, security issues have become increasingly important. Particularly in a decentralized environment, some computing units may behave abnormally, or even exhibit Byzantine failures—arbitrary and potentially adversarial behavior. In this paper, we develop distributed learning algorithms that are provably robust against such failures, with a focus on achieving optimal statistical performance. A main result of this work is a sharp analysis of two robust distributed gradient descent algorithms based on median and trimmed mean operations, respectively. We prove statistical error rates for three kinds of population loss functions: strongly convex, nonstrongly convex, and smooth non-convex. In particular, these algorithms are shown to achieve order-optimal statistical error rates for strongly convex losses. To achieve better communication efficiency, we further propose a median-based distributed algorithm that is provably robust, and uses only one communication round. For strongly convex quadratic loss, we show that this algorithm achieves the same optimal error rate as the robust distributed gradient descent algorithms.", "title": "" }, { "docid": "09404689f2d1620ac85966c19a2671b5", "text": "Purpose. An upsurge of pure red cell aplasia (PRCA) cases associated with subcutaneous treatment with epoetin alpha has been reported. A formulation change introduced in 1998 is suspected to be the reason for the induction of antibodies that also neutralize the native protein. The aim of this study was to detect the mechanism by which the new formulation may induce these antibodies. Methods. Formulations of epoetin were subjected to gel permeation chromatography with UV detection, and the fractions were analyzed by an immunoassay for the presence of epoetin. Results. The chromatograms showed that Eprex®/Erypo® contained micelles of Tween 80. A minute amount of epoetin (0.008-0.033% of the total epoetin content) coeluted with the micelles, as evidenced by ELISA. When 0.03% (w/v) Tween 80, corresponding to the concentration in the formulation, was added to the elution medium, the percentage of epoetin eluting before the main peak was 0.68%. Conclusions. Eprex®/Erypo® contains micelle-associated epoetin, which may be a risk factor for the development of antibodies against epoetin.", "title": "" }, { "docid": "cff8ae2635684a6f0e07142175b7fbf1", "text": "Collaborative writing is on the increase. In order to write well together, authors often need to be aware of who has done what recently. We offer a new tool, DocuViz, that displays the entire revision history of Google Docs, showing more than the one-step-at-a-time view now shown in revision history and tracking changes in Word. We introduce the tool and present cases in which the tool has the potential to be useful: To authors themselves to see recent \"seismic activity,\" indicating where in particular a co-author might want to pay attention, to instructors to see who has contributed what and which changes were made to comments from them, and to researchers interested in the new patterns of collaboration made possible by simultaneous editing capabilities.", "title": "" }, { "docid": "645c0e5b4946217bb6ccaf7f03454cc2", "text": "Nowadays, huge sheet music collections exist on the Web, allowing people to access public domain scores for free. 
However, beginners may be lost in finding a score appropriate to their instrument level, and should often rely on themselves to start out on the chosen piece. In this instrumental e-Learning context, we propose a Score Analyzer prototype in order to automatically extract the difficulty level of a MusicXML piece and suggest advice thanks to a Musical Sign Base (MSB). To do so, we first review methods related to score performance information retrieval. We then identify seven criteria to characterize technical instrumental difficulties and propose methods to extract them from a MusicXML score. The relevance of these criteria is then evaluated through a Principal Components Analysis and compared to human estimations. Lastly we discuss the integration of this work to @MUSE, a collaborative score annotation platform based on multimedia contents indexation.", "title": "" }, { "docid": "9b547f43a345d2acc3a75c80a8b2f064", "text": "A risk-metric framework that supports Enterprise Risk Management is described. At the heart of the framework is the notion of a risk profile that provides risk measurement for risk elements. By providing a generic template in which metrics can be codified in terms of metric space operators, risk profiles can be used to construct a variety of risk measures for different business contexts. These measures can vary from conventional economic risk calculations to the kinds of metrics that are used by decision support systems, such as those supporting inexact reasoning and which are considered to closely match how humans combine information.", "title": "" }, { "docid": "3246cdbc21244152385f67caa862d4cd", "text": "Immediate access to information about people that we encounter is an essential requirement for effective social interactions. In this manuscript we briefly review our work and work of others on familiar face recognition and propose a modified version of our model of neural systems for face perception with a special emphasis on processes associated with recognition of familiar faces. We argue that visual appearance is only one component of successful recognition of familiar individuals. Other fundamental aspects include the retrieval of \"person knowledge\" - the representation of the personal traits, intentions, and outlook of someone we know - and the emotional response we experience when seeing a familiar individual. Specifically, we hypothesize that the \"theory of mind\" areas, that have been implicated in social and cognitive functions other than face perception, play an essential role in the spontaneous activation of person knowledge associated with the recognition of familiar individuals. The amygdala and the insula, structures that are involved in the representation of emotion, also are part of the distributed network of areas that are modulated by familiarity, reflecting the role of emotion in face recognition.", "title": "" }, { "docid": "abcc4de8a7ca3b716fa0951429a6c969", "text": "Recently, deep learning has been successfully applied to the problem of hashing, yielding remarkable performance compared to traditional methods with hand-crafted features. However, most of existing deep hashing methods are designed for the supervised scenario and require a large number of labeled data. In this paper, we propose a novel semi-supervised hashing method for image retrieval, named Deep Hashing with a Bipartite Graph (BGDH), to simultaneously learn embeddings, features and hash codes. 
More specifically, we construct a bipartite graph to discover the underlying structure of data, based on which an embedding is generated for each instance. Then, we feed raw pixels as well as embeddings to a deep neural network, and concatenate the resulting features to determine the hash code. Compared to existing methods, BGDH is a universal framework that is able to utilize various types of graphs and losses. Furthermore, we propose an inductive variant of BGDH to support out-of-sample extensions. Experimental results on real datasets show that our BGDH outperforms state-of-the-art hashing methods.", "title": "" }, { "docid": "f649286f5bb37530bbfced0a48513f4f", "text": "Collobert et al. (2011) showed that deep neural network architectures achieve stateof-the-art performance in many fundamental NLP tasks, including Named Entity Recognition (NER). However, results were only reported for English. This paper reports on experiments for German Named Entity Recognition, using the data from the GermEval 2014 shared task on NER. Our system achieves an F1-measure of 75.09% according to the official metric.", "title": "" }, { "docid": "5d4df5fb218490c349b302d14d78d372", "text": "Here we investigate the automatic detection of fire pixel regions in conventional video (or still) imagery within realtime bounds. As an extension to prior, established approaches within this field we specifically look to extend the primary use of threshold-driven colour spectroscopy to the combined use of colour-texture feature descriptors as an input to a trained classification approach that is independent of temporal information. We show the limitations of such spectroscopy driven approaches on simple, real-world examples and propose our novel extension as a robust, real-time solution within this field by combining simple texture descriptors to illustrate maximal ∼98% fire region detection.", "title": "" }, { "docid": "3f5097b33aab695678caca712b649a8f", "text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. 
All mistakes in this article are my own.", "title": "" }, { "docid": "1b781833b9baaa393fc2d909be21c2c3", "text": "BACKGROUND\nThe main aim of this study was to explore the relationships between personal self-concept and satisfaction with life, with the latter as the key indicator for personal adjustment. The study tests a structural model which encompasses four dimensions of self-concept: self-fulfillment, autonomy, honesty and emotions.\n\n\nMETHOD\nThe 801 participants in the study, all of whom were aged between 15 and 65 (M = 34.03, SD = 17.29), completed the Satisfaction with Life Scale (SWLS) and the Personal Self-Concept (APE) Questionnaire.\n\n\nRESULTS\nAlthough the four dimensions of personal self-concept differ in their weight, the results show that, taken together, they explain 46% of the differences observed in satisfaction with life. This implies a weight that is as significant as that observed for general self-esteem in previous research studies.\n\n\nCONCLUSIONS\nThis issue should be dealt with early on, during secondary education, in order to help prevent psychological distress or maladjustment.", "title": "" }, { "docid": "c67fd84601a528ea951fcf9952f46316", "text": "Electric vehicles make use of permanent-magnet (PM) synchronous traction motors for their high torque density and efficiency. A comparison between interior PM and surface-mounted PM (SPM) motors is carried out, in terms of performance at given inverter ratings. The results of the analysis, based on a simplified analytical model and confirmed by finite element (FE) analysis, show that the two motors have similar rated power but that the SPM motor has barely no overload capability, independently of the available inverter current. Moreover, the loss behavior of the two motors is rather different in the various operating ranges with the SPM one better at low speed due to short end connections but penalized at high speed by the need of a significant deexcitation current. The analysis is validated through FE simulation of two actual motor designs.", "title": "" }, { "docid": "d7065dccb396b0a47526fc14e0a9e796", "text": "A modified compact antipodal Vivaldi antenna is proposed with good performance for different applications including microwave and millimeter wave imaging. A step-by-step procedure is applied in this design including conventional antipodal Vivaldi antenna (AVA), AVA with a periodic slit edge, and AVA with a trapezoid-shaped dielectric lens to feature performances including wide bandwidth, small size, high gain, front-to-back ratio and directivity, modification on E-plane beam tilt, and small sidelobe levels. By adding periodic slit edge at the outer brim of the antenna radiators, lower-end limitation of the conventional AVA extended twice without changing the overall dimensions of the antenna. The optimized antenna is fabricated and tested, and the results show that S11 <; -10 dB frequency band is from 3.4 to 40 GHz, and it is in good agreement with simulation one. Gain of the antenna has been elevated by the periodic slit edge and the trapezoid dielectric lens at lower frequencies up to 8 dB and at higher frequencies up to 15 dB, respectively. The E-plane beam tilts and sidelobe levels are reduced by the lens.", "title": "" }, { "docid": "62492ad62fee28f04b002c9bfe860b78", "text": "Non-Intrusive Appliance Load Monitoring has drawn increasing attention in the last few years. 
Many existing studies that use machine learning for this problem assume that the analyst has access to the actual appliances states at every sample instant, whereas in fact collecting this information exposes consumers to severe privacy risks. It may, however, be possible to persuade consumers to provide brief samples of the operation of their home appliances as part of a “registration” process for smart metering (if appropriate financial incentives are offered). This labeled data would then be supplemented by a large volume of unlabeled data. Hence, we propose the use of semi-supervised learning for non-intrusive appliance load monitoring. Furthermore, based on our previous work, we model the simultaneous operation of multiple appliances via multi-label classification. Thus, our proposed approach employs semi-supervised multi-label classifiers for the monitoring task. Experiments on publicly-available dataset demonstrate our proposed method.", "title": "" }, { "docid": "ec8847a65f015a52ce90bdd304103658", "text": "This study has a purpose to investigate the adoption of online games technologies among adolescents and their behavior in playing online games. The findings showed that half of them had experience ten months or less in playing online games with ten hours or less for each time playing per week. Nearly fifty-four percent played up to five times each week where sixty-six percent played two hours or less. Behavioral Intention has significant correlation to model variables naming Perceived Enjoyment, Flow Experience, Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions; Experience; and the number and duration of game sessions. The last, Performance Expectancy and Facilitating Condition had a positive, medium, and statistically direct effect on Behavioral Intention. Four other variables Perceived Enjoyment, Flow Experience, Effort Expectancy, and Social Influence had positive or negative, medium or small, and not statistically direct effect on Behavioral Intention. Additionally, Flow Experience and Social Influence have no significant different between the mean value for male and female. Other variables have significant different regard to gender, where mean value of male was significantly greater than female except for Age. Practical implications of this study are relevant to groups who have interest to enhance or to decrease the adoption of online games technologies. Those to enhance the adoption of online games technologies must: preserve Performance Expectancy and Facilitating Conditions; enhance Flow Experience, Perceived Enjoyment, Effort Expectancy, and Social Influence; and engage the adolescent's online games behavior, specifically supporting them in longer playing games and in enhancing their experience. The opposite actions to these proposed can be considered to decrease the adoption.", "title": "" }, { "docid": "068d87d2f1e24fdbe8896e0ab92c2934", "text": "This paper presents a primary color optical pixel sensor circuit that utilizes hydrogenated amorphous silicon thin-film transistors (TFTs). To minimize the effect of ambient light on the sensing result of optical sensor circuit, the proposed sensor circuit combines photo TFTs with color filters to sense a primary color optical input signal. A readout circuit, which also uses thin-film transistors, is integrated into the sensor circuit for sampling the stored charges in the pixel sensor circuit. 
Measurements demonstrate that the signal-to-noise ratio of the proposed sensor circuit is unaffected by ambient light under illumination up to 12 000 lux by white LEDs. Thus, the proposed optical pixel sensor circuit is suitable for receiving primary color optical input signals in large TFT-LCD panels.", "title": "" }, { "docid": "eddcf41fe566b65540d147171ce50002", "text": "This paper addresses the problem of virtual pedestrian autonomous navigation for crowd simulation. It describes a method for solving interactions between pedestrians and avoiding inter-collisions. Our approach is agent-based and predictive: each agent perceives surrounding agents and extrapolates their trajectory in order to react to potential collisions. We aim at obtaining realistic results, thus the proposed model is calibrated from experimental motion capture data. Our method is shown to be valid and solves major drawbacks compared to previous approaches such as oscillations due to a lack of anticipation. We first describe the mathematical representation used in our model, we then detail its implementation, and finally, its calibration and validation from real data.", "title": "" }, { "docid": "f45fc4d1cefa08f09b60752f44359090", "text": "A novel organization of switched capacitor charge pump circuits based on voltage doubler structures is presented in this paper. Each voltage doubler takes a dc input and outputs a doubled dc voltage. By cascading voltage doublers the output voltage increases up to2 times. A two-phase voltage doubler and a multiphase voltage doubler (MPVD) structures are discussed and design considerations are presented. A simulator working in the – realm was used for simplified circuit level simulation. In order to evaluate the power delivered by a charge pump, a resistive load is attached to the output of the charge pump and an equivalent capacitance is evaluated. A comparison of the voltage doubler circuits with Dickson charge pump and Makowski’s voltage multiplier is presented in terms of the area requirements, the voltage gain, and the power level. This paper also identifies optimum loading conditions for different configurations of the charge pumps. Design guidelines for the desired voltage and power levels are discussed. A two-stage MPVD was fabricated using MOSIS 2.0m CMOS technology. It was designed with internal frequency regulation to reduce power consumption under no load condition.", "title": "" }, { "docid": "c09d57ca9130dc39bd51acb5628e99d0", "text": "The goal of the DECODA project is to reduce the development cost of Speech Analytics systems by reducing the need for manual annotation. This project aims to propose robust speech data mining tools in the framework of call-center monitoring and evaluation, by means of weakly supervised methods. The applicative framework of the project is the call-center of the RATP (Paris public transport authority). This project tackles two very important open issues in the development of speech mining methods from spontaneous speech recorded in call-centers : robustness (how to extract relevant information from very noisy and spontaneous speech messages) and weak supervision (how to reduce the annotation effort needed to train and adapt recognition and classification models). This paper describes the DECODA corpus collected at the RATP during the project. 
We present the different annotation levels performed on the corpus, the methods used to obtain them, as well as some evaluation of the quality of the annotations produced.", "title": "" }, { "docid": "3789f0298f0ad7935e9267fa64c33a59", "text": "We study session key distribution in the three-party setting of Needham and Schroeder. (This is the trust model assumed by the popular Kerberos authentication system.) Such protocols are basic building blocks for contemporary distributed systems|yet the underlying problem has, up until now, lacked a de nition or provably-good solution. One consequence is that incorrect protocols have proliferated. This paper provides the rst treatment of this problem in the complexitytheoretic framework of modern cryptography. We present a de nition, protocol, and a proof that the protocol satis es the de nition, assuming the (minimal) assumption of a pseudorandom function. When this assumption is appropriately instantiated, our protocols are simple and e cient. Abstract appearing in Proceedings of the 27th ACM Symposium on the Theory of Computing, May 1995.", "title": "" } ]
scidocsrr
3393e9f9ac0d814e7bd88ec347d8a93a
Modularity-based Dynamic Community Detection
[ { "docid": "52844cb9280029d5ddec869945b28be2", "text": "In this work, a new fast dynamic community detection algorithm for large scale networks is presented. Most of the previous community detection algorithms are designed for static networks. However, large scale social networks are dynamic and evolve frequently over time. To quickly detect communities in dynamic large scale networks, we proposed dynamic modularity optimizer framework (DMO) that is constructed by modifying well-known static modularity based community detection algorithm. The proposed framework is tested using several different datasets. According to our results, community detection algorithms in the proposed framework perform better than static algorithms when large scale dynamic networks are considered.", "title": "" }, { "docid": "f31bbf333b3513be695f8d10892b39eb", "text": "In this paper a simple but efficient real-time detecting algorithm is proposed for tracking community structure of dynamic networks. Community structure is intuitively characterized as divisions of network nodes into subgroups, within which nodes are densely connected while between which they are sparsely connected. To evaluate the quality of community structure of a network, a metric called modularity is proposed and many algorithms are developed on optimizing it. However, most of the modularity based algorithms deal with static networks and cannot be performed frequently, due to their high computing complexity. In order to track the community structure of dynamic networks in a finegrained way, we propose a modularity based algorithm that is incremental and has very low computing complexity. In our algorithm we adopt a two-step approach. Firstly we apply the Blondel et al’s algorithm for detecting static communities to obtain an initial community structure. Then, apply our incremental updating strategies to track the dynamic communities. The performance of our algorithm is measured in terms of the modularity. We test the algorithm on tracking community structure of Enron Email and three other real world datasets. The experimental results show that our algorithm can keep track of community structure in time and outperform the well known CNM algorithm in terms of modularity.", "title": "" } ]
[ { "docid": "cb6c816afcca2401b72f7ba224cd601b", "text": "Smart city services are enabled by a massive use of Internet of Things (IoT) technologies. The huge amount of sensors, and terminals with a great variety of typologies and applications, requires a secure way to manage them. Capillary networks can be seen as a short range extension of conventional access network in order to efficiently capture the IoT traffic, and are enablers for smart city services. They can include both IP and non-IP devices, and security can become an issue, especially when simple unidirectional communication devices are considered. The main goal of this paper is to analyze security aspects in IoT capillary networks including unidirectional and bidirectional IP or non-IP devices. We propose an algorithm for secure access for uni- and bi-directional devices. The security procedure is based on a secure key renewal (without any exchange in air), considering a local clock time and a time interval of key validity. Following previous work in 2014 by Giuliano et al., in this paper we assess the duration of the validity of the time window, and present extended simulation results in terms of (average) transmission time in a realistic scenario, i.e., including the presence of disturber(s), then providing indications for the setting of the duration of the key validity time window. Finally, we present the benchmark analysis in order to assess the effectiveness of our approach with respect to other existing standards, as well as the security analysis in terms of typical attacks.", "title": "" }, { "docid": "b354f4f9bd12caef2a22ebfeae315cb5", "text": "In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy.", "title": "" }, { "docid": "6e80b0b867b5b2863cd82663546c5642", "text": "Until recently, our understanding of how language is organized in the brain depended on analysis of behavioral deficits in patients with fortuitously placed lesions. The availability of functional magnetic resonance imaging (fMRI) for in vivo analysis of the normal brain has revolutionized the study of language. This review discusses three lines of fMRI research into how the semantic system is organized in the adult brain. These are (a) the role of the left inferior frontal lobe in semantic processing and dissociations from other frontal lobe language functions, (b) the organization of categories of objects and concepts in the temporal lobe, and (c) the role of the right hemisphere in comprehending contextual and figurative meaning. 
Together, these lines of research broaden our understanding of how the brain stores, retrieves, and makes sense of semantic information, and they challenge some commonly held notions of functional modularity in the language system.", "title": "" }, { "docid": "3f72a668554a2cb69170055a3522c37f", "text": "In ancient times goods and services were exchanged through barter system1 Gold, valuable metals and other tangibles like stones and shells were also exploited as medium of exchange. Now Paper Currency (PC) is country-wide accepted common medium of trade. It has three major flaws. First, the holder of currency is always at risk due to theft and robbery culture in most of the societies of world. Second, counterfeit 2 currency is a challenge for currency issuing authorities. Third, printing and transferring PC causes a heavy cost. Different organizations have introduced and implemented digital currency systems but none of them is governed by any government. In this paper we introduce Official digital currency System (ODCS). Our proposed digital currency is issued and controlled by the state/central bank of a country that is why we name it Official digital currency (ODC). The process of issuing ODC is almost same as that of Conventional Paper Currency (CPC) but controlling system is different. The proposal also explains country-wide process of day to day transactions in trade through ODCS. ODC is more secure, reliable, economical and easy to use. Here we introduce just the idea and compulsory modules of ODC system and not the implementable framework. We will present the implementable framework in a separate forthcoming publication.", "title": "" }, { "docid": "064505e942f5f8fd5f7e2db5359c7fe8", "text": "THE hopping of kangaroos is reminiscent of a bouncing ball or the action of a pogo stick. This suggests a significant storage and recovery of energy in elastic elements. One might surmise that the kangaroo's first hop would require a large amount of energy whereas subsequent hops could rely extensively on elastic rebound. If this were the case, then the kangaroo's unusual saltatory mode of locomotion should be an energetically inexpensive way to move.", "title": "" }, { "docid": "bc5f0d388b28e2091ce5d6ff562d7594", "text": "The use of time-motion analysis has advanced our understanding of position-specific work rate profiles and the physical requirements of soccer players. Still, many of the typical soccer activities can be neglected, as these systems only examine activities measured by distance and speed variables. This study used triaxial accelerometer and time-motion analysis to obtain new knowledge about elite soccer players' match load. Furthermore, we determined acceleration/deceleration profiles of elite soccer players and their contribution to the players' match load. The data set includes every domestic home game (n = 45) covering 3 full seasons (2009, 2010, and 2011) for the participating team (Rosenborg FC), and includes 8 central defenders (n = 68), 9 fullbacks (n = 83), 9 central midfielders (n = 70), 7 wide midfielders (n = 39), and 5 attackers (A, n = 50). A novel finding was that accelerations contributed to 7-10% of the total player load for all player positions, whereas decelerations contributed to 5-7%. Furthermore, the results indicate that other activities besides the high-intensity movements contribute significantly to the players' total match workload. 
Therefore, motion analysis alone may underestimate player load because many high-intensity actions are without a change in location at the pitch or they are classified as low-speed activity according to current standards. This new knowledge may help coaches to better understand the different ways players achieve match load and could be used in developing individualized programs that better meet the \"positional physical demands\" in elite soccer.", "title": "" }, { "docid": "69b9389893cc6b72c94d5c5b8ed940ae", "text": "Due to the rapid growth of network infrastructure and sensor, the age of the IoT (internet of things) that can be implemented into the smart car, smart home, smart building, and smart city is coming. IoT is a very useful ecosystem that provides various services (e.g., amazon echo); however, at the same time, risk can be huge too. Collecting information to help people could lead serious information leakage, and if IoT is combined with critical control system (e.g., train control system), security attack would cause loss of lives. Furthermore, research on IoT security requirements is insufficient now. Therefore, this paper focuses on IoT security, and its requirements. First, we propose basic security requirements of IoT by analyzing three basic characteristics (i.e., heterogeneity, resource constraint, dynamic environment). Then, we suggest six key elements of IoT (i.e., IoT network, cloud, user, attacker, service, platform) and analyze their security issues for overall security requirements. In addition, we evaluate several IoT security requirement researches.", "title": "" }, { "docid": "abe9e19b8e5e388933645ce25c48b2b1", "text": "We introduce \"time hallucination\": synthesizing a plausible image at a different time of day from an input image. This challenging task often requires dramatically altering the color appearance of the picture. In this paper, we introduce the first data-driven approach to automatically creating a plausible-looking photo that appears as though it were taken at a different time of day. The time of day is specified by a semantic time label, such as \"night\".\n Our approach relies on a database of time-lapse videos of various scenes. These videos provide rich information about the variations in color appearance of a scene throughout the day. Our method transfers the color appearance from videos with a similar scene as the input photo. We propose a locally affine model learned from the video for the transfer, allowing our model to synthesize new color data while retaining image details. We show that this model can hallucinate a wide range of different times of day. The model generates a large sparse linear system, which can be solved by off-the-shelf solvers. We validate our methods by synthesizing transforming photos of various outdoor scenes to four times of interest: daytime, the golden hour, the blue hour, and nighttime.", "title": "" }, { "docid": "825640f8ce425a34462b98869758e289", "text": "Recurrent neural networks scale poorly due to the intrinsic difficulty in parallelizing their state computations. For instance, the forward pass computation of ht is blocked until the entire computation of ht−1 finishes, which is a major bottleneck for parallel computing. In this work, we propose an alternative RNN implementation by deliberately simplifying the state computation and exposing more parallelism. The proposed recurrent unit operates as fast as a convolutional layer and 5-10x faster than cuDNN-optimized LSTM. 
We demonstrate the unit’s effectiveness across a wide range of applications including classification, question answering, language modeling, translation and speech recognition. We open source our implementation in PyTorch and CNTK1.", "title": "" }, { "docid": "5f68b3ab2253349941fc1bf7e602c6a2", "text": "Motivated by recent advances in adaptive sparse representations and nonlocal image modeling, we propose a patch-based image interpolation algorithm under a set theoretic framework. Our algorithm alternates the projection onto two convex sets: one is given by the observation data and the other defined by a sparsity-based nonlocal prior similar to BM3D. In order to optimize the design of observation constraint set, we propose to address the issue of sampling pattern and model it by a spatial point process. A Monte-Carlo based algorithm is proposed to optimize the randomness of sampling patterns to better approximate homogeneous Poisson process. Extensive experimental results in image interpolation and coding applications are reported to demonstrate the potential of the proposed algorithms.", "title": "" }, { "docid": "7e77b9204a8f59a4343b52211ff907d1", "text": "The stability of the sintering end point is an important precondition for the smooth running of the sintering machine and the assurance of the quality and quantity of sintering ore. The sintering flame of the sintering machine tail involves a great deal of associated information about sintering end point. Extracting the image feature of the flame has important significance for judging the operation state of sintering. Firstly, taking flame image of normal sintering as research objects, analyzing the energy information in the flame image; Secondly, reducing the halo effect on the image by processing image with Discrete Fourier transform (DFT); Thirdly, using the color decomposition method to decompose the RGB image into three components of R, G and B, obtaining the principal component image of RGB by removing the background of image; Finally, obtaining ideal binary image through threshold segmentation and mathematical morphology processing on image, and realizing geometric features extraction of the flame image.", "title": "" }, { "docid": "f7de95bb35f7f53518f6c86e06ce9e48", "text": "Domain Generation Algorithms (DGAs) are a popular technique used by contemporary malware for command-and-control (C&C) purposes. Such malware utilizes DGAs to create a set of domain names that, when resolved, provide information necessary to establish a link to a C&C server. Automated discovery of such domain names in real-time DNS traffic is critical for network security as it allows to detect infection, and, in some cases, take countermeasures to disrupt the communication and identify infected machines. Detection of the specific DGA malware family provides the administrator valuable information about the kind of infection and steps that need to be taken. In this paper we compare and evaluate machine learning methods that classify domain names as benign or DGA, and label the latter according to their malware family. Unlike previous work, we select data for test and training sets according to observation time and known seeds. This allows us to assess the robustness of the trained classifiers for detecting domains generated by the same families at a different time or when seeds change. Our study includes tree ensemble models based on human-engineered features and deep neural networks that learn features automatically from domain names. 
We find that all state-of-the-art classifiers are significantly better at catching domain names from malware families with a time-dependent seed compared to time-invariant DGAs. In addition, when applying the trained classifiers on a day of real traffic, we find that many domain names unjustifiably are flagged as malicious, thereby revealing the shortcomings of relying on a standard whitelist for training a production grade DGA detection system.", "title": "" }, { "docid": "532d5655281bf409dd6a44c1f875cd88", "text": "BACKGROUND\nOlder adults are at increased risk of experiencing loneliness and depression, particularly as they move into different types of care communities. Information and communication technology (ICT) usage may help older adults to maintain contact with social ties. However, prior research is not consistent about whether ICT use increases or decreases isolation and loneliness among older adults.\n\n\nOBJECTIVE\nThe purpose of this study was to examine how Internet use affects perceived social isolation and loneliness of older adults in assisted and independent living communities. We also examined the perceptions of how Internet use affects communication and social interaction.\n\n\nMETHODS\nOne wave of data from an ongoing study of ICT usage among older adults in assisted and independent living communities in Alabama was used. Regression analysis was used to determine the relationship between frequency of going online and isolation and loneliness (n=205) and perceptions of the effects of Internet use on communication and social interaction (n=60).\n\n\nRESULTS\nAfter controlling for the number of friends and family, physical/emotional social limitations, age, and study arm, a 1-point increase in the frequency of going online was associated with a 0.147-point decrease in loneliness scores (P=.005). Going online was not associated with perceived social isolation (P=.14). Among the measures of perception of the social effects of the Internet, each 1-point increase in the frequency of going online was associated with an increase in agreement that using the Internet had: (1) made it easier to reach people (b=0.508, P<.001), (2) contributed to the ability to stay in touch (b=0.516, P<.001), (3) made it easier to meet new people (b=0.297, P=.01, (4) increased the quantity of communication with others (b=0.306, P=.01), (5) made the respondent feel less isolated (b=0.491, P<.001), (6) helped the respondent feel more connected to friends and family (b=0.392, P=.001), and (7) increased the quality of communication with others (b=0.289, P=.01).\n\n\nCONCLUSIONS\nUsing the Internet may be beneficial for decreasing loneliness and increasing social contact among older adults in assisted and independent living communities.", "title": "" }, { "docid": "61f257b3cebc439d7902e6c85b525237", "text": "In this paper, we propose a generalization of the algorithm we developed previously. Along the way, we also develop a theory of quaternionic M symbols whose definition bears some resemblance to the classical M -symbols, except for their combinatorial nature. The theory gives a more efficient way to compute Hilbert modular forms over totally real number fields, especially quadratic fields, and we have illustrated it with several examples. 
Namely, we have computed all the newforms of prime levels of norm less than 100 over the quadratic fields Q( √ 29) and Q( √ 37), and whose Fourier coefficients are rational or are defined over a quadratic field.", "title": "" }, { "docid": "f1132d786a6384e3c1a6db776922ee69", "text": "The analysis of forensic investigation results has generally been identified as the most complex phase of a digital forensic investigation. This phase becomes more complicated and time consuming as the storage capacity of digital devices is increasing, while at the same time the prices of those devices are decreasing. Although there are some tools and techniques that assist the investigator in the analysis of digital evidence, they do not adequately address some of the serious challenges, particularly with the time and effort required to conduct such tasks. In this paper, we consider the use of semantic web technologies and in particular the ontologies, to assist the investigator in analyzing digital evidence. A novel ontology-based framework is proposed for forensic analysis tools, which we believe has the potential to influence the development of such tools. The framework utilizes a set of ontologies to model the environment under investigation. The evidence extracted from the environment is initially annotated using the Resource Description Framework (RDF). The evidence is then merged from various sources to identify new and implicit information with the help of inference engines and classification mechanisms. In addition, we present the ongoing development of a forensic analysis tool to analyze content retrieved from Android smart phones. For this purpose, several ontologies have been created to model some concepts of the smart phone environment.", "title": "" }, { "docid": "64ef634078467594df83fe4cec779c27", "text": "In Natural Language Processing the sequence-to-sequence, encoder-decoder model is very successful in generating sentences, as are the tasks of dialogue, translation and question answering. On top of this model an attention mechanism is often used. The attention mechanism has the ability to look back at all encoder outputs for every decoding step. The performance increase of attention shows that the final encoded state of an input sequence alone is too poor to successfully generate a target. In this paper more elaborate forms of attention, namely external memory, are investigated on varying properties within the range of dialogue. In dialogue, the target sequence is much more complex to predict than in other tasks, since the sequence can be of arbitrary length and can contain any information related to any of the previous utterances. External memory is hypothesized to improve performance exactly because of these properties of dialogue. Varying memory models are tested on a range of context sizes. Some memory modules show more stable results with an increasing context size.", "title": "" }, { "docid": "a41dfbce4138a8422bc7ddfac830e557", "text": "This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. 
Models for all three phases (i.e., boost, coast, and reentry) of motion are covered.", "title": "" }, { "docid": "3be195643e5cb658935b20997f7ebdea", "text": "We describe the structure and functionality of the Internet Cache Protocol (ICP) and its implementation in the Squid Web Caching software. ICP is a lightweight message format used for communication among Web caches. Caches exchange ICP queries and replies to gather information to use in selecting the most appropriate location from which to retrieve an object. We present background on the history of ICP, and discuss issues in ICP deployment, e ciency, security, and interaction with other aspects of Web tra c behavior. We catalog successes, failures, and lessons learned from using ICP to deploy a global Web cache hierarchy.", "title": "" }, { "docid": "786d1ba82d326370684395eba5ef7cd3", "text": "A miniaturized dual-band Wilkinson power divider with a parallel LC circuit at the midpoints of two coupled-line sections is proposed in this paper. General design equations for parallel inductor L and capacitor C are derived from even- and odd-mode analysis. Generally speaking, characteristic impedances between even and odd modes are different in two coupled-line sections, and their electrical lengths are also different in inhomogeneous medium. This paper proved that a parallel LC circuit compensates for the characteristic impedance differences and the electrical length differences for dual-band operation. In other words, the proposed model provides self-compensation structure, and no extra compensation circuits are needed. Moreover, the upper limit of the frequency ratio range can be adjusted by two coupling strengths, where loose coupling for the first coupled-line section and tight coupling for the second coupled-line section are preferred for a wider frequency ratio range. Finally, an experimental circuit shows good agreement with the theoretical simulation.", "title": "" } ]
scidocsrr
c5866cd38e9fb246e011b3ca468f5fc4
After Sandy Hook Elementary: A Year in the Gun Control Debate on Twitter
[ { "docid": "b5004502c5ce55f2327e52639e65d0b6", "text": "Public health applications using social media often require accurate, broad-coverage location information. However, the standard information provided by social media APIs, such as Twitter, cover a limited number of messages. This paper presents Carmen, a geolocation system that can determine structured location information for messages provided by the Twitter API. Our system utilizes geocoding tools and a combination of automatic and manual alias resolution methods to infer location structures from GPS positions and user-provided profile data. We show that our system is accurate and covers many locations, and we demonstrate its utility for improving influenza surveillance.", "title": "" } ]
[ { "docid": "210ec3c86105f496087c7b012619e1d3", "text": "An ultra compact projection system based on a high brightness OLEd micro display is developed. System design and realization of a prototype are presented. This OLEd pico projector with a volume of about 10 cm3 can be integrated into portable systems like mobile phones or PdAs. The Fraunhofer IPMS developed the high brightness monochrome OLEd micro display. The Fraunhofer IOF desig­ ned the specific projection lens [1] and in tegrated the OLEd and the projection optic to a full functional pico projection system. This article provides a closer look on the technology and its possibilities.", "title": "" }, { "docid": "f7d56588da8f5c5ac0f1481e5f2286b4", "text": "Machine learning is an established method of selecting algorithms to solve hard search problems. Despite this, to date no systematic comparison and evaluation of the different techniques has been performed and the performance of existing systems has not been critically compared to other approaches. We compare machine learning techniques for algorithm selection on real-world data sets of hard search problems. In addition to well-established approaches, for the first time we also apply statistical relational learning to this problem. We demonstrate that most machine learning techniques and existing systems perform less well than one might expect. To guide practitioners, we close by giving clear recommendations as to which machine learning techniques are likely to perform well based on our experiments.", "title": "" }, { "docid": "86dd65bddeb01d4395b81cef0bc4f00e", "text": "Many people may see the development of software and hardware like different disciplines. However, there are great similarities between them that have been shown due to the appearance of extensions for general purpose programming languages for its use as hardware description languages. In this contribution, the approach proposed by the MyHDL package to use Python as an HDL is analyzed by making a comparative study. This study is based on the independent application of Verilog and Python based flows to the development of a real peripheral. The use of MyHDL has revealed to be a powerful and promising tool, not only because of the surprising results, but also because it opens new horizons towards the development of new techniques for modeling and verification, using the full power of one of the most versatile programming languages nowadays.", "title": "" }, { "docid": "d2e0c8db8724b25a646e2c1f24f395bc", "text": "US Presidential election is an event anticipated by US citizens and people around the world. By utilizing the big data provided by social media, this research aims to make a prediction of the party or candidate that will win the US presidential election 2016. This paper proposes two stages in research methodology which is data collection and implementation. Data used in this research are collected from Twitter. The implementation stage consists of preprocessing, sentiment analysis, aggregation, and implementation of Electoral College system to predict the winning party or candidate. The implementation of Electoral College will be limited only by using winner take all basis for all states. The implementations are referring from previous works with some addition of methods. 
The proposed method still unable to use real time data due to random user location value gathered from Twitter REST API, and researchers will be working on it for future works.", "title": "" }, { "docid": "fe697283a3e08f04d439ffaeb11746e9", "text": "Visual Question Answering (VQA) has attracted attention from both computer vision and natural language processing communities. Most existing approaches adopt the pipeline of representing an image via pre-trained CNNs, and then using the uninterpretable CNN features in conjunction with the question to predict the answer. Although such end-to-end models might report promising performance, they rarely provide any insight, apart from the answer, into the VQA process. In this work, we propose to break up the end-to-end VQA into two steps: explaining and reasoning, in an attempt towards a more explainable VQA by shedding light on the intermediate results between these two steps. To that end, we first extract attributes and generate descriptions as explanations for an image using pre-trained attribute detectors and image captioning models, respectively. Next, a reasoning module utilizes these explanations in place of the image to infer an answer to the question. The advantages of such a breakdown include: (1) the attributes and captions can reflect what the system extracts from the image, thus can provide some explanations for the predicted answer; (2) these intermediate results can help us identify the inabilities of both the image understanding part and the answer inference part when the predicted answer is wrong. We conduct extensive experiments on a popular VQA dataset and dissect all results according to several measurements of the explanation quality. Our system achieves comparable performance with the state-of-theart, yet with added benefits of explanability and the inherent ability to further improve with higher quality explanations.", "title": "" }, { "docid": "6469b318a84d5865e304a8afd4408cfa", "text": "5-hydroxytryptamine (5-HT, serotonin) is an ancient biochemical manipulated through evolution to be utilized extensively throughout the animal and plant kingdoms. Mammals employ 5-HT as a neurotransmitter within the central and peripheral nervous systems, and also as a local hormone in numerous other tissues, including the gastrointestinal tract, the cardiovascular system and immune cells. This multiplicity of function implicates 5-HT in a vast array of physiological and pathological processes. This plethora of roles has consequently encouraged the development of many compounds of therapeutic value, including various antidepressant, antipsychotic and antiemetic drugs.", "title": "" }, { "docid": "148f27fdea734cf4ae50d38caca94827", "text": "This paper discusses a personalized heart monitoring system using smart phones and wireless (bio) sensors. We combine ubiquitous computing with mobile health technology to monitor the wellbeing of high risk cardiac patients. The smart phone analyses in real-time the ECG data and determines whether the person needs external help. We focus on two life threatening arrhythmias: ventricular fibrillation (VF) and ventricular tachycardia (VT). The smart phone can automatically alert the ambulance and pre assigned caregivers when a VF/VT arrhythmia is detected. The system can be personalized to the needs and requirements of the patient. It can be used to give advice (e.g. 
exercise more) or to reassure the patient when the bio-sensors and environmental data are within predefined ranges", "title": "" }, { "docid": "a07338beeb3246954815e0389c59ae29", "text": "We have proposed gate-all-around Silicon nanowire MOSFET (SNWFET) on bulk Si as an ultimate transistor. Well controlled processes are used to achieve gate length (LG) of sub-10nm and narrow nanowire widths. Excellent performance with reasonable VTH and short channel immunity are achieved owing to thin nanowire channel, self-aligned gate, and GAA structure. Transistor performance with gate length of 10nm has been demonstrated and nanowire size (DNW) dependency of various electrical characteristics has been investigated. Random telegraph noise (RTN) in SNWFET is studied as well.", "title": "" }, { "docid": "3013a8b320cbbfc1ac8fed7c06d6996f", "text": "Security and privacy are among the most pressing concerns that have evolved with the Internet. As networks expanded and became more open, security practices shifted to ensure protection of the ever growing Internet, its users, and data. Today, the Internet of Things (IoT) is emerging as a new type of network that connects everything to everyone, everywhere. Consequently, the margin of tolerance for security and privacy becomes narrower because a breach may lead to large-scale irreversible damage. One feature that helps alleviate the security concerns is authentication. While different authentication schemes are used in vertical network silos, a common identity and authentication scheme is needed to address the heterogeneity in IoT and to integrate the different protocols present in IoT. We propose in this paper an identity-based authentication scheme for heterogeneous IoT. The correctness of the proposed scheme is tested with the AVISPA tool and results showed that our scheme is immune to masquerade, man-in-the-middle, and replay attacks.", "title": "" }, { "docid": "3eebdb20316c225b839cd310dc173499", "text": "This paper proposes a planar embedded structure pick-up coil current sensor for integrated power electronic modules technology. It has compact size, excellent linearity, stability, noise immunity and wide bandwidth without adding significant losses or parasitics. Preliminary test results and discussions are presented in this paper.", "title": "" }, { "docid": "67a958a34084061e3bcd7964790879c4", "text": "Researchers spent lots of time in searching published articles relevant to their project. Though having similar interest in projects researches perform individual and time overwhelming searches. But researchers are unable to control the results obtained from earlier search process, whereas they can share the results afterwards. We propose a research paper recommender system by enhancing existing search engines with recommendations based on preceding searches performed by others researchers that avert time absorbing searches. Top-k query algorithm retrieves best answers from a potentially large record set so that we find the most accurate records from the given record set that matches the filtering keywords. KeywordsRecommendation System, Personalization, Profile, Top-k query, Steiner Tree", "title": "" }, { "docid": "de7d29c7e11445e836bd04c003443c67", "text": "Logistic regression with `1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale `1-regularized logistic regression problems. 
Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method, that uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.", "title": "" }, { "docid": "0b5431e668791d180239849c53faa7f7", "text": "Crowdfunding is quickly emerging as an alternative to traditional methods of funding new products. In a crowdfunding campaign, a seller solicits financial contributions from a crowd, usually in the form of pre-buying an unrealized product, and commits to producing the product if the total amount pledged is above a certain threshold. We provide a model of crowdfunding in which consumers arrive sequentially and make decisions about whether to pledge or not. Pledging is not costless, and hence consumers would prefer not to pledge if they think the campaign will not succeed. This can lead to cascades where a campaign fails to raise the required amount even though there are enough consumers who want the product. The paper introduces a novel stochastic process --- anticipating random walks --- to analyze this problem. The analysis helps explain why some campaigns fail and some do not, and provides guidelines about how sellers should design their campaigns in order to maximize their chances of success. More broadly, Anticipating Random Walks can also find application in settings where agents make decisions sequentially and these decisions are not just affected by past actions of others, but also by how they will impact the decisions of future actors as well.", "title": "" }, { "docid": "2d615aa63ff115a1e9d511456000c226", "text": "The face mask presentation attack introduces a greater threat to the face recognition system. With the evolving technology in generating both 2D and 3D masks in a more sophisticated, realistic and cost effective manner encloses the face recognition system to more challenging vulnerabilities. In this paper, we present a novel Presentation Attack Detection (PAD) scheme that explores both global (i.e. face) and local (i.e. periocular or eye) region to accurately identify the presence of both 2D and 3D face masks. The proposed PAD algorithm is based on both Binarized Statistical Image Features (BSIF) and Local Binary Patterns (LBP) that can capture a prominent micro-texture features. The linear Support Vector Machine (SVM) is then trained independently on these two features that are applied on both local and global region to obtain the comparison scores. We then combine these scores using the weighted sum rule before making the decision about a normal (or real or live) or an artefact (or spoof) face. 
Extensive experiments are carried out on two publicly available databases for 2D and 3D face masks namely: CASIA face spoof database and 3DMAD shows the efficacy of the proposed scheme when compared with well-established state-of-the-art techniques.", "title": "" }, { "docid": "aaba4377acbd22cbc52681d4d15bf9af", "text": "This paper presents a new human body communication (HBC) technique that employs magnetic resonance for data transfer in wireless body-area networks (BANs). Unlike electric field HBC (eHBC) links, which do not necessarily travel well through many biological tissues, the proposed magnetic HBC (mHBC) link easily travels through tissue, offering significantly reduced path loss and, as a result, reduced transceiver power consumption. In this paper the proposed mHBC concept is validated via finite element method simulations and measurements. It is demonstrated that path loss across the body under various postures varies from 10-20 dB, which is significantly lower than alternative BAN techniques.", "title": "" }, { "docid": "37148a1c4e16edeac5f8fb082ea3dc70", "text": "Familial aggregation and the effect of parenting styles on three dispositions toward ridicule and being laughed at were tested. Nearly 100 families (parents, their adult children, and their siblings) completed subjective questionnaires to assess the presence of gelotophobia (the fear of being laughed at), gelotophilia (the joy of being laughed at), and katagelasticism (the joy of laughing at others). A positive relationship between fear of being laughed at in children and their parents was found. Results for gelotophilia were similar but numerically lower; if split by gender of the adult child, correlations to the mother’s gelotophilia exceeded those of the father. Katagelasticism arose independently from the scores in the parents but was robustly related to greater katagelasticism in the children’s siblings. Gelotophobes remembered punishment (especially from the mother), lower warmth and higher control from their parents (this was also found in the parents’ recollections of their parenting style). The incidence of gelotophilia was unrelated to specific parenting styles, and katagelasticism exhibited only weak relations with punishment. The study suggests a specific pattern in the relation of the three dispositions within families and argues for a strong impact of parenting styles on gelotophobia but less so for gelotophilia and katagelasticism. This manuscript was published as: Harzer, C., & Ruch, W. (2012). When the job is a calling: The role of applying one’s signature strengths at work. Journal of Positive Psychology, 7, 362-371.
The present study investigates the role of applying the individual signature strengths at work for positive experiences at work (i.e., job satisfaction, pleasure, engagement, meaning) and calling. A sample of 111 employees from various occupations completed measures on character strengths, positive experiences at work, and calling. Co-workers (N = 111) rated the applicability of character strengths at work. Correlations between applicability of character strengths and positive experiences at work decreased with intra-individual centrality of strengths (ranked strengths from the highest to the lowest). Level of positive experiences and calling were higher when four to seven signature strengths were applied at work compared to less than four. Positive experiences partially mediated the effect of the number of applied signature strengths on calling. Implications for further research and practice will be discussed.", "title": "" }, { "docid": "0150caaaa121afdbf04dbf496d3770c3", "text": "The use of interactive technologies to aid in the implementation of smart cities has a significant potential to support disabled users in performing their activities as citizens. In this study, we present an investigation of the accessibility of a sample of 10 mobile Android™ applications of Brazilian municipalities, two from each of the five big geographical regions of the country, focusing especially on users with visual disabilities. The results showed that many of the applications were not in accordance with accessibility guidelines, with an average of 57 instances of violations and an average of 11.6 different criteria violated per application. The main problems included issues like not addressing labelling of non-textual content, headings, identifying user location, colour contrast, enabling users to interact using screen reader gestures, focus visibility and lack of adaptation of text contained in image. Although the growth in mobile applications for has boosted the possibilities aligned with the principles of smart cities, there is a strong need for including accessibility in the design of such applications in order for disabled people to benefit from the potential they can have for their lives.", "title": "" }, { "docid": "23d61c3396d49e223485baa1c66b8eab", "text": "Of the different branches of indoor localization research, WiFi fingerprinting has drawn significant attention over the past decade. These localization systems function by comparing WiFi received signal strength indicator (RSSI) and a pre-established location-specific fingerprint map.
However, due to the time-variant wireless signal strength, the RSSI fingerprint map needs to be calibrated periodically, incurring high labor and time costs. In addition, biased RSSI measurements across devices along with transmission power control techniques of WiFi routers further undermine the fidelity of existing fingerprint-based localization systems. To remedy these problems, we propose GradIent FingerprinTing (GIFT) which leverages a more stable RSSI gradient. GIFT first builds a gradient-based fingerprint map (Gmap) by comparing absolute RSSI values at nearby positions, and then runs an online extended particle filter (EPF) to localize the user/device. By incorporating Gmap, GIFT is more adaptive to the time-variant RSSI in indoor environments, thus effectively reducing the overhead of fingerprint map calibration. We implemented GIFT on Android smartphones and tablets, and conducted extensive experiments in a five-story campus building. GIFT is shown to achieve an 80 percentile accuracy of 5.6 m with dynamic WiFi signals.", "title": "" }, { "docid": "94b8aeb8454b05a7916daf0f0b57ee8b", "text": "Accumulating evidence suggests that neuroinflammation affecting microglia plays an important role in the etiology of schizophrenia, and appropriate control of microglial activation may be a promising therapeutic strategy for schizophrenia. Minocycline, a second-generation tetracycline that inhibits microglial activation, has been shown to have a neuroprotective effect in various models of neurodegenerative disease, including anti-inflammatory, antioxidant, and antiapoptotic properties, and an ability to modulate glutamate-induced excitotoxicity. Given that these mechanisms overlap with neuropathologic pathways, minocycline may have a potential role in the adjuvant treatment of schizophrenia, and improve its negative symptoms. Here, we review the relevant studies of minocycline, ranging from preclinical research to human clinical trials.", "title": "" }, { "docid": "0d9affda4d9f7089d76a492676ab3f9e", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR' s Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR' s Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. The American Political Science Review is published by American Political Science Association. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/joumals/apsa.html.", "title": "" } ]
scidocsrr
a85ea10e35bf4bd4bf721a360a22fdaa
Mechanism-Aware Neural Machine for Dialogue Response Generation
[ { "docid": "5132cf4fdbe55a47214f66738599df78", "text": "Users may strive to formulate an adequate textual query for their information need. Search engines assist the users by presenting query suggestions. To preserve the original search intent, suggestions should be context-aware and account for the previous queries issued by the user. Achieving context awareness is challenging due to data sparsity. We present a novel hierarchical recurrent encoder-decoder architecture that makes possible to account for sequences of previous queries of arbitrary lengths. As a result, our suggestions are sensitive to the order of queries in the context while avoiding data sparsity. Additionally, our model can suggest for rare, or long-tail, queries. The produced suggestions are synthetic and are sampled one word at a time, using computationally cheap decoding techniques. This is in contrast to current synthetic suggestion models relying upon machine learning pipelines and hand-engineered feature sets. Results show that our model outperforms existing context-aware approaches in a next query prediction setting. In addition to query suggestion, our architecture is general enough to be used in a variety of other applications.", "title": "" }, { "docid": "af45e4aa653af4e2f2ece29f965aaafc", "text": "We use Reinforcement Learning (RL) to learn question-answering dialogue policies for a real-world application. We analyze a corpus of interactions of museum visitors with two virtual characters that serve as guides at the Museum of Science in Boston, in order to build a realistic model of user behavior when interacting with these characters. A simulated user is built based on this model and used for learning the dialogue policy of the virtual characters using RL. Our learned policy outperforms two baselines (including the original dialogue policy that was used for collecting the corpus) in a simulation setting.", "title": "" }, { "docid": "7e06f62814a2aba7ddaff47af62c13b4", "text": "Natural language conversation is widely regarded as a highly difficult problem, which is usually attacked with either rule-based or learning-based models. In this paper we propose a retrieval-based automatic response model for short-text conversation, to exploit the vast amount of short conversation instances available on social media. For this purpose we introduce a dataset of short-text conversation based on the real-world instances from Sina Weibo (a popular Chinese microblog service), which will be soon released to public. This dataset provides rich collection of instances for the research on finding natural and relevant short responses to a given short text, and useful for both training and testing of conversation models. This dataset consists of both naturally formed conversations, manually labeled data, and a large repository of candidate responses. Our preliminary experiments demonstrate that the simple retrieval-based conversation model performs reasonably well when combined with the rich instances in our dataset.", "title": "" } ]
[ { "docid": "bdb761f7af69d850e6f9aa91ce2b9aa1", "text": "Stencil printing remains the technology route of choice for flip chip bumping because of its economical advantages over traditionally costly evaporation and electroplating processes. This paper provides the first research results on stencil printing of 80 μm and 60 μm pitch peripheral array configurations with Type 7 Sn63/Pb37 solder paste. In specific, the paste particle size ranges from 2 μm to 11μm with an average particle size of 6.5 μm taken into account for aperture packing considerations. Furthermore, the present study unveils the determining role of stencil design and paste characteristics on the final bumping results. The limitations of stencil design are discussed and guidelines for printing improvement are given. Printing of Type 7 solder paste has yielded promising results. Solder bump deposits of 25 μm and 42 μm have been demonstrated on 80 μm pitch rectangular and round pads, respectively. Stencil printing challenges at 60 μm pitch peripheral arrays are also discussed.", "title": "" }, { "docid": "27a8ec0dc0f4ad0ae67c2a75c25c4553", "text": "Although the concept of industrial cobots dates back to 1999, most present day hybrid human-machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human-robot manufacturing cell for homokinetic joint assembly. The robot alternates active and passive behaviours during assembly, to lighten the burden on the operator in the first case, and to comply to his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position (and not torque) controlled robots, common in the industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Besides, a complete risk analysis indicates that the proposed setup is compatible with the safety standards, and could be certified.", "title": "" }, { "docid": "0b437c0fc573c2f9d368cf501678b0a8", "text": "Sexual selection is the mechanism that favors an increase in the frequency of alleles associated with reproduction (Darwin, 1871). Darwin distinguished sexual selection from natural selection, but today most evolutionary scientists combine the two concepts under the name, natural selection. Sexual selection is composed of intrasexual competition (competition between members of the same sex for sexual access to members of the opposite sex) and intersexual selection (differential mate choice of members of the opposite sex). Focusing mainly on precopulatory adaptations associated with intrasexual competition and intersexual selection, postcopulatory sexual selection was largely ignored even a century after the presentation of sexual selection theory. Parker (1970) was the first to recognize that male–male competition may continue even after the initiation of copulation when males compete for fertilizations. More recently, Thornhill (1983) and others (e.g. Eberhard, 1996) recognized that intersexual selection may also continue after the initiation of copulation when a female biases paternity between two or more males’ sperm. The competition between males for fertilization of a single female’s ova is known as sperm competition (Parker, 1970), and the selection of sperm from two or more males by a single female is known as cryptic female choice (Eberhard, 1996; Thornhill, 1983). 
Although sperm competition and cryptic female choice together compose postcopulatory sexual selection (see Table 6.1), sperm competition is often used in reference to both processes (e.g. Baker & Bellis, 1995; Birkhead & Møller, 1998; Simmons, 2001; Shackelford, Pound, & Goetz, 2005). In this chapter, we review the current state of knowledge regarding human sperm competition (and see Shackelford et al., 2005).", "title": "" }, { "docid": "58a016629de2a2556fae9ca3fa81040a", "text": "This paper studies a type of image priors that are constructed implicitly through the alternating direction method of multiplier (ADMM) algorithm, called the algorithm-induced prior. Different from classical image priors which are defined before running the reconstruction algorithm, algorithm-induced priors are defined by the denoising procedure used to replace one of the two modules in the ADMM algorithm. Since such prior is not explicitly defined, analyzing the performance has been difficult in the past. Focusing on the class of symmetric smoothing filters, this paper presents an explicit expression of the prior induced by the ADMM algorithm. The new prior is reminiscent to the conventional graph Laplacian but with stronger reconstruction performance. It can also be shown that the overall reconstruction has an efficient closed-form implementation if the associated symmetric smoothing filter is low rank. The results are validated with experiments on image inpainting.", "title": "" }, { "docid": "d5e75dda868742ec692eff4f37886d64", "text": "In this paper we present our automated fact checking system demonstration which we developed in order to participate in the Fast and Furious Fact Check challenge. We focused on simple numerical claims such as “population of Germany in 2015 was 80 million” which comprised a quarter of the test instances in the challenge, achieving 68% accuracy. Our system extends previous work on semantic parsing and claim identification to handle temporal expressions and knowledge bases consisting of multiple tables, while relying solely on automatically generated training data. We demonstrate the extensible nature of our system by evaluating it on relations used in previous work. We make our system publicly available so that it can be used and extended by the community.1", "title": "" }, { "docid": "dbca7415a584b3a8b9348c47d5ab2fa4", "text": "The shared nature of the medium in wireless networks makes it easy for an adversary to launch a Wireless Denial of Service (WDoS) attack. Recent studies, demonstrate that such attacks can be very easily accomplished using off-the-shelf equipment. To give a simple example, a malicious node can continually transmit a radio signal in order to block any legitimate access to the medium and/or interfere with reception. This act is called jamming and the malicious nodes are referred to as jammers. Jamming techniques vary from simple ones based on the continual transmission of interference signals, to more sophisticated attacks that aim at exploiting vulnerabilities of the particular protocol used. In this survey, we present a detailed up-to-date discussion on the jamming attacks recorded in the literature. We also describe various techniques proposed for detecting the presence of jammers. Finally, we survey numerous mechanisms which attempt to protect the network from jamming attacks. 
We conclude with a summary and by suggesting future directions.", "title": "" }, { "docid": "6c5062fee45132a1801b5ed77934a350", "text": "The issue of “fake news” has arisen recently as a potential threat to high-quality journalism and well-informed public discourse. The Fake News Challenge was organized in early 2017 to encourage development of machine learning-based classification systems that perform “stance detection” -i.e. identifying whether a particular news headline “agrees” with, “disagrees” with, “discusses,” or is unrelated to a particular news article -in order to allow journalists and others to more easily find and investigate possible instances of “fake news.” We developed several deep neural network-based models to tackle the stance detection problem, ranging from relatively simple feed-forward networks to elaborate recurrent models featuring attention and multiple vocabularies. We ultimately found that an LSTM-based bidirectional conditional encoding model using pre-trained GloVe word embeddings delivered the best performance: greater than 97% classification accuracy on the dev set.", "title": "" }, { "docid": "e808b5d50a4f3326c149a88c8c789c65", "text": "n engl j med 360;21 nejm.org may 21, 2009 2153 online announcements by government agencies but also through informal channels, ranging from press reports to blogs to chat rooms to analyses of Web searches (see box). Collectively, these sources provide a view of global health that is fundamentally different from that yielded by the disease reporting of the traditional public health infrastructure.1 Over the past 15 years, Internet technology has become integral to public health surveillance. Systems using informal electronic information have been credited with reducing the time to recognition of an outbreak, preventing governments from suppressing outbreak information, and facilitating public health responses to outbreaks and emerging diseases. Because Web-based sources frequently contain data not captured through traditional government communication channels, they are useful to public health agencies, including the Global Outbreak Alert and Response Network of the World Health Organization (WHO), which relies on such sources for daily surveillance activities. Early efforts in this area were made by the International Society for Infectious Diseases’ Program for Monitoring Emerging Diseases, or ProMED-mail, which was founded in 1994 and has grown into a large, publicly available reporting system, with more than 45,000 subscribers in 188 countries.2 ProMED uses the Internet to disseminate information on outbreaks by e-mailing and posting case reports, including many gleaned from readers, along with expert commentary. In 1997, the Public Health Agency of Canada, in collaboration with the WHO, created the Global Public Health Intelligence Network (GPHIN), whose software retrieves relevant articles from news aggregators every 15 minutes, using extensive search queries. ProMED and GPHIN played critical roles in informing public health officials of the outbreak of SARS, or severe acute respiratory syndrome, in Guangdong, China, as early as November 2002, by identifying informal reports on the Web through news media and chat-room discussions. 
More recently, the advent of openly available news aggregators and visualization tools has spawned a new generation of disease detection systems. Digital Disease Detection — Harnessing the Web for Public Health Surveillance", "title": "" }, { "docid": "30d191f30f8d0cd0fd0d9b99a440a1df", "text": "Despite their ubiquitous presence, texture-less objects present significant challenges to contemporary visual object detection and localization algorithms. This paper proposes a practical method for the detection and accurate 3D localization of multiple texture-less and rigid objects depicted in RGB-D images. The detection procedure adopts the sliding window paradigm, with an efficient cascade-style evaluation of each window location. A simple pre-filtering is performed first, rapidly rejecting most locations. For each remaining location, a set of candidate templates (i.e. trained object views) is identified with a voting procedure based on hashing, which makes the method's computational complexity largely unaffected by the total number of known objects. The candidate templates are then verified by matching feature points in different modalities. Finally, the approximate object pose associated with each detected template is used as a starting point for a stochastic optimization procedure that estimates accurate 3D pose. Experimental evaluation shows that the proposed method yields a recognition rate comparable to the state of the art, while its complexity is sub-linear in the number of templates.", "title": "" }, { "docid": "706d2f0fd76eefe8eba145b199848d6f", "text": "Maritime domain awareness is critical for protecting sea lanes, ports, harbors, offshore structures like oil and gas rigs and other types of critical infrastructure against common threats and illegal activities. Typical examples range from smuggling of drugs and weapons, human trafficking and piracy all the way to terror attacks. Limited surveillance resources constrain maritime domain awareness and compromise full security coverage at all times. This situation calls for innovative intelligent systems for interactive situation analysis to assist marine authorities and security personal in their routine surveillance operations. In this article, we propose a novel situation analysis approach to analyze marine traffic data and differentiate various scenarios of vessel engagement for the purpose of detecting anomalies of interest for marine vessels that operate over some period of time in relative proximity to each other. We consider such scenarios as probabilistic processes and analyze complex vessel trajectories using machine learning to model common patterns. Specifically, we represent patterns as left-to-right Hidden Markov Models and classify them using Support Vector Machines. To differentiate suspicious activities from unobjectionable behavior, we explore fusion of data and information, including kinematic features, geospatial features, contextual information and maritime domain knowledge. Our experimental evaluation shows the effectiveness of the proposed approach using comprehensive real-world vessel tracking data from coastal waters of North America.", "title": "" }, { "docid": "87c17ce9b4bd78f3be037fedf7e558e3", "text": "Conversational Memory Network: To classify emotion of utterance ui, corresponding histories (hista and histb) are taken. Each history, histλ, contains the preceding K utterances by person Pλ.
Histories are modeled into memories and utilized as follows. Memory Representation: Memory representation Mλ = [mλ1, ..., mλK] for histλ is generated using a GRU, λ ∈ {a, b}. Memory Input: Attention mechanism is used to read Mλ. Relevance of each memory mλ's context with ui is computed using a match operation, pλ = softmax(qi · mλ), qi = B · ui (1). Memory Output: Weighted combination of memories is calculated using attention scores, oλ = M′λ · pλ (2).", "title": "" }, { "docid": "2ffb20d66a0d5cb64442c2707b3155c6", "text": "A botnet is a network of compromised hosts that is under the control of a single, malicious entity, often called the botmaster. We present a system that aims to detect bot-infected machines, independent of any prior information about the command and control channels or propagation vectors, and without requiring multiple infections for correlation. Our system relies on detection models that target the characteristic fact that every bot receives commands from the botmaster to which it responds in a specific way. These detection models are generated automatically from network traffic traces recorded from actual bot instances. We have implemented the proposed approach and demonstrate that it can extract effective detection models for a variety of different bot families. These models are precise in describing the activity of bots and raise very few false positives.", "title": "" }, { "docid": "f2fc6440b95c9ed93f5925672798ae2d", "text": "This paper presents a standalone 5.6 nV/√Hz chopper op-amp that operates from a 2.1-5.5 V supply. Frequency compensation is achieved in a power-and area-efficient manner by using a current attenuator and a dummy differential output. As a result, the overall op-amp only consumes 1.4 mA supply current and 1.26 mm² die area. Up-modulated chopper ripple is suppressed by a local feedback technique, called auto correction feedback (ACFB). The charge injection of the input chopping switches can cause residual offset voltages, especially with the wider switches needed to reduce thermal noise. By employing an adaptive clock boosting technique with NMOS input switches, the amount of charge injection is minimized and kept constant as the input common-mode voltage changes. This results in a 0.5 μV maximum offset and 0.015 μV/°C maximum drift over the amplifier's entire rail-to-rail input common-mode range and from -40 °C to 125 °C. The design is implemented in a 0.35 μm CMOS process augmented by 5 V CMOS transistors.", "title": "" }, { "docid": "dfc6455cb7c12037faeb8c02c0027570", "text": "This paper proposes efficient and powerful deep networks for action prediction from partially observed videos containing temporally incomplete action executions. Different from after-the-fact action recognition, action prediction task requires action labels to be predicted from these partially observed videos. Our approach exploits abundant sequential context information to enrich the feature representations of partial videos. We reconstruct missing information in the features extracted from partial videos by learning from fully observed action videos. The amount of the information is temporally ordered for the purpose of modeling temporal orderings of action segments. Label information is also used to better separate the learned features of different categories. We develop a new learning formulation that enables efficient model training.
Extensive experimental results on UCF101, Sports-1M and BIT datasets demonstrate that our approach remarkably outperforms state-of-the-art methods, and is up to 300x faster than these methods. Results also show that actions differ in their prediction characteristics, some actions can be correctly predicted even though only the beginning 10% portion of videos is observed.", "title": "" }, { "docid": "3deae5f2d776261bd11a4bb76f945b74", "text": "We present a discussion forum assistant based on deep recurrent neural networks (RNNs). The assistant is trained to perform three different tasks when faced with a question from a user. Firstly, to recommend related posts. Secondly, to recommend other users that might be able to help. Thirdly, it recommends other channels in the forum where people may discuss related topics. Our recurrent forum assistant is evaluated experimentally by prediction accuracy for the end–to–end trainable parts, as well as by performing an end-user study. We conclude that the model generalizes well, and is helpful for the users.", "title": "" }, { "docid": "67ef359b63bf9fc1e1351c003d254c0b", "text": "AIM\nThis paper is a report of selected findings from a study exploring the relationship between belongingness and placement experiences of preregistration nursing students.\n\n\nBACKGROUND\nStaff-student relationships are an important influence on students' experiences of belongingness and their clinical learning. The need to belong is universal and pervasive, exerting a powerful influence on thought processes, emotions, behaviour, health and happiness. People deprived of belongingness are more likely to experience diminished self-esteem, increased stress and anxiety, depression and a decrease in general well-being. Nursing students' motivation and capacity to learn, self-concept, confidence, the extent to which they are willing to question or conform to poor practice and their future career decisions are influenced by the extent to which they experience belongingness.\n\n\nMETHOD\nDuring 2006, 18 third year students from two Australian universities and one United Kingdom university participated in in-depth semi-structured interviews. Data were analysed thematically.\n\n\nFINDINGS\nParticipants described placement experiences spanning a continuum from those promoting a high degree of belongingness to those provoking intense feelings of alienation. Staff-student relationships (including receptiveness, inclusion/exclusion, legitimization of the student role, recognition and appreciation, challenge and support) were the most important influence on students' sense of belonging and learning. Similarities between sites were remarkable, despite the differences in healthcare and higher education systems.\n\n\nCONCLUSION\nStaff-student relationships are key to students' experience of belongingness. Understanding the types of interactions and behaviours that facilitate or impede students' belongingness and learning are essential to the creation of positive clinical experiences.", "title": "" }, { "docid": "5ae890862d844ce03359624c3cb2012b", "text": "Spend your time even for only few minutes to read a book. Reading a book will never reduce and waste your time to be useless. Reading, for some people become a need that is to do every day such as spending time for eating. Now, what about you? Do you like to read a book? Now, we will show you a new book enPDFd software architecture in practice second edition that can be a new way to explore the knowledge. 
When reading this book, you can get one thing to always remember in every reading time, even step by step.", "title": "" }, { "docid": "b19fb7f7471d3565e79dbaab3572bb4d", "text": "Self-enucleation or oedipism is a specific manifestation of psychiatric illness distinct from the milder forms of self-inflicted ocular injury. In this article, we discuss the previously unreported medical complication of subarachnoid hemorrhage accompanying self-enucleation. The diagnosis was suspected from the patient's history and was confirmed by computed tomographic scan of the head. This complication may be easily missed in the overtly psychotic patient. Specific steps in the medical management of self-enucleation are discussed, and medical complications of self-enucleation are reviewed.", "title": "" }, { "docid": "968c0de61cbd45e04155ecfc6eaf6891", "text": "An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-ofthe-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model’s learned saliency and entailment skills.", "title": "" }, { "docid": "4c0427bd87ef200484f0a510e8acb0de", "text": "Recent deep learning (DL) models are moving more and more to dynamic neural network (NN) architectures, where the NN structure changes for every data sample. However, existing DL programming models are inefficient in handling dynamic network architectures because of: (1) substantial overhead caused by repeating dataflow graph construction and processing every example; (2) difficulties in batched execution of multiple samples; (3) inability to incorporate graph optimization techniques such as those used in static graphs. In this paper, we present “Cavs”, a runtime system that overcomes these bottlenecks and achieves efficient training and inference of dynamic NNs. Cavs represents a dynamic NN as a static vertex function F and a dynamic instance-specific graph G. It avoids the overhead of repeated graph construction by only declaring and constructing F once, and allows for the use of static graph optimization techniques on pre-defined operations in F . Cavs performs training and inference by scheduling the execution of F following the dependencies in G, hence naturally exposing batched execution opportunities over different samples. Experiments comparing Cavs to state-of-the-art frameworks for dynamic NNs (TensorFlow Fold, PyTorch and DyNet) demonstrate the efficacy of our approach: Cavs achieves a near one order of magnitude speedup on training of dynamic NN architectures, and ablations verify the effectiveness of our proposed design and optimizations.", "title": "" } ]
scidocsrr
fe29d7cd82b7c04669406cb95c494ed4
Opponent Modeling in Deep Reinforcement Learning
[ { "docid": "d65ccb1890bdc597c19d11abad6ae7af", "text": "The traditional view of agent modelling is to infer the explicit parameters of another agent’s strategy (i.e., their probability of taking each action in each situation). Unfortunately, in complex domains with high dimensional strategy spaces, modelling every parameter often requires a prohibitive number of observations. Furthermore, given a model of such a strategy, computing a response strategy that is robust to modelling error may be impractical to compute online. Instead, we propose an implicit modelling framework where agents aim to estimate the utility of a fixed portfolio of pre-computed strategies. Using the domain of heads-up limit Texas hold’em poker, this work describes an end-to-end approach for building an implicit modelling agent. We compute robust response strategies, show how to select strategies for the portfolio, and apply existing variance reduction and online learning techniques to dynamically adapt the agent’s strategy to its opponent. We validate the approach by showing that our implicit modelling agent would have won the heads-up limit opponent exploitation event in the 2011 Annual Computer Poker Competition.", "title": "" }, { "docid": "ff140197e5f96ca0f5837f2774c1825f", "text": "When an opponent with a stationary and stochastic policy is encountered in a twoplayer competitive game, model-free Reinforcement Learning (RL) techniques such as Q-learning and Sarsa(λ) can be used to learn near-optimal counter strategies given enough time. When an agent has learned such counter strategies against multiple diverse opponents, it is not trivial to decide which one to use when a new unidentified opponent is encountered. Opponent modeling provides a sound method for accomplishing this in the case where a policy has already been learned against the new opponent; the policy corresponding to the most likely opponent model can be employed. When a new opponent has never been encountered previously, an appropriate policy may not be available. The proposed solution is to use model-based RL methods in conjunction with separate environment and opponent models. The model-based RL algorithms used were Dyna-Q and value iteration (VI). The environment model allows an agent to reuse general knowledge about the game that is not tied to a specific opponent. Opponent models that are evaluated include Markov chains, Mixtures of Markov chains, and Latent Dirichlet Allocation on Markov chains. The latter two models are latent variable models, which make predictions for new opponents by estimating their latent (unobserved) parameters. In some situations, I have found that this allows good predictive models to be learned quickly for new opponents given data from previous opponents. I show cases where these models have low predictive perplexity (high accuracy) for novel opponents. In theory, these opponent models would enable modelbased RL agents to learn best response strategies in conjunction with an environment model, but converting prediction accuracy to actual game performance is non-trivial. This was not achieved with these methods for the domain, which is a two-player soccer game based on a physics simulation. Model-based RL did allow for faster learning in the game, but did not take full advantage of the opponent models. The quality of the environment model seems to be a critical factor in this situation.", "title": "" } ]
[ { "docid": "fb204d2f9965d17ed87c8fe8d1f22cdd", "text": "Are metaphors departures from a norm of literalness? According to classical rhetoric and most later theories, including Gricean pragmatics, they are. No, metaphors are wholly normal, say the Romantic critics of classical rhetoric and a variety of modern scholars ranging from hard-nosed cognitive scientists to postmodern critical theorists. On the metaphor-as-normal side, there is a broad contrast between those, like the cognitive linguists Lakoff, Talmy or Fauconnier, who see metaphor as pervasive in language because it is constitutive of human thought, and those, like the psycholinguists Glucksberg or Kintsch, or relevance theorists, who describe metaphor as emerging in the process of verbal communication. 1 While metaphor cannot be both wholly normal and a departure from normal language use, there might be distinct, though related, metaphorical phenomena at the level of thought, on the one hand, and verbal communication, on the other. This possibility is being explored (for instance) in the work of Raymond Gibbs. 2 In this chapter, we focus on the relevance-theoretic approach to linguistic metaphors.", "title": "" }, { "docid": "14d68a45e54b07efb15ef950ba92d7bc", "text": "We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout. Instead of learning a direct mapping from text to image, our algorithm decomposes the generation process into multiple steps, in which it first constructs a semantic layout from the text by the layout generator and converts the layout to an image by the image generator. The proposed layout generator progressively constructs a semantic layout in a coarse-to-fine manner by generating object bounding boxes and refining each box by estimating object shapes inside the box. The image generator synthesizes an image conditioned on the inferred semantic layout, which provides a useful semantic structure of an image matching with the text description. Our model not only generates semantically more meaningful images, but also allows automatic annotation of generated images and user-controlled generation process by modifying the generated scene layout. We demonstrate the capability of the proposed model on challenging MS-COCO dataset and show that the model can substantially improve the image quality, interpretability of output and semantic alignment to input text over existing approaches.", "title": "" }, { "docid": "ddfd02c12c42edb2607a6f193f4c242b", "text": "We design the first Leakage-Resilient Identity-Based Encryption (LR-IBE) systems from static assumptions in the standard model. We derive these schemes by applying a hash proof technique from Alwen et.al. (Eurocrypt '10) to variants of the existing IBE schemes of Boneh-Boyen, Waters, and Lewko-Waters. As a result, we achieve leakage-resilience under the respective static assumptions of the original systems in the standard model, while also preserving the efficiency of the original schemes. Moreover, our results extend to the Bounded Retrieval Model (BRM), yielding the first regular and identity-based BRM encryption schemes from static assumptions in the standard model.\n The first LR-IBE system, based on Boneh-Boyen IBE, is only selectively secure under the simple Decisional Bilinear Diffie-Hellman assumption (DBDH), and serves as a stepping stone to our second fully secure construction. This construction is based on Waters IBE, and also relies on the simple DBDH. 
Finally, the third system is based on Lewko-Waters IBE, and achieves full security with shorter public parameters, but is based on three static assumptions related to composite order bilinear groups.", "title": "" }, { "docid": "5519eea017d8f69804060f5e40748b1a", "text": "The nonlinear Fourier transform is a transmission and signal processing technique that makes positive use of the Kerr nonlinearity in optical fibre channels. I will overview recent advances and some of challenges in this field.", "title": "" }, { "docid": "0185bbf151e3de2cc038420380a3e877", "text": "Powder-based additive manufacturing (AM) technologies have been evaluated for use in different fields of application (aerospace, medical, etc.). Ideally, AM parts should be at least equivalent, or preferably better quality than conventionally produced parts. Manufacturing defects and their effects on the quality and performance of AM parts are a currently a major concern. It is essential to understand the defect types, their generation mechanisms, and the detection methodologies for mechanical properties evaluation and quality control. We consider the various types of microstructural features or defects, their generation mechanisms, their effect on bulk properties and the capability of existing characterisation methodologies for powder based AM parts in this work. Methods of in-situ non-destructive evaluation and the influence of defects on mechanical properties and design considerations are also reviewed. Together, these provide a framework to understand the relevant machine and material parameters, optimise the process and production, and select appropriate characterisation methods.", "title": "" }, { "docid": "698cc50558811c7af44d40ba7dbdfe6f", "text": "We show that the demand for news varies with the perceived affinity of the news organization to the consumer’s political preferences. In an experimental setting, conservatives and Republicans preferred to read news reports attributed to Fox News and to avoid news from CNN and NPR. Democrats and liberals exhibited exactly the opposite syndrome—dividing their attention equally between CNN and NPR, but avoiding Fox News. This pattern of selective exposure based on partisan affinity held not only for news coverage of controversial issues but also for relatively ‘‘soft’’ subjects such as crime and travel. The tendency to select news based on anticipated agreement was also strengthened among more politically engaged partisans. Overall, these results suggest that the further proliferation of new media and enhanced media choices may contribute to the further polarization of the news audience.", "title": "" }, { "docid": "e71bd8a43806651b412d00848821a517", "text": "Techniques for procedural generation of the graphics content have seen widespread use in multimedia over the past thirty years. It is still an active area of research with many applications in 3D modeling software, video games, and films. This thesis focuses on algorithmic generation of virtual terrains in real-time and their real-time visualization. We provide an overview of available approaches and present an extendable library for procedural terrain synthesis.", "title": "" }, { "docid": "5dfbe9036bc9fd63edc53992daf1858d", "text": "The paper reviews applications of data mining in manufacturing engineering, in particular production processes, operations, fault detection, maintenance, decision support, and product quality improvement. 
Customer relationship management, information integration aspects, and standardization are also briefly discussed. This review is focused on demonstrating the relevancy of data mining to manufacturing industry, rather than discussing the data mining domain in general. The volume of general data mining literature makes it difficult to gain a precise view of a target area such as manufacturing engineering, which has its own particular needs and requirements for mining applications. This review reveals progressive applications in addition to existing gaps and less considered areas such as manufacturing planning and shop floor control. DOI: 10.1115/1.2194554", "title": "" }, { "docid": "e92ab865f33c7548c21ba99785912d03", "text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.", "title": "" }, { "docid": "f99d0e24dece8b2de287b7d86c483f83", "text": "Recently, the Task Force on Process Mining released the Process Mining Manifesto. The manifesto is supported by 53 organizations and 77 process mining experts contributed to it. The active contributions from end-users, tool vendors, consultants, analysts, and researchers illustrate the growing relevance of process mining as a bridge between data mining and business process modeling. This paper summarizes the manifesto and explains why process mining is a highly relevant, but also very challenging, research area. This way we hope to stimulate the broader ACM SIGKDD community to look at process-centric knowledge discovery.", "title": "" }, { "docid": "97270ca739c7e005da4cab41f19342e7", "text": "Automatic approach for bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task since the bladder usually suffers large variations of appearance and low soft-tissue contrast in CT images. 
In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random fields recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as following: first, we apply our proposed preprocessing method on the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produce final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps performed by our approach present sharper boundaries and more accurate localizations compared with that of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel processing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.", "title": "" }, { "docid": "693dd8eb0370259c4ee5f8553de58443", "text": "Most research in Interactive Storytelling (IS) has sought inspiration in narrative theories issued from contemporary narratology to either identify fundamental concepts or derive formalisms for their implementation. In the former case, the theoretical approach gives raise to empirical solutions, while the latter develops Interactive Storytelling as some form of “computational narratology”, modeled on computational linguistics. In this paper, we review the most frequently cited theories from the perspective of IS research. We discuss in particular the extent to which they can actually inspire IS technologies and highlight key issues for the effective use of narratology in IS.", "title": "" }, { "docid": "fba1a1296d8f3e22248e45cbe33263b5", "text": "Wi-Fi has become the de facto wireless technology for achieving short- to medium-range device connectivity. While early attempts to secure this technology have been proved inadequate in several respects, the current more robust security amendments will inevitably get outperformed in the future, too. In any case, several security vulnerabilities have been spotted in virtually any version of the protocol rendering the integration of external protection mechanisms a necessity. In this context, the contribution of this paper is multifold. First, it gathers, categorizes, thoroughly evaluates the most popular attacks on 802.11 and analyzes their signatures. Second, it offers a publicly available dataset containing a rich blend of normal and attack traffic against 802.11 networks. A quite extensive first-hand evaluation of this dataset using several machine learning algorithms and data features is also provided. 
Given that to the best of our knowledge the literature lacks such a rich and well-tailored dataset, it is anticipated that the results of the work at hand will offer a solid basis for intrusion detection in the current as well as next-generation wireless networks.", "title": "" }, { "docid": "5bb9ca3c14dd84f1533789c3fe4bbd10", "text": "The field of spondyloarthritis (SpA) has experienced major progress in the last decade, especially with regard to new treatments, earlier diagnosis, imaging technology and a better definition of outcome parameters for clinical trials. In the present work, the Assessment in SpondyloArthritis international Society (ASAS) provides a comprehensive handbook on the most relevant aspects for the assessments of spondyloarthritis, covering classification criteria, MRI and x rays for sacroiliac joints and the spine, a complete set of all measurements relevant for clinical trials and international recommendations for the management of SpA. The handbook focuses at this time on axial SpA, with ankylosing spondylitis (AS) being the prototype disease, for which recent progress has been faster than in peripheral SpA. The target audience includes rheumatologists, trial methodologists and any doctor and/or medical student interested in SpA. The focus of this handbook is on practicality, with many examples of MRI and x ray images, which will help to standardise not only patient care but also the design of clinical studies.", "title": "" }, { "docid": "274186e87674920bfe98044aa0208320", "text": "Message routing in mobile delay tolerant networks inherently relies on the cooperation between nodes. In most existing routing protocols, the participation of nodes in the routing process is taken as granted. However, in reality, nodes can be unwilling to participate. We first show in this paper the impact of the unwillingness of nodes to participate in existing routing protocols through a set of experiments. Results show that in the presence of even a small proportion of nodes that do not forward messages, performance is heavily degraded. We then analyze two major reasons of the unwillingness of nodes to participate, i.e., their rational behavior (also called selfishness) and their wariness of disclosing private mobility information. Our main contribution in this paper is to survey the existing related research works that overcome these two issues. We provide a classification of the existing approaches for protocols that deal with selfish behavior. We then conduct experiments to compare the performance of these strategies for preventing different types of selfish behavior. For protocols that preserve the privacy of users, we classify the existing approaches and provide an analytical comparison of their security guarantees.", "title": "" }, { "docid": "6e6237011de5348d9586fb70941b4037", "text": "BACKGROUND\nAlthough clinicians frequently add a second medication to an initial, ineffective antidepressant drug, no randomized controlled trial has compared the efficacy of this approach.\n\n\nMETHODS\nWe randomly assigned 565 adult outpatients who had nonpsychotic major depressive disorder without remission despite a mean of 11.9 weeks of citalopram therapy (mean final dose, 55 mg per day) to receive sustained-release bupropion (at a dose of up to 400 mg per day) as augmentation and 286 to receive buspirone (at a dose of up to 60 mg per day) as augmentation. 
The primary outcome of remission of symptoms was defined as a score of 7 or less on the 17-item Hamilton Rating Scale for Depression (HRSD-17) at the end of this study; scores were obtained over the telephone by raters blinded to treatment assignment. The 16-item Quick Inventory of Depressive Symptomatology--Self-Report (QIDS-SR-16) was used to determine the secondary outcomes of remission (defined as a score of less than 6 at the end of this study) and response (a reduction in baseline scores of 50 percent or more).\n\n\nRESULTS\nThe sustained-release bupropion group and the buspirone group had similar rates of HRSD-17 remission (29.7 percent and 30.1 percent, respectively), QIDS-SR-16 remission (39.0 percent and 32.9 percent), and QIDS-SR-16 response (31.8 percent and 26.9 percent). Sustained-release bupropion, however, was associated with a greater reduction (from baseline to the end of this study) in QIDS-SR-16 scores than was buspirone (25.3 percent vs. 17.1 percent, P<0.04), a lower QIDS-SR-16 score at the end of this study (8.0 vs. 9.1, P<0.02), and a lower dropout rate due to intolerance (12.5 percent vs. 20.6 percent, P<0.009).\n\n\nCONCLUSIONS\nAugmentation of citalopram with either sustained-release bupropion or buspirone appears to be useful in actual clinical settings. Augmentation with sustained-release bupropion does have certain advantages, including a greater reduction in the number and severity of symptoms and fewer side effects and adverse events. (ClinicalTrials.gov number, NCT00021528.).", "title": "" }, { "docid": "81a44de6f529f09e78ade5384c9b1527", "text": "Code Blue is an emergency code used in hospitals to indicate when a patient goes into cardiac arrest and needs resuscitation. When Code Blue is called, an on-call medical team staffed by physicians and nurses is paged and rushes in to try to save the patient's life. It is an intense, chaotic, and resource-intensive process, and despite the considerable effort, survival rates are still less than 20% [4]. Research indicates that patients actually start showing clinical signs of deterioration some time before going into cardiac arrest [1][2[][3], making early prediction, and possibly intervention, feasible. In this paper, we describe our work, in partnership with NorthShore University HealthSystem, that preemptively flags patients who are likely to go into cardiac arrest, using signals extracted from demographic information, hospitalization history, vitals and laboratory measurements in patient-level electronic medical records. We find that early prediction of Code Blue is possible and when compared with state of the art existing method used by hospitals (MEWS - Modified Early Warning Score)[4], our methods perform significantly better. Based on these results, this system is now being considered for deployment in hospital settings.", "title": "" }, { "docid": "8bb5a38908446ca4e6acb4d65c4c817c", "text": "Column-oriented database systems have been a real game changer for the industry in recent years. Highly tuned and performant systems have evolved that provide users with the possibility of answering ad hoc queries over large datasets in an interactive manner. In this paper we present the column-oriented datastore developed as one of the central components of PowerDrill. It combines the advantages of columnar data layout with other known techniques (such as using composite range partitions) and extensive algorithmic engineering on key data structures. 
The main goal of the latter being to reduce the main memory footprint and to increase the efficiency in processing typical user queries. In this combination we achieve large speed-ups. These enable a highly interactive Web UI where it is common that a single mouse click leads to processing a trillion values in the underlying dataset.", "title": "" }, { "docid": "c75095680818ccc7094e4d53815ef475", "text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.", "title": "" } ]
scidocsrr
03d9544d79a4915d618452782084b08e
Deep Inverse Optimization
[ { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "93f89a636828df50dfe48ffa3e868ea6", "text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.", "title": "" } ]
[ { "docid": "15bad4566e44ed8f9865d3ef179d6df7", "text": "Experiences of social rejection or loss have been described as some of the most \"painful\" experiences that we, as humans, face and perhaps for good reason. Because of our prolonged period of immaturity, the social attachment system may have co-opted the pain system, borrowing the pain signal to prevent the detrimental consequences of social separation. This review summarizes a program of research that has explored the idea that experiences of physical pain and social pain rely on shared neural substrates. First, evidence showing that social pain activates pain-related neural regions is reviewed. Then, studies exploring some of the expected consequences of such a physical pain-social pain overlap are summarized. These studies demonstrate that a) individuals who are more sensitive to one kind of pain are also more sensitive to the other and b) factors that increase or decrease one kind of pain alter the other in a similar manner. Finally, what these shared neural substrates mean for our understanding of socially painful experience is discussed.", "title": "" }, { "docid": "3d3f5b45b939f926d1083bab9015e548", "text": "Industry is facing an era characterised by unpredictable market changes and by a turbulent competitive environment. The key to compete in such a context is to achieve high degrees of responsiveness by means of high flexibility and rapid reconfiguration capabilities. The deployment of modular solutions seems to be part of the answer to face these challenges. Semantic modelling and ontologies may represent the needed knowledge representation to support flexibility and modularity of production systems, when designing a new system or when reconfiguring an existing one. Although numerous ontologies for production systems have been developed in the past years, they mainly focus on discrete manufacturing, while logistics aspects, such as those related to internal logistics and warehousing, have not received the same attention. The paper aims at offering a representation of logistics aspects, reflecting what has become a de-facto standard terminology in industry and among researchers in the field. Such representation is to be used as an extension to the already-existing production systems ontologies that are more focused on manufacturing processes. The paper presents the structure of the hierarchical relations within the examined internal logistics elements, namely Storage and Transporters, structuring them in a series of classes and sub-classes, suggesting also the relationships and the attributes to be considered to complete the modelling. Finally, the paper proposes an industrial example with a miniload system to show how such a modelling of internal logistics elements could be instanced in the real world. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e485aca373cf4543e1a8eeadfa0e6772", "text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. 
Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.", "title": "" }, { "docid": "78b71cd47cb633b8ac47f88a8e26a646", "text": "One of the greatest challenges in speech technology is estimating the speaker’s emotion. Most of the existing approaches concentrate either on audio or text features. In this work, we propose a novel approach for emotion classification of audio conversation based on both speech and text. The novelty in this approach is in the choice of features and the generation of a single feature vector for classification. Our main intention is to increase the accuracy of emotion classification of speech by considering both audio and text features. In this work we use standard methods such as Natural Language Processing, Support Vector Machines, WordNet Affect and SentiWordNet. The dataset for this work have been taken from Semval -2007 and eNTERFACE'05 EMOTION Database. © 2014 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the International Conference on Information and Communication Technologies (ICICT 2014).", "title": "" }, { "docid": "095dbdc1ac804487235cdd0aeffe8233", "text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.", "title": "" }, { "docid": "2a1eb2fa37809bfce258476463af793c", "text": "Parkinson’s disease (PD) is a chronic disease that develops over years and varies dramatically in its clinical manifestations. A preferred strategy to resolve this heterogeneity and thus enable better prognosis and targeted therapies is to segment out more homogeneous patient sub-populations. However, it is challenging to evaluate the clinical similarities among patients because of the longitudinality and temporality of their records. To address this issue, we propose a deep model that directly learns patient similarity from longitudinal and multi-modal patient records with an Recurrent Neural Network (RNN) architecture, which learns the similarity between two longitudinal patient record sequences through dynamically matching temporal patterns in patient sequences. Evaluations on real world patient records demonstrate the promising utility and efficacy of the proposed architecture in personalized predictions.", "title": "" }, { "docid": "cd36cc1b2cb7c2d5a6876a2fb2e7a0bf", "text": "Due to the vast amount of information available nowadays, and the advantages related to the processing of this data, the topics of big data and data science have acquired a great importance in the current research. 
Big data applications are mainly about scalability, which can be achieved via the MapReduce programming model.It is designed to divide the data into several chunks or groups that are processed in parallel, and whose result is “assembled” to provide a single solution. Among different classification paradigms adapted to this new framework, fuzzy rule based classification systems have shown interesting results with a MapReduce approach for big data. It is well known that the performance of these types of systems has a strong dependence on the selection of a good granularity level for the Data Base. However, in the context of MapReduce this parameter is even harder to determine as it can be also related with the number ofMaps chosen for the processing stage. In this paper, we aim at analyzing the interrelation between the number of labels of the fuzzy variables and the scarcity of the data due to the data sampling in MapReduce. Specifically, we consider that as the partitioning of the initial instance set grows, the level of granularity necessary to B Alberto Fernández alberto@decsai.ugr.es Sara del Río srio@decsai.ugr.es Abdullah Bawakid abawakid@kau.edu.sa Francisco Herrera herrera@decsai.ugr.es 1 Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain 2 Faculty of Computing and Information Technology, King Abdulaziz University (KAU), Jeddah, Saudi Arabia", "title": "" }, { "docid": "bcaef01114d689ede6793b98fd316b6d", "text": "Sharing information online via social network sites (SNSs) is at an all-time high, yet research shows that users often exhibit a marked dissatisfaction in using such sites. A compelling explanation for this dichotomy is that users are struggling against their SNS environment in an effort to achieve their preferred levels of privacy for regulating social interactions. Our research investigates users' SNS boundary regulation behavior. This paper presents results from a qualitative interview-based study to identify \"coping mechanisms\" that users devise outside explicit boundary-regulation interface features in order to manage interpersonal boundaries. Our categorization of such mechanisms provides insight into interaction design issues and opportunities for new SNS features.", "title": "" }, { "docid": "6bcfc93a3bee13d2c5416e4cc5663646", "text": "The choice of an adequate object shape representation is critical for efficient grasping and robot manipulation. A good representation has to account for two requirements: it should allow uncertain sensory fusion in a probabilistic way and it should serve as a basis for efficient grasp and motion generation. We consider Gaussian process implicit surface potentials as object shape representations. Sensory observations condition the Gaussian process such that its posterior mean defines an implicit surface which becomes an estimate of the object shape. Uncertain visual, haptic and laser data can equally be fused in the same Gaussian process shape estimate. The resulting implicit surface potential can then be used directly as a basis for a reach and grasp controller, serving as an attractor for the grasp end-effectors and steering the orientation of contact points. Our proposed controller results in a smooth reach and grasp trajectory without strict separation of phases. 
We validate the shape estimation using Gaussian processes in a simulation on randomly sampled shapes and the grasp controller on a real robot with 7DoF arm and 7DoF hand.", "title": "" }, { "docid": "c01dd2ae90781291cb5915957bd42ae1", "text": "Mobile devices have become an important part of our everyday life, harvesting more and more confidential user information. Their portable nature and the great exposure to security attacks, however, call out for stronger authentication mechanisms than simple password-based identification. Biometric authentication techniques have shown potential in this context. Unfortunately, prior approaches are either excessively prone to forgery or have too low accuracy to foster widespread adoption. In this paper, we propose sensor-enhanced keystroke dynamics, a new biometric mechanism to authenticate users typing on mobile devices. The key idea is to characterize the typing behavior of the user via unique sensor features and rely on standard machine learning techniques to perform user authentication. To demonstrate the effectiveness of our approach, we implemented an Android prototype system termed Unagi. Our implementation supports several feature extraction and detection algorithms for evaluation and comparison purposes. Experimental results demonstrate that sensor-enhanced keystroke dynamics can improve the accuracy of recent gestured-based authentication mechanisms (i.e., EER>0.5%) by one order of magnitude, and the accuracy of traditional keystroke dynamics (i.e., EER>7%) by two orders of magnitude.", "title": "" }, { "docid": "186ff95297d0918971374de8f9325eaf", "text": "Article history: Received 4 December 2009 Received in revised form 1 June 2011 Accepted 2 July 2011 Available online 12 July 2011", "title": "" }, { "docid": "83c1d0b0a1edc48ccc051b8848e6703e", "text": "Recently, hashing video contents for fast retrieval has received increasing attention due to the enormous growth of online videos. As the extension of image hashing techniques, traditional video hashing methods mainly focus on seeking the appropriate video features but pay little attention to how the video-specific features can be leveraged to achieve optimal binarization. In this paper, an end-to-end hashing framework, namely Unsupervised Deep Video Hashing (UDVH), is proposed, where feature extraction, balanced code learning and hash function learning are integrated and optimized in a self-taught manner. Particularly, distinguished from previous work, our framework enjoys two novelties: 1) an unsupervised hashing method that integrates the feature clustering and feature binarization, enabling the neighborhood structure to be preserved in the binary space; 2) a smart rotation applied to the video-specific features that are widely spread in the low-dimensional space such that the variance of dimensions can be balanced, thus generating more effective hash codes. Extensive experiments have been performed on two realworld datasets and the results demonstrate its superiority, compared to the state-of-the-art video hashing methods. To bootstrap further developments, the source code will be made publically available.", "title": "" }, { "docid": "cb00fba4374d845da2f7e18c421b07df", "text": "The Internet of Things (IoT) is a new paradigm that combines aspects and technologies coming from different approaches. 
Ubiquitous computing, pervasive computing, Internet Protocol, sensing technologies, communication technologies, and embedded devices are merged together in order to form a system where the real and digital worlds meet and are continuously in symbiotic interaction. The smart object is the building block of the IoT vision. By putting intelligence into everyday objects, they are turned into smart objects able not only to collect information from the environment and interact/control the physical world, but also to be interconnected, to each other, through Internet to exchange data and information. The expected huge number of interconnected devices and the significant amount of available data open new opportunities to create services that will bring tangible benefits to the society, environment, economy and individual citizens. In this paper we present the key features and the driver technologies of IoT. In addition to identifying the application scenarios and the correspondent potential applications, we focus on research challenges and open issues to be faced for the IoT realization in the real world.", "title": "" }, { "docid": "e28438e023fbcbb1c1a7bd2cda3213e1", "text": "Recent studies provide evidence that Quality of Service (QoS) routing can provide increased network utilization compared to routing that is not sensitive to QoS requirements of traffic. However, there are still strong concerns about the increased cost of QoS routing, both in terms of more complex and frequent computations and increased routing protocol overhead. The main goals of this paper are to study these two cost components, and propose solutions that achieve good routing performance with reduced processing cost. First, we identify the parameters that determine the protocol traffic overhead, namely (a) policy for triggering updates, (b) sensitivity of this policy, and (c) clamp down timers that limit the rate of updates. Using simulation, we study the relative significance of these factors and investigate the relationship between routing performance and the amount of update traffic. In addition, we explore a range of design options to reduce the processing cost of QoS routing algorithms, and study their effect on routing performance. Based on the conclusions of these studies, we develop extensions to the basic QoS routing, that can achieve good routing performance with limited update generation rates. The paper also addresses the impact on the results of a number of secondary factors such as topology, high level admission control, and characteristics of network traffic.", "title": "" }, { "docid": "9c98b5467d454ca46116b479f63c2404", "text": "A learning style describes the attitudes and behaviors, which determine an individual’s preferred way of learning. Learning styles are particularly important in educational settings since they may help students and tutors become more self-aware of their strengths and weaknesses as learners. The traditional way to identify learning styles is using a test or questionnaire. Despite being reliable, these instruments present some problems that hinder the learning style identification. Some of these problems include students’ lack of motivation to fill out a questionnaire and lack of self-awareness of their learning preferences. Thus, over the last years, several approaches have been proposed for automatically detecting learning styles, which aim to solve these problems. In this work, we review and analyze current trends in the field of automatic detection of learning styles. 
We present the results of our analysis and discuss some limitations, implications and research gaps that can be helpful to researchers working in the field of learning styles.", "title": "" }, { "docid": "792e72e5dd6f949b8abb10241e516069", "text": "Distributed fiber-optic vibration sensors receive extensive investigation and play a significant role in the sensor panorama. Optical parameters such as light intensity, phase, polarization state, or light frequency will change when external vibration is applied on the sensing fiber. In this paper, various technologies of distributed fiber-optic vibration sensing are reviewed, from interferometric sensing technology, such as Sagnac, Mach-Zehnder, and Michelson, to backscattering-based sensing technology, such as phase-sensitive optical time domain reflectometer, polarization-optical time domain reflectometer, optical frequency domain reflectometer, as well as some combinations of interferometric and backscattering-based techniques. Their operation principles are presented and recent research efforts are also included. Finally, the applications of distributed fiber-optic vibration sensors are summarized, which mainly include structural health monitoring and perimeter security, etc. Overall, distributed fiber-optic vibration sensors possess the advantages of large-scale monitoring, good concealment, excellent flexibility, and immunity to electromagnetic interference, and thus show considerable potential for a variety of practical applications.", "title": "" }, { "docid": "838b599024a14e952145af0c12509e31", "text": "In this paper, a broadband high-power eight-way coaxial waveguide power combiner with axially symmetric structure is proposed. A combination of circuit model and full electromagnetic wave methods is used to simplify the design procedure by increasing the role of the circuit model and, in contrast, reducing the amount of full wave optimization. The presented structure is compact and easy to fabricate. Keeping its return loss greater than 12 dB, the constructed combiner operates within 112% bandwidth from 520 to 1860 MHz.", "title": "" }, { "docid": "f43478501471f5b9b8de429958016b7d", "text": "A growing amount of consumers are making purchases online. Due to this rise in online retail, online credit card fraud is increasingly becoming a common type of theft. Previously used rule based systems are no longer scalable, because fraudsters can adapt their strategies over time. The advantage of using machine learning is that it does not require an expert to design rules which need to be updated periodically. Furthermore, algorithms can adapt to new fraudulent behaviour by retraining on newer transactions. Nevertheless, fraud detection by means of data mining and machine learning comes with a few challenges as well. The very unbalanced nature of the data and the fact that most payment processing companies only process a fragment of the incoming traffic from merchants, makes it hard to detect reliable patterns. Previously done research has focussed mainly on augmenting the data with useful features in order to improve the detectable patterns. These papers have proven that focussing on customer transaction behavior provides the necessary patterns in order to detect fraudulent behavior. In this thesis we propose several bayesian network models which rely on latent representations of fraudulent transactions, non-fraudulent transactions and customers. These representations are learned using unsupervised learning techniques. 
We show that the methods proposed in this thesis significantly outperform state-of-the-art models without using elaborate feature engineering strategies. A portion of this thesis focuses on re-implementing two of these feature engineering strategies in order to support this claim. Results from these experiments show that modeling fraudulent and non-fraudulent transactions individually generates the best performance in terms of classification accuracy. In addition, we focus on varying the dimensions of the latent space in order to assess its effect on performance. Our final results show that a higher dimensional latent space does not necessarily improve the performance of our models.", "title": "" }, { "docid": "7d3bd11696538d6e93ec81f2f385e13f", "text": "The Lattice Boltzmann Equation (LBE) method is reviewed and analyzed. The focus is on the fundamental principles of the approach; its `pros' and `cons' in comparison to other methods of the computational uid dynamics (CFD); and its perspectives as a competitive alternative computational approach for uid dynamics. An excursion into the history, physical background and details of the theory and numerical implementation is made, with special attention paid to the method's advantages, limitations and perspectives to be a useful framework to incorporate molecular interactions for description of complex interfacial phenomena; eÆciency and simplicity for modeling of hydrodynamics, comparing it to the methods, which directly solve for transport equations of macroscopic variables (\\traditional CFD\").", "title": "" }, { "docid": "2baa441b3daf9736154dd19864ec2497", "text": "In some stochastic environments the well-known reinforcement learning algorithm Q-learning performs very poorly. This poor performance is caused by large overestimations of action values. These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value. We introduce an alternative way to approximate the maximum expected value for any set of random variables. The obtained double estimator method is shown to sometimes underestimate rather than overestimate the maximum expected value. We apply the double estimator to Q-learning to construct Double Q-learning, a new off-policy reinforcement learning algorithm. We show the new algorithm converges to the optimal policy and that it performs well in some settings in which Q-learning performs poorly due to its overestimation.", "title": "" } ]
scidocsrr
7795748953b9d3135cde7cdc7e8ff754
Event Detection in Time Series of Mobile Communication Graphs
[ { "docid": "40083241b498dc6ac14de7dcc0b38399", "text": "We report on an automated runtime anomaly detection method at the application layer of multi-node computer systems. Although several network management systems are available in the market, none of them have sufficient capabilities to detect faults in multi-tier Web-based systems with redundancy. We model a Web-based system as a weighted graph, where each node represents a \"service\" and each edge represents a dependency between services. Since the edge weights vary greatly over time, the problem we address is that of anomaly detection from a time sequence of graphs.In our method, we first extract a feature vector from the adjacency matrix that represents the activities of all of the services. The heart of our method is to use the principal eigenvector of the eigenclusters of the graph. Then we derive a probability distribution for an anomaly measure defined for a time-series of directional data derived from the graph sequence. Given a critical probability, the threshold value is adaptively updated using a novel online algorithm.We demonstrate that a fault in a Web application can be automatically detected and the faulty services are identified without using detailed knowledge of the behavior of the system.", "title": "" }, { "docid": "51eb8e36ffbf5854b12859602f7554ef", "text": "Fraud is increasing dramatically with the expansion of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. Methodologies for the detection of fraud are essential if we are to catch fraudsters once fraud prevention has failed. Statistics and machine learning provide effective technologies for fraud detection and have been applied successfully to detect activities such as money laundering, e-commerce credit card fraud, telecommunications fraud and computer intrusion, to name but a few. We describe the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used.", "title": "" }, { "docid": "3c3f3a9d6897510d5d5d3d55c882502c", "text": "Error-tolerant graph matching is a powerful concept that has various applications in pattern recognition and machine vision. In the present paper, a new distance measure on graphs is proposed. It is based on the maximal common subgraph of two graphs. The new measure is superior to edit distance based measures in that no particular edit operations together with their costs need to be defined. It is formally shown that the new distance measure is a metric. Potential algorithms for the efficient computation of the new measure are discussed. q 1998 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "bcd7af5c474d931c0a76b654775396c2", "text": "Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level “option policies” that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.", "title": "" }, { "docid": "106086a4b63a5bfe0554f36c9feff5f5", "text": "It seems uncontroversial that providing feedback after a test, in the form of the correct answer, enhances learning. In real-world educational situations, however, the time available for learning is often constrained-and feedback takes time. We report an experiment in which total time for learning was fixed, thereby creating a trade-off between spending time receiving feedback and spending time on other learning activities. Our results suggest that providing feedback is not universally beneficial. Indeed, under some circumstances, taking time to provide feedback can have a negative net effect on learning. We also found that learners appear to have some insight about the costs of feedback; when they were allowed to control feedback, they often skipped unnecessary feedback in favor of additional retrieval attempts, and they benefited from doing so. These results underscore the importance of considering the costs and benefits of interventions designed to enhance learning.", "title": "" }, { "docid": "49d533bf41f18bc96c404bb9a8bd12ae", "text": "A back-cavity shielded bow-tie antenna system working at 900MHz center frequency for ground-coupled GPR application is investigated numerically and experimentally in this paper. Bow-tie geometrical structure is modified for a compact design and back-cavity assembly. A layer of absorber is employed to overcome the back reflection by omni-directional radiation pattern of a bow-tie antenna in H-plane, thus increasing the SNR and improve the isolation between T and R antennas as well. The designed antenna system is applied to a prototype GPR system. Tested data shows that the back-cavity shielded antenna works satisfactorily in the 900MHz GPR system.", "title": "" }, { "docid": "ca807d3bed994a8e7492898e6bfe6dd2", "text": "This paper proposes state-of-charge (SOC) and remaining charge estimation algorithm of each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of balancing circuit but increase computational complexity which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with estimated current equalizer is used to achieve aforementioned object. 
To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controller with high estimation accuracy.", "title": "" }, { "docid": "3c7883682ae9c05ec8f517b1d69c10cd", "text": "The IS research community has investigated the evolving and changing role of the Chief Information Officer (CIO) for more than twenty-five years. This research sought to better understand the recent changes of the CIO role. Our research goals were threefold: 1) To identify whether the CIO’s job has changed from the characteristics suggested by previous studies; 2) to identify a profile of the attributes of CIOs, and 3) to understand what these developments suggest for the education and professional development of CIOs. We found that much of CIO role has evolved to the executive-level management and is centered on working with other business executives inside and outside of the firm to change the firm’s strategy and processes. CIOs are now seen as multi-dimensional C-level executives who need to be experienced with many functions within the organization and possess a diverse set of skills needed to influence the organization.", "title": "" }, { "docid": "a4d4a06d3e84183eddf7de6c0fd2721b", "text": "Reinforcement learning (RL) is a powerful paradigm for sequential decision-making under uncertainties, and most RL algorithms aim to maximize some numerical value which represents only one long-term objective. However, multiple long-term objectives are exhibited in many real-world decision and control systems, so recently there has been growing interest in solving multiobjective reinforcement learning (MORL) problems where there are multiple conflicting objectives. The aim of this paper is to present a comprehensive overview of MORL. The basic architecture, research topics, and naïve solutions of MORL are introduced at first. Then, several representative MORL approaches and some important directions of recent research are comprehensively reviewed. The relationships between MORL and other related research are also discussed, which include multiobjective optimization, hierarchical RL, and multiagent RL. Moreover, research challenges and open problems of MORL techniques are suggested.", "title": "" }, { "docid": "f67c46263e32b3f5d9a9478f3d76da4c", "text": "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. 
On CIFAR-10 itself, a NASNet found by our method achieves 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset.", "title": "" }, { "docid": "162bfca981e89b1b3174a030ad8f64c6", "text": "This paper addresses the consensus problem of multiagent systems with a time-invariant communication topology consisting of general linear node dynamics. A distributed observer-type consensus protocol based on relative output measurements is proposed. A new framework is introduced to address in a unified way the consensus of multiagent systems and the synchronization of complex networks. Under this framework, the consensus of multiagent systems with a communication topology having a spanning tree can be cast into the stability of a set of matrices of the same low dimension. The notion of consensus region is then introduced and analyzed. It is shown that there exists an observer-type protocol solving the consensus problem and meanwhile yielding an unbounded consensus region if and only if each agent is both stabilizable and detectable. A multistep consensus protocol design procedure is further presented. The consensus with respect to a time-varying state and the robustness of the consensus protocol to external disturbances are finally discussed. The effectiveness of the theoretical results is demonstrated through numerical simulations, with an application to low-Earth-orbit satellite formation flying.", "title": "" }, { "docid": "8f3323f43794789215e001b53fef149e", "text": "Human pose estimation is one of the key problems in computer vision that has been studied for well over 15 years. The reason for its importance is the abundance of applications that can benefit from such a technology. For example, human pose estimation allows for higher level reasoning in the context of humancomputer interaction and activity recognition; it is also one of the basic building blocks for marker-less motion capture (MoCap) technology. MoCap technology is useful for applications ranging from character animation to clinical analysis of gait pathologies. Despite many years of research, however, pose estimation remains a very difficult and still largely unsolved problem. 
Among the most significant challenges are: (1) variability of human visual appearance in images, (2) variability in lighting conditions, (3) variability in human physique, (4) partial occlusions due to self articulation and layering of objects in the scene, (5) complexity of human skeletal structure, (6) high dimensionality of the pose, and (7) the loss of 3d information that results from observing the pose from 2d planar image projections. To date, there is no approach that can produce satisfactory results in general, unconstrained settings while dealing with all of the aforementioned challenges.", "title": "" }, { "docid": "7c09cb7f935e2fb20a4d2e56a5471e61", "text": "This paper proposes and evaluates an approach to the parallelization, deployment and management of bioinformatics applications that integrates several emerging technologies for distributed computing. The proposed approach uses the MapReduce paradigm to parallelize tools and manage their execution, machine virtualization to encapsulate their execution environments and commonly used data sets into flexibly deployable virtual machines, and network virtualization to connect resources behind firewalls/NATs while preserving the necessary performance and the communication environment. An implementation of this approach is described and used to demonstrate and evaluate the proposed approach. The implementation integrates Hadoop, Virtual Workspaces, and ViNe as the MapReduce, virtual machine and virtual network technologies, respectively, to deploy the commonly used bioinformatics tool NCBI BLAST on a WAN-based test bed consisting of clusters at two distinct locations, the University of Florida and the University of Chicago. This WAN-based implementation, called CloudBLAST, was evaluated against both non-virtualized and LAN-based implementations in order to assess the overheads of machine and network virtualization, which were shown to be insignificant. To compare the proposed approach against an MPI-based solution, CloudBLAST performance was experimentally contrasted against the publicly available mpiBLAST on the same WAN-based test bed. Both versions demonstrated performance gains as the number of available processors increased, with CloudBLAST delivering speedups of 57 against 52.4 of MPI version, when 64 processors on 2 sites were used. The results encourage the use of the proposed approach for the execution of large-scale bioinformatics applications on emerging distributed environments that provide access to computing resources as a service.", "title": "" }, { "docid": "3e7e40f82ebb83b4314c974334c8ce0c", "text": "Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has been proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground-truth) of human faces is <inline-formula><tex-math notation=\"LaTeX\">$<.004$</tex-math><alternatives> <inline-graphic xlink:href=\"martinez-ieq1-2772922.gif\"/></alternatives></inline-formula>, cars is .0022, human bodies is .022, and highly-deformable flags is .0004. 
Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (done in conjunction with the European Conference on Computer Vision, ECCV) that required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple hours and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with small number of samples. And the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).", "title": "" }, { "docid": "736ee2bed70510d77b1f9bb13b3bee68", "text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.", "title": "" }, { "docid": "6470b7d1532012e938063d971f3ead29", "text": "As society continues to accumulate more and more data, demand for machine learning algorithms that can learn from data with limited human intervention only increases. Semi-supervised learning (SSL) methods, which extend supervised learning algorithms by enabling them to use unlabeled data, play an important role in addressing this challenge. In this thesis, a framework unifying the traditional assumptions and approaches to SSL is defined. A synthesis of SSL literature then places a range of contemporary approaches into this common framework. Our focus is on methods which use generative adversarial networks (GANs) to perform SSL. We analyse in detail one particular GAN-based SSL approach. This is shown to be closely related to two preceding approaches. Through synthetic experiments we provide an intuitive understanding and motivate the formulation of our focus approach. We then theoretically analyse potential alternative formulations of its loss function. This analysis motivates a number of research questions that centre on possible improvements to, and experiments to better understand the focus model. While we find support for our hypotheses, our conclusion more broadly is that the focus method is not especially robust.", "title": "" }, { "docid": "e5b90c749fd24baf98252cd616f0449d", "text": "Data gathered through community-based forest monitoring (CBFM) programs may be as accurate as those gathered by professional scientists, but acquired at a much lower cost and capable of providing more detailed data about the occurrence, extent and drivers of forest loss, degradation and regrowth at the community scale. In addition, CBFM enables greater survey repeatability. 
Therefore, CBFM should be a fundamental component of national forest monitoring systems and programs to measure, report and verify (MRV) REDD+ activities. To contribute to the development of more effective approaches to CBFM, in this paper we assess: (1) the feasibility of using small, low-cost drones (i.e., remotely piloted aerial vehicles) in CBFM programs; (2) their potential advantages and disadvantages for communities, partner organizations and forest data end-users; and (3) to what extent their utilization, coupled with ground surveys and local ecological knowledge, would improve tropical forest monitoring. To do so, we reviewed the existing literature regarding environmental applications of drones, including forest monitoring, and drew on our own firsthand experience flying small drones to map and monitor tropical forests and training people to operate them. We believe that the utilization of small drones can enhance CBFM and that this approach is feasible in many locations throughout the tropics if some degree of external assistance and funding is provided to communities. We suggest that the use of small drones can help tropical communities to better manage and conserve their forests whilst benefiting partner organizations, governments and forest data end-users, particularly those engaged in forestry, biodiversity conservation and climate change mitigation projects such as REDD+.", "title": "" }, { "docid": "cb535dfc305a86bd1556194b63d94203", "text": "Graph embedding is a central problem in social network analysis and many other applications, aiming to learn the vector representation for each node. While most existing approaches need to specify the neighborhood and the dependence form to the neighborhood, which may significantly degrade the flexibility of representation, we propose a novel graph node embedding method (namely GESF) via the set function technique. Our method can 1) learn an arbitrary form of representation function from the neighborhood, 2) automatically decide the significance of neighbors at different distances, and 3) be applied to heterogeneous graph embedding, which may contain multiple types of nodes. Theoretical guarantee for the representation capability of our method has been proved for general homogeneous and heterogeneous graphs and evaluation results on benchmark data sets show that the proposed GESF outperforms the state-of-the-art approaches on producing node vectors for classification tasks.", "title": "" }, { "docid": "a9b0d197e41fc328502c71c0ddf7b91e", "text": "We propose a new full-rate space-time block code (STBC) for two transmit antennas which can be designed to achieve maximum diversity or maximum capacity while enjoying optimized coding gain and reduced-complexity maximum-likelihood (ML) decoding. The maximum transmit diversity (MTD) construction provides a diversity order of 2Nr for any number of receive antennas Nr at the cost of channel capacity loss. The maximum channel capacity (MCC) construction preserves the mutual information between the transmit and the received vectors while sacrificing diversity. The system designer can switch between the two constructions through a simple parameter change based on the operating signal-to-noise ratio (SNR), signal constellation size and number of receive antennas.
Thanks to their special algebraic structure, both constructions enjoy low-complexity ML decoding proportional to the square of the signal constellation size, making them attractive alternatives to existing full-diversity full-rate STBCs in [6], [3], which have high ML decoding complexity proportional to the fourth order of the signal constellation size. Furthermore, we design a differential transmission scheme for our proposed STBC, derive the exact ML differential decoding rule, and compare its performance with competitive schemes. Finally, we investigate transceiver design and performance of our proposed STBC in spatial multiple-access scenarios and over frequency-selective channels.", "title": "" }, { "docid": "1eae35badf1dd47462ce03a60db89e05", "text": "Convolutional Neural Network (CNN)-based semantic segmentation requires extensive pixel-level manual annotation, which is daunting for large microscopic images. The paper is aimed towards mitigating this labeling effort by leveraging the recent concept of the generative adversarial network (GAN), wherein a generator maps a latent noise space to realistic images while a discriminator differentiates between samples drawn from the database and from the generator. We extend this concept to a multi-task learning setting wherein a discriminator-classifier network differentiates between fake/real examples and also assigns correct class labels. Though our concept is generic, we applied it to the challenging task of vessel segmentation in fundus images. We show that the proposed method is more data-efficient than a CNN. Specifically, with 150K, 30K and 15K training examples, the proposed method achieves mean AUC of 0.962, 0.945 and 0.931 respectively, whereas the simple CNN achieves AUC of 0.960, 0.921 and 0.916 respectively.", "title": "" }, { "docid": "82255315845c61fd6b8b33457a6dfbd8", "text": "Wireless Sensor Networks (WSNs) have been a subject of extensive research and have undergone explosive growth in the last few years. WSNs utilize collaborative measures such as data gathering, aggregation, processing, and management of sensing activities for enhanced performance. In order to communicate with the sink node, a node having low power may have to traverse multiple hops. This requires neighboring nodes to be used as relays. However, if the relay nodes are compromised or malicious, they may leak confidential information to unauthorized nodes in the WSN. Moreover, in many WSN applications, the deployment of sensor nodes is carried out in an ad-hoc fashion without careful examination. In such networks it is desirable to ensure source-to-sink privacy and maximize the lifetime of the network by finding secure, energy-efficient route discovery and forwarding mechanisms. Careful management is also necessary, as the processing required for secure routing is distributed over multiple nodes. An important consideration in this regard is energy-aware secure routing, which is significant in ensuring smooth operation of WSNs. As these networks deal in sensitive data and are vulnerable to attack, it is important to make them secure against various types of threats. However, resource constraints could make the design, deployment and management of large WSNs a challenging proposition. The purpose of this paper is to highlight routing-based security threats, provide a detailed assessment of existing solutions and present a Trust-based Energy Efficient Secure Routing Protocol (TEESR).
The paper also highlights future research directions for secure routing in multi-hop WSNs.", "title": "" }, { "docid": "8d0c5de2054b7c6b4ef97a211febf1d0", "text": "This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier. 1 Making decisions based on a cost matrix. Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectation is computed using the conditional probability of each class given the example. Mathematically, let the (i, j) entry in a cost matrix C be the cost of predicting class i when the true class is j. If i = j then the prediction is correct, while if i ≠ j the prediction is incorrect. The optimal prediction for an example x is the class i that minimizes L(x, i) = Σ_j P(j|x) C(i, j). (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For each i, L(x, i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P(j|x) of each class j being the true class of x. For an example x, making the prediction i means acting as if i is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to act as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is most likely legitimate. 1.1 Cost matrix properties. A cost matrix C always has the following structure when there are only two classes (rows index the predicted class, columns the actual class): predict negative gives C(0, 0) = c00 for an actual negative and C(0, 1) = c01 for an actual positive; predict positive gives C(1, 0) = c10 for an actual negative and C(1, 1) = c11 for an actual positive. Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, while columns correspond to actual classes, i.e. row/column = i/j = predicted/actual. In our notation, the cost of a false positive is c10 while the cost of a false negative is c01. Conceptually, the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly. Mathematically, it should always be the case that c10 > c00 and c01 > c11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated, so c00 ≥ c10 but still c01 > c11. In this case the optimal policy is to label all examples positive.
Similarly, if c10 > c00 but c11 ≥ c01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. Margineantu [2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates row n in a cost matrix C if for all j, C(m, j) ≥ C(n, j). In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions:", "title": "" }, { "docid": "6f0fc401c11d7ee3faf2f265eb4b2baf", "text": "The inverted peno-scrotal flap is considered the standard technique for vaginoplasty in male-to-female transsexuals. Nowadays, great importance is also given by patients to the reconstruction of the clitoro-labial complex; this is also reconstructed with tissue coming from the glans penis, penile skin envelope and scrotal skin. Since the first sex reassignment surgery for biological males performed in Thailand in 1975, Dr Preecha and his team developed the surgical technique for vaginoplasty; many refinements have been introduced during the past 40 years, with nearly 3000 patients operated on. The scope of this paper is to present the surgical technique currently in use for vaginoplasty and clitoro-labioplasty and the refinements introduced at the Chulalongkorn University and at the Preecha Aesthetic Institute, Bangkok, Thailand. These refinements consist of cavity dissection with a blunt technique, the use of a skin graft in addition to the penile flap, shaping of the clitoris complex from the penis glans and clitoral hood, and the use of the urethral mucosa to line the anterior fourchette of the neo-vagina. With the refinements introduced, it has been possible to achieve a result that is very close to the biological female genitalia.", "title": "" } ]
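The cost-sensitive decision passage above (docid 8d0c5de2...) defines the optimal prediction as the class i minimizing the expected cost L(x, i) = Σ_j P(j|x) C(i, j), together with the two-class "reasonableness" conditions and the row-dominance criterion. The sketch below is only a minimal illustration of that decision rule, not code from the paper; the cost matrix and class probabilities are made-up values chosen to show the rule in action.

```python
import numpy as np

def optimal_prediction(probs, cost):
    """Return the class i minimizing L(x, i) = sum_j P(j|x) * C(i, j).

    probs[j] is the estimated P(j|x); cost[i, j] is the cost of predicting
    class i when the true class is j (rows = predicted, columns = actual).
    """
    expected_cost = cost @ probs          # expected_cost[i] = sum_j C(i, j) * P(j|x)
    return int(np.argmin(expected_cost)), expected_cost

def is_reasonable(cost):
    """Two-class reasonableness conditions: c10 > c00 and c01 > c11."""
    return cost[1, 0] > cost[0, 0] and cost[0, 1] > cost[1, 1]

def rows_never_predicted(cost):
    """Rows m that dominate some other row n (C(m, j) >= C(n, j) for all j)
    are never chosen by the optimal policy."""
    k = cost.shape[0]
    return [m for m in range(k)
            if any(m != n and np.all(cost[m] >= cost[n]) for n in range(k))]

# Illustrative two-class cost matrix (values are assumptions, not from the passage):
C = np.array([[0.0, 10.0],   # predict negative: c00 = 0, c01 = 10 (false negative)
              [1.0,  0.0]])  # predict positive: c10 = 1 (false positive), c11 = 0
p = np.array([0.7, 0.3])     # P(negative|x) = 0.7, P(positive|x) = 0.3

pred, costs = optimal_prediction(p, C)
print(pred, costs)           # picks class 1 (positive) even though class 0 is more probable
print(is_reasonable(C), rows_never_predicted(C))   # True, [] (neither row dominates)
```

The printed example mirrors the credit-card point in the passage: the cost-minimizing label can differ from the most probable class.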
scidocsrr
9949b673c84b955c4039d71dfc4ad3ac
Streaming trend detection in Twitter
[ { "docid": "9fc2d92c42400a45cb7bf6c998dc9236", "text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.", "title": "" }, { "docid": "8732cabe1c2dc0e8587b1a7e03039ef0", "text": "With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. \n In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies <i>event threading</i>. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories.\n We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effectively identify the events and capture dependencies among them.", "title": "" } ]
[ { "docid": "da1cecae4f925f331fda67c784e6635d", "text": "This paper surveys recent literature on vehicular social networks that are a particular class of vehicular ad hoc networks, characterized by social aspects and features. Starting from this pillar, we investigate perspectives on next-generation vehicles under the assumption of social networking for vehicular applications (i.e., safety and entertainment applications). This paper plays a role as a starting point about socially inspired vehicles and mainly related applications, as well as communication techniques. Vehicular communications can be considered the “first social network for automobiles” since each driver can share data with other neighbors. For instance, heavy traffic is a common occurrence in some areas on the roads (e.g., at intersections, taxi loading/unloading areas, and so on); as a consequence, roads become a popular social place for vehicles to connect to each other. Human factors are then involved in vehicular ad hoc networks, not only due to the safety-related applications but also for entertainment purposes. Social characteristics and human behavior largely impact on vehicular ad hoc networks, and this arises to the vehicular social networks, which are formed when vehicles (individuals) “socialize” and share common interests. In this paper, we provide a survey on main features of vehicular social networks, from novel emerging technologies to social aspects used for mobile applications, as well as main issues and challenges. Vehicular social networks are described as decentralized opportunistic communication networks formed among vehicles. They exploit mobility aspects, and basics of traditional social networks, in order to create novel approaches of message exchange through the detection of dynamic social structures. An overview of the main state-of-the-art on safety and entertainment applications relying on social networking solutions is also provided.", "title": "" }, { "docid": "a15275cc08ad7140e6dd0039e301dfce", "text": "Cardiovascular disease is more prevalent in type 1 and type 2 diabetes, and continues to be the leading cause of death among adults with diabetes. Although atherosclerotic vascular disease has a multi-factorial etiology, disorders of lipid metabolism play a central role. The coexistence of diabetes with other risk factors, in particular with dyslipidemia, further increases cardiovascular disease risk. A characteristic pattern, termed diabetic dyslipidemia, consists of increased levels of triglycerides, low levels of high density lipoprotein cholesterol, and postprandial lipemia, and is mostly seen in patients with type 2 diabetes or metabolic syndrome. This review summarizes the trends in the prevalence of lipid disorders in diabetes, advances in the mechanisms contributing to diabetic dyslipidemia, and current evidence regarding appropriate therapeutic recommendations.", "title": "" }, { "docid": "006ea5f44521c42ec513edc1cbff1c43", "text": "In 2004 we published in this journal an article describing OntoLearn, one of the first systems to automatically induce a taxonomy from documents and Web sites. Since then, OntoLearn has continued to be an active area of research in our group and has become a reference work within the community. In this paper we describe our next-generation taxonomy learning methodology, which we name OntoLearn Reloaded. 
Unlike many taxonomy learning approaches in the literature, our novel algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions, and hypernyms. This results in a very dense, cyclic and potentially disconnected hypernym graph. The algorithm then induces a taxonomy from this graph via optimal branching and a novel weighting policy. Our experiments show that we obtain high-quality results, both when building brand-new taxonomies and when reconstructing sub-hierarchies of existing taxonomies.", "title": "" }, { "docid": "a5911891697a1b2a407f231cf0ad6c28", "text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.", "title": "" }, { "docid": "0e30a01870bbbf32482b5ac346607afc", "text": "Hypothyroidism is the pathological condition in which the level of thyroid hormones declines to the deficiency state. This communication addresses the therapies employed for the management of hypothyroidism from the Ayurvedic and modern therapeutic perspectives, on the basis of scientific papers collected from accepted scientific sources such as Google, Google Scholar, PubMed and Science Direct, using various keywords. Ayurveda describes hypothyroidism as a state of imbalance of the Tridoshas and suggests treatment via the use of herbal plant extracts, lifestyle modifications such as practicing yoga, and various dietary supplements. Modern medical practice defines hypothyroidism as a disease state originating from the formation of antibodies against the thyroid gland and from hormonal imbalance, and incorporates the use of hormone replacement (i.e., levothyroxine) and antioxidants. Various plants like Crataeva nurvula and dietary supplements like Capsaicin, Forskolin, Echinacea, Ginseng and Bladderwrack can serve as a potential area of research as thyrotropic agents.", "title": "" }, { "docid": "545064c02ed0ca14c53b3d083ff84eac", "text": "We present a novel polarization imaging sensor by monolithically integrating aluminum nanowire optical filters with an array of CCD imaging elements. The CCD polarization image sensor is composed of 1000 by 1000 imaging elements with 7.4 μm pixel pitch. The image sensor has a dynamic range of 65 dB and signal-to-noise ratio of 45 dB. The CCD array is covered with an array of pixel-pitch matched nanowire polarization filters with four different orientations offset by 45°.
The complete imaging sensor is used for real-time reconstruction of the shape of various objects.", "title": "" }, { "docid": "07905317dcdbcf1332fd57ffaa02f8d3", "text": "Motivation\nIdentifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters.\n\n\nResults\nHere, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs.\n\n\nAvailability and Implementation\nThe network specifications and solver definitions are provided in Supplementary Software 1.\n\n\nContact\nwilliam_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "f4535d47191caaa1e830e5d8fae6e1ba", "text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards ~100% sensitivity at the cost of high FP levels (~40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol.
in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.", "title": "" }, { "docid": "216698730aa68b3044f03c64b77e0e62", "text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.", "title": "" }, { "docid": "ca9a7a1f7be7d494f6c0e3e4bb408a95", "text": "An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation.", "title": "" }, { "docid": "ea0b23e9c37fa35da9ff6d9091bbee5e", "text": "Since the invention of the wheel, Man has sought to reduce effort to get things done easily. Ultimately, it has resulted in the invention of the Robot, an Engineering Marvel. Up until now, the biggest factor that hampers wide proliferation of robots is locomotion and maneuverability. They are not dynamic enough to conform even to the most commonplace terrain such as stairs. To overcome this, we are proposing a stair climbing robot that looks a lot like the human leg and can adjust itself according to the height of the step. But, we are currently developing a unit to carry payload of about 4 Kg. The automatic adjustment in the robot according to the height of the stair is done by connecting an Android device that has an application programmed in OpenCV with an Arduino in Host mode. The Android Device uses it camera to calculate the height of the stair and sends it to the Arduino for further calculation. This design employs an Arduino Mega ADK 2560 board to control the robot and other home fabricated custom PCB to interface it with the Arduino Board. The bot is powered by Li-Ion batteries and Servo motors.", "title": "" }, { "docid": "9a3a73f35b27d751f237365cc34c8b28", "text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. 
While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.", "title": "" }, { "docid": "4721173eea1997316b8c9eca8b4a8d05", "text": "Conventional centralized cloud computing is a success for benefits such as on-demand, elasticity, and high colocation of data and computation. However, the paradigm shift towards “Internet of things” (IoT) will pose some unavoidable challenges: (1) massive data volume impossible for centralized datacenters to handle; (2) high latency between edge “things” and centralized datacenters; (3) monopoly, inhibition of innovations, and non-portable applications due to the proprietary application delivery in centralized cloud. The emergence of edge cloud gives hope to address these challenges. In this paper, we propose a new framework called “HomeCloud” focusing on an open and efficient new application delivery in edge cloud integrating two complementary technologies: Network Function Virtualization (NFV) and Software-Defined Networking (SDN). We also present a preliminary proof-of-concept testbed demonstrating the whole process of delivering a simple multi-party chatting application in the edge cloud. In the future, the HomeCloud framework can be further extended to support other use cases that demand portability, cost-efficiency, scalability, flexibility, and manageability. To the best of our knowledge, this framework is the first effort aiming at facilitating new application delivery in such a new edge cloud context.", "title": "" }, { "docid": "e630891703d4a4e6e65fea11698f24c7", "text": "In spite of meticulous planning, thorough documentation and proper process control during software development, occurrences of certain defects are inevitable. These software defects may lead to degradation of quality, which might be the underlying cause of failure. In today's cutting-edge competition it's necessary to make conscious efforts to control and minimize defects in software engineering. However, these efforts cost money, time and resources. This paper identifies causative factors which in turn suggest the remedies to improve software quality and productivity. The paper also showcases how the various defect prediction models are implemented, resulting in reduced magnitude of defects.", "title": "" }, { "docid": "c5ecfcebbbd577a0bc14ccb4613a98ac", "text": "When Jean-Dominique Bauby suffered from a cortico-subcortical stroke that led to complete paralysis with totally intact sensory and cognitive functions, he described his experience in The Diving-Bell and the Butterfly as “something like a giant invisible diving-bell holds my whole body prisoner”.
This horrifying condition also occurs as a consequence of a progressive neurological disease, amyotrophic lateral sclerosis, which involves progressive degeneration of all the motor neurons of the somatic motor system. These ‘locked-in’ patients ultimately become unable to express themselves and to communicate even their most basic wishes or desires, as they can no longer control their muscles to activate communication devices. We have developed a new means of communication for the completely paralysed that uses slow cortical potentials (SCPs) of the electro-encephalogram to drive an electronic spelling device.", "title": "" }, { "docid": "9fdecc8854f539ddf7061c304616130b", "text": "This paper describes the pricing strategy model deployed at Airbnb, an online marketplace for sharing home and experience. The goal of price optimization is to help hosts who share their homes on Airbnb set the optimal price for their listings. In contrast to conventional pricing problems, where pricing strategies are applied to a large quantity of identical products, there are no \"identical\" products on Airbnb, because each listing on our platform offers unique values and experiences to our guests. The unique nature of Airbnb listings makes it very difficult to estimate an accurate demand curve that's required to apply conventional revenue maximization pricing strategies.\n Our pricing system consists of three components. First, a binary classification model predicts the booking probability of each listing-night. Second, a regression model predicts the optimal price for each listing-night, in which a customized loss function is used to guide the learning. Finally, we apply additional personalization logic on top of the output from the second model to generate the final price suggestions. In this paper, we focus on describing the regression model in the second stage of our pricing system. We also describe a novel set of metrics for offline evaluation. The proposed pricing strategy has been deployed in production to power the Price Tips and Smart Pricing tool on Airbnb. Online A/B testing results demonstrate the effectiveness of the proposed strategy model.", "title": "" }, { "docid": "5b507508fd3b3808d61e822d2a91eab9", "text": "In this brief, we propose a stand-alone system-on-a-programmable-chip (SOPC)-based cloud system to accelerate massive electrocardiogram (ECG) data analysis. The proposed system tightly couples network I/O handling hardware to data processing pipelines in a single field-programmable gate array (FPGA), offloading both networking operations and ECG data analysis. In this system, we first propose a massive-sessions optimized TCP/IP hardware stack using a macropipeline architecture to accelerate network packet processing. Second, we propose a streaming architecture to accelerate ECG signal processing, including QRS detection, feature extraction, and classification. We verify our design on XC6VLX550T FPGA using real ECG data. Compared to commercial servers, our system shows up to 38× improvement in performance and 142× improvement in energy efficiency.", "title": "" }, { "docid": "cf94d312bb426e64e364dfa33b09efeb", "text": "The attractiveness of a face is a highly salient social signal, influencing mate choice and other social judgements. In this study, we used event-related functional magnetic resonance imaging (fMRI) to investigate brain regions that respond to attractive faces which manifested either a neutral or mildly happy face expression. 
Attractive faces produced activation of medial orbitofrontal cortex (OFC), a region involved in representing stimulus-reward value. Responses in this region were further enhanced by a smiling facial expression, suggesting that the reward value of an attractive face as indexed by medial OFC activity is modulated by a perceiver directed smile.", "title": "" }, { "docid": "986bd4907d512402a188759b5bdef513", "text": "► We consider a case of laparoscopic aortic lymphadenectomy for an early ovarian cancer including a comprehensive surgical staging. ► The patient was found to have a congenital anatomic abnormality: a right renal malrotation with an accessory renal artery. ► We used a preoperative CT angiography study to diagnose such anatomical variations and to adequate the proper surgical technique.", "title": "" } ]
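One of the passages in the list above (docid a5911891..., on the parallel operation of inverters) describes each inverter injecting the current given by the difference between a frequency-shifted reference voltage and the measured grid voltage, divided by a virtual complex impedance. The phasor sketch below only illustrates that relationship as stated in the passage; the 230 V reference, the phase-shift gain and the virtual impedance value are assumptions, not parameters from the paper.

```python
import cmath
import math

def inverter_current(v_grid_rms, grid_angle_rad, f_grid, f_rated=50.0,
                     v_ref_rms=230.0, k_phase=0.5, z_virtual=complex(0.5, 3.0)):
    """Phasor form of the control law described in the passage: the inverter
    supplies I = (V_ref - V_grid) / Z_virtual, with the reference source
    phase-shifted according to the rated-vs-actual frequency difference.
    All numeric defaults here are illustrative assumptions."""
    phase_shift = k_phase * (f_rated - f_grid)                 # rad, grows with the frequency error
    v_ref = cmath.rect(v_ref_rms, grid_angle_rad + phase_shift)
    v_grid = cmath.rect(v_grid_rms, grid_angle_rad)
    return (v_ref - v_grid) / z_virtual

# Example: grid slightly low in both voltage and frequency.
i = inverter_current(v_grid_rms=225.0, grid_angle_rad=0.0, f_grid=49.9)
print(abs(i), math.degrees(cmath.phase(i)))                    # injected current magnitude (A) and angle (deg)
```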
scidocsrr
058ded2691202d3815458688af768757
Building Understanding of Smart City Initiatives
[ { "docid": "be05abd038de9b32cc255ca221634a2c", "text": "This paper sees a smart city not as a status of how smart a city is but as a city's effort to make itself smart. The connotation of a smart city represents city innovation in management and policy as well as technology. Since the unique context of each city shapes the technological, organizational and policy aspects of that city, a smart city can be considered a contextualized interplay among technological innovation, managerial and organizational innovation, and policy innovation. However, only little research discusses innovation in management and policy while the literature of technology innovation is abundant. This paper aims to fill the research gap by building a comprehensive framework to view the smart city movement as innovation comprised of technology, management and policy. We also discuss inevitable risks from innovation, strategies to innovate while avoiding risks, and contexts underlying innovation and risks.", "title": "" }, { "docid": "0521f79f13cdbe05867b5db733feac16", "text": "This conceptual paper discusses how we can consider a particular city as a smart one, drawing on recent practices to make cities smart. A set of the common multidimensional components underlying the smart city concept and the core factors for a successful smart city initiative is identified by exploring current working definitions of smart city and a diversity of various conceptual relatives similar to smart city. The paper offers strategic principles aligning to the three main dimensions (technology, people, and institutions) of smart city: integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement.", "title": "" } ]
[ { "docid": "5cb4a7a6486eaba444b88b7a48e9cea8", "text": "UNLABELLED\nThis Guideline is an official statement of the European Society of Gastrointestinal Endoscopy (ESGE). The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system 1 2 was adopted to define the strength of recommendations and the quality of evidence.\n\n\nMAIN RECOMMENDATIONS\n1 ESGE recommends endoscopic en bloc resection for superficial esophageal squamous cell cancers (SCCs), excluding those with obvious submucosal involvement (strong recommendation, moderate quality evidence). Endoscopic mucosal resection (EMR) may be considered in such lesions when they are smaller than 10 mm if en bloc resection can be assured. However, ESGE recommends endoscopic submucosal dissection (ESD) as the first option, mainly to provide an en bloc resection with accurate pathology staging and to avoid missing important histological features (strong recommendation, moderate quality evidence). 2 ESGE recommends endoscopic resection with a curative intent for visible lesions in Barrett's esophagus (strong recommendation, moderate quality evidence). ESD has not been shown to be superior to EMR for excision of mucosal cancer, and for that reason EMR should be preferred. ESD may be considered in selected cases, such as lesions larger than 15 mm, poorly lifting tumors, and lesions at risk for submucosal invasion (strong recommendation, moderate quality evidence). 3 ESGE recommends endoscopic resection for the treatment of gastric superficial neoplastic lesions that possess a very low risk of lymph node metastasis (strong recommendation, high quality evidence). EMR is an acceptable option for lesions smaller than 10 - 15 mm with a very low probability of advanced histology (Paris 0-IIa). However, ESGE recommends ESD as treatment of choice for most gastric superficial neoplastic lesions (strong recommendation, moderate quality evidence). 4 ESGE states that the majority of colonic and rectal superficial lesions can be effectively removed in a curative way by standard polypectomy and/or by EMR (strong recommendation, moderate quality evidence). ESD can be considered for removal of colonic and rectal lesions with high suspicion of limited submucosal invasion that is based on two main criteria of depressed morphology and irregular or nongranular surface pattern, particularly if the lesions are larger than 20 mm; or ESD can be considered for colorectal lesions that otherwise cannot be optimally and radically removed by snare-based techniques (strong recommendation, moderate quality evidence).", "title": "" }, { "docid": "e6f506c3c90a15b5e4079ccb75eb3ff0", "text": "Stories of people's everyday experiences have long been the focus of psychology and sociology research, and are increasingly being used in innovative knowledge-based technologies. However, continued research in this area is hindered by the lack of standard corpora of sufficient size and by the costs of creating one from scratch. In this paper, we describe our efforts to develop a standard corpus for researchers in this area by identifying personal stories in the tens of millions of blog posts in the ICWSM 2009 Spinn3r Dataset. Our approach was to employ statistical text classification technology on the content of blog entries, which required the creation of a sufficiently large set of annotated training examples. 
We describe the development and evaluation of this classification technology and how it was applied to the dataset in order to identify nearly a million", "title": "" }, { "docid": "d88f57d173ec92334767360fef3d7f01", "text": "Seasonal influenza epidemics cause severe illnesses and 250,000 to 500,000 deaths worldwide each year. Other pandemics, like the 1918 "Spanish Flu", may develop into devastating ones. Reducing the impact of these threats is of paramount importance for health authorities, and studies have shown that effective interventions can be taken to contain the epidemics, if early detection can be made. In this paper, we introduce the Social Network Enabled Flu Trends (SNEFT), a continuous data collection framework which monitors flu-related tweets and tracks the emergence and spread of influenza. We show that text mining significantly enhances the correlation between the Twitter data and the Influenza-like Illness (ILI) rates provided by the Centers for Disease Control and Prevention (CDC). For accurate prediction, we implemented an auto-regression with exogenous input (ARX) model which uses current Twitter data and CDC ILI rates from previous weeks to predict current influenza statistics. Our results show that, while previous ILI data from the CDC offer a true (but delayed) assessment of a flu epidemic, Twitter data provide a real-time assessment of the current epidemic condition and can be used to compensate for the lack of current ILI data. We observe that the Twitter data are highly correlated with the ILI rates across different regions within the USA and can be used to effectively improve the accuracy of our prediction. Our age-based flu prediction analysis indicates that for most of the regions, Twitter data best fit the age groups of 5-24 and 25-49 years, correlating well with the fact that these are likely the most active user age groups on Twitter. Therefore, Twitter data can act as a supplementary indicator to gauge influenza within a population and help discover flu trends ahead of the CDC.", "title": "" }, { "docid": "cd23b0dfd98fb42513229070035e0aa9", "text": "Sixteen residents in long-term care with advanced dementia (14 women; average age = 88) showed significantly more constructive engagement (defined as motor or verbal behaviors in response to an activity), less passive engagement (defined as passively observing an activity), and more pleasure while participating in Montessori-based programming than in regularly scheduled activities programming. Principles of Montessori-based programming, along with examples of such programming, are presented. Implications of the study and methods for expanding the use of Montessori-based dementia programming are discussed.", "title": "" }, { "docid": "cf3354d0a85ea1fa2431057bdf6b6d0f", "text": "Increasingly, scientific computing applications must accumulate and manage massive datasets, as well as perform sophisticated computations over these data. Such applications call for data-intensive scalable computer (DISC) systems, which differ in fundamental ways from existing high-performance computing systems.", "title": "" }, { "docid": "e3527c14558da7905490b434e0f78fb0", "text": "This paper presents a 3-D statistical channel model of the impulse response with small-scale spatially correlated random coefficients for multi-element transmitter and receiver antenna arrays, derived using the physically-based time cluster - spatial lobe (TCSL) clustering scheme.
The small-scale properties of multipath amplitudes are modeled based on 28 GHz outdoor millimeter-wave small-scale local area channel measurements. The wideband channel capacity is evaluated by considering measurement-based Rician-distributed voltage amplitudes, and the spatial autocorrelation of multipath amplitudes for each pair of transmitter and receiver antenna elements. Results indicate that Rician channels may exhibit equal or possibly greater capacity compared to Rayleigh channels, depending on the number of antennas.", "title": "" }, { "docid": "15727b1d059064d118269d0217c0c014", "text": "Segment Routing is a proposed IETF protocol to improve traffic engineering and online route selection in IP networks. The key idea in segment routing is to break up the routing path into segments in order to enable better network utilization. Segment routing also enables finer control of the routing paths and can be used to route traffic through middle boxes. This paper considers the problem of determining the optimal parameters for segment routing in the offline and online cases. We develop a traffic matrix oblivious algorithm for robust segment routing in the offline case and a competitive algorithm for online segment routing. We also show that both these algorithms work well in practice.", "title": "" }, { "docid": "c76fbef6cf978e6f14444cf231f0ce54", "text": "Communication between a deaf-mute and a normal person has always been a challenging task. Many researchers are focusing on communication between normal and deaf-mute persons. To the best of our knowledge, little work has been done in this area. The main objective of this project is to present a system that can efficiently convert text to voice. The deaf-mute communication interpreter is a device that translates the text to auditory speech. For each text, a signal is produced by the computer and sent to the corresponding controller unit, which generates a vibration signal. Deaf people use this device by their mouth; hence they easily understand the data given by the computer. The device can also be made to translate larger texts. In addition, the system also includes a text-to-speech conversion (TTS) block which converts the matched gestures, i.e. text, to voice output, and also supports voice-to-voice output and call making as outputs.", "title": "" }, { "docid": "78d7c61f7ca169a05e9ae1393712cd69", "text": "Designing an automatic solver for math word problems has been considered a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, this incurs an exponential search space in the number of quantities, and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt to apply deep reinforcement learning to solve arithmetic word problems. The motivation is that the deep Q-network has witnessed success in solving various problems with a big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network.
Extensive experimental results validate our superiority over state-ofthe-art methods. Our MathDQN yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15%.", "title": "" }, { "docid": "a7623185df940b128af6187d7d1e0b9c", "text": "Inflammasomes are high-molecular-weight protein complexes that are formed in the cytosolic compartment in response to danger- or pathogen-associated molecular patterns. These complexes enable activation of an inflammatory protease caspase-1, leading to a cell death process called pyroptosis and to proteolytic cleavage and release of pro-inflammatory cytokines interleukin (IL)-1β and IL-18. Along with caspase-1, inflammasome components include an adaptor protein, ASC, and a sensor protein, which triggers the inflammasome assembly in response to a danger signal. The inflammasome sensor proteins are pattern recognition receptors belonging either to the NOD-like receptor (NLR) or to the AIM2-like receptor family. While the molecular agonists that induce inflammasome formation by AIM2 and by several other NLRs have been identified, it is not well understood how the NLR family member NLRP3 is activated. Given that NLRP3 activation is relevant to a range of human pathological conditions, significant attempts are being made to elucidate the molecular mechanism of this process. In this review, we summarize the current knowledge on the molecular events that lead to activation of the NLRP3 inflammasome in response to a range of K (+) efflux-inducing danger signals. We also comment on the reported involvement of cytosolic Ca (2+) fluxes on NLRP3 activation. We outline the recent advances in research on the physiological and pharmacological mechanisms of regulation of NLRP3 responses, and we point to several open questions regarding the current model of NLRP3 activation.", "title": "" }, { "docid": "c8598e04ef93f6127333b79a83508daf", "text": "Nitric oxide (NO) is an important signaling molecule in multicellular organisms. Most animals produce NO from L-arginine via a family of dedicated enzymes known as NO synthases (NOSes). A rare exception is the roundworm Caenorhabditis elegans, which lacks its own NOS. However, in its natural environment, C. elegans feeds on Bacilli that possess functional NOS. Here, we demonstrate that bacterially derived NO enhances C. elegans longevity and stress resistance via a defined group of genes that function under the dual control of HSF-1 and DAF-16 transcription factors. Our work provides an example of interspecies signaling by a small molecule and illustrates the lifelong value of commensal bacteria to their host.", "title": "" }, { "docid": "47e06f5c195d2e1ecb6199b99ef1ee2d", "text": "We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned from the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. 
To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newlycollected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.", "title": "" }, { "docid": "ab57df7702fa8589f7d462c80d9a2598", "text": "The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.", "title": "" }, { "docid": "094570518e943330ff8d9e1c714698cb", "text": "The concept of taking surface wave as an assistant role to obtain wide beams with main directions tilting to endfire is introduced in this paper. Planar Yagi-Uda-like antennas support TE0 surface wave propagation and exhibit endfire radiation patterns. However, when such antennas are printed on a thin grounded substrate, there is no propagation of TE mode and beams tilting to broadside. Benefiting from the advantage that the high impedance surface (HIS) could support TE and/or TM modes propagation, the idea of placing a planar Yagi-Uda-like antenna in close proximity to a HIS to excite unidirectional predominately TE surface wave in HIS is proposed. Power radiated by the feed antenna, in combination with power diffracted by the surface wave determines the total radiation pattern, resulting in the desired pattern. For verification, a compact, low-profile, pattern-reconfigurable parasitic array (having an interstrip spacing of 0.048 λ0) with an integrated DC biasing circuit was fabricated and tested. Good agreement was obtained between measured and simulated results.", "title": "" }, { "docid": "03975198d57093b350c16d3df4e34392", "text": "Reactivating memories during sleep by re-exposure to associated memory cues (e.g., odors or sounds) improves memory consolidation. Here, we tested for the first time whether verbal cueing during sleep can improve vocabulary learning. We cued prior learned Dutch words either during non-rapid eye movement sleep (NonREM) or during active or passive waking. Re-exposure to Dutch words during sleep improved later memory for the German translation of the cued words when compared with uncued words. 
Recall of uncued words was similar to an additional group receiving no verbal cues during sleep. Furthermore, verbal cueing failed to improve memory during active and passive waking. High-density electroencephalographic recordings revealed that successful verbal cueing during NonREM sleep is associated with a pronounced frontal negativity in event-related potentials, a higher frequency of frontal slow waves as well as a cueing-related increase in right frontal and left parietal oscillatory theta power. Our results indicate that verbal cues presented during NonREM sleep reactivate associated memories, and facilitate later recall of foreign vocabulary without impairing ongoing consolidation processes. Likewise, our oscillatory analysis suggests that both sleep-specific slow waves as well as theta oscillations (typically associated with successful memory encoding during wakefulness) might be involved in strengthening memories by cueing during sleep.", "title": "" }, { "docid": "267445f1079566f74a05bc13d7cad1c1", "text": "When Procter & Gamble’s CEO Bob McDonald set a strategic goal announcing “We want to be the first company that digitizes from end to end” he turned to CIO and head of Global Business Services Filippo Passerini to lead the transformation. Many CIOs tell us their jobs are expanding and many CEOs tell us they would like their CIOs to do more. In addition to providing highquality and cost-effective IT services, today’s CIO often has other, and growing, responsibilities. These include helping with revenue generation, delivering shared services, optimizing enterprise business processes, improving the customer experience, overseeing business operations and digitizing the entire firm. We think these new responsibilities, and the pressures they place on CIOs, are a symptom of one of the biggest opportunities and challenges enterprises face today—the ever-increasing digitization of business as part of the move toward a more digital economy.3 Every interaction between a customer and a business, between a business and another business, between", "title": "" }, { "docid": "1906aa92c26bb95b4cb79b4bfe7e362f", "text": "As Artificial Intelligence (AI) techniques become more powerful and easier to use they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI but we lack methods to classify ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate them. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy that categorises applications according to their point of application, the type of AI technology used and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. Results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. 
We argue that this will be important for companies in deciding how to apply AI in their software applications and to create strategies for its use.", "title": "" }, { "docid": "a166b3ed625a5d1db6c70ac41fbf1871", "text": "The main challenge of online multi-object tracking is to reliably associate object trajectories with detections in each video frame based on their tracking history. In this work, we propose the Recurrent Autoregressive Network (RAN), a temporal generative modeling framework to characterize the appearance and motion dynamics of multiple objects over time. The RAN couples an external memory and an internal memory. The external memory explicitly stores previous inputs of each trajectory in a time window, while the internal memory learns to summarize long-term tracking history and associate detections by processing the external memory. We conduct experiments on the MOT 2015 and 2016 datasets to demonstrate the robustness of our tracking method in highly crowded and occluded scenes. Our method achieves top-ranked results on the two benchmarks.", "title": "" }, { "docid": "b7ab1dbbea36a302f4d18524e340986d", "text": "Embedded zerotree wavelet EZW coding introduced by J M Shapiro is a very e ective and computationally simple technique for image compression Here we o er an alternative explanation of the principles of its operation so that the reasons for its excellent performance can be better understood These principles are partial ordering by magnitude with a set partitioning sorting algorithm ordered bit plane transmission and exploitation of self similarity across di erent scales of an image wavelet transform Moreover we present a new and di erent implementation based on set partitioning in hierarchical trees SPIHT which provides even better performance than our previosly reported extension of the EZW that surpassed the performance of the original EZW The image coding results calculated from actual le sizes and images reconstructed by the decoding algorithm are either compara ble to or surpass previous results obtained through much more sophisticated and computationally complex methods In addition the new coding and decoding pro cedures are extremely fast and they can be made even faster with only small loss in performance by omitting entropy coding of the bit stream by arithmetic code", "title": "" }, { "docid": "f9806d3542f575d53ef27620e4aa493b", "text": "Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area this is so clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new live to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, they should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.", "title": "" } ]
scidocsrr
434245382afc0d4c0bf8b7311a442456
Concentrated Differential Privacy
[ { "docid": "05f1812e2ede9b07b293f18e6e5442f0", "text": "Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved {\\em privacy-preserving synopses} of an input database. These are data structures that yield, for a given set $\\Q$ of queries over an input database, reasonably accurate estimates of the responses to every query in~$\\Q$, even when the number of queries is much larger than the number of rows in the database. Given a {\\em base synopsis generator} that takes a distribution on $\\Q$ and produces a ``weak'' synopsis that yields ``good'' answers for a majority of the weight in $\\Q$, our {\\em Boosting for Queries} algorithm obtains a synopsis that is good for all of~$\\Q$. We ensure privacy for the rows of the database, but the boosting is performed on the {\\em queries}. We also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries, {\\it i.e.}, queries whose answers do not vary much under the addition or deletion of a single row. In the execution of our algorithm certain tasks, each incurring some privacy loss, are performed many times. To analyze the cumulative privacy loss, we obtain an $O(\\eps^2)$ bound on the {\\em expected} privacy loss from a single $\\eps$-\\dfp{} mechanism. Combining this with evolution of confidence arguments from the literature, we get stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides $\\eps$-differential privacy or one of its relaxations, and each of which operates on (potentially) different, adaptively chosen, databases.", "title": "" }, { "docid": "1d9004c4115c314f49fb7d2f44aaa598", "text": "We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems.", "title": "" } ]
[ { "docid": "e92299720be4d028b4a7d726c99bc216", "text": "Nowadays terahertz spectroscopy is a well-established technique and recent progresses in technology demonstrated that this new technique is useful for both fundamental research and industrial applications. Varieties of applications such as imaging, non destructive testing, quality control are about to be transferred to industry supported by permanent improvements from basic research. Since chemometrics is today routinely applied to IR spectroscopy, we discuss in this paper the advantages of using chemometrics in the framework of terahertz spectroscopy. Different analytical procedures are illustrates. We conclude that advanced data processing is the key point to validate routine terahertz spectroscopy as a new reliable analytical technique.", "title": "" }, { "docid": "573faddaa6fe37712776592a430d09cb", "text": "We present the largest and longest measurement of online tracking to date based on real users. The data, which is made publicly available, is generated from more than 780 million page loads over the course of the last 10 months. Previous attempts to measure the tracking ecosystem, are done via measurement platforms that do not interact with websites the same way a user does. We instrument a crowd-sourced measurement of third-parties across the web via users who consent to data collection via a browser extension. The collection is done with privacy-by-design in mind, and introduces no privacy side effects. This approach overcomes limitations of previous work by collecting real web usage across multiple countries, ISP and browser configurations, and on difficult to crawl pages, such as those behind logins, giving a more accurate portrayal of the online-tracking ecosystem. The data1, which we plan to continue contributing to and maintain in the future, and WhoTracks.Me website – the living representation of the data, are available for researchers, regulators, journalists, web developers and users to detect tracking behaviours, analyse the tracking landscape, develop efficient tools, devise policies and raise awareness of the negative externalities tracking introduces. We believe this work provides the transparency needed to shine a light on a very opaque industry.", "title": "" }, { "docid": "2c5ab4dddbb6aeae4542b42f57e54d72", "text": "Online action detection is a challenging problem: a system needs to decide what action is happening at the current frame, based on previous frames only. Fortunately in real-life, human actions are not independent from one another: there are strong (long-term) dependencies between them. An online action detection method should be able to capture these dependencies, to enable a more accurate early detection. At first sight, an LSTM seems very suitable for this problem. It is able to model both short-term and long-term patterns. It takes its input one frame at the time, updates its internal state and gives as output the current class probabilities. In practice, however, the detection results obtained with LSTMs are still quite low. In this work, we start from the hypothesis that it may be too difficult for an LSTM to learn both the interpretation of the input and the temporal patterns at the same time. We propose a two-stream feedback network, where one stream processes the input and the other models the temporal relations. 
We show improved detection accuracy on an artificial toy dataset and on the Breakfast Dataset [21] and the TVSeries Dataset [7], reallife datasets with inherent temporal dependencies between the actions.", "title": "" }, { "docid": "fd36ca11c37101b566245b6ee29cb7df", "text": "Hand, foot and mouth disease (HFMD) is considered a common disease among children. However, HFMD recent outbreaks in Sarawak had caused many death particularly children below the age of ten. In this study we are building a simple deterministic model based on the SIR (Susceptible-Infected-Recovered) model to predict the number of infected and the duration of an outbreak when it occurs. Our findings show that the disease spread quite rapidly and the parameter that may be able to control that would be the number of susceptible persons. We hope the model will allow public health personnel to plan intervention in an effective manner in order to reduce the effect of the disease in the coming outbreak.", "title": "" }, { "docid": "8503c9989f9706805a74bbd5c964ab07", "text": "Since the phenomenon of cloud computing was proposed, there is an unceasing interest for research across the globe. Cloud computing has been seen as unitary of the technology that poses the next-generation computing revolution and rapidly becomes the hottest topic in the field of IT. This fast move towards Cloud computing has fuelled concerns on a fundamental point for the success of information systems, communication, virtualization, data availability and integrity, public auditing, scientific application, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to precise the current open challenges and issues of Cloud computing. We have discussed the paper in three-fold: first we discuss the cloud computing architecture and the numerous services it offered. Secondly we highlight several security issues in cloud computing based on its service layer. Then we identify several open challenges from the Cloud computing adoption perspective and its future implications. Finally, we highlight the available platforms in the current era for cloud research and development.", "title": "" }, { "docid": "b01bc5df28e670c82d274892a407b0aa", "text": "We propose that many human behaviors can be accurately described as a set of dynamic models (e.g., Kalman filters) sequenced together by a Markov chain. We then use these dynamic Markov models to recognize human behaviors from sensory data and to predict human behaviors over a few seconds time. To test the power of this modeling approach, we report an experiment in which we were able to achieve 95 accuracy at predicting automobile drivers' subsequent actions from their initial preparatory movements.", "title": "" }, { "docid": "51506638cc0ea6da6f1e7ee1fd4d52d0", "text": "Autism spectrum disorder (ASD) is characterized by deficits in social cognition and competence, communication, highly circumscribed interests and a strong desire for routines. Besides, there are specific abnormalities in perception and language. Typical symptoms are already present in early childhood. Traditionally autism has been regarded as a severe form of neurodevelopmental disorder which goes along with overtly abnormal language, learning difficulties and low IQ in the majority of cases. However, over the last decades, it has become clear that there are also many patients with high-functioning variants of ASD. 
These are patients with normal language at a superficial level of description and normal and sometimes above average intelligence. In high-functioning variants of the disease, they may run unrecognized until late in adult life. High-functioning ASD is associated with a very high prevalence of comorbid classical psychiatric disorders such as depression, anxiety, ADHD, tics, psychotic symptoms or emotionally unstable syndromes. In many such cases, there is a causal relationship between ASD and the comorbid psychiatric conditions in that the specific ASD symptoms result in chronic conflicts, misunderstandings and failure in private and vocational relationships. These problems in turn often lead to depression, anxiety and sometimes psychosis-like stress reactions. In this constellation, ASD has to be regarded as a basic disorder with causal relevance for secondary psychiatric syndromes. In this paper, we summarize the classical presentation of high-functioning ASD in adult psychiatry and psychotherapy and suggest a nosological model to classify different ASD conditions instead. To conclude, we outline first treatment concepts in out- and in-patient settings.", "title": "" }, { "docid": "63435412232daf75eebd8ed973cb5334", "text": "With recent advances in devices, middleware, applications and networking infrastructure, the wireless Internet is becoming a reality. We believe that some of the major drivers of the wireless Internet will be emerging mobile applications such as mobile commerce. Although many of these are futuristic, some applications including user-and location-specific mobile advertising, location-based services, and mobile financial services are beginning to be commercialized. Mobile commerce applications present several interesting and complex challenges including location management of products, services, devices, and people. Further, these applications have fairly diverse requirements from the underlying wireless infrastructure in terms of location accuracy, response time, multicast support, transaction frequency and duration, and dependability. Therefore, research is necessary to address these important and complex challenges. In this article, we present an integrated location management architecture to support the diverse location requirements of m-commerce applications. The proposed architecture is capable of supporting a range of location accuracies, wider network coverage, wireless multicast, and infrastructure dependability for m-commerce applications. The proposed architecture can also support several emerging mobile applications. Additionally, several interesting research problems and directions in location management for wireless Internet applications are presented and discussed.", "title": "" }, { "docid": "0d996ba5c45d24cbc481ac4cd225f84d", "text": "In this paper, we design and evaluate a routine for the efficient generation of block-Jacobi preconditioners on graphics processing units (GPUs). Concretely, to exploit the architecture of the graphics accelerator, we develop a batched Gauss-Jordan elimination CUDA kernel for matrix inversion that embeds an implicit pivoting technique and handles the entire inversion process in the GPU registers. In addition, we integrate extraction and insertion CUDA kernels to rapidly set up the block-Jacobi preconditioner.\n Our experiments compare the performance of our implementation against a sequence of batched routines from the MAGMA library realizing the inversion via the LU factorization with partial pivoting. 
Furthermore, we evaluate the costs of different strategies for the block-Jacobi extraction and insertion steps, using a variety of sparse matrices from the SuiteSparse matrix collection. Finally, we assess the efficiency of the complete block-Jacobi preconditioner generation in the context of an iterative solver applied to a set of computational science problems, and quantify its benefits over a scalar Jacobi preconditioner.", "title": "" }, { "docid": "018d05daa52fb79c17519f29f31026d7", "text": "The aim of this paper is to review conceptual and empirical literature on the concept of distributed leadership (DL) in order to identify its origins, key arguments and areas for further work. Consideration is given to the similarities and differences between DL and related concepts, including ‘shared’, ‘collective’, ‘collaborative’, ‘emergent’, ‘co-’ and ‘democratic’ leadership. Findings indicate that, while there are some common theoretical bases, the relative usage of these concepts varies over time, between countries and between sectors. In particular, DL is a notion that has seen a rapid growth in interest since the year 2000, but research remains largely restricted to the field of school education and of proportionally more interest to UK than US-based academics. Several scholars are increasingly going to great lengths to indicate that, in order to be ‘distributed’, leadership need not necessarily be widely ‘shared’ or ‘democratic’ and, in order to be effective, there is a need to balance different ‘hybrid configurations’ of practice. The paper highlights a number of areas for further attention, including three factors relating to the context of much work on DL (power and influence; organizational boundaries and context; and ethics and diversity), and three methodological and developmental challenges (ontology; research methods; and leadership development, reward and recognition). It is concluded that descriptive and normative perspectives which dominate the literature should be supplemented by more critical accounts which recognize the rhetorical and discursive significance of DL in (re)constructing leader– follower identities, mobilizing collective engagement and challenging or reinforcing traditional forms of organization.", "title": "" }, { "docid": "38a4b3c515ee4285aa88418b30937c62", "text": "Docker containers have recently become a popular approach to provision multiple applications over shared physical hosts in a more lightweight fashion than traditional virtual machines. This popularity has led to the creation of the Docker Hub registry, which distributes a large number of official and community images. In this paper, we study the state of security vulnerabilities in Docker Hub images. We create a scalable Docker image vulnerability analysis (DIVA) framework that automatically discovers, downloads, and analyzes both official and community images on Docker Hub. Using our framework, we have studied 356,218 images and made the following findings: (1) both official and community images contain more than 180 vulnerabilities on average when considering all versions; (2) many images have not been updated for hundreds of days; and (3) vulnerabilities commonly propagate from parent images to child images. 
These findings demonstrate a strong need for more automated and systematic methods of applying security updates to Docker images and our current Docker image analysis framework provides a good foundation for such automatic security update.", "title": "" }, { "docid": "c097c63ed1b33fd0e1a2432ec5ac82cb", "text": "In our previous research we developed a SmartShoe-a shoe based physical activity monitor that can reliably differentiate between major postures and activities, accurately estimate energy expenditure of individuals, measure temporal gait parameters, and estimate body weights. In this paper we present the development of the next stage of the SmartShoe evolution-SmartStep, a physical activity monitor that is fully integrated into an insole, maximizing convenience and social acceptance of the monitor. Encapsulating the sensors, Bluetooth Low Energy wireless interface and the energy source within an assembly repeatedly loaded with high forces created during ambulation presented new design challenges. In this preliminary study we tested the ability of the SmartStep to measure the pressure differences between static weight-bearing and non-weight-bearing activities (such as no load vs. sitting vs. standing) as well as capture pressure variations during walking. We also measured long-term stability of the sensors and insole assembly under cyclic loading in a mechanical testing system.", "title": "" }, { "docid": "fb491edc5d60f68cd584072c846d9e69", "text": "Stress is the root cause of many diseases and unhealthy behaviors. Being able to monitor when and why a person is stressed could inform personal stress management as well as interventions when necessary. In this work, we present StressAware, an application on the Amulet wearable platform that classifies the stress level (low, medium, high) of individuals continuously and in real time using heart rate (HR) and heart-rate variability (HRV) data from a commercial heart-rate monitor. We developed our stress-detection model using a Support Vector Machine (SVM). We trained and tested our model using data from three sources and had the following preliminary results: PhysioNet, a public physiological database (94.5% accurate with 10-fold cross validation), a field study (100% accurate with 10-fold cross validation) and a lab study (64.3% accurate with leave-one-out cross-validation). Testing the StressAware app revealed a projected battery life of up to 12 days. Also, the usability feedback from subjects showed that the Amulet has a potential to be used by people for monitoring their stress levels. The results are promising, indicating that the app may be used for stress detection, and eventually for the development of stress-related intervention that could improve the health of individuals.", "title": "" }, { "docid": "9869ef00a0f7237d6e57fa5afe390521", "text": "Land-use classification using remote sensing images covers a wide range of applications. With more detailed spatial and textural information provided in very high resolution (VHR) remote sensing images, a greater range of objects and spatial patterns can be observed than ever before. This offers us a new opportunity for advancing the performance of land-use classification. In this paper, we first introduce an effective midlevel visual elementsoriented land-use classification method based on “partlets,” which are a library of pretrained part detectors used for midlevel visual elements discovery. 
Taking advantage of midlevel visual elements rather than low-level image features, a partlets-based method represents images by computing their responses to a large number of part detectors. As the number of part detectors grows, a main obstacle to the broader application of this method is its computational cost. To address this problem, we next propose a novel framework to train coarse-to-fine shared intermediate representations, which are termed “sparselets,” from a large number of pretrained part detectors. This is achieved by building a single-hidden-layer autoencoder and a single-hidden-layer neural network with an L0-norm sparsity constraint, respectively. Comprehensive evaluations on a publicly available 21-class VHR landuse data set and comparisons with state-of-the-art approaches demonstrate the effectiveness and superiority of this paper.", "title": "" }, { "docid": "313d17a94bca05c96192db4079d06362", "text": "Power transformers are one of the most expensive elements in a power system and their failure due to any reason is a very bad event also to maintain & rectify the problems related to insulation failure become more expensive. Power transformers are mainly involved in the energy transmission and distribution. Unplanned power transformer outages have a considerable economics impact on the operation of electric power network. To have reliable operation of transformers, it is necessary to identify problems at an early stage before a catastrophic failure occurs. In spite of corrective and predictive maintenance, preventive maintenance of power transformer is gaining due importance in the modern era and it must be taken into account to obtain the highest reliability of power apparatus such as power transformers. The well known preventive maintenance techniques such as DGA, conditioning monitoring, partial discharge measurement, effect of moisture, Paper insulation, Oil insulation, mechanical strength, thermal conductivity, copper sulphur, bubble effect, drying process, thermal degradation, fault diagnosis, etc. are performed on transformer for a specific type of problem. There is a universal requirement for up-to-date bibliographic information on insulation system of power transformer in the academic, research and engineering communities. The same topic was earlier updated in 2008 and it is observed that many new areas have been identified by the researchers such as bubble effect, copper sulphur, mechanical strength, etc. This article lists relevant references grouped according to the topics described above. The research scholars can found all the research which have been carried out till date in this paper.", "title": "" }, { "docid": "7f69fbcda9d6ee11d5cc1591a88b6403", "text": "Voice conversion is defined as modifying the speech signal of one speaker (source speaker) so that it sounds as if it had been pronounced by a different speaker (target speaker). This paper describes a system for efficient voice conversion. A novel mapping function is presented which associates the acoustic space of the source speaker with the acoustic space of the target speaker. The proposed system is based on the use of a Gaussian Mixture Model, GMM, to model the acoustic space of a speaker and a pitch synchronous harmonic plus noise representation of the speech signal for prosodic modifications. The mapping function is a continuous parametric function which takes into account the probab ilistic classification provided by the mixture model (GMM). 
Evaluation by objective tests showed that the proposed system was able to reduce the perceptual distance between the source and target speaker by 70%. Formal listening tests also showed that 97% of the converted speech was judged to be spoken from the target speaker while maintaining high speech qua lity.", "title": "" }, { "docid": "bcbba4f99e33ac0daea893e280068304", "text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 2.1).", "title": "" }, { "docid": "23b0756f3ad63157cff70d4973c9e6bd", "text": "A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. 
To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matter-port3D Simulator - a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings - the Room-to-Room (R2R) dataset1.", "title": "" }, { "docid": "9f52ee95148490555c10f699678b640d", "text": "Prior research indicates that Facebook usage predicts declines in subjective well-being over time. How does this come about? We examined this issue in 2 studies using experimental and field methods. In Study 1, cueing people in the laboratory to use Facebook passively (rather than actively) led to declines in affective well-being over time. Study 2 replicated these findings in the field using experience-sampling techniques. It also demonstrated how passive Facebook usage leads to declines in affective well-being: by increasing envy. Critically, the relationship between passive Facebook usage and changes in affective well-being remained significant when controlling for active Facebook use, non-Facebook online social network usage, and direct social interactions, highlighting the specificity of this result. These findings demonstrate that passive Facebook usage undermines affective well-being.", "title": "" }, { "docid": "bbb82578991de4bf3195a4c94fa218cf", "text": "According to the necessary function of Under Voltage Lock Out (UVLO) in DC-DC power management systems, an improved UVLO circuit is proposed. The circuit realizes stabilization of parameters such as threshold point voltage, hysteretic range of the comparator etc. without utilizing an extra bandgap reference voltage source for comparison. The UVLO circuit is implemented in CSMC 0.5µm BCD process. Hspice simulation results show that the UVLO circuit presents advantages such as simple topology, sensitive response, low temperature draft, low power consumption.", "title": "" } ]
scidocsrr
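The positive passages in the record above reason about ε-differentially private mechanisms and about how privacy loss accumulates when many such mechanisms are composed. As a rough illustration of the objects being discussed, and not code from the cited papers, the sketch below implements the standard Laplace mechanism together with a naive linear privacy-budget accountant; the function names, the sensitivity of 1.0, and the per-query ε of 0.1 are assumptions made for the example, and the linear accounting deliberately ignores the sharper expectation-based bounds (the O(ε²) expected loss per mechanism) that the passage is about.

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity / epsilon.

    This is the textbook epsilon-differentially private mechanism; the names
    here are illustrative and not taken from the cited papers.
    """
    scale = sensitivity / epsilon
    # Laplace(0, scale) noise as the difference of two iid Exp(1) draws.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

def answer_queries(values, sensitivity=1.0, epsilon_per_query=0.1):
    """Answer each query privately and track the naive (linear) cumulative
    privacy loss: k releases at epsilon each cost at most k * epsilon under
    basic composition. Sharper accountants would charge less in expectation."""
    spent = 0.0
    noisy = []
    for v in values:
        noisy.append(laplace_mechanism(v, sensitivity, epsilon_per_query))
        spent += epsilon_per_query
    return noisy, spent

if __name__ == "__main__":
    answers, budget_used = answer_queries([42.0, 17.0, 5.0])
    print(answers, "total epsilon under naive composition:", budget_used)
```

Under this simple accounting the three releases report a total ε of 0.3; the point of the passage is that the expected loss actually incurred can be bounded much more tightly than this worst-case sum.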
f733b732bd8b740bc2393f195e313bc4
Weakly Supervised Learning for Hedge Classification in Scientific Literature
[ { "docid": "98796507d092548983120639417aa800", "text": "Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the kappa statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement-or the equivalent F-measure-may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.", "title": "" }, { "docid": "3ac2f2916614a4e8f6afa1c31d9f704d", "text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.", "title": "" } ]
[ { "docid": "18f9fff4bd06f28cd39c97ff40467d0f", "text": "Smart agriculture is an emerging concept, because IOT sensors are capable of providing information about agriculture fields and then act upon based on the user input. In this Paper, it is proposed to develop a Smart agriculture System that uses advantages of cutting edge technologies such as Arduino, IOT and Wireless Sensor Network. The paper aims at making use of evolving technology i.e. IOT and smart agriculture using automation. Monitoring environmental conditions is the major factor to improve yield of the efficient crops. The feature of this paper includes development of a system which can monitor temperature, humidity, moisture and even the movement of animals which may destroy the crops in agricultural field through sensors using Arduino board and in case of any discrepancy send a SMS notification as well as a notification on the application developed for the same to the farmer’s smartphone using Wi-Fi/3G/4G. The system has a duplex communication link based on a cellularInternet interface that allows for data inspection and irrigation scheduling to be programmed through an android application. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.", "title": "" }, { "docid": "87a319361ad48711eff002942735258f", "text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned", "title": "" }, { "docid": "d0ad2b6a36dce62f650323cb5dd40bc9", "text": "If two hospitals are providing identical services in all respects, except for the brand name, why are customers willing to pay more for one hospital than the other? That is, the brand name is not just a name, but a name that contains value (brand equity). Brand equity is the value that the brand name endows to the product, such that consumers are willing to pay a premium price for products with the particular brand name. Accordingly, a company needs to manage its brand carefully so that its brand equity does not depreciate. Although measuring brand equity is important, managers have no brand equity index that is psychometrically robust and parsimonious enough for practice. Indeed, index construction is quite different from conventional scale development. Moreover, researchers might still be unaware of the potential appropriateness of formative indicators for operationalizing particular constructs. Toward this end, drawing on the brand equity literature and following the index construction procedure, this study creates a brand equity index for a hospital. The results reveal a parsimonious five-indicator brand equity index that can adequately capture the full domain of brand equity. 
This study also illustrates the differences between index construction and scale development.", "title": "" }, { "docid": "36460eda2098bdcf3810828f54ee7d2b", "text": "[This corrects the article on p. 662 in vol. 60, PMID: 27729694.].", "title": "" }, { "docid": "0224c1abc7084ce3e68f1c6ceb5d5ece", "text": "A useful way of understanding personality traits is to examine the motivational nature of a trait because motives drive behaviors and influence attitudes. In two cross-sectional, self-report studies (N=942), we examined the relationships between fundamental social motives and dark personality traits (i.e., narcissism, psychopathy, sadism, spitefulness, and Machiavellianism) and examined the role of childhood socio-ecological conditions (Study 2 only). For example, we found that Machiavellianism and psychopathy were negatively associated with motivations that involved developing and maintaining good relationships with others. Sex differences in the darker aspects of personality were a function of, at least in part, fundamental social motives such as the desire for status. Fundamental social motives mediated the associations that childhood socio-ecological conditions had with the darker aspects of personality. Our results showed how motivational tendencies in men and women may provide insights into alternative life history strategies reflected in dark personality traits.", "title": "" }, { "docid": "5ac2930a623b542cf8ebbea6314c5ef1", "text": "BACKGROUND\nTelomerase continues to generate substantial attention both because of its pivotal roles in cellular proliferation and aging and because of its unusual structure and mechanism. By replenishing telomeric DNA lost during the cell cycle, telomerase overcomes one of the many hurdles facing cellular immortalization. Functionally, telomerase is a reverse transcriptase, and it shares structural and mechanistic features with this class of nucleotide polymerases. Telomerase is a very unusual reverse transcriptase because it remains stably associated with its template and because it reverse transcribes multiple copies of its template onto a single primer in one reaction cycle.\n\n\nSCOPE OF REVIEW\nHere, we review recent findings that illuminate our understanding of telomerase. Even though the specific emphasis is on structure and mechanism, we also highlight new insights into the roles of telomerase in human biology.\n\n\nGENERAL SIGNIFICANCE\nRecent advances in the structural biology of telomerase, including high resolution structures of the catalytic subunit of a beetle telomerase and two domains of a ciliate telomerase catalytic subunit, provide new perspectives into telomerase biochemistry and reveal new puzzles.", "title": "" }, { "docid": "b2e5a2395641c004bdc84964d2528b13", "text": "We propose a novel probabilistic model for visual question answering (Visual QA). The key idea is to infer two sets of embeddings: one for the image and the question jointly and the other for the answers. The learning objective is to learn the best parameterization of those embeddings such that the correct answer has higher likelihood among all possible answers. In contrast to several existing approaches of treating Visual QA as multi-way classification, the proposed approach takes the semantic relationships (as characterized by the embeddings) among answers into consideration, instead of viewing them as independent ordinal numbers. Thus, the learned embedded function can be used to embed unseen answers (in the training dataset). 
These properties make the approach particularly appealing for transfer learning for open-ended Visual QA, where the source dataset on which the model is learned has limited overlapping with the target dataset in the space of answers. We have also developed large-scale optimization techniques for applying the model to datasets with a large number of answers, where the challenge is to properly normalize the proposed probabilistic models. We validate our approach on several Visual QA datasets and investigate its utility for transferring models across datasets. The empirical results have shown that the approach performs well not only on in-domain learning but also on transfer learning.", "title": "" }, { "docid": "9e752ed6942afd640c7d521beaef9bc8", "text": "Every day, security professionals face off against adversaries who don't play by the rules. Traditional information security education programs further compound the problem by forcing students to behave in a flawlessly ethical manner. As an alternative, this article suggests techniques for fostering creativity and an adversary mindset in information security students through carefully structured classroom cheating exercises.", "title": "" }, { "docid": "fedbb3495c97c6341762b26c06307ec4", "text": "A robot testbed for writing Chinese and Japanese calligraphy characters is presented. Single strokes of the calligraphy characters are represented in a database and initialized with a scanned reference image and a manually chosen initial drawing spline. A learning procedure uses visual feedback to analyze each new iteration of the drawn stroke and updates the drawing spline such that every subsequent drawn stroke becomes more similar to the reference image. The learning procedure can be performed either in simulation, using a simple brush model to create simulated images of the strokes, or with a real robot arm equipped with a calligraphy brush and a camera that captures images of the drawn strokes. Results from both simulations and experiments with the robot arm are presented.", "title": "" }, { "docid": "e1096df0a86d37c11ed4a31d9e67ac6e", "text": "............................................................................................................................................... 4", "title": "" }, { "docid": "a56f197cdcf2dd02e1418268b611c345", "text": "Information visualization is traditionally viewed as a tool for data exploration and hypothesis formation. Because of its roots in scientific reasoning, visualization has traditionally been viewed as an analytical tool for sensemaking. In recent years, however, both the mainstreaming of computer graphics and the democratization of data sources on the Internet have had important repercussions in the field of information visualization. With the ability to create visual representations of data on home computers, artists and designers have taken matters into their own hands and expanded the conceptual horizon of infovis as artistic practice. 
This paper presents a brief survey of projects in the field of artistic information visualization and a preliminary examination of how artists appropriate and repurpose “scientific” techniques to create pieces that actively guide analytical reasoning and encourage a contextualized reading of their subject matter.", "title": "" }, { "docid": "2f39226a694311b793024210092fab37", "text": "n this paper, we introduce an embodied pedagogical approach for learning computational concepts, utilizing computational practices, and developing computational perspectives. During a five-week pilot, a group of students spent after-school time learning the basic elements of dance and then using them to program three-dimensional characters that could perform. Throughout the pilot, we found students consistently standing up in front of their computers and using their bodies to think through the actuation of their characters. Preliminary results suggest that designing a virtual-physical dance performance is a motivating and engaging social context in which to introduce students, especially girls, to alternative applications in computing.", "title": "" }, { "docid": "fa9abc74d3126e0822e7e815e135e845", "text": "Semantic interaction offers an intuitive communication mechanism between human users and complex statistical models. By shielding the users from manipulating model parameters, they focus instead on directly manipulating the spatialization, thus remaining in their cognitive zone. However, this technique is not inherently scalable past hundreds of text documents. To remedy this, we present the concept of multi-model semantic interaction, where semantic interactions can be used to steer multiple models at multiple levels of data scale, enabling users to tackle larger data problems. We also present an updated visualization pipeline model for generalized multi-model semantic interaction. To demonstrate multi-model semantic interaction, we introduce StarSPIRE, a visual text analytics prototype that transforms user interactions on documents into both small-scale display layout updates as well as large-scale relevancy-based document selection.", "title": "" }, { "docid": "65685bafe88b596530d4280e7e75d1c4", "text": "The supernodal method for sparse Cholesky factorization represents the factor L as a set of supernodes, each consisting of a contiguous set of columns of L with identical nonzero pattern. A conventional supernode is stored as a dense submatrix. While this is suitable for sparse Cholesky factorization where the nonzero pattern of L does not change, it is not suitable for methods that modify a sparse Cholesky factorization after a low-rank change to A (an update/downdate, Ā = A ± WWT). Supernodes merge and split apart during an update/downdate. Dynamic supernodes are introduced which allow a sparse Cholesky update/downdate to obtain performance competitive with conventional supernodal methods. A dynamic supernodal solver is shown to exceed the performance of the conventional (BLAS-based) supernodal method for solving triangular systems. 
These methods are incorporated into CHOLMOD, a sparse Cholesky factorization and update/downdate package which forms the basis of x = A\\b MATLAB when A is sparse and symmetric positive definite.", "title": "" }, { "docid": "fe8c03f1dc9cac7ee0215ee2a6979d5c", "text": "We describe the development of radiation therapy for lymphoma from extended field radiotherapy of the past to modern conformal treatment with involved site radiation therapy based on advanced imaging, three-dimensional treatment planning and advanced treatment delivery techniques. Today, radiation therapy is part of the multimodality treatment of lymphoma, and the irradiated tissue volume is much smaller than before, leading to highly significant reductions in the risks of long-term complications.", "title": "" }, { "docid": "7f14c3551bee4de8590db2bc93dfb5cd", "text": "Predicting student performance, one of the tasks in educational data mining, has been taken into account recently [Toscher and Jahrer 2010; Yu et al. 2010; Cetintas et al. 2010; Thai-Nghe et al. 2011]. It was selected as a challenge task for the KDD Cup 2010 [Koedinger et al. 2010]. Concretely, predicting student performance is the task where we would like to know how the students learn (e.g. generally or narrowly), how quickly or slowly they adapt to new problems or if it is possible to infer the knowledge requirements to solve the problems directly from student performance data [Corbett and Anderson 1995; Feng et al. 2009], and eventually, we would like to know whether the students perform the tasks (exercises) correctly (or with some levels of certainty). As discussed in Cen et al. [2006], an improved model for predicting student performance could save millions of hours of students’ time and effort in learning algebra. In that time, students could move to other specific fields of their study or doing other things they enjoy. From educational data mining point of view, an accurate and reliable model in predicting student performance may replace some current standardized tests, and thus, reducing the pressure, time, as well as effort on “teaching and learning for examinations” [Feng et al. 2009; Thai-Nghe et al. 2011]. To address the problem of predicting student performance, many papers have been published but most of them are based on traditional classification/regression techniques [Cen et al. 2006; Feng et al. 2009; Yu et al. 2010; Pardos and Heffernan 2010]. Many other works can be found in Romero et al. [2010]. Recently, [Thai-Nghe et al. 2010; Toscher and Jahrer 2010; Thai-Nghe et al. 2011] have proposed using recommendation techniques, e.g. matrix factorization, for predicting student performance. The authors have shown that predicting student performance can be considered as rating prediction since the student, task, and performance would become user, item, and rating in recommender systems, respectively. We know that learning and problem-solving are complex cognitive and affective processes that are different to shopping and other e-commerce transactions, however, as discussed in Thai-Nghe et al. [2011], the factorization models in recommender systems are implicitly able to encode latent factors of students and tasks (e.g. 
“slip” and “guess”), and especially in case where we do not have enough meta data about students and tasks (or even we have not enough background knowledge of the domain), this mapping is a reasonable approach.", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "3feeb5cef9226c4458229d3d7c1bcf44", "text": "In the European Union, more than 400,000 individuals are homeless on any one night and more than 600,000 are homeless in the USA. The causes of homelessness are an interaction between individual and structural factors. Individual factors include poverty, family problems, and mental health and substance misuse problems. The availability of low-cost housing is thought to be the most important structural determinant for homelessness. Homeless people have higher rates of premature mortality than the rest of the population, especially from suicide and unintentional injuries, and an increased prevalence of a range of infectious diseases, mental disorders, and substance misuse. High rates of non-communicable diseases have also been described with evidence of accelerated ageing. Although engagement with health services and adherence to treatments is often compromised, homeless people typically attend the emergency department more often than non-homeless people. We discuss several recommendations to improve the surveillance of morbidity and mortality in homeless people. Programmes focused on high-risk groups, such as individuals leaving prisons, psychiatric hospitals, and the child welfare system, and the introduction of national and state-wide plans that target homeless people are likely to improve outcomes.", "title": "" }, { "docid": "f7deaa9b65be6b8de9f45fb0dec3879d", "text": "This paper reports the first 8kV+ ESD-protected SP10T transmit/receive (T/R) antenna switch for quad-band (0.85/0.9/1.8/1.9-GHz) GSM and multiple W-CDMA smartphones fabricated in an 180-nm SOI CMOS. A novel physics-based switch-ESD co-design methodology is applied to ensure full-chip optimization for a SP10T test chip and its ESD protection circuit simultaneously.", "title": "" } ]
scidocsrr
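One of the positive passages in the record above walks through a concrete semi-supervised procedure: train a naive Bayes classifier on the small labeled set, probabilistically label the large unlabeled pool, retrain on everything, and iterate, optionally down-weighting the unlabeled documents. The sketch below is a loose, library-based rendering of that loop rather than the authors' implementation: it leans on scikit-learn's CountVectorizer and MultinomialNB, runs a fixed number of iterations instead of testing convergence, and the confidence-scaled weighting with unlabeled_weight=0.3 is an assumption made for illustration.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def em_naive_bayes(labeled_docs, labels, unlabeled_docs,
                   unlabeled_weight=0.3, n_iter=5):
    """EM-style semi-supervised naive Bayes text classification (sketch)."""
    vec = CountVectorizer()
    X_lab = vec.fit_transform(labeled_docs)
    X_unl = vec.transform(unlabeled_docs)
    y_lab = np.asarray(labels)

    # Initial classifier from the labeled documents only.
    clf = MultinomialNB()
    clf.fit(X_lab, y_lab)

    for _ in range(n_iter):
        # E-step: probabilistically label the unlabeled pool.
        proba = clf.predict_proba(X_unl)
        pseudo = clf.classes_[proba.argmax(axis=1)]
        conf = proba.max(axis=1)

        # M-step: retrain on labeled + unlabeled documents, down-weighting
        # the unlabeled ones by their confidence and a global factor.
        X_all = vstack([X_lab, X_unl])
        y_all = np.concatenate([y_lab, pseudo])
        w_all = np.concatenate([np.ones(len(y_lab)), unlabeled_weight * conf])
        clf = MultinomialNB()
        clf.fit(X_all, y_all, sample_weight=w_all)

    return vec, clf
```

A held-out sentence would then be labeled (for example as hedged or non-hedged, in the setting of the query above) with clf.predict(vec.transform([sentence])); the weight and iteration count are illustrative choices, not values from the cited work.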
970df210608bbc05dab06e6c11f79770
Geometry-Based Camera Calibration Using Five-Point Correspondences From a Single Image
[ { "docid": "44ba90b77cb6bc324fbeebe096b93cd0", "text": "With the growth of fandom population, a considerable amount of broadcast sports videos have been recorded, and a lot of research has focused on automatically detecting semantic events in the recorded video to develop an efficient video browsing tool for a general viewer. However, a professional sportsman or coach wonders about high level semantics in a different perspective, such as the offensive or defensive strategy performed by the players. Analyzing tactics is much more challenging in a broadcast basketball video than in other kinds of sports videos due to its complicated scenes and varied camera movements. In this paper, by developing a quadrangle candidate generation algorithm and refining the model fitting score, we ameliorate the court-based camera calibration technique to be applicable to broadcast basketball videos. Player trajectories are extracted from the video by a CamShift-based tracking method and mapped to the real-world court coordinates according to the calibrated results. The player position/trajectory information in the court coordinates can be further analyzed for professional-oriented applications such as detecting wide open event, retrieving target video clips based on trajectories, and inferring implicit/explicit tactics. Experimental results show the robustness of the proposed calibration and tracking algorithms, and three practicable applications are introduced to address the applicability of our system.", "title": "" } ]
[ { "docid": "d4488867e774e28abc2b960a9434d052", "text": "Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new “embodied” visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.", "title": "" }, { "docid": "a91c43ef77f03672011d0353f00a1c5d", "text": "Presence, the experience of ‘being there’ in a mediated environment, has become closely associated with VR and other advanced media. Different types of presence are discussed, including physical presence, social presence, and co-presence. Fidelity-based approaches to presence research emphasize the fact that as media become increasingly interactive, perceptually realistic, and immersive, the experience of presence becomes more convincing. In addition, the ecological-cultural approach is described, pointing out the importance of the possibility of action in mediated environments, as well as the role that a common cultural framework plays in engendering a sense of presence. In particular for multi-user or collaborative virtual environments (CVEs), processes of negotiation and community creation need to be supported by the CVE design to enable communication and the creation of a social context within the CVE.", "title": "" }, { "docid": "2697b5a4fd32edccfd95f4abe3d2a280", "text": "Autonomous unpowered flight is a challenge for control and guidance systems: all the energy the aircraft might use during flight has to be harvested directly from the atmosphere. We investigate the design of an algorithm that optimizes the closed-loop control of a glider’s bank and sideslip angles, while flying in the lower convective layer of the atmosphere in order to increase its mission endurance. Using a Reinforcement Learning approach, we demonstrate the possibility for real-time adaptation of the glider’s behaviour to the time-varying and noisy conditions associated with thermal soaring flight. Our approach is online, data-based and model-free, hence avoids the pitfalls of aerological and aircraft modelling and allow us to deal with uncertainties and non-stationarity. Additionally, we put a particular emphasis on keeping low computational requirements in order to make on-board execution feasible. This article presents the stochastic, time-dependent aerological model used for simulation, together with a standard aircraft model. Then we introduce an adaptation of a Q-learning algorithm and demonstrate its ability to control the aircraft and improve its endurance by exploiting updrafts in non-stationary scenarios. 
Mots-clés : Reinforcement learning control, Adaptive control applications, Adaptation and learning in physical agents, UAVs.", "title": "" }, { "docid": "64e93cfb58b7cf331b4b74fadb4bab74", "text": "Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use through performing a row-based, approximate matrix factorization, and which loads only essential data to each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n2) to O(np/m), and improves computation time to O(np2/m). Empirical study shows PSVM to be effective. PSVM Open Source is available for download at http://code.google.com/p/psvm/.", "title": "" }, { "docid": "ff0395e9146ab7a3416cf911f42fcf7f", "text": "Financial Time Series analysis and prediction is one of the interesting areas in which past data could be used to anticipate and predict data and information about future. There are many artificial intelligence approaches used in the prediction of time series, such as Artificial Neural Networks (ANN) and Hidden Markov Models (HMM). In this paper HMM and HMM approaches for predicting financial time series are presented. ANN and HMM are used to predict time series that consists of highest and lowest Forex index series as input variable. Both of ANN and HMM are trained on the past dataset of the chosen currencies (such as EURO/ USD which is used in this paper). The trained ANN and HMM are used to search for the variable of interest behavioral data pattern from the past dataset. The obtained results was compared with real values from Forex (Foreign Exchange) market database [1]. The power and predictive ability of the two models are evaluated on the basis of Mean Square Error (MSE). The Experimental results obtained are encouraging, and it demonstrate that ANN and HMM can closely predict the currency market, with a small different in predicting performance.", "title": "" }, { "docid": "3cdbc153caaafcea54228b0c847aa536", "text": "BACKGROUND\nAlthough the use of filling agents for soft-tissue augmentation has increased worldwide, most consensus statements do not distinguish between ethnic populations. There are, however, significant differences between Caucasian and Asian faces, reflecting not only cultural disparities, but also distinctive treatment goals. Unlike aesthetic patients in the West, who usually seek to improve the signs of aging, Asian patients are younger and request a broader range of indications.\n\n\nMETHODS\nMembers of the Asia-Pacific Consensus group-comprising specialists from the fields of dermatology, plastic surgery, anatomy, and clinical epidemiology-convened to develop consensus recommendations for Asians based on their own experience using cohesive polydensified matrix, hyaluronic acid, and calcium hydroxylapatite fillers.\n\n\nRESULTS\nThe Asian face demonstrates differences in facial structure and cosmetic ideals. Improving the forward projection of the \"T zone\" (i.e., forehead, nose, cheeks, and chin) forms the basis of a safe and effective panfacial approach to the Asian face. 
Successful augmentation may be achieved with both (1) high- and low-viscosity cohesive polydensified matrix/hyaluronic acid and (2) calcium hydroxylapatite for most indications, although some constraints apply.\n\n\nCONCLUSION\nThe Asia-Pacific Consensus recommendations are the first developed specifically for the use of fillers in Asian populations.\n\n\nCLINCIAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" }, { "docid": "174d5b0f08fe4f9a48eb8e10b3b400b3", "text": "As mobile malware have increased in number and sophistication, it has become pertinent for users to have tools that can inform them of potentially malicious applications. To fulfill this need, we develop a cloud-based malware analysis service called ScanMe Mobile, for the Android platform. The objective of this service is to provide users with detailed information about Android Application Package (APK) files before installing them on their devices. With ScanMe Mobile, users are able to upload APK files from their device SD card, scan the APK in the malware detection system that could be deployed in the cloud, compile a comprehensive report, and store or share the report by publishing it to the website. ScanMe Mobile works by running the APK in a virtual sandbox to generate permission data, and analyzes the result in the machine learning detection system. Through our experimental results, we demonstrate that the proposed system can effectively detect malware on the Android platform.", "title": "" }, { "docid": "d3d471b6b377d8958886a2f6c89d5061", "text": "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.", "title": "" }, { "docid": "ae7347af720ab76ab098a62b3236c17c", "text": "We propose discriminative adversarial networks (DAN) for semi-supervised learning and loss function learning. Our DAN approach builds upon generative adversarial networks (GANs) and conditional GANs but includes the key differentiator of using two discriminators instead of a generator and a discriminator. DAN can be seen as a framework to learn loss functions for predictors that also implements semi-supervised learning in a straightforward manner. We propose instantiations of DAN for two different prediction tasks: classification and ranking. Our experimental results on three datasets of different tasks demonstrate that DAN is a promising framework for both semi-supervised learning and learning loss functions for predictors. For all tasks, the semi-supervised capability of DAN can significantly boost the predictor performance for small labeled sets with minor architecture changes across tasks. 
Moreover, the loss functions automatically learned by DANs are very competitive and usually outperform the standard pairwise and negative log-likelihood loss functions for semi-supervised learning.", "title": "" }, { "docid": "59a49feef4e3a79c5899fede208a183c", "text": "This study proposed and tested a model of consumer online buying behavior. The model posits that consumer online buying behavior is affected by demographics, channel knowledge, perceived channel utilities, and shopping orientations. Data were collected by a research company using an online survey of 999 U.S. Internet users, and were cross-validated with other similar national surveys before being used to test the model. Findings of the study indicated that education, convenience orientation, Página 1 de 20 Psychographics of the Consumers in Electronic Commerce 11/10/01 http://www.ascusc.org/jcmc/vol5/issue2/hairong.html experience orientation, channel knowledge, perceived distribution utility, and perceived accessibility are robust predictors of online buying status (frequent online buyer, occasional online buyer, or non-online buyer) of Internet users. Implications of the findings and directions for future research were discussed.", "title": "" }, { "docid": "30c5f12ecaec4f385c2be3bb8ef8eb1e", "text": "Human has the ability to roughly estimate the distance and size of an object because of the stereo vision of human's eyes. In this project, we proposed to utilize stereo vision system to accurately measure the distance and size (height and width) of object in view. Object size identification is very useful in building systems or applications especially in autonomous system navigation. Many recent works have started to use multiple vision sensors or cameras for different type of application such as 3D image constructions, occlusion detection and etc. Multiple cameras system has becoming more popular since cameras are now very cheap and easy to deploy and utilize. The proposed measurement system consists of object detection on the stereo images and blob extraction and distance and size calculation and object identification. The system also employs a fast algorithm so that the measurement can be done in real-time. The object measurement using stereo camera is better than object detection using a single camera that was proposed in many previous research works. It is much easier to calibrate and can produce a more accurate results.", "title": "" }, { "docid": "15dc2cd497f782d16311cd0e658e2e90", "text": "We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must first acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacrificing much flexibility. 
It subsumes many previous proposals, but encourages better information-hiding and encapsulation.\", \"title\": \"\" }, { \"docid\": \"22981a1731d35f35241271d14b85df31\", \"text\": \"New generations of distributed systems are opening novel perspectives for logic programming (LP): on the one hand, service-oriented architectures represent nowadays the standard approach for distributed systems engineering; on the other hand, pervasive systems mandate for situated intelligence. In this paper we introduce the notion of Logic Programming as a Service (LPaaS) as a means to address the needs of pervasive intelligent systems through logic engines exploited as a distributed service. First we define the abstract architectural model by re-interpreting classical LP notions in the new context; then we elaborate on the nature of LP interpreted as a service by describing the basic LPaaS interface. Finally, we show how LPaaS works in practice by discussing its implementation in terms of distributed tuProlog engines, accounting for basic issues such as interoperability and configurability.\", \"title\": \"\" }, { \"docid\": \"1c16d6b5072283cfc9301f6ae509ede1\", \"text\": \"This paper introduces a model of collective creativity that explains how the locus of creative problem solving shifts, at times, from the individual to the interactions of a collective. The model is grounded in observations, interviews, informal conversations, and archival data gathered in intensive field studies of work in professional service firms. The evidence suggests that although some creative solutions can be seen as the products of individual insight, others should be regarded as the products of a momentary collective process. Such collective creativity reflects a qualitative shift in the nature of the creative process, as the comprehension of a problematic situation and the generation of creative solutions draw from—and reframe—the past experiences of participants in ways that lead to new and valuable insights. This research investigates the origins of such moments, and builds a model of collective creativity that identifies the precipitating roles played by four types of social interaction: help seeking, help giving, reflective reframing, and reinforcing. Implications of this research include shifting the emphasis in research and management of creativity from identifying and managing creative individuals to understanding the social context and developing interactive approaches to creativity, and from a focus on relatively constant contextual variables to the alignment of fluctuating variables and their precipitation of momentary phenomena.\", \"title\": \"\" }, { \"docid\": \"6df61e330f6b71c4ef136e3a2220a5e2\", \"text\": \"In recent years, we have seen significant advancement in technologies to bring about smarter cities worldwide. The interconnectivity of things is the key enabler in these initiatives. An important building block is smart mobility, and it revolves around resolving land transport challenges in cities with dense populations. A transformative direction that global stakeholders are looking into is autonomous vehicles and the transport infrastructure to interconnect them to the traffic management system (that is, vehicle to infrastructure connectivity), as well as to communicate with one another (that is, vehicle to vehicle connectivity) to facilitate better awareness of road conditions. A number of countries have also started to take autonomous vehicles to the roads to conduct trials and are moving towards the plan for larger scale deployment. 
However, an important consideration in this space is the security of the autonomous vehicles. There has been an increasing interest in the attacks and defences of autonomous vehicles as these vehicles are getting ready to go onto the roads. In this paper, we aim to organize and discuss the various methods of attacking and defending autonomous vehicles, and propose a comprehensive attack and defence taxonomy to better categorize each of them. Through this work, we hope that it provides a better understanding of how targeted defences should be put in place for targeted attacks, and for technologists to be more mindful of the pitfalls when developing architectures, algorithms and protocols, so as to realise a more secure infrastructure composed of dependable autonomous vehicles.", "title": "" }, { "docid": "dcbc5e4da91571b3026afab2e4bf5717", "text": "Unilateral lower limb prosthesis users display temporal, kinematic, and kinetic asymmetries between limbs while ascending and descending stairs. These asymmetries are due, in part, to the inability of current prosthetic devices to effectively mimic normal ankle function. The purpose of this study was to provide a comprehensive set of biomechanical data for able-bodied and unilateral transtibial amputee (TTA) ankle-foot systems for level-ground (LG), stair ascent (SA), and stair descent (SD), and to characterize deviations from normal performance associated with prosthesis use. Ankle joint kinematics, kinetics, torque-angle curves, and effective shapes were calculated for twelve able-bodied individuals and twelve individuals with TTA. The data from this study demonstrated the prosthetic limb can more effectively mimic the range of motion and power output of a normal ankle-foot during LG compared to SA and SD. There were larger differences between the prosthetic and able-bodied limbs during SA and SD, most evident in the torque-angle curves and effective shapes. These data can be used by persons designing ankle-foot prostheses and provide comparative data for assessment of future ankle-foot prosthesis designs.", "title": "" }, { "docid": "8fa1c0e07edf702d996b1d62afdfcb9f", "text": "In this paper, we present a general framework and new effective algorithms to detect the syntactic structures that are at a level higher than shots. In sports video, such high-level structures are often characterized by the specific views (e.g., pitching or serve) and the subsequent temporal transition patterns within each temporal structural segment. We have developed robust statistical models for detecting the domain-specific views with real-time performance and high accuracy. The models combine domain-independent global color filtering method and domain-specific constraints on the spatio-temporal properties of the segmented regions (e.g., locations, shapes, and motion of the objects). The real-time performance was accomplished by using efficient compressed-domain processing at the front end and computational expensive object-level processing on filtered candidates only. High-level events (e.g., strokes, net plays, baseline plays) are also detected after the view recognition. Results of such structure and event detection allow for efficient browsing and summarization of long sports video programs.", "title": "" }, { "docid": "fd8b0bcd163823194746426916e0e17b", "text": "Deep neural networks (DNNs) trained on large-scale datasets have recently achieved impressive improvements in face recognition. 
But a persistent challenge remains to develop methods capable of handling large pose variations that are relatively under-represented in training data. This paper presents a method for learning a feature representation that is invariant to pose, without requiring extensive pose coverage in training data. We first propose to generate non-frontal views from a single frontal face, in order to increase the diversity of training data while preserving accurate facial details that are critical for identity discrimination. Our next contribution is to seek a rich embedding that encodes identity features, as well as non-identity ones such as pose and landmark locations. Finally, we propose a new feature reconstruction metric learning to explicitly disentangle identity and pose, by demanding alignment between the feature reconstructions through various combinations of identity and pose features, which is obtained from two images of the same subject. Experiments on both controlled and in-the-wild face datasets, such as MultiPIE, 300WLP and the profile view database CFP, show that our method consistently outperforms the state-of-the-art, especially on images with large head pose variations.", "title": "" }, { "docid": "c700a8a3dc4aa81c475e84fc1bbf9516", "text": "A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect.", "title": "" } ]
scidocsrr
970b7b845290b332d7f5a4c5dcd1f039
Broadcast Cricket Highlight Treatment using a Quadrant Based Method for Global Motion
[ { "docid": "3284431912c05706fe61dfc56e2a38a5", "text": "In recent years social media have become indispensable tools for information dissemination, operating in tandem with traditional media outlets such as newspapers, and it has become critical to understand the interaction between the new and old sources of news. Although social media as well as traditional media have attracted attention from several research communities, most of the prior work has been limited to a single medium. In addition temporal analysis of these sources can provide an understanding of how information spreads and evolves. Modeling temporal dynamics while considering multiple sources is a challenging research problem. In this paper we address the problem of modeling text streams from two news sources - Twitter and Yahoo! News. Our analysis addresses both their individual properties (including temporal dynamics) and their inter-relationships. This work extends standard topic models by allowing each text stream to have both local topics and shared topics. For temporal modeling we associate each topic with a time-dependent function that characterizes its popularity over time. By integrating the two models, we effectively model the temporal dynamics of multiple correlated text streams in a unified framework. We evaluate our model on a large-scale dataset, consisting of text streams from both Twitter and news feeds from Yahoo! News. Besides overcoming the limitations of existing models, we show that our work achieves better perplexity on unseen data and identifies more coherent topics. We also provide analysis of finding real-world events from the topics obtained by our model.", "title": "" } ]
[ { "docid": "c2c8c8a40caea744e40eb7bf570a6812", "text": "OBJECTIVE\nTo investigate the association between single nucleotide polymorphisms (SNPs) of BARD1 gene and susceptibility of early-onset breast cancer in Uygur women in Xinjiang.\n\n\nMETHODS\nA case-control study was designed to explore the genotypes of Pro24Ser (C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene, detected by PCR-restriction fragment length polymorphism (PCR-RFLP) assay, in 144 early-onset breast cancer cases of Uygur women (≤ 40 years) and 136 cancer-free controls matched by age and ethnicity. The association between SNPs of BARD1 gene and risk of early-onset breast cancer in Uygur women was analyzed by unconditional logistic regression model.\n\n\nRESULTS\nEarly age at menarche, late age at first pregnancy, and positive family history of cancer may be important risk factors of early-onset breast cancer in Uygur women in Xinjiang. The frequencies of genotypes of Pro24Ser (C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene showed significant differences between the cancer cases and cancer-free controls (P < 0.05). Compared with wild-type genotype Pro24Ser CC, it showed a lower incidence of early-onset breast cancer in Uygur women with variant genotypes of Pro24Ser TT (OR = 0.117, 95%CI = 0.058 - 0.236), and dominance-genotype CT+TT (OR = 0.279, 95%CI = 0.157 - 0.494), or Arg378Ser CC (OR = 0.348, 95%CI = 0.145 - 0.834) and Val507Met AA(OR = 0.359, 95%CI = 0.167 - 0.774). Furthermore, SNPS in three polymorphisms would have synergistic effects on the risk of breast cancer. In addition, the SNP-SNP interactions of dominance-genotypes (CT+TT, GC+CC and GA+AA) showed a 52.1% lower incidence of early-onset breast cancer in Uygur women (OR = 0.479, 95%CI = 0.230 - 0.995). Stratified analysis indicated that the protective effect of carrying T variant genotype (CT/TT) in Pro24Ser and carrying C variant genotype (GC/CC) in Arg378Ser were more evident in subjects with early age at menarche and negative family history of cancer. With an older menarche age, the protective effect was weaker.\n\n\nCONCLUSIONS\nSNPs of Pro24Ser(C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene are associated with significantly decreased risk of early-onset breast cancer in Uygur women in Xinjiang. Early age at menarche and negative family history of cancer can enhance the protective effect of mutant allele.", "title": "" }, { "docid": "37ffd85867e68db6eadc244b2d20a403", "text": "This paper presents a distributed algorithm to direct evacuees to exits through arbitrarily complex building layouts in emergency situations. The algorithm finds the safest paths for evacuees taking into account predictions of the relative movements of hazards, such as fires, and evacuees. The algorithm is demonstrated on a 64 node wireless sensor network test platform and in simulation. 
The results of simulations are shown to demonstrate the navigation paths found by the algorithm.", "title": "" }, { "docid": "2ceb67df0c4b404540b625f93a1c62e5", "text": "AIM\nIn this paper, I call into question the widely-held assumption of a single, more or less unified paradigm of 'qualitative research' whose methodologies share certain epistemological and ontological characteristics, and explore the implications of this position for judgements about the quality of research studies.\n\n\nBACKGROUND\nAfter a quarter of a century of debate in nursing about how best to judge the quality of qualitative research, we appear to be no closer to a consensus, or even to deciding whether it is appropriate to try to achieve a consensus. The literature on this issue can be broadly divided into three positions: those writers who wish qualitative research to be judged according to the same criteria as quantitative research; those who believe that a different set of criteria is required; and those who question the appropriateness of any predetermined criteria for judging qualitative research. Of the three positions, the second appears to have generated most debate, and a number of different frameworks and guidelines for judging the quality of qualitative research have been devised over recent years.\n\n\nDISCUSSION\nThe second of the above positions is rejected in favour of the third. It argues that, if there is no unified qualitative research paradigm, then it makes little sense to attempt to establish a set of generic criteria for making quality judgements about qualitative research studies. We need either to acknowledge that the commonly perceived quantitative-qualitative dichotomy is in fact a continuum which requires a continuum of quality criteria, or to recognize that each study is individual and unique, and that the task of producing frameworks and predetermined criteria for assessing the quality of research studies is futile.\n\n\nCONCLUSION\nSome of the implications of this latter position are explored, including the requirement that all published research reports should include a reflexive research diary.", "title": "" }, { "docid": "38a74fff83d3784c892230255943ee23", "text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. 
Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.\", \"title\": \"\" }, { \"docid\": \"6997c5bdf9e17a46d6f07fa38159482a\", \"text\": \"This paper presents a static analysis tool that can automatically find memory leaks and deletions of dangling pointers in large C and C++ applications. We have developed a type system to formalize a practical ownership model of memory management. In this model, every object is pointed to by one and only one owning pointer, which holds the exclusive right and obligation to either delete the object or to transfer the right to another owning pointer. In addition, a pointer-typed class member field is required to either always or never own its pointee at public method boundaries. Programs satisfying this model do not leak memory or delete the same object more than once. We have also developed a flow-sensitive and context-sensitive algorithm to automatically infer the likely ownership interfaces of methods in a program. It identifies statements inconsistent with the model as sources of potential leaks or double deletes. The algorithm is sound with respect to a large subset of the C and C++ language in that it will report all possible errors. It is also practical and useful as it identifies those warnings likely to correspond to errors and helps the user understand the reported errors by showing them the assumed method interfaces. Our techniques are validated with an implementation of a tool we call Clouseau. We applied Clouseau to a suite of applications: two web servers, a chat client, secure shell tools, executable object manipulation tools, and a compiler. The tool found a total of 134 serious memory errors in these applications. The tool analyzes over 50K lines of C++ code in about 9 minutes on a 2 GHz Pentium 4 machine and over 70K lines of C code in just over a minute.\", \"title\": \"\" }, { \"docid\": \"3266a3d561ee91e8f08d81e1aac6ac1b\", \"text\": \"The seminal work of Dwork et al. [ITCS 2012] introduced a metric-based notion of individual fairness. Given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of approximate metric-fairness: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metric-fairness does generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors.\", \"title\": \"\" }, { \"docid\": \"3394eb51b71e5def4e4637963da347ab\", \"text\": \"In this paper we present a model of e-learning suitable for teacher training sessions. 
The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.", "title": "" }, { "docid": "5708b1e874b6f567090b5fbbc75cd812", "text": "A new compact planar ultrawideband (UWB) antenna designed for on-body communications is presented. The antenna is characterized in free space, on a homogeneous phantom modeling a human arm, and on a realistic high-resolution whole-body voxel model. In all configurations it demonstrates very satisfactory features for on-body propagation. The results are presented in terms of return loss, radiation pattern, efficiency, and E-field distribution. The antenna shows very good performance within the 3-11.2 GHz range, and therefore it might be used successfully for the 3.1-10.6 GHz IR-UWB systems. The simulation results for the return loss and radiation patterns are in good agreement with measurements. Finally, a time-domain analysis over the whole-body voxel model is performed for impulse radio applications, and transmission scenarios with several antennas placed on the body are analyzed and compared.", "title": "" }, { "docid": "9d79900424e8f41d4328163834dd8f09", "text": "Ghrelin receptors are expressed by key components of the arousal system. Exogenous ghrelin induces behavioral activation, promotes wakefulness and stimulates eating. We hypothesized that ghrelin-sensitive mechanisms play a role in the arousal system. To test this, we investigated the responsiveness of ghrelin receptor knockout (KO) mice to two natural wake-promoting stimuli. Additionally, we assessed the integrity of their homeostatic sleep-promoting system using sleep deprivation. There was no significant difference in the spontaneous sleep-wake activity between ghrelin receptor KO and wild-type (WT) mice. WT mice mounted robust arousal responses to a novel environment and food deprivation. Wakefulness increased for 6 h after cage change accompanied by increases in body temperature and locomotor activity. Ghrelin receptor KO mice completely lacked the wake and body temperature responses to new environment. When subjected to 48 h food deprivation, WT mice showed marked increases in their waking time during the dark periods of both days. Ghrelin receptor KO mice failed to mount an arousal response on the first night and wake increases were attenuated on the second day. The responsiveness to sleep deprivation did not differ between the two genotypes. These results indicate that the ghrelin-receptive mechanisms play an essential role in the function of the arousal system but not in homeostatic sleep-promoting mechanisms.", "title": "" }, { "docid": "231259ebaa0165c60ac8088de33b28d2", "text": "Hypertension in patients on hemodialysis (HD) contributes significantly to their morbidity and mortality. This study examined whether a supportive nursing intervention incorporating monitoring, goal setting, and reinforcement can improve blood pressure (BP) control in a chronic HD population. A randomized controlled design was used and 118 participants were recruited from six HD units in the Detroit metro area. 
The intervention consisted of (1) BP education sessions; (2) a 12-week intervention, including monitoring, goal setting, and reinforcement; and (3) a 30-day post-intervention follow-up period. Participants in the treatment were asked to monitor their BP, sodium, and fluid intake weekly for 12 weeks in weekly logs. BP, fluid and sodium logs were reviewed weekly with the researcher to determine if goals were met or not met. Reinforcement was given for goals met and problem solving offered when goals were not met. The control group received standard care. Both systolic and diastolic BPs were significantly decreased in the treatment group.", "title": "" }, { "docid": "a0b5183ad30c21b3085da64ee108ed06", "text": "This paper discusses design and control of a prismatic series elastic actuator with high mechanical power output in a small and lightweight form factor. We introduce a design that pushes the performance boundary of electric series elastic actuators by using high motor voltage coupled with an efficient drivetrain to enable large continuous actuator force while retaining speed. Compact size is achieved through the use of a novel piston-style ball screw support mechanism and a concentrically placed compliant element. We develop controllers for force and position tracking based on combinations of PID, model-based, and disturbance observer control structures. Finally, we demonstrate our actuator's performance with a series of experiments designed to operate the actuator at the limits of its mechanical and control capability.", "title": "" }, { "docid": "f1ad369db9e6e82b5ddce120f1308ade", "text": "Genuine moral disagreement exists and is widespread. To understand such disagreement, we must examine the basic kinds of social relationships people construct across cultures and the distinct moral obligations and prohibitions these relationships entail. We extend relational models theory (Fiske, 1991) to identify 4 fundamental and distinct moral motives. Unity is the motive to care for and support the integrity of in-groups by avoiding or eliminating threats of contamination and providing aid and protection based on need or empathic compassion. Hierarchy is the motive to respect rank in social groups where superiors are entitled to deference and respect but must also lead, guide, direct, and protect subordinates. Equality is the motive for balanced, in-kind reciprocity, equal treatment, equal say, and equal opportunity. Proportionality is the motive for rewards and punishments to be proportionate to merit, benefits to be calibrated to contributions, and judgments to be based on a utilitarian calculus of costs and benefits. The 4 moral motives are universal, but cultures, ideologies, and individuals differ in where they activate these motives and how they implement them. Unlike existing theories (Haidt, 2007; Hauser, 2006; Turiel, 1983), relationship regulation theory predicts that any action, including violence, unequal treatment, and \"impure\" acts, may be perceived as morally correct depending on the moral motive employed and how the relevant social relationship is construed. This approach facilitates clearer understanding of moral perspectives we disagree with and provides a template for how to influence moral motives and practices in the world.", "title": "" }, { "docid": "40a7f02bd762ea2b559b99323a31eb70", "text": "This letter proposes a new design of millimeter-wave (mm-Wave) array antenna package with beam steering characteristic for the fifth-generation (5G) mobile applications. 
In order to achieve a broad three-dimensional scanning coverage of the space with high-gain beams, three identical subarrays of patch antennas have been compactly arranged along the edge region of the mobile phone printed circuit board (PCB) to form the antenna package. By switching the feeding to one of the subarrays, the desired direction of coverage can be achieved. The proposed design has >10-dB gain in the upper spherical space, good directivity, and efficiency, which is suitable for 5G mobile communications. In addition, the impact of the user's hand on the antenna performance has been investigated.", "title": "" }, { "docid": "bf21e1b7a41e9e3a5ede61a61aed699d", "text": "In this paper classification and association rule mining algorithms are discussed and demonstrated. Particularly, the problem of association rule mining, and the investigation and comparison of popular association rules algorithms. The classic problem of classification in data mining will be also discussed. The paper also considers the use of association rule mining in classification approach in which a recently proposed algorithm is demonstrated for this purpose. Finally, a comprehensive experimental study against 13 UCI data sets is presented to evaluate and compare traditional and association rule based classification techniques with regards to classification accuracy, number of derived rules, rules features and processing time.", "title": "" }, { "docid": "cf7c5ae92a0514808232e4e9d006024a", "text": "We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.", "title": "" }, { "docid": "af105dd5dca0642d119ca20661d5f633", "text": "This paper derives the forward and inverse kinematics of a humanoid robot. The specific humanoid that the derivation is for is a robot with 27 degrees of freedom but the procedure can be easily applied to other similar humanoid platforms. First, the forward and inverse kinematics are derived for the arms and legs. Then, the kinematics for the torso and the head are solved. Finally, the forward and inverse kinematic solutions for the whole body are derived using the kinematics of arms, legs, torso, and head.", "title": "" }, { "docid": "02a276b26400fe37804298601b16bc13", "text": "Over the years, different meanings have been associated with the word consistency in the distributed systems community. 
While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred.\n In this article, we aim to fill the void in the literature by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength,” which we believe will be useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes.\n The scope of this article is restricted to non-transactional semantics, that is, those that apply to single storage object operations. As such, our article complements the existing surveys done in the context of transactional, database consistency semantics.", "title": "" }, { "docid": "ccf105c61316ec4964955f2553bdba9f", "text": "Mobile-cloud offloading mechanisms delegate heavy mobile computation to the cloud. In real life use, the energy tradeoff of computing the task locally or sending the input data and the code of the task to the cloud is often negative, especially with popular communication intensive jobs like social-networking, gaming, and emailing. We design and build a working implementation of CDroid, a system that tightly couples the device OS to its cloud counterpart. The cloud-side handles data traffic through the device efficiently and, at the same time, caches code and data optimally for possible future offloading. In our system, when offloading decision takes place, input and code are likely to be already on the cloud. CDroid makes mobile cloud offloading more practical enabling offloading of lightweight jobs and communication intensive apps. Our experiments with real users in everyday life show excellent results in terms of energy savings and user experience.", "title": "" }, { "docid": "968c4077a4b6d62c31245c934e6b7126", "text": "Fault detection in photovoltaic (PV) arrays becomes difficult as the number of PV panels increases. Particularly, under low irradiance conditions with an active maximum power point tracking algorithm, line-to-line (L-L) faults may remain undetected because of low fault currents, resulting in loss of energy and potential fire hazards. This paper proposes a fault detection algorithm based on multiresolution signal decomposition for feature extraction, and two-stage support vector machine (SVM) classifiers for decision making. This detection method only requires data of the total voltage and current from a PV array and a limited amount of labeled data for training the SVM. Both simulation and experimental case studies verify the accuracy of the proposed method.", "title": "" }, { "docid": "8188bcd3b95952dbf2818cad6fc2c36c", "text": "Semi-supervised learning is by no means an unfamiliar concept to natural language processing researchers. Labeled data has been used to improve unsupervised parameter estimation procedures such as the EM algorithm and its variants since the beginning of the statistical revolution in NLP (e.g., Pereira and Schabes (1992)). 
Unlabeled data has also been used to improve supervised learning procedures, the most notable examples being the successful applications of self-training and co-training to word sense disambiguation (Yarowsky 1995) and named entity classification (Collins and Singer 1999). Despite its increasing importance, semi-supervised learning is not a topic that is typically discussed in introductory machine learning texts (e.g., Mitchell (1997), Alpaydin (2004)) or NLP texts (e.g., Manning and Schütze (1999), Jurafsky andMartin (2000)). Consequently, to learn about semi-supervised learning research, one has to consult the machine-learning literature. This can be a daunting task for NLP researchers who have little background in machine learning. Steven Abney’s book Semisupervised Learning for Computational Linguistics is targeted precisely at such researchers, aiming to provide them with a “broad and accessible presentation” of topics in semi-supervised learning. According to the preamble, the reader is assumed to have taken only an introductory course in NLP “that include statistical methods — concretely the material contained in Jurafsky andMartin (2000) andManning and Schütze (1999).”Nonetheless, I agreewith the author that any NLP researcher who has a solid background in machine learning is ready to “tackle the primary literature on semisupervised learning, and will probably not find this book particularly useful” (page 11). As the author promises, the book is self-contained and quite accessible to those who have little background in machine learning. In particular, of the 12 chapters in the book, three are devoted to preparatory material, including: a brief introduction to machine learning, basic unconstrained and constrained optimization techniques (e.g., gradient descent and the method of Lagrange multipliers), and relevant linear-algebra concepts (e.g., eigenvalues, eigenvectors, matrix and vector norms, diagonalization). The remaining chapters focus roughly on six types of semi-supervised learning methods:", "title": "" } ]
scidocsrr
4abd9db3363597b5ea9d074887b5b1b2
Tangible 3D tabletops: combining tangible tabletop interaction and 3D projection
[ { "docid": "c39a5cd2d7102516e26a2a37da0a85e5", "text": "Media façades comprise a category of urban computing concerned with the integration of displays into the built environment, including buildings and street furniture. This paper identifies and discusses eight challenges faced when designing urban media façades. The challenges concern a broad range of issues: interfaces, physical integration, robustness, content, stakeholders, situation, social relations, and emerging use. The challenges reflect the fact that the urban setting as a domain for interaction design is characterized by a number of circumstances and socio-cultural practices that differ from those of other domains. In order to exemplify the challenges and discuss how they may be addressed, we draw on our experiences from five experimental design cases, ranging from a 180 m2 interactive building façade to displays integrated into bus shelters.", "title": "" } ]
[ { "docid": "cfeaa5e7f3629ca89f7c55e2200900cc", "text": "The bootstrap provides a simple and powerful means of assessing the quality of estimators. However, in settings involving large data sets—which are increasingly prevalent— the calculation of bootstrap-based quantities can be prohibitively demanding computationally. Although variants such as subsampling and the m out of n bootstrap can be used in principle to reduce the cost of bootstrap computations, these methods are generally not robust to specification of tuning parameters (such as the number of subsampled data points), and they often require knowledge of the estimator’s convergence rate, in contrast with the bootstrap. As an alternative, we introduce the ‘bag of little bootstraps’ (BLB), which is a new procedure which incorporates features of both the bootstrap and subsampling to yield a robust, computationally efficient means of assessing the quality of estimators.The BLB is well suited to modern parallel and distributed computing architectures and furthermore retains the generic applicability and statistical efficiency of the bootstrap. We demonstrate the BLB’s favourable statistical performance via a theoretical analysis elucidating the procedure’s properties, as well as a simulation study comparing the BLB with the bootstrap, the m out of n bootstrap and subsampling. In addition, we present results from a large-scale distributed implementation of the BLB demonstrating its computational superiority on massive data, a method for adaptively selecting the BLB’s tuning parameters, an empirical study applying the BLB to several real data sets and an extension of the BLB to time series data.", "title": "" }, { "docid": "3258be27b22be228d2eae17c91a20664", "text": "In any non-deterministic environment, unexpected events can indicate true changes in the world (and require behavioural adaptation) or reflect chance occurrence (and must be discounted). Adaptive behaviour requires distinguishing these possibilities. We investigated how humans achieve this by integrating high-level information from instruction and experience. In a series of EEG experiments, instructions modulated the perceived informativeness of feedback: Participants performed a novel probabilistic reinforcement learning task, receiving instructions about reliability of feedback or volatility of the environment. Importantly, our designs de-confound informativeness from surprise, which typically co-vary. Behavioural results indicate that participants used instructions to adapt their behaviour faster to changes in the environment when instructions indicated that negative feedback was more informative, even if it was simultaneously less surprising. This study is the first to show that neural markers of feedback anticipation (stimulus-preceding negativity) and of feedback processing (feedback-related negativity; FRN) reflect informativeness of unexpected feedback. Meanwhile, changes in P3 amplitude indicated imminent adjustments in behaviour. Collectively, our findings provide new evidence that high-level information interacts with experience-driven learning in a flexible manner, enabling human learners to make informed decisions about whether to persevere or explore new options, a pivotal ability in our complex environment.", "title": "" }, { "docid": "be398b849ba0caf2e714ea9cc8468d78", "text": "Gadolinium based contrast agents (GBCAs) play an important role in the diagnostic evaluation of many patients. 
The safety of these agents has been once again questioned after gadolinium deposits were observed and measured in brain and bone of patients with normal renal function. This retention of gadolinium in the human body has been termed \\\"gadolinium storage condition\\\". The long-term and cumulative effects of retained gadolinium in the brain and elsewhere are not as yet understood. Recently, patients who report that they suffer from chronic symptoms secondary to gadolinium exposure and retention created gadolinium-toxicity on-line support groups. Their self-reported symptoms have recently been published. Bone and joint complaints, and skin changes were two of the most common complaints. This condition has been termed \\\"gadolinium deposition disease\\\". In this review we will address gadolinium toxicity disorders, from acute adverse reactions to GBCAs to gadolinium deposition disease, with special emphasis on the latter, as it is the most recently described and least known.\", \"title\": \"\" }, { \"docid\": \"3851498990939be88290b9ed2172dd3e\", \"text\": \"To achieve ubiquitous PCS, new and novel ways of classifying wireless environments will be needed that are both widely encompassing and reasonably compact. Wireless personal communications could in principle use several physical media, ranging from sound to radio to light. Since we want to overcome the limitations of acoustical communications, we shall concentrate on propagation of electromagnetic waves in the frequency range from some hundreds of MHz to a few GHz. Although there is considerable interest at the moment in millimeter wave communications in indoor environments, they will be mentioned only briefly in this survey of propagation of signals. It is interesting to observe that propagation results influence personal communications systems in several ways. First there is obviously the distribution of mean power over a certain area or volume of interest, which is the basic requirement for reliable communications. The energy should be sufficient for the link in question, but not too strong, in order not to create cochannel interference at a distance in another cell. Also, since the radio link is highly variable over short distances, not only the mean power is significant; the statistical distribution is also important. This is especially true when the fading distribution is dependent on the bandwidth of the signal. Secondly, even if there is sufficient power available for communications, the quality of the signal may be such that large errors occur anyway. This results from rapid movement through the scattering environment, or impairments due to long echoes leading to inter-symbol-interference. A basic understanding of the channel is important for finding modulation and coding schemes that improve the channel, for designing equalizers or, if this is not possible, for deploying base station antennas in such a way that the detrimental effects are less likely to occur. In this article we will describe the type of signals that occur in various environments and the modeling of the propagation parameters. Models are essentially of two classes. The first class consists of parametric statistical models that on average describe the phenomenon within a given error. They are simple to use, but relatively coarse. 
In the last few years a second class of environment-specific models has been introduced. These models are of a more\", \"title\": \"\" }, { \"docid\": \"713ade80a6c2e0164a0d6fe6ef07be37\", \"text\": \"We review recent work on the role of intrinsic amygdala networks in the regulation of classically conditioned defensive behaviors, commonly known as conditioned fear. These new developments highlight how conditioned fear depends on far more complex networks than initially envisioned. Indeed, multiple parallel inhibitory and excitatory circuits are differentially recruited during the expression versus extinction of conditioned fear. Moreover, shifts between expression and extinction circuits involve coordinated interactions with different regions of the medial prefrontal cortex. However, key areas of uncertainty remain, particularly with respect to the connectivity of the different cell types. Filling these gaps in our knowledge is important because much evidence indicates that human anxiety disorders result from an abnormal regulation of the networks supporting fear learning.\", \"title\": \"\" }, { \"docid\": \"20f8a5daa211a5461eaa166452aa1f89\", \"text\": \"Radio frequency identification (RFID) technology is considered as one of the most applicable wireless technologies in the present era. Readers and tags are two main components of this technology. Several adjacent readers are used in most cases of implementing RFID systems for commercial, industrial and medicinal applications. Collisions which come from readers’ simultaneous activities lead to a decrease in the performance of RFID systems. Therefore, a suitable solution to avoid collisions and minimize them in order to enhance the performance of these systems is necessary. Nowadays, several studies have been done in this field, but most of them do not follow the rules and standards of RFID systems; and don’t use network resources proficiently. In this paper, a solution is provided to avoid collisions and readers’ simultaneous activities in dense passive RFID networks through the use of time division, CSMA techniques and measuring received signal power. The new anti-collision protocol provides higher throughput than other protocols without extra hardware in dense reader environments; in addition, the suggested method conforms to the European standards and rules.\", \"title\": \"\" }, { \"docid\": \"3ef1f71f47175d2687d5c11b0d023162\", \"text\": \"In attempting to fit a model of analogical problem solving to protocol data of students solving physics problems, several unexpected observations were made. Analogies between examples and exercises (a form of case-based reasoning) consisted of two distinct types of events. During an initialization event, the solver retrieved an example, set up a mapping between it and the problem, and decided whether the example was useful. During a transfer event, the solver inferred something about the problem’s solution. Many different types of initialization and transfer events were observed. Poor solvers tended to follow the example verbatim, copying each solution line over to the problem. Good solvers tried to solve the problem themselves, but referred to the example when they got stuck, or wanted to check a step, or wanted to avoid a detailed calculation. Rather than learn from analogies, both Good and Poor solvers tended to repeat analogies at subsequent similar situations. 
A revised version of the model is proposed (but not yet implemented) that appears to be consistent with all the findings observed in this and other studies of the same subjects.", "title": "" }, { "docid": "cafcfe9b29c3da7b03b95112424f16db", "text": "Orthogonal frequency division multiplexing (OFDM) provides an effective and low complexity means of eliminating inter symbol interference for transmission over frequency selective fading channels. This technique has received a lot of interest in mobile communication research as the radio channel is usually frequency selective and time variant. In OFDM system, modulation may be coherent or differential. Channel state information (CSI) is required for the OFDM receiver to perform coherent detection or diversity combining, if multiple transmit and receive antennas are deployed. In practice, CSI can be reliably estimated at the receiver by transmitting pilots along with data symbols. This paper discusses the channel estimation in OFDM and its implementation in MATLAB using pilot based block type channel estimation techniques by LS and MMSE algorithms. This paper starts with comparisons of OFDM using BPSK and QPSK on different channels, followed by modeling the LS and MMSE estimators on MATLAB. In the end, results of different simulations are compared to conclude that LS algorithm gives less complexity but MMSE algorithm provides comparatively better results.", "title": "" }, { "docid": "b410897c85f712b1b9228ae3c9b62608", "text": "Now-a-days as we open the newspaper, we find atleast one news of a road accident. With vehicles becoming increasingly affordable, there has been a surge in the number of vehicles on roads on an average all over the world. Accidents bring devastation upon victims, causing loss of precious time and money. It has been established, after extensive research, that a majority of accidents become fatalities because of lack of communication to the concerned medical authorities and the consequent lack of immediate medical support. This application helps sense the possible occurence of an accident on the road, with the help of sensors attached to the vehicle. This occurence will be immediately communicated to the concerned people so that further action can be taken without any further ado.", "title": "" }, { "docid": "caea6d9ec4fbaebafc894167cfb8a3d6", "text": "Although the positive effects of different kinds of physical activity (PA) on cognitive functioning have already been demonstrated in a variety of studies, the role of cognitive engagement in promoting children's executive functions is still unclear. The aim of the current study was therefore to investigate the effects of two qualitatively different chronic PA interventions on executive functions in primary school children. Children (N = 181) aged between 10 and 12 years were assigned to either a 6-week physical education program with a high level of physical exertion and high cognitive engagement (team games), a physical education program with high physical exertion but low cognitive engagement (aerobic exercise), or to a physical education program with both low physical exertion and low cognitive engagement (control condition). Executive functions (updating, inhibition, shifting) and aerobic fitness (multistage 20-m shuttle run test) were measured before and after the respective condition. Results revealed that both interventions (team games and aerobic exercise) have a positive impact on children's aerobic fitness (4-5% increase in estimated VO2max). 
Importantly, an improvement in shifting performance was found only in the team games and not in the aerobic exercise or control condition. Thus, the inclusion of cognitive engagement in PA seems to be the most promising type of chronic intervention to enhance executive functions in children, providing further evidence for the importance of the qualitative aspects of PA.", "title": "" }, { "docid": "aae3e8f023b90bc2050d7c38a3857cc5", "text": "Severe, chronic growth retardation of cattle early in life reduces growth potential, resulting in smaller animals at any given age. Capacity for long-term compensatory growth diminishes as the age of onset of nutritional restriction resulting in prolonged growth retardation declines. Hence, more extreme intrauterine growth retardation can result in slower growth throughout postnatal life. However, within the limits of beef production systems, neither severely restricted growth in utero nor from birth to weaning influences efficiency of nutrient utilisation later in life. Retail yield from cattle severely restricted in growth during pregnancy or from birth to weaning is reduced compared with cattle well grown early in life, when compared at the same age later in life. However, retail yield and carcass composition of low- and high-birth-weight calves are similar at the same carcass weight. At equivalent carcass weights, cattle grown slowly from birth to weaning have carcasses of similar or leaner composition than those grown rapidly. However, if high energy, concentrate feed is provided following severe growth restriction from birth to weaning, then at equivalent weights post-weaning the slowly-grown, small weaners may be fatter than their well-grown counterparts. Restricted prenatal and pre-weaning nutrition and growth do not adversely affect measures of beef quality. Similarly, bovine myofibre characteristics are little affected in the long term by growth in utero or from birth to weaning. Interactions were not evident between prenatal and pre-weaning growth for subsequent growth, efficiency, carcass, yield and beef-quality characteristics, within our pasture-based production systems. Furthermore, interactions between genotype and nutrition early in life, studied using offspring of Piedmontese and Wagyu sired cattle, were not evident for any growth, efficiency, carcass, yield and beef-quality parameters. We propose that within pasture-based production systems for beef cattle, the plasticity of the carcass tissues, particularly of muscle, allows animals that are growth-retarded early in life to attain normal composition at equivalent weights in the long term, albeit at older ages. However, the quality of nutrition during recovery from early life growth retardation may be important in determining the subsequent composition of young, light-weight cattle relative to their heavier counterparts. Finally, it should be emphasised that long-term consequences of more specific and/or acute environmental influences during specific stages of embryonic, foetal and neonatal calf development remain to be determined. This need for further research extends to consequences of nutrition and growth early in life for reproductive capacity.", "title": "" }, { "docid": "9d37baf5ce33826a59cc7bd0fd7955c0", "text": "A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. 
Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of golden rod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.", "title": "" }, { "docid": "38102dfe63b707499c2f01e2e46b4031", "text": "Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits.", "title": "" }, { "docid": "c7bbde452a68f84ca9d09c7da2cb29ab", "text": "Recently, application-specific requirement becomes one of main research challenges in the area of routing for delay tolerant networks. Among various requirements, in this paper, we focus on achieving the desired delivery ratio within bounded given deadline. For this goal, we use analytical model and develop a new forwarding scheme in respective phase. The proposed protocol dynamically adjusts the number of message copies by analytical model and the next hop node is determined depending on the delivery probability and the inter-meeting time of the encountering nodes as well as remaining time. 
Simulation results demonstrate that our proposed algorithm meets bounded delay with lower overhead than existing protocols in an adaptive way to varying network conditions.", "title": "" }, { "docid": "da7fc676542ccc6f98c36334d42645ae", "text": "Extracting the defects of the road pavement in images is difficult and, most of the time, one image is used alone. The difficulties of this task are: illumination changes, objects on the road, artefacts due to the dynamic acquisition. In this work, we try to solve some of these problems by using acquisitions from different points of view. In consequence, we present a new methodology based on these steps : the detection of defects in each image, the matching of the images and the merging of the different extractions. We show the increase in performances and more particularly how the false detections are reduced.", "title": "" }, { "docid": "56f7419fda31e86b4dffaabbe820a68a", "text": "In the context of animated movie characterization, we present an information fusion approach mixing very different types of data related to the activity within a movie. These data are the features extracted from images, words extracted from the synopses and expert knowledge. The difficulty of this fusion is due to the very different semantic level of these data. The aim of this work is to get a movie activity characterization in order to help the constitution of automatic summary, content based video retrieval system, etc. Two strategies are proposed : a first one aiming at giving a global description of the activity within the movie, and a second one providing a local description of activity. Tests and results are proposed on animated movies from the Annecy International Animation Film Festival.", "title": "" }, { "docid": "8b51b2ee7385649bc48ba4febe0ec4c3", "text": "This paper presents a HMM-based methodology for action recogni-tion using star skeleton as a representative descriptor of human posture. Star skeleton is a fast skeletonization technique by connecting from centroid of target object to contour extremes. To use star skeleton as feature for action recognition, we clearly define the fea-ture as a five-dimensional vector in star fashion because the head and four limbs are usually local extremes of human shape. In our proposed method, an action is composed of a series of star skeletons over time. Therefore, time-sequential images expressing human action are transformed into a feature vector sequence. Then the fea-ture vector sequence must be transformed into symbol sequence so that HMM can model the action. We design a posture codebook, which contains representative star skeletons of each action type and define a star distance to measure the similarity between feature vec-tors. Each feature vector of the sequence is matched against the codebook and is assigned to the symbol that is most similar. Conse-quently, the time-sequential images are converted to a symbol posture sequence. We use HMMs to model each action types to be recognized. In the training phase, the model parameters of the HMM of each category are optimized so as to best describe the training symbol sequences. For human action recognition, the model which best matches the observed symbol sequence is selected as the recog-nized category. We implement a system to automatically recognize ten different types of actions, and the system has been tested on real human action videos in two cases. One case is the classification of 100 video clips, each containing a single action type. 
A 98% recog-nition rate is obtained. The other case is a more realistic situation in which human takes a series of actions combined. An action-series recognition is achieved by referring a period of posture history using a sliding window scheme. The experimental results show promising performance.", "title": "" }, { "docid": "b584491152ad052b1c0be6ea7088f7c0", "text": "Recently several hierarchical inverse dynamics controllers based on cascades of quadratic programs have been proposed for application on torque controlled robots. They have important theoretical benefits but have never been implemented on a torque controlled robot where model inaccuracies and real-time computation requirements can be problematic. In this contribution we present an experimental evaluation of these algorithms in the context of balance control for a humanoid robot. The presented experiments demonstrate the applicability of the approach under real robot conditions (i.e. model uncertainty, estimation errors, etc). We propose a simplification of the optimization problem that allows us to decrease computation time enough to implement it in a fast torque control loop. We implement a momentum-based balance controller which shows robust performance in face of unknown disturbances, even when the robot is standing on only one foot. In a second experiment, a tracking task is evaluated to demonstrate the performance of the controller with more complicated hierarchies. Our results show that hierarchical inverse dynamics controllers can be used for feedback control of humanoid robots and that momentum-based balance control can be efficiently implemented on a real robot.", "title": "" }, { "docid": "c743c63848ca96f0eb47090ea648d897", "text": "Cyber-Physical Systems (CPSs) are the future generation of highly connected embedded systems having applications in diverse domains including Oil and Gas. Employing Product Line Engineering (PLE) is believed to bring potential benefits with respect to reduced cost, higher productivity, higher quality, and faster time-to-market. However, relatively few industrial field studies are reported regarding the application of PLE to develop large-scale systems, and more specifically CPSs. In this paper, we report about our experiences and insights gained from investigating the application of model-based PLE at a large international organization developing subsea production systems (typical CPSs) to manage the exploitation of oil and gas production fields. We report in this paper 1) how two systematic domain analyses (on requirements engineering and product configuration/derivation) were conducted to elicit CPS PLE requirements and challenges, 2) key results of the domain analysis (commonly observed in other domains), and 3) our initial experience of developing and applying two Model Based System Engineering (MBSE) PLE solution to address some of the requirements and challenges elicited during the domain analyses.", "title": "" }, { "docid": "0e6b54a70a1604caf7449c8eb1286d5e", "text": "Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. 
Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and nonexpert readers in statistics, computer science, mathematics, and engineering.", "title": "" } ]
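One of the negative passages in the record above (docid cafcfe9b29c3da7b03b95112424f16db) describes pilot-based OFDM channel estimation with LS and MMSE algorithms. As a minimal illustrative sketch only, not code from that paper, the least-squares estimate at pilot subcarriers and its interpolation to the remaining subcarriers could look as follows; the comb-type pilot pattern, noise level, and all variable names are assumptions made for the example.

```python
import numpy as np

def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_subcarriers):
    """Least-squares channel estimate at pilot positions, interpolated to all subcarriers.

    rx_pilots: received pilot symbols Y_p (complex array)
    tx_pilots: known transmitted pilot symbols X_p (complex array)
    pilot_idx: subcarrier indices carrying pilots
    """
    h_ls = rx_pilots / tx_pilots  # H_LS = Y_p / X_p at the pilot subcarriers
    # Linear interpolation of real and imaginary parts across the whole band.
    all_idx = np.arange(n_subcarriers)
    h_full = (np.interp(all_idx, pilot_idx, h_ls.real)
              + 1j * np.interp(all_idx, pilot_idx, h_ls.imag))
    return h_full

# Example usage with an assumed comb-type pilot pattern on 64 subcarriers.
rng = np.random.default_rng(0)
n_sc, pilot_idx = 64, np.arange(0, 64, 8)
h_true = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)
x_p = np.ones(pilot_idx.size)  # BPSK pilots
noise = 0.05 * (rng.standard_normal(pilot_idx.size) + 1j * rng.standard_normal(pilot_idx.size))
y_p = h_true[pilot_idx] * x_p + noise
h_hat = ls_channel_estimate(y_p, x_p, pilot_idx, n_sc)
```

The MMSE estimator compared in that passage would additionally weight this LS estimate by the channel correlation and noise statistics, which is why it is reported as more accurate but more complex.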
scidocsrr
7c64e365de1080e07c8c602658b11d15
Investor Protection and Corporate Valuation
[ { "docid": "ccc70871f57f25da6141a7083bdf5174", "text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is 3 that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. 
As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law 4 countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. 
In the United States, U.K., 5 Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders 6 so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap", "title": "" } ]
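The positive passage above tests agency models of dividends by comparing payout ratios across legal regimes and growth levels. Purely as a hedged sketch of that style of cross-country comparison, and not the paper's actual dataset or code, the file name and column names below are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical firm-level table; columns are placeholders, not the paper's data.
firms = pd.read_csv("firms.csv")  # country, legal_origin, dividends, earnings, sales_growth

firms["payout"] = firms["dividends"] / firms["earnings"]
firms["high_growth"] = firms["sales_growth"] > firms["sales_growth"].median()

# Median dividend-to-earnings payout by legal origin and growth group,
# mirroring the outcome-versus-substitute comparison described in the passage.
summary = (firms
           .groupby(["legal_origin", "high_growth"])["payout"]
           .median()
           .unstack())
print(summary)
```

Under the outcome model described in the passage, common law rows of such a table would show higher payouts overall and lower payouts for the high-growth group than for the low-growth group.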
[ { "docid": "050679bfbeba42b30f19f1a824ec518a", "text": "Principles of cognitive science hold the promise of helping children to study more effectively, yet they do not always make successful transitions from the laboratory to applied settings and have rarely been tested in such settings. For example, self-generation of answers to questions should help children to remember. But what if children cannot generate anything? And what if they make an error? Do these deviations from the laboratory norm of perfect generation hurt, and, if so, do they hurt enough that one should, in practice, spurn generation? Can feedback compensate, or are errors catastrophic? The studies reviewed here address three interlocking questions in an effort to better implement a computer-based study program to help children learn: (1) Does generation help? (2) Do errors hurt if they are corrected? And (3) what is the effect of feedback? The answers to these questions are: Yes, generation helps; no, surprisingly, errors that are corrected do not hurt; and, finally, feedback is beneficial in verbal learning. These answers may help put cognitive scientists in a better position to put their well-established principles in the service of children's learning.", "title": "" }, { "docid": "4c2108f46571303e64b568647e70171e", "text": "This paper proposes a cross modal retrieval system that leverages on image and text encoding. Most multimodal architectures employ separate networks for each modality to capture the semantic relationship between them. However, in our work image-text encoding can achieve comparable results in terms of cross modal retrieval without having to use separate network for each modality. We show that text encodings can capture semantic relationships between multiple modalities. In our knowledge, this work is the first of its kind in terms of employing a single network and fused image-text embedding for cross modal retrieval. We evaluate our approach on two famous multimodal datasets: MS-COCO and Flickr30K.", "title": "" }, { "docid": "8f6afe4ef9f6b4fc94840cf253eeba9c", "text": "Eric Hanushek and Steven Rivkin examine how salary and working conditions affect the quality of instruction in the classroom. The wages of teachers relative to those of other college graduates have fallen steadily since 1940. Today, average wages differ little, however, between urban and suburban districts. In some metropolitan areas urban districts pay more, while in others, suburban districts pay more. But working conditions in urban and suburban districts differ substantially, with urban teachers reporting far less administrator and parental support, worse materials, and greater student problems. Difficult working conditions may drive much of the difference in turnover of teachers and the transfer of teachers across schools. Using rich data from Texas public schools, the authors describe in detail what happens when teachers move from school to school. They examine how salaries and student characteristics change when teachers move and also whether turnover affects teacher quality and student achievement. They note that both wages and student characteristics affect teachers' choices and result in a sorting of teachers across schools, but they find little evidence that teacher transitions are detrimental to student learning. 
The extent to which variations in salaries and working conditions translate into differences in the quality of instruction depends importantly on the effectiveness of school personnel policies in hiring and retaining the most effective teachers and on constraints on both entry into the profession and the firing of low performers. The authors conclude that overall salary increases for teachers would be both expensive and ineffective. The best way to improve the quality of instruction would be to lower barriers to becoming a teacher, such as certification, and to link compensation and career advancement more closely with teachers' ability to raise student performance.", "title": "" }, { "docid": "0508e896f25f8e801f98e5efcc74bd17", "text": "In this work, we proposed an efficient system for animal recognition and classification based on texture features which are obtained from the local appearance and texture of animals. The classification of animals are done by training and subsequently testing two different machine learning techniques, namely k-Nearest Neighbors (k-NN) and Support Vector Machines (SVM). Computer-assisted technique when applied through parallel computing makes the work efficient by reducing the time taken for the task of animal recognition and classification. Here we propose a parallel algorithm for the same. Experimentation is done for about 30 different classes of animals containing more than 3000 images. Among the different classifiers, k-Nearest Neighbor classifiers have achieved a better accuracy.", "title": "" }, { "docid": "2cba0f9b3f4b227dfe0b40e3bebd99e4", "text": "In this paper we propose a discriminant learning framework for problems in which data consist of linear subspaces instead of vectors. By treating subspaces as basic elements, we can make learning algorithms adapt naturally to the problems with linear invariant structures. We propose a unifying view on the subspace-based learning method by formulating the problems on the Grassmann manifold, which is the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods on the problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each sub-space as a point in the Grassmann space, and perform feature extraction and classification in the same space. We show feasibility of the approach by using the Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.", "title": "" }, { "docid": "b29ddb800ec3b4f031a077e98a7fffb1", "text": "Networks or graphs can easily represent a diverse set of data sources that are characterized by interacting units or actors. Social networks, representing people who communicate with each other, are one example. Communities or clusters of highly connected actors form an essential feature in the structure of several empirical networks. Spectral clustering is a popular and computationally feasible method to discover these communities. The Stochastic Block Model (Holland et al., 1983) is a social network model with well defined communities; each node is a member of one community. For a network generated from the Stochastic Block Model, we bound the number of nodes “misclustered” by spectral clustering. 
The asymptotic results in this paper are the first clustering results that allow the number of clusters in the model to grow with the number of nodes, hence the name high-dimensional. In order to study spectral clustering under the Stochastic Block Model, we first show that under the more general latent space model, the eigenvectors of the normalized graph Laplacian asymptotically converge to the eigenvectors of a “population” normalized graph Laplacian. Aside from the implication for spectral clustering, this provides insight into a graph visualization technique. Our method of studying the eigenvectors of random matrices is original. AMS 2000 subject classifications: Primary 62H30, 62H25; secondary 60B20.", "title": "" }, { "docid": "35e73af4b9f6a32c0fd4e31fde871f8a", "text": "In this paper, a novel three-phase soft-switching inverter is presented. The inverter-switch turn on and turn off are performed under zero-voltage switching condition. This inverter has only one auxiliary switch, which is also soft switched. Having one auxiliary switch simplifies the control circuit considerably. The proposed inverter is analyzed, and its operating modes are explained in details. The design considerations of the proposed inverter are presented. The experimental results of the prototype inverter confirm the theoretical analysis.", "title": "" }, { "docid": "357a7c930f3beb730533e2220a94a022", "text": "The fused Lasso penalty enforces sparsity in both the coefficients and their successive differences, which is desirable for applications with features ordered in some meaningful way. The resulting problem is, however, challenging to solve, as the fused Lasso penalty is both non-smooth and non-separable. Existing algorithms have high computational complexity and do not scale to large-size problems. In this paper, we propose an Efficient Fused Lasso Algorithm (EFLA) for optimizing this class of problems. One key building block in the proposed EFLA is the Fused Lasso Signal Approximator (FLSA). To efficiently solve FLSA, we propose to reformulate it as the problem of finding an \"appropriate\" subgradient of the fused penalty at the minimizer, and develop a Subgradient Finding Algorithm (SFA). We further design a restart technique to accelerate the convergence of SFA, by exploiting the special \"structures\" of both the original and the reformulated FLSA problems. Our empirical evaluations show that, both SFA and EFLA significantly outperform existing solvers. We also demonstrate several applications of the fused Lasso.", "title": "" }, { "docid": "a3e88345a2bcd07bf756ca02968082f6", "text": "Bi-directional LSTMs have emerged as a standard method for obtaining per-token vector representations serving as input to various token labeling tasks (whether followed by Viterbi prediction or independent classification). This paper proposes an alternative to Bi-LSTMs for this purpose: iterated dilated convolutional neural networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. We describe a distinct combination of network structure, parameter sharing and training procedures that is not only more accurate than Bi-LSTM-CRFs, but also 8x faster at test time on long sequences. Moreover, ID-CNNs with independent classification enable a dramatic 14x testtime speedup, while still attaining accuracy comparable to the Bi-LSTM-CRF. 
We further demonstrate the ability of IDCNNs to combine evidence over long sequences by demonstrating their improved accuracy on whole-document (rather than per-sentence) inference. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, IDCNNs permit fixed-depth convolutions to run in parallel across entire documents. Today when many companies run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs.", "title": "" }, { "docid": "931e6f034abd1a3004d021492382a47a", "text": "SARSA (Sutton, 1996) is applied to a simulated traffic-light control problem (Thorpe, 1997) and its performance is compared with several fixed control strategies. The performance of SARSA with four different representations of the current state of traffic is analyzed using two reinforcement schemes. Training on one intersection is compared to, and is as effective as, training on all intersections in the environment. SARSA is shown to be better than fixed-duration light timing and four-way stops for minimizing total traffic travel time, individual vehicle travel times, and vehicle wait times. Comparisons of performance using a constant reinforcement function versus a variable reinforcement function dependent on the number of vehicles at an intersection showed that the variable reinforcement resulted in slightly improved performance for some cases.", "title": "" }, { "docid": "917458b0c9e26b878676d1edf542b5ea", "text": "The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal numerical implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble based optimal interpolation (EnOI) scheme is presented as a cost-effective approach which may serve as an alternative to the EnKF in some applications. A fairly extensive discussion is devoted to the use of time correlated model errors and the estimation of model bias.", "title": "" }, { "docid": "405bae0d413aa4b5fef0ac8b8c639235", "text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene.
The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis like bone abnormalities and neurodevelopmental defects.", "title": "" }, { "docid": "9a5ef746c96a82311e3ebe8a3476a5f4", "text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.", "title": "" }, { "docid": "1419e2f53412b4ce2d6944bad163f13d", "text": "Determining the emotion of a song that best characterizes the affective content of the song is a challenging issue due to the difficulty of collecting reliable ground truth data and the semantic gap between human's perception and the music signal of the song. To address this issue, we represent an emotion as a point in the Cartesian space with valence and arousal as the dimensions and determine the coordinates of a song by the relative emotion of the song with respect to other songs. We also develop an RBF-ListNet algorithm to optimize the ranking-based objective function of our approach. The cognitive load of annotation, the accuracy of emotion recognition, and the subjective quality of the proposed approach are extensively evaluated. Experimental results show that this ranking-based approach simplifies emotion annotation and enhances the reliability of the ground truth. The performance of our algorithm for valence recognition reaches 0.326 in Gamma statistic.", "title": "" }, { "docid": "c934f44f485f41676dfed35afbf2d1f2", "text": "Many icon taxonomy systems have been developed by researchers that organise icons based on their graphic elements. Most of these taxonomies classify icons according to how abstract or concrete they are. Categories however overlap and different researchers use different terminology, sometimes to describe what in essence is the same thing. This paper describes nine taxonomies and compares the terminologies they use. Aware of the lack of icon taxonomy systems in the field of icon design, the authors provide an overview of icon taxonomy and develop an icon taxonomy system that could bring practical benefits to the performance of computer related tasks.", "title": "" }, { "docid": "8e0754baed82072945e1bf0c968bb0be", "text": "Previous studies examining the relationship between physical activity levels and broad-based measures of psychological wellbeing in adolescents have been limited by not controlling for potentially confounding variables. The present study examined the relationship between adolescents’ self-reported physical activity level, sedentary behaviour and psychological wellbeing; while controlling for a broad range of sociodemographic, health and developmental factors. The study entailed a cross-sectional school-based survey in ten British towns. 
Two thousand six hundred and twenty three adolescents (aged 13–16 years) reported physical activity levels, patterns of sedentary behaviour (TV/computer/video usage) and completed the strengths and difficulties questionnaire (SDQ). Lower levels of self-reported physical activity and higher levels of sedentary behaviour showed graded associations with higher SDQ total difficulties scores, both for boys (P < 0.001) and girls (P < 0.02) after adjustment for age and town. Additional adjustment for social class, number of parents, predicted school examination results, body mass index, ethnicity, alcohol intake and smoking status had little effect on these findings. Low levels of self-reported physical activity are independently associated with diminished psychological wellbeing among adolescents. Longitudinal studies may provide further insights into the relationship between wellbeing and activity levels in this population. Ultimately, randomised controlled trials are needed to evaluate the effects of increasing physical activity on psychological wellbeing among adolescents.", "title": "" }, { "docid": "1cdbeb23bf32c20441a208b3c3a05480", "text": "Indoor object localization can enable many ubicomp applications, such as asset tracking and object-related activity recognition. Most location and tracking systems rely on either battery-powered devices which create cost and maintenance issues or cameras which have accuracy and privacy issues. This paper introduces a system that is able to detect the 3D position and motion of a battery-free RFID tag embedded with an ultrasound detector and an accelerometer. Combining tags' acceleration with location improves the system's power management and supports activity recognition. We characterize the system's localization performance in open space as well as implement it in a smart wet lab application. The system is used to track real-time location and motion of the tags in the wet lab as well as recognize pouring actions performed on the objects to which the tag is attached. The median localization accuracy is 7.6cm -- (3.1, 5, 1.9)cm for each (x, y, z) axis -- with max update rates of 15 Sample/s using single RFID reader antenna.", "title": "" }, { "docid": "9078698db240725e1eb9d1f088fb05f4", "text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.", "title": "" }, { "docid": "7fbc3820c259d9ea58ecabaa92f8c875", "text": "The use of digital imaging devices, ranging from professional digital cinema cameras to consumer grade smartphone cameras, has become ubiquitous. 
The acquired image is a degraded observation of the unknown latent image, while the degradation comes from various factors such as noise corruption, camera shake, object motion, resolution limit, hazing, rain streaks, or a combination of them. Image restoration (IR), as a fundamental problem in image processing and low-level vision, aims to reconstruct the latent high-quality image from its degraded observation. Image degradation is, in general, irreversible, and IR is a typical ill-posed inverse problem. Due to the large space of natural image contents, prior information on image structures is crucial to regularize the solution space and produce a good estimation of the latent image. Image prior modeling and learning then are key issues in IR research. This lecture note describes the development of image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models.", "title": "" }, { "docid": "e91e350cd2e3f385333be9156d38feac", "text": "Mobile devices store a diverse set of private user data and have gradually become a hub to control users' other personal Internet-of-Things devices. Access control on mobile devices is therefore highly important. The widely accepted solution is to protect access by asking for a password. However, password authentication is tedious, e.g., a user needs to input a password every time she wants to use the device. Moreover, existing biometrics such as face, fingerprint, and touch behaviors are vulnerable to forgery attacks. We propose a new touch-based biometric authentication system that is passive and secure against forgery attacks. In our touch-based authentication, a user's touch behaviors are a function of some random \"secret\". The user can subconsciously know the secret while touching the device's screen. However, an attacker cannot know the secret at the time of attack, which makes it challenging to perform forgery attacks even if the attacker has already obtained the user's touch behaviors. We evaluate our touch-based authentication system by collecting data from 25 subjects. Results are promising: the random secrets do not influence user experience and, for targeted forgery attacks, our system achieves 0.18 smaller Equal Error Rates (EERs) than previous touch-based authentication.", "title": "" } ]
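One of the negative passages in the record above (docid b29ddb800ec3b4f031a077e98a7fffb1) studies spectral clustering through the eigenvectors of the normalized graph Laplacian under the Stochastic Block Model. The sketch below only illustrates that procedure under assumed toy parameters; the block probabilities, graph size, and the use of k-means are choices made for the example, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(adj, k):
    """Cluster nodes of an undirected graph via the normalized Laplacian's eigenvectors."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    laplacian = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    # Eigenvectors for the k smallest eigenvalues of L = I - D^{-1/2} A D^{-1/2}.
    vals, vecs = np.linalg.eigh(laplacian)
    embedding = vecs[:, :k]
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)

# Toy two-block Stochastic Block Model graph with assumed parameters.
rng = np.random.default_rng(0)
n, p_in, p_out = 60, 0.3, 0.05
blocks = np.repeat([0, 1], n // 2)
probs = np.where(blocks[:, None] == blocks[None, :], p_in, p_out)
adj = rng.binomial(1, probs)
adj = np.triu(adj, 1)
adj = adj + adj.T  # symmetric adjacency, no self-loops
labels = spectral_clusters(adj, k=2)
```

The passage's population-level argument is that, as the graph grows, the empirical eigenvectors used here converge to those of a population Laplacian, which is what lets the k-means step recover the blocks with few misclustered nodes.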
scidocsrr
7fb36bed68401d2b9a793dcf332eb1dd
Face Recognition: From Traditional to Deep Learning Methods
[ { "docid": "476faf4be352d42277b0aa2f1c8b0c91", "text": "In this paper we describe a holistic face recognition method based on subspace Linear Dis-criminant Analysis (LDA). The method consists of two steps: rst we project the face image from the original vector space to a face subspace via Principal Component Analysis where the subspace dimension is carefully chosen, and then we use LDA to obtain a linear classiier in the subspace. The criterion we use to choose the subspace dimension enables us to generate class-separable features via LDA from the full subspace representation. Hence we are able to solve the generalization/overrtting problem when we perform face recognition on a large face dataset but with very few training face images available per testing person. In addition, we employ a weighted distance metric guided by the LDA eigenvalues to improve the performance of the subspace LDA method. Finally, the improved performance of the subspace LDA approach is demonstrated through experiments using the FERET dataset for face recognition/veriication, a large mugshot dataset for person veriication, and the MPEG-7 dataset. We believe that this approach provides a useful framework for other image recognition tasks as well.", "title": "" }, { "docid": "64f2091b23a82fae56751a78d433047c", "text": "Aging variation poses a serious problem to automatic face recognition systems. Most of the face recognition studies that have addressed the aging problem are focused on age estimation or aging simulation. Designing an appropriate feature representation and an effective matching framework for age invariant face recognition remains an open problem. In this paper, we propose a discriminative model to address face matching in the presence of age variation. In this framework, we first represent each face by designing a densely sampled local feature description scheme, in which scale invariant feature transform (SIFT) and multi-scale local binary patterns (MLBP) serve as the local descriptors. By densely sampling the two kinds of local descriptors from the entire facial image, sufficient discriminatory information, including the distribution of the edge direction in the face image (that is expected to be age invariant) can be extracted for further analysis. Since both SIFT-based local features and MLBP-based local features span a high-dimensional feature space, to avoid the overfitting problem, we develop an algorithm, called multi-feature discriminant analysis (MFDA) to process these two local feature spaces in a unified framework. The MFDA is an extension and improvement of the LDA using multiple features combined with two different random sampling methods in feature and sample space. By random sampling the training set as well as the feature space, multiple LDA-based classifiers are constructed and then combined to generate a robust decision via a fusion rule. Experimental results show that our approach outperforms a state-of-the-art commercial face recognition engine on two public domain face aging data sets: MORPH and FG-NET. We also compare the performance of the proposed discriminative model with a generative aging model. A fusion of discriminative and generative models further improves the face matching accuracy in the presence of aging.", "title": "" }, { "docid": "d76d09ca1e87eb2e08ccc03428c62be0", "text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. 
Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.", "title": "" }, { "docid": "7c799fdfde40289ba4e0ce549f02a5ad", "text": "In this paper, we design a benchmark task and provide the associated datasets for recognizing face images and link them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, by using all the possibly collected face images of this individual on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve the recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide concrete measurement set, evaluation protocol, as well as training data. We also present in details our experiment setup and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.", "title": "" } ]
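The first positive passage for this query describes subspace LDA face recognition: project face images with PCA into a carefully chosen subspace, then apply LDA and match in the discriminant space. A minimal sketch of that PCA-then-LDA pipeline is given below; the number of PCA components and the 1-nearest-neighbor matcher are illustrative assumptions rather than the paper's exact settings (the paper additionally weights distances by the LDA eigenvalues).

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def subspace_lda_recognizer(n_pca_components=100):
    """PCA projection followed by LDA, with a 1-NN matcher in the discriminant space.

    n_pca_components must not exceed the number of training images or pixels.
    """
    return make_pipeline(
        PCA(n_components=n_pca_components),
        LinearDiscriminantAnalysis(),
        KNeighborsClassifier(n_neighbors=1),
    )

# Usage with flattened face images X (n_samples x n_pixels) and identity labels y,
# for example from a gallery such as FERET:
# model = subspace_lda_recognizer(n_pca_components=100)
# model.fit(X_train, y_train)
# predicted_ids = model.predict(X_test)
```

Choosing the PCA dimension before the LDA step is the point the passage emphasizes: too large a subspace reinstates the overfitting problem the method is designed to avoid when only a few training images per person are available.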
[ { "docid": "30997f1a8b350df688a8d85b3f7782a6", "text": "This paper proposes a facial expression recognition (FER) method in videos. The proposed method automatically selects the peak expression face from a video sequence using closeness of the face to the neutral expression. The severely non-frontal faces and poorly aligned faces are discarded in advance to eliminate their negative effects on the peak expression face selection and FER. To reduce the effect of the facial identity in the feature extraction, we compute difference information between the peak expression face and its intra class variation (ICV) face. An ICV face is generated by combining the training faces of an expression class and looks similar to the peak expression face in identity. Because the difference information is defined as the distances of locally pooled texture features between the two faces, the feature extraction is robust to face rotation and mis-alignment. Results show that the proposed method is practical with videos containing spontaneous facial expressions and pose variations. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0871a9e6c97a0f26811bd0f6ae534b03", "text": "OBJECTIVE\nTo measure the intracranial translucency (IT) and the cisterna magna (CM), to produce reference ranges and to examine the interobserver and intraobserver variability of those measurements. To examine the possible association of IT with chromosomal abnormalities.\n\n\nMETHODS\nProspective study on pregnancies assessed at 11 to 14 weeks. IT was measured retrospectively in 17 cases with aneuploidy.\n\n\nRESULTS\nTo produce reference ranges, 465 fetuses were used. IT and CM correlated linearly with crown-rump-length (CRL) and were independent of maternal demographic characteristics and biochemical indices. IT had a weak positive correlation with nuchal translucency. For IT the intraclass correlation coefficient was 0.88 for intraobserver variability and 0.83 for interobserver variability. For CM the intraclass correlation coefficient was 0.95 for intraobserver variability and 0.84 for interobserver variability. The IT multiple of the median was significantly increased in the chromosomally abnormal fetuses (1.02 for the normal and 1.28 for the chromosomally abnormal fetuses, Mann Whitney p < 0.001). IT multiple of the median was a significant predictor of chromosomal abnormality (Receiver Operator Characteristic curve analysis: Area under the curve = 0.86, CI=0.76-0.96, p<0.001).\n\n\nCONCLUSION\nIntracranial translucency and CM can be measured reliably at the 11 to 14 weeks examination and the measurements are highly reproducible. IT appears to be increased in fetuses with chromosomal abnormalities.", "title": "" }, { "docid": "313a902049654e951860b9225dc5f4e8", "text": "Financial portfolio management is the process of constant redistribution of a fund into different financial products. This paper presents a financial-model-free Reinforcement Learning framework to provide a deep machine learning solution to the portfolio management problem. The framework consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. This framework is realized in three instants in this work with a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM). 
They are, along with a number of recently reviewed or published portfolio-selection strategies, examined in three back-test experiments with a trading period of 30 minutes in a cryptocurrency market. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. All three instances of the framework monopolize the top three positions in all experiments, outdistancing other compared trading algorithms. Although with a high commission rate of 0.25% in the backtests, the framework is able to achieve at least 4-fold returns in 50 days.", "title": "" }, { "docid": "e210c3e4a5dbd49192aca2161b44c3c6", "text": "The purpose of this paper is to develop a novel hybrid optimization method (HRABC) based on artificial bee colony algorithm and Taguchi method. The proposed approach is applied to a structural design optimization of a vehicle component and a multi-tool milling optimization problem. A comparison of state-of-the-art optimization techniques for the design and manufacturing optimization problems is presented. The results have demonstrated the superiority of the HRABC over the other", "title": "" }, { "docid": "9e98eb5faf119d7b65eb8233a6a8fb6f", "text": "This paper deals with the method of driving performance evaluation of interior permanent magnet (IPM) motors with 6-pole NdFeB magnets rotor and 9-slot stator. The circuit parameters of IPM motor such as Ld(d-axis inductance), Lq(q-axis inductance), torque and speed are investigated using 2-d finite element analysis(FEA) in each current value with phase angle condition. The low speed region below the based point presents speed-torque curve according to limited current in constant torque condition for maximum torque-per-ampere (MTPA) control. On the other hand, the high speed region above the based point draws speed-torque curve according to frequency and current angle variation in limited current and voltage for field-weakening control. This curve shows that suitable driving region to drive the IPM motors.", "title": "" }, { "docid": "ef66f2be98e3b86e74676eb6367032d9", "text": "The aim of this study is to develop a once-daily sustained release matrix tablet of ibuprofen using hydroxypropyl methylcellulose (HPMC) as release controlling factor and to evaluate drug release parameters as per various release kinetic models. In order to achieve required sustained release profile tablets were directly compressed using Avicel pH 101 and Magnesium stearate. The formulated tablets were also characterized by physical and chemical parameters and results were found in acceptable limits. Different dissolution models were applied to drug release data in order to evaluate release mechanisms and kinetics. Criteria for selecting the most appropriate model was based on linearity (coefficient of correlation). The drug release data fit well to the Higuchi expression. Drug release mechanism was found as a complex mixture of diffusion, swelling and erosion.", "title": "" }, { "docid": "f76b587a1bc282a98cf8e42bdd6f5032", "text": "Ensemble-based methods are among the most widely used techniques for data stream classification. Their popularity is attributable to their good performance in comparison to strong single learners while being relatively easy to deploy in real-world applications. 
Ensemble algorithms are especially useful for data stream learning as they can be integrated with drift detection algorithms and incorporate dynamic updates, such as selective removal or addition of classifiers. This work proposes a taxonomy for data stream ensemble learning as derived from reviewing over 60 algorithms. Important aspects such as combination, diversity, and dynamic updates, are thoroughly discussed. Additional contributions include a listing of popular open-source tools and a discussion about current data stream research challenges and how they relate to ensemble learning (big data streams, concept evolution, feature drifts, temporal dependencies, and others).", "title": "" }, { "docid": "e2a863f5407ce843af196c105adfb2fe", "text": "We study the Student-Project Allocation problem (SPA), a generalisation of the classical Hospitals / Residents problem (HR). An instance of SPA involves a set of students, projects and lecturers. Each project is offered by a unique lecturer, and both projects and lecturers have capacity constraints. Students have preferences over projects, whilst lecturers have preferences over students. We present two optimal linear-time algorithms for allocating students to projects, subject to the preference and capacity constraints. In particular, each algorithm finds a stable matching of students to projects. Here, the concept of stability generalises the stability definition in the HR context. The stable matching produced by the first algorithm is simultaneously best-possible for all students, whilst the one produced by the second algorithm is simultaneously best-possible for all lecturers. We also prove some structural results concerning the set of stable matchings in a given instance of SPA. The SPA problem model that we consider is very general and has applications to a range of different contexts besides student-project allocation.", "title": "" }, { "docid": "30cd626772ad8c8ced85e8312d579252", "text": "An off-state leakage current unique for short-channel SOI MOSFETs is reported. This off-state leakage is the amplification of gate-induced-drain-leakage current by the lateral bipolar transistor in an SOI device due to the floating body. The leakage current can be enhanced by as much as 100 times for 1/4 µm SOI devices. This can pose severe constraints in future 0.1 µm SOI device design. A novel technique was developed based on this mechanism to measure the lateral bipolar transistor current gain β of SOI devices without using a body contact.", "title": "" }, { "docid": "d3b0a831715bd2f2de9d94811bdd47e7", "text": "Aspect Term Extraction (ATE) identifies opinionated aspect terms in texts and is one of the tasks in the SemEval Aspect Based Sentiment Analysis (ABSA) contest. The small amount of available datasets for supervised ATE and the costly human annotation for aspect term labelling give rise to the need for unsupervised ATE. In this paper, we introduce an architecture that achieves top-ranking performance for supervised ATE. Moreover, it can be used efficiently as feature extractor and classifier for unsupervised ATE. Our second contribution is a method to automatically construct datasets for ATE. We train a classifier on our automatically labelled datasets and evaluate it on the human annotated SemEval ABSA test sets. Compared to a strong rule-based baseline, we obtain a dramatically higher F-score and attain precision values above 80%. 
Our unsupervised method beats the supervised ABSA baseline from SemEval, while preserving high precision scores.", "title": "" }, { "docid": "aeb40d93f78904168e10d9d4db64196e", "text": "Haze removal or dehazing is a challenging ill-posed problem that has drawn a significant attention in the last few years. Despite this growing interest, the scientific community is still lacking a reference dataset to evaluate objectively and quantitatively the performance of proposed dehazing methods. The few datasets that are currently considered, both for assessment and training of learning-based dehazing techniques, exclusively rely on synthetic hazy images. To address this limitation, we introduce the first outdoor scenes database (named O-HAZE) composed of pairs of real hazy and corresponding haze-free images. In practice, hazy images have been captured in presence of real haze, generated by professional haze machines, and O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. To illustrate its usefulness, O-HAZE is used to compare a representative set of state-of-the-art dehazing techniques, using traditional image quality metrics such as PSNR, SSIM and CIEDE2000. This reveals the limitations of current techniques, and questions some of their underlying assumptions.", "title": "" }, { "docid": "904ea0bcb94be3214ac495829f9e0b3f", "text": "This work aims to provide a review of the routing protocols in the Internet of Vehicles (IoV) from routing algorithms to their evaluation approaches. We provide five different taxonomies of routing protocols. First, we classify them based on their transmission strategy into three categories: unicast, geocast, and broadcast ones. Second, we classify them into four categories based on information required to perform routing: topology-, position-, map-, and path-based ones. Third, we identify them in delay-sensitive and delay-tolerant ones. Fourth, we discuss them according to their applicability in different dimensions, i.e., 1-D, 2-D, and 3-D. Finally, we discuss their target networks, i.e., homogeneous and heterogeneous ones. As the evaluation is also a vital part in IoV routing protocol studies, we examine the evaluation approaches, i.e., simulation and real-world experiments. IoV includes not only the traditional vehicular ad hoc networks, which usually involve a small-scale and homogeneous network, but also a much larger scale and heterogeneous one. The composition of classical routing protocols and latest heterogeneous network approaches is a promising topic in the future. This work should motivate IoV researchers, practitioners, and new comers to develop IoV routing protocols and technologies.", "title": "" }, { "docid": "33fd2d1c4b3a7448df0382b0710f2a4d", "text": "We have built a CLQA (Cross Language Question Answering) system for a source language with limited data resources (e.g. Indonesian) using a machine learning approach. The CLQA system consists of four modules: question analyzer, keyword translator, passage retriever and answer finder. We used machine learning in two modules, the question classifier (part of the question analyzer) and the answer finder. In the question classifier, we classify the EAT (Expected Answer Type) of a question by using SVM (Support Vector Machine) method. Features for the classification module are basically the output of our shallow question parsing module. 
To improve the classification score, we use statistical information extracted from our Indonesian corpus. In the answer finder module, using an approach different from the common approach in which answer is located by matching the named entity of the word corpus with the EAT of question, we locate the answer by text chunking the word corpus. The features for the SVM based text chunking process consist of question features, word corpus features and similarity scores between the word corpus and the question keyword. In this way, we eliminate the named entity tagging process for the target document. As for the keyword translator module, we use an Indonesian-English dictionary to translate Indonesian keywords into English. We also use some simple patterns to transform some borrowed English words. The keywords are then combined in boolean queries in order to retrieve relevant passages using IDF scores. We first conducted an experiment using 2,837 questions (about 10% are used as the test data) obtained from 18 Indonesian college students. We next conducted a similar experiment using the NTCIR (NII Test Collection for IR Systems) 2005 CLQA task by translating the English questions into Indonesian. Compared to the Japanese-English and Chinese-English CLQA results in the NTCIR 2005, we found that our system is superior to others except for one system that uses a high data resource employing 3 dictionaries. Further, a rough comparison with two other Indonesian-English CLQA systems revealed that our system achieved higher accuracy score. key words: Cross Language Question Answering, Indonesian-English CLQA, limited resource language, machine learning", "title": "" }, { "docid": "9869ef00a0f7237d6e57fa5afe390521", "text": "Land-use classification using remote sensing images covers a wide range of applications. With more detailed spatial and textural information provided in very high resolution (VHR) remote sensing images, a greater range of objects and spatial patterns can be observed than ever before. This offers us a new opportunity for advancing the performance of land-use classification. In this paper, we first introduce an effective midlevel visual elementsoriented land-use classification method based on “partlets,” which are a library of pretrained part detectors used for midlevel visual elements discovery. Taking advantage of midlevel visual elements rather than low-level image features, a partlets-based method represents images by computing their responses to a large number of part detectors. As the number of part detectors grows, a main obstacle to the broader application of this method is its computational cost. To address this problem, we next propose a novel framework to train coarse-to-fine shared intermediate representations, which are termed “sparselets,” from a large number of pretrained part detectors. This is achieved by building a single-hidden-layer autoencoder and a single-hidden-layer neural network with an L0-norm sparsity constraint, respectively. Comprehensive evaluations on a publicly available 21-class VHR landuse data set and comparisons with state-of-the-art approaches demonstrate the effectiveness and superiority of this paper.", "title": "" }, { "docid": "77f5c568ed065e4f23165575c0a05da6", "text": "Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. 
This paper proposes an active localization approach. The approach provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to most efficiently localize the robot. Furthermore, it is able to deal with noisy sensors and approximative world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment.", "title": "" }, { "docid": "b9a5cedbec1b6cd5091fb617c0513a13", "text": "The cerebellum undergoes a protracted development, making it particularly vulnerable to a broad spectrum of developmental events. Acquired destructive and hemorrhagic insults may also occur. The main steps of cerebellar development are reviewed. The normal imaging patterns of the cerebellum in prenatal ultrasound and magnetic resonance imaging (MRI) are described with emphasis on the limitations of these modalities. Because of confusion in the literature regarding the terminology used for cerebellar malformations, some terms (agenesis, hypoplasia, dysplasia, and atrophy) are clarified. Three main pathologic settings are considered and the main diagnoses that can be suggested are described: retrocerebellar fluid enlargement with normal or abnormal biometry (Dandy-Walker malformation, Blake pouch cyst, vermian agenesis), partially or globally decreased cerebellar biometry (cerebellar hypoplasia, agenesis, rhombencephalosynapsis, ischemic and/or hemorrhagic damage), partially or globally abnormal cerebellar echogenicity (ischemic and/or hemorrhagic damage, cerebellar dysplasia, capillary telangiectasia). The appropriate timing for performing MRI is also discussed.", "title": "" }, { "docid": "a6ed725fb7325eaeab50d0c9a7741cb4", "text": "Plant-microbe associations are thought to be beneficial for plant growth and resistance against biotic or abiotic stresses, but for natural ecosystems, the ecological analysis of microbiome function remains in its infancy. We used transformed wild tobacco plants (Nicotiana attenuata) which constitutively express an antimicrobial peptide (Mc-AMP1) of the common ice plant, to establish an ecological tool for plant-microbe studies in the field. Transgenic plants showed in planta activity against plant-beneficial bacteria and were phenotyped within the plants' natural habitat regarding growth, fitness and the resistance against herbivores. Multiple field experiments, conducted over 3 years, indicated no differences compared to isogenic controls. Pyrosequencing analysis of the root-associated microbial communities showed no major alterations but marginal effects at the genus level. Experimental infiltrations revealed a high heterogeneity in peptide tolerance among native isolates and suggests that the diversity of natural microbial communities can be a major obstacle for microbiome manipulations in nature.", "title": "" }, { "docid": "c734c98b1ca8261694386c537870c2f3", "text": "Uncontrolled wind turbine configuration, such as stall-regulation, captures energy relative to the amount of wind speed. This configuration requires constant turbine speed because the generator that is being directly coupled is also connected to a fixed-frequency utility grid. In extremely strong wind conditions, only a fraction of available energy is captured. Plants designed with such a configuration are economically unfeasible to run in these circumstances. Thus, wind turbines operating at variable speed are better alternatives. 
This paper focuses on a controller design methodology applied to a variable-speed, horizontal axis wind turbine. A simple but rigid wind turbine model was used and linearised to some operating points to meet the desired objectives. By using blade pitch control, the deviation of the actual rotor speed from a reference value is minimised. The performances of PI and PID controllers were compared relative to a step wind disturbance. Results show comparative responses between these two controllers. The paper also concludes that with the present methodology, despite the erratic wind data, the wind turbine still manages to operate most of the time at 88% in the stable region.", "title": "" }, { "docid": "4a0bbd8fad443294a8da61cb976a537c", "text": "The microservice architecture (MSA) is an emerging cloud software system, which provides fine-grained, self-contained service components (microservices) used in the construction of complex software systems. DevOps techniques are commonly used to automate the process of development and operation through continuous integration and continuous deployment. Monitoring software systems created by DevOps, makes it possible for MSA to obtain the feedback necessary to improve the system quickly and easily. Nonetheless, systematic, SDLC-driven methods (SDLC: software development life cycle) are lacking to facilitate the migration of software systems from a traditional monolithic architecture to MSA. Therefore, this paper proposes a migration process based on SDLC, including all of the methods and tools required during design, development, and implementation. The mobile application, EasyLearn, was used as an illustrative example to demonstrate the efficacy of the proposed migration process. We believe that this paper could provide valuable references for other development teams seeking to facilitate the migration of existing applications to MSA.", "title": "" } ]
scidocsrr
c92a8278118253113f889929adb79f46
Knowledge sharing models: Do they really fit public organizations?
[ { "docid": "916051a69190e66239f7eeed3c745578", "text": "This paper contributes to our understanding of an increasingly important practical problem, namely the effectiveness of knowledge management in organizations. As with many other managerial innovations, knowledge management appears to have been adopted firstly by manufacturing firms, and is only now beginning to permeate the service sector, predominantly in professional services such as consulting (Hansen et al., 1999; Sarvary, 1999). Public services, traditionally slower to embrace innovative management practices, are only beginning to recognize the importance of knowledge management. There is, as yet, little published research of its implementation in this context (Bate & Robert, 2002). ABSTRACT", "title": "" } ]
[ { "docid": "1a3cad2f10dd5c6a5aacb3676ca8917a", "text": "BACKGROUND\nRecent findings suggest that the mental health costs of unemployment are related to both short- and long-term mental health scars. The main policy tools for dealing with young people at risk of labor market exclusion are Active Labor Market Policy programs for youths (youth programs). There has been little research on the potential effects of participation in youth programs on mental health and even less on whether participation in such programs alleviates the long-term mental health scarring caused by unemployment. This study compares exposure to open youth unemployment and exposure to youth program participation between ages 18 and 21 in relation to adult internalized mental health immediately after the end of the exposure period at age 21 and two decades later at age 43.\n\n\nMETHODS\nThe study uses a five wave Swedish 27-year prospective cohort study consisting of all graduates from compulsory school in an industrial town in Sweden initiated in 1981. Of the original 1083 participants 94.3% of those alive were still participating at the 27-year follow up. Exposure to open unemployment and youth programs were measured between ages 18-21. Mental health, indicated through an ordinal level three item composite index of internalized mental health symptoms (IMHS), was measured pre-exposure at age 16 and post exposure at ages 21 and 42. Ordinal regressions of internalized mental health at ages 21 and 43 were performed using the Polytomous Universal Model (PLUM). Models were controlled for pre-exposure internalized mental health as well as other available confounders.\n\n\nRESULTS\nResults show strong and significant relationships between exposure to open youth unemployment and IMHS at age 21 (OR = 2.48, CI = 1.57-3.60) as well as at age 43 (OR = 1.71, CI = 1.20-2.43). No such significant relationship is observed for exposure to youth programs at age 21 (OR = 0.95, CI = 0.72-1.26) or at age 43 (OR = 1.23, CI = 0.93-1.63).\n\n\nCONCLUSIONS\nA considered and consistent active labor market policy directed at youths could potentially reduce the short- and long-term mental health costs of youth unemployment.", "title": "" }, { "docid": "7aad80319743ac72d2c4e117e5f831fa", "text": "In this letter, we propose a novel method for classifying ambulatory activities using eight plantar pressure sensors within smart shoes. Using these sensors, pressure data of participants can be collected regarding level walking, stair descent, and stair ascent. Analyzing patterns of the ambulatory activities, we present new features with which to describe the ambulatory activities. After selecting critical features, a multi-class support vector machine algorithm is applied to classify these activities. Applying the proposed method to the experimental database, we obtain recognition rates up to 95.2% after six steps.", "title": "" }, { "docid": "62a51c43d4972d41d3b6cdfa23f07bb9", "text": "To meet the development of Internet of Things (IoT), IETF has proposed IPv6 standards working under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart the protocol stack. In this work, we make an in-depth study on a popular implementation of the RPL (routing protocol for low power and lossy network) to provide insights and guidelines for the adoption of these standards. 
Specifically, we use the Contiki operating system and COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning across the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL implemented WSN. This study can also serve as a basis for future enhancement on the proposed standards.", "title": "" }, { "docid": "f918ca37dcf40512c4efa013567a126b", "text": "In the field of robots' obstacle avoidance and navigation, indirect contact sensors such as visual, ultrasonic and infrared detection are widely used. However, the performance of these sensors is always influenced by the severe environment, especially under the dark, dense fog, underwater conditions. The obstacle avoidance robot based on tactile sensor is proposed in this paper to realize the autonomous obstacle avoidance navigation by only using three dimensions force sensor. In addition, the mathematical model and algorithm are optimized to make up the deficiency of tactile sensor. Finally, the feasibility and reliability of this study are verified by the simulation results.", "title": "" }, { "docid": "5f982acc9a377b4e9c96029fe8e0ae90", "text": "Since the first collision differential with its full differential path was presented for MD5 function by Wang et al. in 2004, renewed interests on collision attacks for the MD family of hash functions have surged over the world of cryptology. To date, however, no cryptanalyst can give a second computationally feasible collision differential for MD5 with its full differential path, even no improved differential paths based on Wang’s MD5 collision differential have appeared in literature. Firstly in this paper, a new differential cryptanalysis called signed difference is defined, and some principles or recipes on finding collision differentials and designing differential paths are proposed, the signed difference generation or elimination rules which are implicit in the auxiliary functions, are derived. Then, based on these newly found properties and rules, this paper comes up with a new computationally feasible collision differential for MD5 with its full differential path, which is simpler thus more understandable than Wang’s, and a set of sufficient conditions considering carries that guarantees a full collision is derived from the full differential path. Finally, a multi-message modification-based fast collision attack algorithm for searching collision messages is specialized for the full differential path, resulting in a computational complexity of 2^36 and 2^32 MD5 operations, respectively for the first and second blocks. As for examples, two collision message pairs with different first blocks are obtained.", "title": "" }, { "docid": "f435edc49d4907e8132f436cc43338db", "text": "OBJECTIVE\nDepression is common among patients with diabetes, but its relationship to glycemic control has not been systematically reviewed. 
Our objective was to determine whether depression is associated with poor glycemic control.\n\n\nRESEARCH DESIGN AND METHODS\nMedline and PsycINFO databases and published reference lists were used to identify studies that measured the association of depression with glycemic control. Meta-analytic procedures were used to convert the findings to a common metric, calculate effect sizes (ESs), and statistically analyze the collective data.\n\n\nRESULTS\nA total of 24 studies satisfied the inclusion and exclusion criteria for the meta-analysis. Depression was significantly associated with hyperglycemia (Z = 5.4, P < 0.0001). The standardized ES was in the small-to-moderate range (0.17) and was consistent, as the 95% CI was narrow (0.13-0.21). The ES was similar in studies of either type 1 or type 2 diabetes (ES 0.19 vs. 0.16) and larger when standardized interviews and diagnostic criteria rather than self-report questionnaires were used to assess depression (ES 0.28 vs. 0.15).\n\n\nCONCLUSIONS\nDepression is associated with hyperglycemia in patients with type 1 or type 2 diabetes. Additional studies are needed to establish the directional nature of this relationship and to determine the effects of depression treatment on glycemic control and the long-term course of diabetes.", "title": "" }, { "docid": "3a2740b7f65841f7eb4f74a1fb3c9b65", "text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.", "title": "" }, { "docid": "95fa1dac07ce26c1ccd64a9c86c96a22", "text": "Eyelid bags are the result of relaxation of lid structures like the skin, the orbicularis muscle, and mainly the septum, with subsequent protrusion or pseudo herniation of intraorbital fat contents. The logical treatment of baggy upper and lower eyelids should therefore include repositioning the herniated fat into the orbit and strengthening the attenuated septum in the form of a septorhaphy as a hernia repair. The preservation of orbital fat results in a more youthful appearance. The operative technique of the orbital septorhaphy is demonstrated for the upper and lower eyelid. A prospective series of 60 patients (50 upper and 90 lower blepharoplasties) with a maximum follow-up of 17 months were analyzed. 
Pleasing results were achieved in 56 patients. A partial recurrence was noted in 3 patients and widening of the palpebral fissure in 1 patient. Orbital septorhaphy for baggy eyelids is a rational, reliable procedure to correct the herniation of orbital fat in the upper and lower eyelids. Tightening of the orbicularis muscle and skin may be added as usual. The procedure is technically simple and without trauma to the orbital contents. The morbidity is minimal, the rate of complications is low, and the results are pleasing and reliable.", "title": "" }, { "docid": "b6ef94ddc3b07c737100c0f0d157c698", "text": "To date, much is known about the neural mechanisms underlying working-memory (WM) maintenance and long-term-memory (LTM) encoding. However, these topics have typically been examined in isolation, and little is known about how these processes might interact. Here, we investigated whether EEG oscillations arising specifically during the delay of a delayed matching-to-sample task reflect successful LTM encoding. Given previous findings of increased alpha and theta power with increasing WM load, together with the assumption that successful memory encoding involves processes that are similar to those that are invoked by increasing WM load, alpha and theta power should be higher for subsequently remembered stimuli. Consistent with this assumption, we found stronger alpha power for subsequently remembered stimuli over occipital-to-parietal scalp sites. Furthermore, stronger theta power was found for subsequently remembered stimuli over parietal-to-central electrodes. These results support the idea that alpha and theta oscillations modulate successful LTM encoding.", "title": "" }, { "docid": "b43c4d5d97120963a3ea84a01d029819", "text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For SpanishEnglish translation, in particular, most parallel data available exists only in vastly different domains and registers. In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.", "title": "" }, { "docid": "47ae3428ecddd561b678e5715dfd59ab", "text": "Social media have become an established feature of the dynamic information space that emerges during crisis events. Both emergency responders and the public use these platforms to search for, disseminate, challenge, and make sense of information during crises. In these situations rumors also proliferate, but just how fast such information can spread is an open question. 
We address this gap, modeling the speed of information transmission to compare retransmission times across content and context features. We specifically contrast rumor-affirming messages with rumor-correcting messages on Twitter during a notable hostage crisis to reveal differences in transmission speed. Our work has important implications for the growing field of crisis informatics.", "title": "" }, { "docid": "91b924c8dbb22ca4593150c5fadfd38b", "text": "This paper investigates the power allocation problem of full-duplex cooperative non-orthogonal multiple access (FD-CNOMA) systems, in which the strong users relay data for the weak users via a full duplex relaying mode. For the purpose of fairness, our goal is to maximize the minimum achievable user rate in a NOMA user pair. More specifically, we consider the power optimization problem for two different relaying schemes, i.e., the fixed relaying power scheme and the adaptive relaying power scheme. For the fixed relaying scheme, we demonstrate that the power allocation problem is quasi-concave and a closed-form optimal solution is obtained. Then, based on the derived results of the fixed relaying scheme, the optimal power allocation policy for the adaptive relaying scheme is also obtained by transforming the optimization objective function as a univariate function of the relay transmit power $P_R$. Simulation results show that the proposed FD- CNOMA scheme with adaptive relaying can always achieve better or at least the same performance as the conventional NOMA scheme. In addition, there exists a switching point between FD-CNOMA and half- duplex cooperative NOMA.", "title": "" }, { "docid": "5106155fbe257c635fb9621240fd7736", "text": "AIM\nThe aim of this study was to investigate the prevalence of pain and pain assessment among inpatients in a university hospital.\n\n\nBACKGROUND\nPain management could be considered an indicator of quality of care. Few studies report on prevalence measures including all inpatients.\n\n\nDESIGN\nQuantitative and explorative.\n\n\nMETHOD\nSurvey.\n\n\nRESULTS\nOf the inpatients at the hospital who answered the survey, 494 (65%) reported having experienced pain during the preceding 24 hours. Of the patients who reported having experienced pain during the preceding 24 hours, 81% rated their pain >3 and 42.1% rated their pain >7. Of the patients who reported having experienced pain during the preceding 24 hours, 38.7% had been asked to self-assess their pain using a Numeric Rating Scale (NRS); 29.6% of the patients were completely satisfied, and 11.5% were not at all satisfied with their participation in pain management.\n\n\nCONCLUSIONS\nThe result showed that too many patients are still suffering from pain and that the NRS is not used to the extent it should be. Efforts to overcome under-implementation of pain assessment are required, particularly on wards where pain is not obvious, e.g., wards that do not deal with surgery patients. Work to improve pain management must be carried out through collaboration across professional groups.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nUsing a pain assessment tool such as the NRS could help patients express their pain and improve communication between nurses and patients in relation to pain as well as allow patients to participate in their own care. 
Carrying out prevalence pain measures similar to those used here could be helpful in performing quality improvement work in the area of pain management.", "title": "" }, { "docid": "129dd084e485da5885e2720a4bddd314", "text": "In the present day developing houses, the procedures adopted during the development of software using agile methodologies are acknowledged as a better option than the procedures followed during conventional software development due to its innate characteristics such as iterative development, rapid delivery and reduced risk. Hence, it is desirable that the software development industries should have proper planning for estimating the effort required in agile software development. The existing techniques such as expert opinion, analogy and disaggregation are mostly observed to be ad hoc and in this manner inclined to be mistaken in a number of cases. One of the various approaches for calculating effort of agile projects in an empirical way is the story point approach (SPA). This paper presents a study on analysis of prediction accuracy of estimation process executed in order to improve it using SPA. Different machine learning techniques such as decision tree, stochastic gradient boosting and random forest are considered in order to assess prediction more qualitatively. A comparative analysis of these techniques with existing techniques is also presented and analyzed in order to critically examine their performance.", "title": "" }, { "docid": "d1ab78928c003109eda9e02384e7ca3f", "text": "Code-switching is commonly used in the free-form text environment, such as social media, and it is especially favored in emotion expressions. Emotions in codeswitching texts differ from monolingual texts in that they can be expressed in either monolingual or bilingual forms. In this paper, we first utilize two kinds of knowledge, i.e. bilingual and sentimental information to bridge the gap between different languages. Moreover, we use a term-document bipartite graph to incorporate both bilingual and sentimental information, and propose a label propagation based approach to learn and predict in the bipartite graph. Empirical studies demonstrate the effectiveness of our proposed approach in detecting emotion in code-switching texts.", "title": "" }, { "docid": "a583c568e3c2184e5bda272422562a12", "text": "Video games are primarily designed for the players. However, video game spectating is also a popular activity, boosted by the rise of online video sites and major gaming tournaments. In this paper, we focus on the spectator, who is emerging as an important stakeholder in video games. Our study focuses on Starcraft, a popular real-time strategy game with millions of spectators and high level tournament play. We have collected over a hundred stories of the Starcraft spectator from online sources, aiming for as diverse a group as possible. We make three contributions using this data: i) we find nine personas in the data that tell us who the spectators are and why they spectate; ii) we strive to understand how different stakeholders, like commentators, players, crowds, and game designers, affect the spectator experience; and iii) we infer from the spectators' expressions what makes the game entertaining to watch, forming a theory of distinct types of information asymmetry that create suspense for the spectator. 
One design implication derived from these findings is that, rather than presenting as much information to the spectator as possible, it is more important for the stakeholders to be able to decide how and when they uncover that information.", "title": "" }, { "docid": "ac6344574ced223d007bd3b352b4b1b0", "text": "Mobile personal devices, such as smartphones, USB thumb drives, and sensors, are becoming essential elements of our modern lives. Their large-scale pervasive deployment within the population has already attracted many malware authors, cybercriminals, and even governments. Since the first demonstration of mobile malware by Marcos Velasco, millions of these have been developed with very sophisticated capabilities. They infiltrate highly secure networks using air-gap jumping capability (e.g., “Hammer Drill” and “Brutal Kangaroo”) and spread through heterogeneous computing and communication platforms. Some of these cross-platform malware attacks are capable of infiltrating isolated control systems which might be running a variety of operating systems, such as Windows, Mac OS X, Solaris, and Linux. This paper investigates cross-platform/heterogeneous mobile malware that uses removable media, such as USB connection, to spread between incompatible computing platforms and operating systems. Deep analysis and modeling of cross-platform mobile malware are conducted at the micro (infection) and macro (spread) levels. The micro-level analysis aims to understand the cross-platform malware states and transitions between these states during node-to-node infection. The micro-level analysis helps derive the parameters essential for macro-level analysis, which are also crucial for the elaboration of suitable detection and prevention solutions. The macro-level analysis aims to identify the most important factors affecting cross-platform mobile malware spread within a digitized population. Through simulation, we show that identifying these factors helps to mitigate any outbreaks.", "title": "" }, { "docid": "95c535a587344fd0efbd5d9d299b1b98", "text": "We propose a method to integrate feature extraction and prediction as a single optimization task by stacking a three-layer model as a deep learning structure. The first layer of the deep structure is a Long Short Term Memory (LSTM) model which deals with the sequential input data from a group of assets. The output of the LSTM model is followed by meanpooling, and the result is fed to the second layer. The second layer is a neural network layer, which further learns the feature representation. The output of the second layer is connected to a survival model as the third layer for predicting asset health condition. The parameters of the three-layer model are optimized together via stochastic gradient decent. The proposed method was tested on a small dataset collected from a fleet of mining haul trucks. The model resulted in the “individualized” failure probability representation for assessing the health condition of each individual asset, which well separates the in-service and failed trucks. The proposed method was also tested on a large open source hard drive dataset, and it showed promising result.", "title": "" }, { "docid": "2cab3b3bed055eff92703d23b1edc69d", "text": "Due to their nonvolatile nature, excellent scalability, and high density, memristive nanodevices provide a promising solution for low-cost on-chip storage. 
Integrating memristor-based synaptic crossbars into digital neuromorphic processors (DNPs) may facilitate efficient realization of brain-inspired computing. This article investigates architectural design exploration of DNPs with memristive synapses by proposing two synapse readout schemes. The key design tradeoffs involving different analog-to-digital conversions and memory accessing styles are thoroughly investigated. A novel storage strategy optimized for feedforward neural networks is proposed in this work, which greatly reduces the energy and area cost of the memristor array and its peripherals.", "title": "" }, { "docid": "c18cec45829e4aec057443b9da0eeee5", "text": "This paper presents a synthesis on the application of fuzzy integral as an innovative tool for criteria aggregation in decision problems. The main point is that fuzzy integrals are able to model interaction between criteria in a flexible way. The methodology has been elaborated mainly in Japan, and has been applied there successfully in various fields such as design, reliability, evaluation of goods, etc. It seems however that this technique is still very little known in Europe. It is one of the aims of this review to disseminate this emerging technology in many industrial fields.", "title": "" } ]
scidocsrr
8b800864adfdc232391e4bdb523a6cc6
Analysis of sports data by using bivariate Poisson models
[ { "docid": "2ab8c692ef55d2501ff61f487f91da9c", "text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.", "title": "" } ]
[ { "docid": "7e25cdacce18dcb236f5347b8d2eaa76", "text": "Nowadays, quadrotors have become a popular UAV research platform because full control can be achieved through speed variations in each and every one of its four rotors. Here, a non-linear dynamic model based on quaternions for attitude is presented as well as its corresponding LQR Gain Scheduling Control. All considerations for the quadrotor movements are described through their state variables. Modeling is carried out through the Newton-Euler formalism. Finally, the control system is simulated and the results shown in a novel and direct unit quaternion. Thus, a successful trajectory and attitude control of a quadrotor is achieved.", "title": "" }, { "docid": "809b5194b8f842a6e3f7e5b8748fefc3", "text": "Failure modes and mechanisms of AlGaN/GaN high-electron-mobility transistors are reviewed. Data from three de-accelerated tests are presented, which demonstrate a close correlation between failure modes and bias point. Maximum degradation was found in \"semi-on\" conditions, close to the maximum of hot-electron generation which was detected with the aid of electroluminescence (EL) measurements. This suggests a contribution of hot-electron effects to device degradation, at least at moderate drain bias (VDS<30 V). A procedure for the characterization of hot carrier phenomena based on EL microscopy and spectroscopy is described. At high drain bias (VDS>30-50 V), new failure mechanisms are triggered, which induce an increase of gate leakage current. The latter is possibly related with the inverse piezoelectric effect leading to defect generation due to strain relaxation, and/or to localized permanent breakdown of the AlGaN barrier layer. Results are compared with literature data throughout the text.", "title": "" }, { "docid": "82c4aa6bc189e011556ca7aa6d1688b9", "text": "Two aspects of children’s early gender development the spontaneous production of gender labels and sex-typed play were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children’s gender labeling as based on mothers’ biweekly reports on their children’s language from 9 through 21 months. Videotapes of children’s play both alone and with mother at 17 and 21 months were independently analyzed for play with gender stereotyped and neutral toys. Finally, the relation between gender labeling and sex-typed play was examined. Children transitioned to using gender labels at approximately 19 months on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in sex-typed play, suggesting that knowledge of gender categories might influence sex-typing before the age of 2.", "title": "" }, { "docid": "94076bd2a4587df2bee9d09e81af2109", "text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. 
As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.", "title": "" }, { "docid": "ddc0b599dc2cb3672e9a2a1f5a9a9163", "text": "Head and modifier detection is an important problem for applications that handle short texts such as search queries, ads keywords, titles, captions, etc. In many cases, short texts such as search queries do not follow grammar rules, and existing approaches for head and modifier detection are coarse-grained, domain specific, and/or require labeling of large amounts of training data. In this paper, we introduce a semantic approach for head and modifier detection. We first obtain a large number of instance level head-modifier pairs from search log. Then, we develop a conceptualization mechanism to generalize the instance level pairs to concept level. Finally, we derive weighted concept patterns that are concise, accurate, and have strong generalization power in head and modifier detection. Furthermore, we identify a subset of modifiers that we call constraints. Constraints are usually specific and not negligible as far as the intent of the short text is concerned, while non-constraint modifiers are more subjective. The mechanism we developed has been used in production for search relevance and ads matching. We use extensive experiment results to demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "4207c7f69d65c5b46abce85a369dada1", "text": "We present a novel approach, called selectional branching, which uses confidence estimates to decide when to employ a beam, providing the accuracy of beam search at speeds close to a greedy transition-based dependency parsing approach. Selectional branching is guaranteed to perform a fewer number of transitions than beam search yet performs as accurately. We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. With the standard setup, our parser shows an unlabeled attachment score of 92.96% and a parsing speed of 9 milliseconds per sentence, which is faster and more accurate than the current state-of-the-art transitionbased parser that uses beam search.", "title": "" }, { "docid": "d4563e034ae0fb98f037625ca1b5b50a", "text": "This book focuses on the super resolution of images and video. The authors’ use of the term super resolution (SR) is used to describe the process of obtaining a high resolution (HR) image, or a sequence of HR images, from a set of low resolution (LR) observations. This process has also been referred to in the literature as resolution enhancement (RE). SR has been applied primarily to spatial and temporal RE, but also to hyperspectral image enhancement. 
This book concentrates on motion based spatial RE, although the authors also describe motion free and hyperspectral image SR problems. Also examined is the very recent research area of SR for compression, which consists of the intentional downsampling, during pre-processing, of a video sequence to be compressed and the application of SR techniques, during post-processing, on the compressed sequence. It is clear that there is a strong interplay between the tools and techniques developed for SR and a number of other inverse problems encountered in signal processing (e.g., image restoration, motion estimation). SR techniques are being applied to a variety of fields, such as obtaining improved still images from video sequences (video printing), high definition television, high performance color Liquid Crystal Display (LCD) screens, improvement of the quality of color images taken by one CCD, video surveillance, remote sensing, and medical imaging. The authors believe that the SR/RE area has matured enough to develop a body of knowledge that can now start to provide useful and practical solutions to challenging real problems and that SR techniques can be an integral part of an image and video codec and can drive the development of new coder-decoders (codecs) and standards.", "title": "" }, { "docid": "441aea2c0b53d453e15d0e27a5cd1bb5", "text": "The Benes network is a rearrangeable nonblocking network which can realize any arbitrary permutation. Overall, the r-dimensional Benes network connects 2^r inputs to 2^r outputs through 2r - 1 levels of 2 × 2 switches. Each level of switches consists of 2^(r-1) switches, and hence the size of the network has to be a power of two. In this paper, we extend Benes networks to arbitrary sizes. We also show that the looping routing algorithm used in Benes networks can be slightly modified and applied to arbitrary size Benes networks.", "title": "" }, { "docid": "457e2f2583a94bf8b6f7cecbd08d7b34", "text": "We present a fast structure-based ASCII art generation method that accepts arbitrary images (real photograph or hand-drawing) as input. Our method supports not only fixed width fonts, but also the visually more pleasant and computationally more challenging proportional fonts, which allows us to represent challenging images with a variety of structures by characters. We take human perception into account and develop a novel feature extraction scheme based on a multi-orientation phase congruency model. Different from most existing contour detection methods, our scheme does not attempt to remove textures as much as possible. Instead, it aims at faithfully capturing visually sensitive features, including both main contours and textural structures, while suppressing visually insensitive features, such as minor texture elements and noise. Together with a deformation-tolerant image similarity metric, we can generate lively and meaningful ASCII art, even when the choices of character shapes and placement are very limited. A dynamic programming based optimization is proposed to simultaneously determine the optimal proportional-font characters for matching and their optimal placement. Experimental results show that our results outperform state-of-the-art methods in terms of visual quality.", "title": "" }, { "docid": "3c2f33b76e2b9da568e2d699e71ef93a", "text": "A novel approach to designing approximately linear phase infinite-impulse-response (IIR) digital filters in the passband region is introduced. 
The proposed approach yields digital IIR filters whose numerators represent linear phase finite-impulse-response (FIR) filters. As an example, low-pass IIR differentiators are introduced. The range and high-frequency suppression of the proposed low-pass differentiators are comparable to those obtained by higher order FIR low-pass differentiators. In addition, the differentiators exhibit almost linear phases in the passband regions", "title": "" }, { "docid": "dbec1cf4a0904af336e0c75c211f49b7", "text": "BACKGROUND\nBoron neutron capture therapy (BNCT) is based on the nuclear reaction that occurs when boron-10 is irradiated with low-energy thermal neutrons to yield high linear energy transfer alpha particles and recoiling lithium-7 nuclei. Clinical interest in BNCT has focused primarily on the treatment of high-grade gliomas and either cutaneous primaries or cerebral metastases of melanoma, most recently, head and neck and liver cancer. Neutron sources for BNCT currently are limited to nuclear reactors and these are available in the United States, Japan, several European countries, and Argentina. Accelerators also can be used to produce epithermal neutrons and these are being developed in several countries, but none are currently being used for BNCT.\n\n\nBORON DELIVERY AGENTS\nTwo boron drugs have been used clinically, sodium borocaptate (Na(2)B(12)H(11)SH) and a dihydroxyboryl derivative of phenylalanine called boronophenylalanine. The major challenge in the development of boron delivery agents has been the requirement for selective tumor targeting to achieve boron concentrations ( approximately 20 microg/g tumor) sufficient to deliver therapeutic doses of radiation to the tumor with minimal normal tissue toxicity. Over the past 20 years, other classes of boron-containing compounds have been designed and synthesized that include boron-containing amino acids, biochemical precursors of nucleic acids, DNA-binding molecules, and porphyrin derivatives. High molecular weight delivery agents include monoclonal antibodies and their fragments, which can recognize a tumor-associated epitope, such as epidermal growth factor, and liposomes. However, it is unlikely that any single agent will target all or even most of the tumor cells, and most likely, combinations of agents will be required and their delivery will have to be optimized.\n\n\nCLINICAL TRIALS\nCurrent or recently completed clinical trials have been carried out in Japan, Europe, and the United States. The vast majority of patients have had high-grade gliomas. Treatment has consisted first of \"debulking\" surgery to remove as much of the tumor as possible, followed by BNCT at varying times after surgery. Sodium borocaptate and boronophenylalanine administered i.v. have been used as the boron delivery agents. The best survival data from these studies are at least comparable with those obtained by current standard therapy for glioblastoma multiforme, and the safety of the procedure has been established.\n\n\nCONCLUSIONS\nCritical issues that must be addressed include the need for more selective and effective boron delivery agents, the development of methods to provide semiquantitative estimates of tumor boron content before treatment, improvements in clinical implementation of BNCT, and a need for randomized clinical trials with an unequivocal demonstration of therapeutic efficacy. 
If these issues are adequately addressed, then BNCT could move forward as a treatment modality.", "title": "" }, { "docid": "3d8f937692b9c0e2bb2c5b0148e1ef2c", "text": "BACKGROUND\nAttenuated peripheral perfusion in patients with advanced chronic heart failure (CHF) is partially the result of endothelial dysfunction. This has been causally linked to an impaired endogenous regenerative capacity of circulating progenitor cells (CPC). The aim of this study was to elucidate whether exercise training (ET) affects exercise intolerance and left ventricular (LV) performance in patients with advanced CHF (New York Heart Association class IIIb) and whether this is associated with correction of peripheral vasomotion and induction of endogenous regeneration.\n\n\nMETHODS AND RESULTS\nThirty-seven patients with CHF (LV ejection fraction 24+/-2%) were randomly assigned to 12 weeks of ET or sedentary lifestyle (control). At the beginning of the study and after 12 weeks, maximal oxygen consumption (Vo(2)max) and LV ejection fraction were determined; the number of CD34(+)/KDR(+) CPCs was quantified by flow cytometry and CPC functional capacity was determined by migration assay. Flow-mediated dilation was assessed by ultrasound. Capillary density was measured in skeletal muscle tissue samples. In advanced CHF, ET improved Vo(2)max by +2.7+/-2.2 versus -0.8+/-3.1 mL/min/kg in control (P=0.009) and LV ejection fraction by +9.4+/-6.1 versus -0.8+/-5.2% in control (P<0.001). Flow-mediated dilation improved by +7.43+/-2.28 versus +0.09+/-2.18% in control (P<0.001). ET increased the number of CPC by +83+/-60 versus -6+/-109 cells/mL in control (P=0.014) and their migratory capacity by +224+/-263 versus -12+/-159 CPC/1000 plated CPC in control (P=0.03). Skeletal muscle capillary density increased by +0.22+/-0.10 versus -0.02+/-0.16 capillaries per fiber in control (P<0.001).\n\n\nCONCLUSIONS\nTwelve weeks of ET in patients with advanced CHF is associated with augmented regenerative capacity of CPCs, enhanced flow-mediated dilation suggestive of improvement in endothelial function, skeletal muscle neovascularization, and improved LV function. Clinical Trial Registration- http://www.clinicaltrials.gov. Unique Identifier: NCT00176384.", "title": "" }, { "docid": "58de521ab563333c2051b590592501a8", "text": "Prognostics and systems health management (PHM) is an enabling discipline that uses sensors to assess the health of systems, diagnoses anomalous behavior, and predicts the remaining useful performance over the life of the asset. The advent of the Internet of Things (IoT) enables PHM to be applied to all types of assets across all sectors, thereby creating a paradigm shift that is opening up significant new business opportunities. This paper introduces the concepts of PHM and discusses the opportunities provided by the IoT. Developments are illustrated with examples of innovations from manufacturing, consumer products, and infrastructure. From this review, a number of challenges that result from the rapid adoption of IoT-based PHM are identified. These include appropriate analytics, security, IoT platforms, sensor energy harvesting, IoT business models, and licensing approaches.", "title": "" }, { "docid": "ff6b4840787027df75873f38fbb311b4", "text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to the attractive features such as universal accessibility, high accuracy, and low cost. 
As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns about potential leakage of personal health records (PHRs) are the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and ongoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.", "title": "" }, { "docid": "dc22f9ee68e7c81a353a128a9cc32152", "text": "In this paper we describe a new global alignment method called AVID. The method is designed to be fast, memory efficient, and practical for sequence alignments of large genomic regions up to megabases long. We present numerous applications of the method, ranging from the comparison of assemblies to alignment of large syntenic genomic regions and whole genome human/mouse alignments. We have also performed a quantitative comparison of AVID with other popular alignment tools. To this end, we have established a format for the representation of alignments and methods for their comparison. These formats and methods should be useful for future studies. The tools we have developed for the alignment comparisons, as well as the AVID program, are publicly available. See Web Site References section for AVID Web address and Web addresses for other programs discussed in this paper.", "title": "" }, { "docid": "d473619f76f81eced041df5bc012c246", "text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses of all of our experiments. 
Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.", "title": "" }, { "docid": "919d9d27045587eceb4b12f8118c92c5", "text": "The authors present an approach for designing self-monitoring technology called \"semi-automated tracking,\" which combines both manual and automated data collection methods. Through this approach, they aim to lower the capture burdens, collect data that is typically hard to track automatically, and promote awareness to help people achieve their self-monitoring goals. They first specify three design considerations for semi-automated tracking: data capture feasibility, the purpose of self-monitoring, and the motivation level. They then provide examples of semi-automated tracking applications in the domains of sleep, mood, and food tracking to demonstrate strategies they developed to find the right balance between manual tracking and automated tracking, combining each of their benefits while minimizing their associated limitations.", "title": "" }, { "docid": "503100a80d6aac3d2549825be8b64ef8", "text": "Generative models for deep learning are promising both to improve understanding of the model, and yield training methods requiring fewer labeled samples. Recent works use generative model approaches to produce the deep net’s input given the value of a hidden layer several levels above. However, there is no accompanying “proof of correctness” for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input. Furthermore, these models are complicated. The current paper takes a more theoretical tack. It presents a very simple generative model for ReLU deep nets, with the following characteristics: (i) The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is A then the reverse transformation is A . (This can be seen as an explanation of the old weight tying idea for denoising autoencoders.) (ii) Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers. Under this assumption —which is experimentally tested on real-life nets like AlexNet— it is formally proved that feed forward net is a correct inference method for recovering the hidden layer. The generative model suggests a simple modification for training: use the generative model to produce synthetic data with labels and include it in the training set. Experiments are shown to support this theory of random-like deep nets; and that it helps the training. This extended abstract provides a succinct description of our results while the full paper is available on arXiv.", "title": "" }, { "docid": "d8f21e77a60852ea83f4ebf74da3bcd0", "text": "In recent years different lines of evidence have led to the idea that motor actions and movements in both vertebrates and invertebrates are composed of elementary building blocks. The entire motor repertoire can be spanned by applying a well-defined set of operations and transformations to these primitives and by combining them in many different ways according to well-defined syntactic rules. Motor and movement primitives and modules might exist at the neural, dynamic and kinematic levels with complicated mapping among the elementary building blocks subserving these different levels of representation. 
Hence, while considerable progress has been made in recent years in unravelling the nature of these primitives, new experimental, computational and conceptual approaches are needed to further advance our understanding of motor compositionality.", "title": "" }, { "docid": "0ccf20f28baf8a11c78d593efb9f6a52", "text": "From a traction application point of view, proper operation of the synchronous reluctance motor over a wide speed range and mechanical robustness is desired. This paper presents new methods to improve the rotor mechanical integrity and the flux weakening capability at high speed using geometrical and variable ampere-turns concepts. The results from computer-aided analysis and experiment are compared to evaluate the methods. It is shown that, to achieve a proper design at high speed, the magnetic and mechanical performances need to be simultaneously analyzed due to their mutual effect.", "title": "" } ]
scidocsrr
718c28006c1b10242323656dc6df5662
Extraction and Recognition of the Vehicle License Plate for Passing under Outside Environment
[ { "docid": "31a3750823b0c8dc4302fae37c81c022", "text": "Automatic Number Plate Recognition (ANPR) is a mass surveillance system that captures the image of vehicles and recognizes their license number. ANPR can be assisted in the detection of stolen vehicles. The detection of stolen vehicles can be done in an efficient manner by using the ANPR systems located in the highways. This paper presents a recognition method in which the vehicle plate image is obtained by the digital cameras and the image is processed to get the number plate information. A rear image of a vehicle is captured and processed using various algorithms. In this context, the number plate area is localized using a novel „feature-based number plate localization‟ method which consists of many algorithms. But our study mainly focusing on the two fast algorithms i.e., Edge Finding Method and Window Filtering Method for the better development of the number plate detection system", "title": "" } ]
[ { "docid": "e93f4f5c5828a7e82819964bbd29f8d4", "text": "BACKGROUND\nAlthough hyaluronic acid (HA) specifications such as molecular weight and particle size are fairly well characterized, little information about HA ultrastructural and morphologic characteristics has been reported in clinical literature.\n\n\nOBJECTIVE\nTo examine uniformity of HA structure, the effects of extrusion, and lidocaine dilution of 3 commercially available HA soft-tissue fillers.\n\n\nMATERIALS AND METHODS\nUsing scanning electron microscopy and energy-dispersive x-ray analysis, investigators examined the soft-tissue fillers at various magnifications for ultrastructural detail and elemental distributions.\n\n\nRESULTS\nAll HAs contained oxygen, carbon, and sodium, but with uneven distributions. Irregular particulate matter was present in RES but BEL and JUV were largely particle free. Spacing was more uniform in BEL than JUV and JUV was more uniform than RES. Lidocaine had no apparent effect on morphology; extrusion through a 30-G needle had no effect on ultrastructure.\n\n\nCONCLUSION\nDescriptions of the ultrastructural compositions and nature of BEL, JUV, and RES are helpful for matching the areas to be treated with the HA soft-tissue filler architecture. Lidocaine and extrusion through a 30-G needle exerted no influence on HA structure. Belotero Balance shows consistency throughout the syringe and across manufactured lots.", "title": "" }, { "docid": "79a180a30dd47d50ae15081aab68badd", "text": "For a description of light propagation in biological tissue it is usually assumed that tissue is a random medium. We report a pronounced light guiding effect in cubes of human dentin that cannot be described by this standard model. Monte Carlo simulations which consider the microstructure of dentin are performed and successfully compared to experiments. Contrary to explanations so far, we show that light guiding is due to scattering by the tissue's microstructure. Exploiting this concept, light can be guided in arbitrary directions or locations without involving reflections or wave effects.", "title": "" }, { "docid": "6477206bc2547c8bac755a9d326258b1", "text": "Due to recent advances in digital technologies, and availability of credible data, an area of artificial intelligence, deep learning, has emerged and has demonstrated its ability and effectiveness in solving complex learning problems not possible before. In particular, convolutional neural networks (CNNs) have demonstrated their effectiveness in the image detection and recognition applications. However, they require intensive CPU operations and memory bandwidth that make general CPUs fail to achieve the desired performance levels. Consequently, hardware accelerators that use application-specific integrated circuits, field-programmable gate arrays (FPGAs), and graphic processing units have been employed to improve the throughput of CNNs. More precisely, FPGAs have been recently adopted for accelerating the implementation of deep learning networks due to their ability to maximize parallelism and their energy efficiency. In this paper, we review the recent existing techniques for accelerating deep learning networks on FPGAs. We highlight the key features employed by the various techniques for improving the acceleration performance. In addition, we provide recommendations for enhancing the utilization of FPGAs for CNNs acceleration. The techniques investigated in this paper represent the recent trends in the FPGA-based accelerators of deep learning networks. 
Thus, this paper is expected to direct the future advances on efficient hardware accelerators and to be useful for deep learning researchers.", "title": "" }, { "docid": "49f96e96623502ffe6053cab43054edf", "text": "Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.", "title": "" }, { "docid": "bf9da537d5efcc5b90609db9f9ec39b9", "text": "why the pattern is found in other types of skin lesions with active vascularization, such as our patient’s scars. When first described in actinic keratosis, rosettes were characterized as ‘‘4 white points arranged as a 4-leaf clover.’’2 The sign has since been reported in other skin lesions such as squamous cell carcinoma, basal cell carcinoma, melanoma, and lichenoid keratosis.3--7 Rosettes are believed to be the result of an optical effect caused by interaction between polarized light and follicular openings.6 The rainbow pattern and rosettes are not considered to be specific dermoscopic features of the lesion. Since it appears that they are secondary effects of the interaction between different skin structures and polarized light, they will likely be observed in various types of skin lesions. References", "title": "" }, { "docid": "4b42068a2a3a7a51cc6b1ef4991c282c", "text": "Group sparsity or nonlocal image representation has shown great potential in image denoising. However, most existing methods only consider the nonlocal self-similarity (NSS) prior of noisy input image, that is, the similar patches collected only from degraded input, which makes the quality of image denoising largely depend on the input itself. In this paper we propose a new prior model for image denoising, called group sparsity residual constraint (GSRC). Different from the most existing NSS prior-based denoising methods, two kinds of NSS prior (i.e., NSS priors of noisy input image and pre-filtered image) are simultaneously used for image denoising. In particular, to boost the performance of group sparse-based image denoising, the group sparsity residual is proposed, and thus the problem of image denoising is transformed into one that reduces the group sparsity residual. 
To reduce the residual, we first obtain a good estimation of the group sparse coefficients of the original image by pre-filtering and then the group sparse coefficients of noisy input image are used to approximate the estimation. To improve the accuracy of the nonlocal similar patches selection, an adaptive patch search scheme is proposed. Moreover, to fuse these two NSS priors better, an effective iterative shrinkage algorithm is developed to solve the proposed GSRC model. Experimental results have demonstrated that the proposed GSRC modeling outperforms many state-of-the-art denoising methods in terms of the objective and the perceptual qualities.", "title": "" }, { "docid": "748c2047817ad53abf60a26624612a9e", "text": "In this paper, we propose a new method to efficiently synthesi ze character motions that involve close contacts such as wearing a T-shirt, passing the arms through the strin gs of a knapsack, or piggy-back carrying an injured person. We introduce the concept of topology coordinates, i n which the topological relationships of the segments are embedded into the attributes. As a result, the computati on for collision avoidance can be greatly reduced for complex motions that require tangling the segments of the bo dy. Our method can be combinedly used with other prevalent frame-based optimization techniques such as inv erse kinematics.", "title": "" }, { "docid": "fecf5e7f8be440b0e77e4483f3526998", "text": "Image translation between two domains is a class of problems aiming to learn mapping from an input image in the source domain to an output image in the target domain. It has been applied to numerous domains, such as data augmentation, domain adaptation and unsupervised training. When paired training data is not accessible, image translation becomes an ill-posed problem. We constrain the problem with the assumption that the translated image needs to be perceptually similar to the original image and also appears to be drawn from the new domain, and propose a simple yet effective image translation model consisting of a single generator trained with a self-regularization term and an adversarial term. We further notice that existing image translation techniques [56, 28] are agnostic to the subjects of interest and often introduce unwanted changes or artifacts to the input. Thus we propose to add an attention module to predict an attention map to guide the image translation process. The module learns to attend to key parts of the image while keeping everything else unaltered, essentially avoiding undesired artifacts or changes. The predicted attention map also opens door to applications such as unsupervised segmentation and saliency detection. Extensive experiments and evaluations show that our model while being simpler, achieves significantly better performance than existing image translation methods.", "title": "" }, { "docid": "999eda741a3c132ac8640e55721b53bb", "text": "This paper presents an overview of color and texture descriptors that have been approved for the Final Committee Draft of the MPEG-7 standard. The color and texture descriptors that are described in this paper have undergone extensive evaluation and development during the past two years. Evaluation criteria include effectiveness of the descriptors in similarity retrieval, as well as extraction, storage, and representation complexities. 
The color descriptors in the standard include a histogram descriptor that is coded using the Haar transform, a color structure histogram, a dominant color descriptor, and a color layout descriptor. The three texture descriptors include one that characterizes homogeneous texture regions and another that represents the local edge distribution. A compact descriptor that facilitates texture browsing is also defined. Each of the descriptors is explained in detail by their semantics, extraction and usage. Effectiveness is documented by experimental results.", "title": "" }, { "docid": "dddef6d3c0b8d32f215094f7fd8a5f54", "text": "Complex systems are often characterized by distinct types of interactions between the same entities. These can be described as a multilayer network where each layer represents one type of interaction. These layers may be interdependent in complicated ways, revealing different kinds of structure in the network. In this work we present a generative model, and an efficient expectation-maximization algorithm, which allows us to perform inference tasks such as community detection and link prediction in this setting. Our model assumes overlapping communities that are common between the layers, while allowing these communities to affect each layer in a different way, including arbitrary mixtures of assortative, disassortative, or directed structure. It also gives us a mathematically principled way to define the interdependence between layers, by measuring how much information about one layer helps us predict links in another layer. In particular, this allows us to bundle layers together to compress redundant information and identify small groups of layers which suffice to predict the remaining layers accurately. We illustrate these findings by analyzing synthetic data and two real multilayer networks, one representing social support relationships among villagers in South India and the other representing shared genetic substring material between genes of the malaria parasite.", "title": "" }, { "docid": "aeae1018401535451478c473ce53ce92", "text": "This paper reports three experiments using the secondary task methodology of working memory, in the task analysis of a complex computer game, ‘SPACE FORTRESS. Unlike traditional studies of working memory, the primary task relies on perceptual-motor skills and accurate timing of responses as well as shortand long-term strategic decisions. In experiment 1, highly trained game performance was affected by the requirement to generate concurrent, paced responses and by concurrent loads on working memory, but not by the requirement to produce a vocal or a tapping response to a secondary stimulus. In experiment 2, expert performance was substantially affected by secondary tasks which had high v&o-spatial or verbal cognitive processing loads, but was not contingent upon the nature (verbal or visuo-spatial) of the processing requirement. In experiment 3, subjects were tested on dual-task performance after only 3 hours practice on Space Fortress, and again after a further five hours practice on the game. Early in training, paced generation of responses had very little effect on game performance. Game performance was affected by general working memory load, but an analysis of component measures showed that a wider range and rather different aspects of performance were disrupted by a visuo-spatial memory load than were affected by a secondary verbal load. 
With further training this pattern changed such that the differential nature of the disruption by a secondary visuo-spatial task was much reduced. Also, paced generation of responses had a small effect on game performance. However the disruption was not as dramatic as that shown for expert players. Subjective ratings of task difficulty were poor predictors of performance in all of the three experiments. These results suggested that general working memory load was an important aspect of performance at all levels", "title": "" }, { "docid": "7db08db3dc8ea195b2c2e3b48d358367", "text": "Relationships between authors based on characteristics of published literature have been studied for decades. Author cocitation analysis using mapping techniques has been most frequently used to study how closely two authors are thought to be in intellectual space based on how members of the research community co-cite their works. Other approaches exist to study author relatedness based more directly on the text of their published works. In this study we present static and dynamic word-based approaches using vector space modeling, as well as a topic-based approach based on Latent Dirichlet Allocation for mapping author research relatedness. Vector space modeling is used to define an author space consisting of works by a given author. Outcomes for the two word-based approaches and a topic-based approach for 50 prolific authors in library and information science are compared with more traditional author cocitation analysis using multidimensional scaling and hierarchical cluster analysis. The two word-based approaches produced similar outcomes except where two authors were frequent co-authors for the majority of their articles. The topic-based approach produced the most distinctive map.", "title": "" }, { "docid": "486d77b1e951e5c87454490c15d91ae5", "text": "BACKGROUND\nThe influence of menopausal status on depressive symptoms is unclear in diverse ethnic groups. This study examined the longitudinal relationship between changes in menopausal status and the risk of clinically relevant depressive symptoms and whether the relationship differed according to initial depressive symptom level.\n\n\nMETHODS\n3302 African American, Chinese, Hispanic, Japanese, and White women, aged 42-52 years at entry into the Study of Women's Health Across the Nation (SWAN), a community-based, multisite longitudinal observational study, were evaluated annually from 1995 through 2002. Random effects multiple logistic regression analyses were used to determine the relationship between menopausal status and prevalence of low and high depressive symptom scores (CES-D <16 or > or =16) over 5 years.\n\n\nRESULTS\nAt baseline, 23% of the sample had elevated CES-D scores. A woman was more likely to report CES-D > or =16 when she was early peri-, late peri-, postmenopausal or currently/formerly using hormone therapy (HT), relative to when she was premenopausal (OR range 1.30 to 1.71). Effects were somewhat stronger for women with low CES-D scores at baseline. Health and psychosocial factors increased the odds of having a high CES-D and in some cases, were more important than menopausal status.\n\n\nLIMITATIONS\nWe used a measure of current depressive symptoms rather than a diagnosis of clinical depression. Thus, we can only make conclusions about symptoms current at annual assessments.\n\n\nCONCLUSION\nMost midlife women do not experience high depressive symptoms. 
Those that do are more likely to experience high depressive symptom levels when perimenopausal or postmenopausal than when premenopausal, independent of factors such as difficulty paying for basics, negative attitudes, poor perceived health, and stressful events.", "title": "" }, { "docid": "c675a2f1fed4ccb5708be895190b02cd", "text": "Decompilation is important for many security applications; it facilitates the tedious task of manual malware reverse engineering and enables the use of source-based security tools on binary code. This includes tools to find vulnerabilities, discover bugs, and perform taint tracking. Recovering high-level control constructs is essential for decompilation in order to produce structured code that is suitable for human analysts and sourcebased program analysis techniques. State-of-the-art decompilers rely on structural analysis, a pattern-matching approach over the control flow graph, to recover control constructs from binary code. Whenever no match is found, they generate goto statements and thus produce unstructured decompiled output. Those statements are problematic because they make decompiled code harder to understand and less suitable for program analysis. In this paper, we present DREAM, the first decompiler to offer a goto-free output. DREAM uses a novel patternindependent control-flow structuring algorithm that can recover all control constructs in binary programs and produce structured decompiled code without any goto statement. We also present semantics-preserving transformations that can transform unstructured control flow graphs into structured graphs. We demonstrate the correctness of our algorithms and show that we outperform both the leading industry and academic decompilers: Hex-Rays and Phoenix. We use the GNU coreutils suite of utilities as a benchmark. Apart from reducing the number of goto statements to zero, DREAM also produced more compact code (less lines of code) for 72.7% of decompiled functions compared to Hex-Rays and 98.8% compared to Phoenix. We also present a comparison of Hex-Rays and DREAM when decompiling three samples from Cridex, ZeusP2P, and SpyEye malware families.", "title": "" }, { "docid": "a9814f2847c6e1bf66893e4fa1a9c50e", "text": "This paper is aimed at obtaining some new lower and upper bounds for the functions cosx , sinx/x , x/coshx , thus establishing inequalities involving circulr, hyperbolic and exponential functions.", "title": "" }, { "docid": "ea624ba3a83c4f042fb48f4ebcba705a", "text": "Using magnetic field data as fingerprints for smartphone indoor positioning has become popular in recent years. Particle filter is often used to improve accuracy. However, most of existing particle filter based approaches either are heavily affected by motion estimation errors, which result in unreliable systems, or impose strong restrictions on smartphone such as fixed phone orientation, which are not practical for real-life use. In this paper, we present a novel indoor positioning system for smartphones, which is built on our proposed reliability-augmented particle filter. We create several innovations on the motion model, the measurement model, and the resampling model to enhance the basic particle filter. To minimize errors in motion estimation and improve the robustness of the basic particle filter, we propose a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. 
We use a hybrid measurement model, combining a new magnetic fingerprinting model and the existing magnitude fingerprinting model, to improve system performance, and importantly avoid calibrating magnetometers for different smartphones. In addition, we propose an adaptive sampling algorithm to reduce computation overhead, which in turn improves overall usability tremendously. Finally, we also analyze the “Kidnapped Robot Problem” and present a practical solution. We conduct comprehensive experimental studies, and the results show that our system achieves an accuracy of 1~2 m on average in a large building.", "title": "" }, { "docid": "df9c6dc1d6d1df15b78b7db02f055f70", "text": "The robotic grasp detection is a great challenge in the area of robotics. Previous work mainly employs the visual approaches to solve this problem. In this paper, a hybrid deep architecture combining the visual and tactile sensing for robotic grasp detection is proposed. We have demonstrated that the visual sensing and tactile sensing are complementary to each other and important for the robotic grasping. A new THU grasp dataset has also been collected which contains the visual, tactile and grasp configuration information. The experiments conducted on a public grasp dataset and our collected dataset show that the performance of the proposed model is superior to state of the art methods. The results also indicate that the tactile data could help to enable the network to learn better visual features for the robotic grasp detection task.", "title": "" }, { "docid": "e118157c12c2cfd7a91bf668021da477", "text": "We describe a prototype 73 gram, 21 cm diameter micro quadrotor with onboard attitude estimation and control that operates autonomously with an external localization system. We argue that the reduction in size leads to agility and the ability to operate in tight formations and provide experimental arguments in support of this claim. The robot is shown to be capable of 1850◦/sec roll and pitch, performs a 360◦ flip in 0.4 seconds and exhibits a lateral step response of 1 body length in 1 second. We describe the architecture and algorithms to coordinate a team of quadrotors, organize them into groups and fly through known three-dimensional environments. We provide experimental results for a team of 20 micro quadrotors.", "title": "" }, { "docid": "300cd3e2d8e21f0c8dcf5ecba72cf283", "text": "Accurate and reliable traffic forecasting for complicated transportation networks is of vital importance to modern transportation management. The complicated spatial dependencies of roadway links and the dynamic temporal patterns of traffic states make it particularly challenging. To address these challenges, we propose a new capsule network (CapsNet) to extract the spatial features of traffic networks and utilize a nested LSTM (NLSTM) structure to capture the hierarchical temporal dependencies in traffic sequence data. A framework for network-level traffic forecasting is also proposed by sequentially connecting CapsNet and NLSTM. On the basis of literature review, our study is the first to adopt CapsNet and NLSTM in the field of traffic forecasting. An experiment on a Beijing transportation network with 278 links shows that the proposed framework with the capability of capturing complicated spatiotemporal traffic patterns outperforms multiple state-of-the-art traffic forecasting baseline models. 
The superiority and feasibility of CapsNet and NLSTM are also demonstrated, respectively, by visualizing and quantitatively evaluating the experimental results.", "title": "" } ]
scidocsrr
788004135ff47671a8b6d8918d24ccd8
Single Image Layer Separation via Deep Admm Unrolling
[ { "docid": "7113e7db7246d733a695736f826812f5", "text": "We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.", "title": "" }, { "docid": "1f003b16c5343f0abdee26bcde53b86e", "text": "Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a “rain component” and a “nonrain component” by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.", "title": "" } ]
[ { "docid": "95b48a41d796aec0a1f23b3fc0879ed9", "text": "Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame’s representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.", "title": "" }, { "docid": "d30cdd113970fa8570a795af6b5193e1", "text": "Alignment of time series is an important problem to solve in many scientific disciplines. In particular, temporal alignment of two or more subjects performing similar activities is a challenging problem due to the large temporal scale difference between human actions as well as the inter/intra subject variability. In this paper we present canonical time warping (CTW), an extension of canonical correlation analysis (CCA) for spatio-temporal alignment of human motion between two subjects. CTW extends previous work on CCA in two ways: (i) it combines CCA with dynamic time warping (DTW), and (ii) it extends CCA by allowing local spatial deformations. We show CTW’s effectiveness in three experiments: alignment of synthetic data, alignment of motion capture data of two subjects performing similar actions, and alignment of similar facial expressions made by two people. Our results demonstrate that CTW provides both visually and qualitatively better alignment than state-of-the-art techniques based on DTW.", "title": "" }, { "docid": "01209a2ace1a4bc71ad4ff848bb8a3f4", "text": "For data storage outsourcing services, it is important to allow data owners to efficiently and securely verify that the storage server stores their data correctly. To address this issue, several proof-of-retrievability (POR) schemes have been proposed wherein a storage server must prove to a verifier that all of a client's data are stored correctly. While existing POR schemes offer decent solutions addressing various practical issues, they either have a non-trivial (linear or quadratic) communication complexity, or only support private verification, i.e., only the data owner can verify the remotely stored data. It remains open to design a POR scheme that achieves both public verifiability and constant communication cost simultaneously.\n In this paper, we solve this open problem and propose the first POR scheme with public verifiability and constant communication cost: in our proposed scheme, the message exchanged between the prover and verifier is composed of a constant number of group elements; different from existing private POR constructions, our scheme allows public verification and releases the data owners from the burden of staying online. 
We achieved these by tailoring and uniquely combining techniques such as constant size polynomial commitment and homomorphic linear authenticators. Thorough analysis shows that our proposed scheme is efficient and practical. We prove the security of our scheme based on the Computational Diffie-Hellman Problem, the Strong Diffie-Hellman assumption and the Bilinear Strong Diffie-Hellman assumption.", "title": "" }, { "docid": "b3962fd4000fced796f3764d009c929e", "text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.", "title": "" }, { "docid": "f06aaad6da36bfd60c1937c20390f3bb", "text": "Spinal cord injury (SCI) is a devastating neurological disorder. Autophagy is induced and plays a crucial role in SCI. Ginsenoside Rb1 (Rb1), one of the major active components extracted from Panax Ginseng CA Meyer, has exhibited neuroprotective effects in various neurodegenerative diseases. However, it remains unknown whether autophagy is involved in the neuroprotection of Rb1 on SCI. In this study, we examined the regulation of autophagy following Rb1 treatment and its involvement in the Rb1-induced neuroprotection in SCI and in vitro injury model. Firstly, we found that Rb1 treatment decreased the loss of motor neurons and promoted function recovery in the SCI model. Furthermore, we found that Rb1 treatment inhibited autophagy in neurons, and suppressed neuronal apoptosis and autophagic cell death in the SCI model. 
Finally, in the in vitro injury model, Rb1 treatment increased the viability of PC12 cells and suppressed apoptosis by inhibiting excessive autophagy, whereas stimulation of autophagy by rapamycin abolished the anti-apoptosis effect of Rb1. Taken together, these findings suggest that the inhibition of autophagy is involved in the neuroprotective effects of Rb1 on SCI.", "title": "" }, { "docid": "253072dcfdf4c417819ce8eee6af886f", "text": "The majority of theoretical work in machine learning is done under the assumption of exchangeability: essentially, it is assumed that the examples are generated from the same probability distribution independently. This paper is concerned with the problem of testing the exchangeability assumption in the on-line mode: examples are observed one by one and the goal is to monitor on-line the strength of evidence against the hypothesis of exchangeability. We introduce the notion of exchangeability martingales, which are on-line procedures for detecting deviations from exchangeability; in essence, they are betting schemes that never risk bankruptcy and are fair under the hypothesis of exchangeability. Some specific exchangeability martingales are constructed using Transductive Confidence Machine. We report experimental results showing their performance on the USPS benchmark data set of hand-written digits (known to be somewhat heterogeneous); one of them multiplies the initial capital by more than 10; this means that the hypothesis of exchangeability is rejected at the significance level 10−18.", "title": "" }, { "docid": "a51a3e1ae86e4d178efd610d15415feb", "text": "The availability of semantically annotated image and video assets constitutes a critical prerequisite for the realisation of intelligent knowledge management services pertaining to realistic user needs. Given the extend of the challenges involved in the automatic extraction of such descriptions, manually created metadata play a significant role, further strengthened by their deployment in training and evaluation tasks related to the automatic extraction of content descriptions. The different views taken by the two main approaches towards semantic content description, namely the Semantic Web and MPEG-7, as well as the traits particular to multimedia content due to the multiplicity of information levels involved, have resulted in a variety of image and video annotation tools, adopting varying description aspects. Aiming to provide a common framework of reference and furthermore to highlight open issues, especially with respect to the coverage and the interoperability of the produced metadata, in this chapter we present an overview of the state of the art in image and video annotation tools.", "title": "" }, { "docid": "ef96ba2a3fde7f645c7920443176af88", "text": "Caulerpa racemosa, a common and opportunistic species widely distributed in tropical and warm-temperate regions, is known to form monospecific stands outside its native range (Verlaque et al. 2003). In October 2011, we observed an alteration in benthic community due to a widespread overgrowth of C. racemosa around the inhabited island of Magoodhoo (3 04¢N; 72 57¢E, Republic of Maldives). The algal mats formed a continuous dense meadow (Fig. 1a) that occupied an area of 95 · 120 m (~11,000 m) previously dominated by the branching coral Acropora muricata. Partial mortality and total mortality (Fig. 1b, c) were recorded on 45 and 30% of A. muricata colonies, respectively. The total area of influence of C. 
racemosa was, however, much larger (~25,000 m) including smaller coral patches near to the meadow, where mortality in contact with the algae was also observed on colonies of Isopora palifera, Lobophyllia corymbosa, Pavona varians, Pocillopora damicornis, and Porites solida. Although species of the genus Caulerpa are not usually abundant on oligotrophic coral reefs, nutrient enrichment from natural and/or anthropogenic sources is known to promote green algal blooms (Lapointe and Bedford 2009). Considering the current state of regression of many reefs in the Maldives (Lasagna et al. 2010), we report an unusual phenomenon that could possibly become more common.", "title": "" }, { "docid": "03e48fbf57782a713bd218377290044c", "text": "Several researchers have shown that the efficiency of value iteration, a dynamic programming algorithm for Markov decision processes, can be improved by prioritizing the order of Bellman backups to focus computation on states where the value function can be improved the most. In previous work, a priority queue has been used to order backups. Although this incurs overhead for maintaining the priority queue, previous work has argued that the overhead is usually much less than the benefit from prioritization. However this conclusion is usually based on a comparison to a non-prioritized approach that performs Bellman backups on states in an arbitrary order. In this paper, we show that the overhead for maintaining the priority queue can be greater than the benefit, when it is compared to very simple heuristics for prioritizing backups that do not require a priority queue. Although the order of backups induced by our simple approach is often sub-optimal, we show that its smaller overhead allows it to converge faster than other state-of-the-art priority-based solvers.", "title": "" }, { "docid": "d7f878ed79899f72d5d7bf58a7dcaa40", "text": "We report in detail the decoding strategy that we used for the past two Darpa Rich Transcription evaluations (RT’03 and RT’04) which is based on finite state automata (FSA). We discuss the format of the static decoding graphs, the particulars of our Viterbi implementation, the lattice generation and the likelihood evaluation. This paper is intended to familiarize the reader with some of the design issues encountered when building an FSA decoder. Experimental results are given on the EARS database (English conversational telephone speech) with emphasis on our faster than real-time system.", "title": "" }, { "docid": "5455e7d53e6de4cbe97cbcdf6eea9806", "text": "OBJECTIVE\nTo evaluate the clinical and radiological results in the surgical treatment of moderate and severe hallux valgus by performing percutaneous double osteotomy.\n\n\nMATERIAL AND METHOD\nA retrospective study was conducted on 45 feet of 42 patients diagnosed with moderate-severe hallux valgus, operated on in a single centre and by the same surgeon from May 2009 to March 2013. Two patients were lost to follow-up. Clinical and radiological results were recorded.\n\n\nRESULTS\nAn improvement from 48.14 ± 4.79 points to 91.28 ± 8.73 points was registered using the American Orthopedic Foot and Ankle Society (AOFAS) scale. A radiological decrease from 16.88 ± 2.01 to 8.18 ± 3.23 was observed in the intermetatarsal angle, and from 40.02 ± 6.50 to 10.51 ± 6.55 in hallux valgus angle. 
There was one case of hallux varus, one case of non-union, a regional pain syndrome type I, an infection that resolved with antibiotics, and a case of loosening of the osteosynthesis that required an open surgical refixation.\n\n\nDISCUSSION\nPercutaneous distal osteotomy of the first metatarsal when performed as an isolated procedure, show limitations when dealing with cases of moderate and severe hallux valgus. The described technique adds the advantages of minimally invasive surgery by expanding applications to severe deformities.\n\n\nCONCLUSIONS\nPercutaneous double osteotomy is a reproducible technique for correcting severe deformities, with good clinical and radiological results with a complication rate similar to other techniques with the advantages of shorter surgical times and less soft tissue damage.", "title": "" }, { "docid": "63fef6099108f7990da0a7687e422e14", "text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.", "title": "" }, { "docid": "893a8c073b8bd935fbea419c0f3e0b17", "text": "Computing as a service model in cloud has encouraged High Performance Computing to reach out to wider scientific and industrial community. Many small and medium scale HPC users are exploring Infrastructure cloud as a possible platform to run their applications. However, there are gaps between the characteristic traits of an HPC application and existing cloud scheduling algorithms. In this paper, we propose an HPC-aware scheduler and implement it atop Open Stack scheduler. In particular, we introduce topology awareness and consideration for homogeneity while allocating VMs. We demonstrate the benefits of these techniques by evaluating them on a cloud setup on Open Cirrus test-bed.", "title": "" }, { "docid": "06ff54cb5c44fdc49000f6c1b5a2bf01", "text": "Ego-disturbances have been a topic in schizophrenia research since the earliest clinical descriptions of the disorder. Manifesting as a feeling that one's \"self,\" \"ego,\" or \"I\" is disintegrating or that the border between one's self and the external world is dissolving, \"ego-disintegration\" or \"dissolution\" is also an important feature of the psychedelic experience, such as is produced by psilocybin (a compound found in \"magic mushrooms\"). Fifteen healthy subjects took part in this placebo-controlled study. Twelve-minute functional MRI scans were acquired on two occasions: subjects received an intravenous infusion of saline on one occasion (placebo) and 2 mg psilocybin on the other. Twenty-two visual analogue scale ratings were completed soon after scanning and the first principal component of these, dominated by items referring to \"ego-dissolution\", was used as a primary measure of interest in subsequent analyses. 
Employing methods of connectivity analysis and graph theory, an association was found between psilocybin-induced ego-dissolution and decreased functional connectivity between the medial temporal lobe and high-level cortical regions. Ego-dissolution was also associated with a \"disintegration\" of the salience network and reduced interhemispheric communication. Addressing baseline brain dynamics as a predictor of drug-response, individuals with lower diversity of executive network nodes were more likely to experience ego-dissolution under psilocybin. These results implicate MTL-cortical decoupling, decreased salience network integrity, and reduced inter-hemispheric communication in psilocybin-induced ego disturbance and suggest that the maintenance of \"self\"or \"ego,\" as a perceptual phenomenon, may rest on the normal functioning of these systems.", "title": "" }, { "docid": "9469d888646ad4c8373e855b4a2c650d", "text": "This paper addresses the problem of learning over-complete dictionaries for the coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. A Bayesian method using a beta process prior is applied to learn the over-complete dictionaries. Compared to previous couple feature spaces dictionary learning algorithms, our algorithm not only provides dictionaries that customized to each feature space, but also adds more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed to values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in coupled feature spaces, thus bringing consistent and accurate mapping between coupled feature spaces. Another advantage of the proposed method is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. We compare the proposed approach to several state-of-the-art dictionary learning methods by applying this method to single image super-resolution. The experimental results show that dictionaries learned by our method produces the best super-resolution results compared to other state-of-the-art methods.", "title": "" }, { "docid": "de721f4b839b0816f551fa8f8ee2065e", "text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.", "title": "" }, { "docid": "8b64d5f3c59737369e2e6d8a12fc4c20", "text": "A microcontroller based advanced technique of generating sine wave with lowest harmonics is designed and implemented in this paper. 
The main objective of our proposed technique is to design a low cost, low harmonics voltage source inverter. In our project we used PIC16F73 microcontroller to generate 4 KHz pwm switching signal. The design is essentially focused upon low power electronic appliances such as light, fan, chargers, television etc. In our project we used STP55NF06 NMOSFET, which is a depletion type N channel MOSFET. For driving the MOSFET we used TLP250 and totem pole configuration as a MOSFET driver. The inverter input is 12VDC and its output is 220VAC across a transformer. The complete design is modeled in proteus software and its output is verified practically.", "title": "" }, { "docid": "857658968e3e237b33073ed87ff0fa1a", "text": "Analysis of a worldwide sample of sudden deaths of politicians reveals a market-adjusted 1.7% decline in the value of companies headquartered in the politician’s hometown. The decline in value is followed by a drop in the rate of growth in sales and access to credit. Our results are particularly pronounced for family firms, firms with high growth prospects, firms in industries over which the politician has jurisdiction, and firms headquartered in highly corrupt countries.", "title": "" }, { "docid": "86af81e39bce547a3f29b4851d033356", "text": "Empirical studies largely support the continuity hypothesis of dreaming. Despite of previous research efforts, the exact formulation of the continuity hypothesis remains vague. The present paper focuses on two aspects: (1) the differential incorporation rate of different waking-life activities and (2) the magnitude of which interindividual differences in waking-life activities are reflected in corresponding differences in dream content. Using a correlational design, a positive, non-zero correlation coefficient will support the continuity hypothesis. Although many researchers stress the importance of emotional involvement on the incorporation rate of waking-life experiences into dreams, formulated the hypothesis that highly focused cognitive processes such as reading, writing, etc. are rarely found in dreams due to the cholinergic activation of the brain during dreaming. The present findings based on dream diaries and the exact measurement of waking activities replicated two recent questionnaire studies. These findings indicate that it will be necessary to specify the continuity hypothesis more fully and include factors (e.g., type of waking-life experience, emotional involvement) which modulate the incorporation rate of waking-life experiences into dreams. Whether the cholinergic state of the brain during REM sleep or other alterations of brain physiology (e.g., down-regulation of the dorsolateral prefrontal cortex) are the underlying factors of the rare occurrence of highly focused cognitive processes in dreaming remains an open question. Although continuity between waking life and dreaming has been demonstrated, i.e., interindividual differences in the amount of time spent with specific waking-life activities are reflected in dream content, methodological issues (averaging over a two-week period, small number of dreams) have limited the capacity for detecting substantial relationships in all areas. 
Nevertheless, it might be concluded that the continuity hypothesis in its present general form is not valid and should be elaborated and tested in a more specific way.", "title": "" }, { "docid": "024bece8c4aff29e55467a96667c4612", "text": "Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.", "title": "" } ]
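Aside (illustrative, not part of the dataset rows): the last passage in the list above describes evolving a neural network's weights with particle swarm optimization, scored by an MSE-based fitness. A minimal sketch of that core idea, a plain PSO searching the weight space of a small fixed 2-4-1 network on a toy XOR task, is given below. The network size, PSO constants and task are assumptions chosen only for illustration; the passage's full method also evolves the architecture and transfer functions, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, learned by a fixed 2-4-1 network whose weights and biases are
# flattened into one parameter vector that the swarm searches.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 4, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # 17 parameters in total

def unpack(theta):
    """Split the flat parameter vector into weight matrices and bias vectors."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = theta[i:]
    return W1, b1, W2, b2

def fitness(theta):
    """Mean squared error of the network encoded by theta (lower is better)."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output unit
    return float(np.mean((out - y) ** 2))

# Standard PSO constants (assumed values, not taken from the passage).
N_PARTICLES, N_ITER = 30, 300
W_INERTIA, C1, C2 = 0.72, 1.49, 1.49

pos = rng.uniform(-1.0, 1.0, size=(N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_fit)].copy()
gbest_fit = float(pbest_fit.min())

for _ in range(N_ITER):
    r1 = rng.random((N_PARTICLES, DIM))
    r2 = rng.random((N_PARTICLES, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    if fit.min() < gbest_fit:
        gbest, gbest_fit = pos[np.argmin(fit)].copy(), float(fit.min())

print("best MSE found on XOR:", gbest_fit)
```

Swapping `fitness` for a score that also penalizes the number of active connections would move the sketch one step closer to the multi-objective setup the passage describes.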
scidocsrr
63a29e0cb47d33e6f675560ab64874cb
Context Aware Document Embedding
[ { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "4f973dfbea2cd0273d060f6917eac0af", "text": "For an understanding of the aberrant biology seen in mouse mutations and identification of more subtle phenotype variation, there is a need for a full clinical and pathological characterization of the animals. Although there has been some use of sophisticated techniques, the majority of behavioral and functional analyses in mice have been qualitative rather than quantitative in nature. There is, however, no comprehensive routine screening and testing protocol designed to identify and characterize phenotype variation or disorders associated with the mouse genome. We have developed the SHIRPA procedure to characterize the phenotype of mice in three stages. The primary screen utilizes standard methods to provide a behavioral and functional profile by observational assessment. The secondary screen involves a comprehensive behavioral assessment battery and pathological analysis. These protocols provide the framework for a general phenotype assessment that is suitable for a wide range of applications, including the characterization of spontaneous and induced mutants, the analysis of transgenic and gene-targeted phenotypes, and the definition of variation between strains. The tertiary screening stage described is tailored to the assessment of existing or potential models of neurological disease, as well as the assessment of phenotypic variability that may be the result of unknown genetic influences. SHIRPA utilizes standardized protocols for behavioral and functional assessment that provide a sensitive measure for quantifying phenotype expression in the mouse. These paradigms can be refined to test the function of specific neural pathways, which will, in turn, contribute to a greater understanding of neurological disorders.", "title": "" }, { "docid": "235edeee5ed3a16b88960400d13cb64f", "text": "Product service systems (PSS) can be understood as an innovation / business strategy that includes a set of products and services that are realized by an actor network. More recently, PSS that comprise System of Systems (SoS) have been of increasing interest, notably in the transportation (autonomous vehicle infrastructures, multi-modal transportation) and energy sector (smart grids). Architecting such PSS-SoS goes beyond classic SoS engineering, as they are often driven by new technology, without an a priori client and actor network, and thus, a much larger number of potential architectures. However, it seems that neither the existing PSS nor SoS literature provides solutions to how to architect such PSS. This paper presents a methodology for architecting PSS-SoS that are driven by technological innovation. The objective is to design PSS-SoS architectures together with their value proposition and business model from an initial technology impact assessment. For this purpose, we adapt approaches from the strategic management, business modeling, PSS and SoS architecting literature. We illustrate the methodology by applying it to the case of an automobile PSS.", "title": "" }, { "docid": "e89a1c0fb1b0736b238373f2fbca91a0", "text": "In this paper, we provide a comprehensive study of elliptic curve cryptography (ECC) for wireless sensor networks (WSN) security provisioning, mainly for key management and authentication modules. 
On the other hand, we present and evaluate a side-channel attacks (SCAs) experimental bench solution for energy evaluation, especially simple power analysis (SPA) attacks experimental bench to measure dynamic power consumption of ECC operations. The goal is the best use of the already installed SCAs experimental bench by performing the robustness test of ECC devices against SPA as well as the estimate of its energy and dynamic power consumption. Both operations are tested: point multiplication over Koblitz curves and doubling points over binary curves, with respectively affine and projective coordinates. The experimental results and its comparison with simulation ones are presented. They can lead to accurate power evaluation with the maximum reached error less than 30%.", "title": "" }, { "docid": "d73b72c1dee7c132419d07ebe4b60782", "text": "Race detection algorithms for multi-threaded programs using the common lock-based synchronization idiom must correlate locks with the memory locations they guard. The heart of a proof of race freedom is showing that if two locks are distinct, then the memory locations they guard are also distinct. This is an example of a general property we call conditional must not aliasing: Under the assumption that two objects are not aliased, prove that two other objects are not aliased. This paper introduces and gives an algorithm for conditional must not alias analysis and discusses experimental results for sound race detection of Java programs.", "title": "" }, { "docid": "872d06c4d3702d79cb1c7bcbc140881a", "text": "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.\nExisting noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.", "title": "" }, { "docid": "607977a85696ecc91816cd9f2cf04bbf", "text": "the paper presents a model integrating theories from collaboration research (i.e., social presence theory, channel expansion theory, and the task closure model) with a recent theory from technology adoption research (i.e., unified theory of acceptance and use of technology, abbreviated to utaut) to explain the adoption and use of collaboration technology. we theorize that collaboration technology characteristics, individual and group characteristics, task characteristics, and situational characteristics are predictors of performance expectancy, effort expectancy, social influence, and facilitating conditions in utaut. we further theorize that the utaut constructs, in concert with gender, age, and experience, predict intention to use a collaboration technology, which in turn predicts use. 
we conducted two field studies in Finland among (1) 349 short message service (SMS) users and (2) 447 employees who were potential users of a new collaboration technology in an organization. Our model was supported in both studies. the current work contributes to research by developing and testing a technology-specific model of adoption in the collaboration context. key worDS anD phraSeS: channel expansion theory, collaboration technologies, social presence theory, task closure model, technology acceptance, technology adoption, unified theory of acceptance and use of technology. technology aDoption iS one of the moSt mature StreamS in information systems (IS) research (see [65, 76, 77]). the benefit of such maturity is the availability of frameworks and models that can be applied to the study of interesting problems. while practical contributions are certain to accrue from such investigations, a key challenge for researchers is to ensure that studies yield meaningful scientific contributions. there have been several models explaining technology adoption and use, particularly since the late 1980s [76]. In addition to noting the maturity of this stream of research, Venkatesh et al. identified several important directions for future research and suggested that “one of the most important directions for future research is to tie this mature stream [technology adoption] of research into other established streams of work” [76, p. 470] (see also [70]). In research on technology adoption, the technology acceptance model (taM) [17] is the most widely employed theoretical model [76]. taM has been applied to a range of technologies and has been very predictive of individual technology adoption and use. the unified theory of acceptance and use of technology (utaut) [76] integrated eight distinct models of technology adoption and use, including taM. utaut extends taM by incorporating social influence and facilitating conditions. utaut is based in PrEDICtING COllaBOratION tEChNOlOGY uSE 11 the rich tradition of taM and provides a foundation for future research in technology adoption. utaut also incorporates four different moderators of key relationships. although utaut is more integrative, like taM, it still suffers from the limitation of being predictive but not particularly useful in providing explanations that can be used to design interventions that foster adoption (e.g., [72, 73]). there has been some research on general antecedents of perceived usefulness and perceived ease of use that are technology independent (e.g., [69, 73]). But far less attention has been paid to technology-specific antecedents that may provide significantly stronger guidance for the successful design and implementation of specific types of systems. Developing theory that is more focused and context specific—here, technology specific—is considered an important frontier for advances in IS research [53, 70]. Building on utaut to develop a model that will be more helpful will require a better understanding of how the utaut factors play out with different technologies [7, 76]. as a first step, it is important to extend utaut to a specific class of technologies [70, 76]. a model focused on a specific class of technology will be more explanatory compared to a general model that attempts to address many classes of technologies [70]. Such a focused model will also provide designers and managers with levers to augment adoption and use. 
One example is collaboration technology [20], a technology designed to assist two or more people to work together at the same place and time or at different places or different times [25, 26]. technologies that facilitate collaboration via electronic means have become an important component of day-to-day life (both in and out of the workplace). thus, it is not surprising that collaboration technologies have received considerable research attention over the past decades [24, 26, 77]. Several studies have examined the adoption of collaboration technologies, such as voice mail, e-mail, and group support systems (e.g., [3, 4, 44, 56, 63]). these studies focused on organizational factors leading to adoption (e.g., size, centralization) or on testing the boundary conditions of taM (e.g., could taM be applied to collaboration technologies). Given that adoption of collaboration technologies is not progressing as fast or as broadly as expected [20, 54], it seems a different approach is needed. It is possible that these two streams could inform each other to develop a more complete understanding of collaboration technology use, one in which we can begin to understand how collaboration factors influence adoption and use. a model that integrates knowledge from technology adoption and collaboration technology research is lacking, a void that this paper seeks to address. In doing so, we answer the call for research by Venkatesh et al. [76] to integrate the technology adoption stream with another dominant research stream, which in turn will move us toward a more cumulative and expansive nomological network (see [41, 70]). we also build on the work of wixom and todd [80] by examining the important role of technology characteristics leading to use. the current study will help us take a step toward alleviating one of the criticisms of IS research discussed by Benbasat and Zmud, especially in the context of technology adoption research: “we should neither focus our research on variables outside the nomological net nor exclusively on intermediate-level variables, such as ease of use, usefulness or behavioral intentions, without clarifying 12 BrOwN, DENNIS, aND VENkatESh the IS nuances involved” [6, p. 193]. Specifically, our work accomplishes the goal of “developing conceptualizations and theories of It [information technology] artifacts; and incorporating such conceptualizations and theories of It artifacts” [53, p. 130] by extending utaut to incorporate the specific artifact of collaboration technology and its related characteristics. In addition to the scientific value, such a model will provide greater value to practitioners who are attempting to foster successful use of a specific technology. Given this background, the primary objective of this paper is to develop and test a model to understand collaboration technology adoption that integrates utaut with key constructs from theories about collaboration technologies. we identify specific antecedents to utaut constructs by drawing from social presence theory [64], channel expansion theory [11] (a descendant of media richness theory [16]), and the task closure model [66], as well as a broad range of prior collaboration technology research. 
we test our model in two different studies conducted in Finland: the use of short message service (SMS) among working professionals and the use of a collaboration technology in an organization.", "title": "" }, { "docid": "b8963bbc58acc4699e5778cf50583208", "text": "Conceptual Metaphor Theory is a promising model that despite its deficiencies can be used to account for a number of phenomena in figurative language use. The paper reviews the arguments in favour of and against Conceptual Metaphor Theory in terms of the data, methodology and content. Since the model focuses on regularities, it is less useful in the study of idioms, where irregularities are also found. It has, however, enormous potential as it integrates corpusand discourse-driven findings.", "title": "" }, { "docid": "5be572ea448bfe40654956112cecd4e1", "text": "BACKGROUND\nBeta blockers reduce mortality in patients who have chronic heart failure, systolic dysfunction, and are on background treatment with diuretics and angiotensin-converting enzyme inhibitors. We aimed to compare the effects of carvedilol and metoprolol on clinical outcome.\n\n\nMETHODS\nIn a multicentre, double-blind, and randomised parallel group trial, we assigned 1511 patients with chronic heart failure to treatment with carvedilol (target dose 25 mg twice daily) and 1518 to metoprolol (metoprolol tartrate, target dose 50 mg twice daily). Patients were required to have chronic heart failure (NYHA II-IV), previous admission for a cardiovascular reason, an ejection fraction of less than 0.35, and to have been treated optimally with diuretics and angiotensin-converting enzyme inhibitors unless not tolerated. The primary endpoints were all-cause mortality and the composite endpoint of all-cause mortality or all-cause admission. Analysis was done by intention to treat.\n\n\nFINDINGS\nThe mean study duration was 58 months (SD 6). The mean ejection fraction was 0.26 (0.07) and the mean age 62 years (11). The all-cause mortality was 34% (512 of 1511) for carvedilol and 40% (600 of 1518) for metoprolol (hazard ratio 0.83 [95% CI 0.74-0.93], p=0.0017). The reduction of all-cause mortality was consistent across predefined subgroups. The composite endpoint of mortality or all-cause admission occurred in 1116 (74%) of 1511 on carvedilol and in 1160 (76%) of 1518 on metoprolol (0.94 [0.86-1.02], p=0.122). Incidence of side-effects and drug withdrawals did not differ by much between the two study groups.\n\n\nINTERPRETATION\nOur results suggest that carvedilol extends survival compared with metoprolol.", "title": "" }, { "docid": "c040df6f014e52b5fe76234bb4f277b3", "text": "CRISPR–Cas systems provide microbes with adaptive immunity by employing short DNA sequences, termed spacers, that guide Cas proteins to cleave foreign DNA. Class 2 CRISPR–Cas systems are streamlined versions, in which a single RNA-bound Cas protein recognizes and cleaves target sequences. The programmable nature of these minimal systems has enabled researchers to repurpose them into a versatile technology that is broadly revolutionizing biological and clinical research. However, current CRISPR–Cas technologies are based solely on systems from isolated bacteria, leaving the vast majority of enzymes from organisms that have not been cultured untapped. Metagenomics, the sequencing of DNA extracted directly from natural microbial communities, provides access to the genetic material of a huge array of uncultivated organisms. 
Here, using genome-resolved metagenomics, we identify a number of CRISPR–Cas systems, including the first reported Cas9 in the archaeal domain of life, to our knowledge. This divergent Cas9 protein was found in little-studied nanoarchaea as part of an active CRISPR–Cas system. In bacteria, we discovered two previously unknown systems, CRISPR–CasX and CRISPR–CasY, which are among the most compact systems yet discovered. Notably, all required functional components were identified by metagenomics, enabling validation of robust in vivo RNA-guided DNA interference activity in Escherichia coli. Interrogation of environmental microbial communities combined with in vivo experiments allows us to access an unprecedented diversity of genomes, the content of which will expand the repertoire of microbe-based biotechnologies.", "title": "" }, { "docid": "93c24024349853033a60ce06aa2b700e", "text": "Mines deployed in post-war countries pose severe threats to civilians and hamper the reconstruction effort in war hit societies. In the scope of the EU FP7 TIRAMISU Project, a toolbox for humanitarian demining missions is being developed by the consortium members. In this article we present the FSR Husky, an affordable, lightweight and autonomous all terrain robotic system, developed to assist human demining operation teams. Intended to be easily deployable on the field, our robotic solution has the ultimate goal of keeping humans away from the threat, safeguarding their lives. A detailed description of the modular robotic system architecture is presented, and several real world experiments are carried out to validate the robot’s functionalities and illustrate continuous work in progress on minefield coverage, mine detection, outdoor localization, navigation, and environment perception. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4b8a46065520d2b7489bf0475321c726", "text": "With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. 
SecureME also incurs a negligible additional storage overhead over the secure processor substrate.", "title": "" }, { "docid": "efb9686dbd690109e8e5341043648424", "text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.", "title": "" }, { "docid": "ebb68e2067fe684756514ce61871a820", "text": "Ž . Ž PLS-regression PLSR is the PLS approach in its simplest, and in chemistry and technology, most used form two-block . predictive PLS . PLSR is a method for relating two data matrices, X andY, by a linear multivariate model, but goes beyond traditional regression in that it models also the structure of X andY. PLSR derives its usefulness from its ability to analyze data with many, noisy, collinear, and even incomplete variables in both X andY. PLSR has the desirable property that the precision of the model parameters improves with the increasing number of relevant variables and observations. This article reviews PLSR as it has developed to become a standard tool in chemometrics and used in chemistry and engineering. The underlying model and its assumptions are discussed, and commonly used diagnostics are reviewed together with the interpretation of resulting parameters. Ž . Two examples are used as illustrations: First, a Quantitative Structure–Activity Relationship QSAR rQuantitative StrucŽ . ture–Property Relationship QSPR data set of peptides is used to outline how to develop, interpret and refine a PLSR model. Second, a data set from the manufacturing of recycled paper is analyzed to illustrate time series modelling of process data by means of PLSR and time-lagged X-variables. q2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "3d8f937692b9c0e2bb2c5b0148e1ef2c", "text": "BACKGROUND\nAttenuated peripheral perfusion in patients with advanced chronic heart failure (CHF) is partially the result of endothelial dysfunction. This has been causally linked to an impaired endogenous regenerative capacity of circulating progenitor cells (CPC). The aim of this study was to elucidate whether exercise training (ET) affects exercise intolerance and left ventricular (LV) performance in patients with advanced CHF (New York Heart Association class IIIb) and whether this is associated with correction of peripheral vasomotion and induction of endogenous regeneration.\n\n\nMETHODS AND RESULTS\nThirty-seven patients with CHF (LV ejection fraction 24+/-2%) were randomly assigned to 12 weeks of ET or sedentary lifestyle (control). At the beginning of the study and after 12 weeks, maximal oxygen consumption (Vo(2)max) and LV ejection fraction were determined; the number of CD34(+)/KDR(+) CPCs was quantified by flow cytometry and CPC functional capacity was determined by migration assay. Flow-mediated dilation was assessed by ultrasound. 
Capillary density was measured in skeletal muscle tissue samples. In advanced CHF, ET improved Vo(2)max by +2.7+/-2.2 versus -0.8+/-3.1 mL/min/kg in control (P=0.009) and LV ejection fraction by +9.4+/-6.1 versus -0.8+/-5.2% in control (P<0.001). Flow-mediated dilation improved by +7.43+/-2.28 versus +0.09+/-2.18% in control (P<0.001). ET increased the number of CPC by +83+/-60 versus -6+/-109 cells/mL in control (P=0.014) and their migratory capacity by +224+/-263 versus -12+/-159 CPC/1000 plated CPC in control (P=0.03). Skeletal muscle capillary density increased by +0.22+/-0.10 versus -0.02+/-0.16 capillaries per fiber in control (P<0.001).\n\n\nCONCLUSIONS\nTwelve weeks of ET in patients with advanced CHF is associated with augmented regenerative capacity of CPCs, enhanced flow-mediated dilation suggestive of improvement in endothelial function, skeletal muscle neovascularization, and improved LV function. Clinical Trial Registration- http://www.clinicaltrials.gov. Unique Identifier: NCT00176384.", "title": "" }, { "docid": "5ff7a82ec704c8fb5c1aa975aec0507c", "text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.", "title": "" }, { "docid": "faa1a49f949d5ba997f4285ef2e708b2", "text": "Appendiceal mucinous neoplasms sometimes present with peritoneal dissemination, which was previously a lethal condition with a median survival of about 3 years. Traditionally, surgical treatment consisted of debulking that was repeated until no further benefit could be achieved; systemic chemotherapy was sometimes used as a palliative option. Now, visible disease tends to be removed through visceral resections and peritonectomy. To avoid entrapment of tumour cells at operative sites and to destroy small residual mucinous tumour nodules, cytoreductive surgery is combined with intraperitoneal chemotherapy with mitomycin at 42 degrees C. Fluorouracil is then given postoperatively for 5 days. If the mucinous neoplasm is minimally invasive and cytoreduction complete, these treatments result in a 20-year survival of 70%. 
In the absence of a phase III study, this new combined treatment should be regarded as the standard of care for epithelial appendiceal neoplasms and pseudomyxoma peritonei syndrome.", "title": "" }, { "docid": "1f0796219eaf350fd0e288e22165017d", "text": "Behçet’s disease, also known as the Silk Road Disease, is a rare systemic vasculitis disorder of unknown etiology. Recurrent attacks of acute inflammation characterize Behçet’s disease. Frequent oral aphthous ulcers, genital ulcers, skin lesions and ocular lesions are the most common manifestations. Inflammation is typically self-limiting in time and relapsing episodes of clinical manifestations represent a hallmark of Behçet’s disease. Other less frequent yet severe manifestations that have a major prognostic impact involve the eyes, the central nervous system, the main large vessels and the gastrointestinal tract. Behçet’s disease has a heterogeneous onset and is associated with significant morbidity and premature mortality. This study presents a current immunological review of the disease and provides a synopsis of clinical aspects and treatment options.", "title": "" }, { "docid": "bb799a3aac27f4ac764649e1f58ee9fb", "text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.", "title": "" }, { "docid": "9c1d8f50bd46f7c7b6e98c3c61edc67d", "text": "This paper presents the implementation of a complete fingerprint biometric cryptosystem in a Field Programmable Gate Array (FPGA). This is possible thanks to the use of a novel fingerprint feature, named QFingerMap, which is binary, length-fixed, and ordered. Security of Authentication on FPGA is further improved because information stored is protected due to the design of a cryptosystem based on Fuzzy Commitment. Several samples of fingers as well as passwords can be fused at feature level with codewords of an error correcting code to generate non-sensitive data. 
System performance is illustrated with experimental results corresponding to 560 fingerprints acquired in live by an optical sensor and processed by the system in a Xilinx Virtex 6 FPGA. Depending on the realization, more or less accuracy is obtained, being possible a perfect authentication (zero Equal Error Rate), with the advantages of real-time operation, low power consumption, and a very small device.", "title": "" }, { "docid": "e1b6cc1dbd518760c414cd2ddbe88dd5", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Mind the Traps! Design Guidelines for Rigorous BCI Experiments Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich", "title": "" } ]
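Aside (illustrative, not part of the dataset rows): the last full passage in the list above mentions a fuzzy-commitment construction in which a binary fingerprint feature is fused with the codeword of an error-correcting code so that only non-sensitive data need to be stored. A minimal sketch of the generic scheme follows; the 3x repetition code, the bit lengths and the random stand-in features are assumptions for illustration only, whereas the actual design uses QFingerMap features, a stronger code and an FPGA implementation, none of which is reproduced here.

```python
import hashlib
import secrets

REP = 3  # 3x repetition code as a stand-in for a real error-correcting code

def encode(key_bits):
    """Repeat every key bit REP times to obtain the codeword."""
    return [b for b in key_bits for _ in range(REP)]

def decode(code_bits):
    """Majority-vote each group of REP bits back to one key bit."""
    return [1 if 2 * sum(code_bits[i:i + REP]) > REP else 0
            for i in range(0, len(code_bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def commit(feature_bits, key_bits):
    """Enrolment: keep only feature XOR codeword plus a hash of the key."""
    helper = xor(feature_bits, encode(key_bits))
    return helper, hashlib.sha256(bytes(key_bits)).hexdigest()

def verify(feature_bits, helper, key_hash):
    """Authentication: a close-enough feature recovers the key and matches the hash."""
    recovered = decode(xor(feature_bits, helper))
    return hashlib.sha256(bytes(recovered)).hexdigest() == key_hash

# Demo with random stand-ins for the binary, fixed-length fingerprint feature.
key = [secrets.randbelow(2) for _ in range(16)]
enrol_feature = [secrets.randbelow(2) for _ in range(16 * REP)]
helper, key_hash = commit(enrol_feature, key)

noisy_feature = enrol_feature.copy()
noisy_feature[5] ^= 1                 # one flipped bit, within the code's correction power
print("genuine (slightly noisy) sample:", verify(noisy_feature, helper, key_hash))

impostor_feature = [secrets.randbelow(2) for _ in range(16 * REP)]
print("random impostor sample:        ", verify(impostor_feature, helper, key_hash))
```

The stored pair (helper data, key hash) is what the passage calls non-sensitive: as long as the feature carries enough entropy, neither the key nor the enrolment feature can be read off the record directly.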
scidocsrr
d25adf125533533dcfd534c242920236
Perceptually motivated automatic sharpness enhancement using hierarchy of non-local means
[ { "docid": "67e16f36bb6d83c5d6eae959a7223b77", "text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to say that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.", "title": "" } ]
[ { "docid": "80a9489262ee8d94d64dd8e475c060a3", "text": "The effects of social-cognitive variables on preventive nutrition and behavioral intentions were studied in 580 adults at 2 points in time. The authors hypothesized that optimistic self-beliefs operate in 2 phases and made a distinction between action self-efficacy (preintention) and coping self-efficacy (postintention). Risk perceptions, outcome expectancies, and action self-efficacy were specified as predictors of the intention at Wave 1. Behavioral intention and coping self-efficacy served as mediators linking the 3 predictors with low-fat and high-fiber dietary intake 6 months later at Wave 2. Covariance structure analysis yielded a good model fit for the total sample and 6 subsamples created by a median split of 3 moderators: gender, age, and body weight. Parameter estimates differed between samples; the importance of perceived self-efficacy increased with age and weight.", "title": "" }, { "docid": "16be435a946f8ff5d8d084f77373a6f3", "text": "Answer selection is a core component in any question-answering systems. It aims to select correct answer sentences for a given question from a pool of candidate sentences. In recent years, many deep learning methods have been proposed and shown excellent results for this task. However, these methods typically require extensive parameter (and hyper-parameter) tuning, which gives rise to efficiency issues for large-scale datasets, and potentially makes them less portable across new datasets and domains (as re-tuning is usually required). In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and scalability point of view. FastHybrid is a light-weight model that requires little tuning and adaptation across different domains. It combines a fast deep model (which will be introduced in the method section) with an initial information retrieval model to effectively and efficiently handle answer selection. We introduce a new efficient attention mechanism in the hybrid model and demonstrate its effectiveness on several QA datasets. Experimental results show that although the hybrid uses no training data, its accuracy is often on-par with supervised deep learning techniques, while significantly reducing training and tuning costs across different domains.", "title": "" }, { "docid": "c61a39f0ba3f24f10c5edd8ad39c7a20", "text": "REINFORCEMENT LEARNING AND ITS APPLICATION TO CONTROL", "title": "" }, { "docid": "eac74165cd2cfa9bf5066874c59e0a3e", "text": "In big data era, data are usually stored in databases for easy access and utilization, which are now woven into every aspect of our lives. However, traditional relational databases cannot address users' demands for quick data access and calculating, since they cannot process data in a distributed way. To tackle this problem, non-relational databases such as MongoDB have emerged up and been applied in various Scenarios. Nevertheless, it should be noted that most MongoDB products fail to consider user's data privacy. In this paper, we propose a practical encrypted MongoDB (i.e., CryptMDB). Specifically, we utilize an additive homomorphic asymmetric cryptosystem to encrypt user's data and achieve strong privacy protection. Security analysis indicates that the CryptMDB can achieve confidentiality of user's data and prevent adversaries from illegally gaining access to the database. 
Furthermore, extensive experiments demonstrate that the CryptMDB achieves better efficiency than existing relational database in terms of data access and calculating.", "title": "" }, { "docid": "661c99429dc6684ca7d6394f01201ac3", "text": "SUMO is an open source traffic simulation package including net import and demand modeling components. We describe the current state of the package as well as future developments and extensions. SUMO helps to investigate several research topics e.g. route choice and traffic light algorithm or simulating vehicular communication. Therefore the framework is used in different projects to simulate automatic driving or traffic management strategies. Keywordsmicroscopic traffic simulation, software, open", "title": "" }, { "docid": "cf5cd34ea664a81fabe0460e4e040a2d", "text": "A novel p-trench phase-change memory (PCM) cell and its integration with a MOSFET selector in a standard 0.18 /spl mu/m CMOS technology are presented. The high-performance capabilities of PCM cells are experimentally investigated and their application in embedded systems is discussed. Write times as low as 10 ns and 20 ns have been measured for the RESET and SET operation, respectively, still granting a 10/spl times/ read margin. The impact of the RESET pulse on PCH cell endurance has been also evaluated. Finally, cell distributions and first statistical endurance measurements on a 4 Mbit MOS demonstrator clearly assess the feasibility of the PCM technology.", "title": "" }, { "docid": "53d8734d66ffa4398d0105d6d2b55a66", "text": "Inspite of long years of research, problem of manipulator path tracking control is the thrust area for researchers to work upon. Non-linear systems like manipulator are multi-input-multi-output, non-linear and time variant complex problem. A number of different approaches presently followed for the control of manipulator vary from classical PID (Proportional Integral Derivative) to CTC (Computed Torque Control) control techniques. This paper presents design and implementation of PID and CTC controller for robotic manipulator. Comparative study of simulated results of conventional controllers, like PID and CTC are also shown. Tracking performance and error comparison graphs are presented to show the performance of the proposed controllers.", "title": "" }, { "docid": "a66765e24b6cfdab2cc0b30de8afd12e", "text": "A broadband transition structure from rectangular waveguide (RWG) to microstrip line (MSL) is presented for the realization of the low-loss packaging module using Low-temperature co-fired ceramic (LTCC) technology at W-band. In this transition, a cavity structure is buried in LTCC layers, which provides the wide bandwidth, and a laminated waveguide (LWG) transition is designed, which provides the low-loss performance, as it reduces the radiation loss of conventional direct transition between RWG and MSL. The design procedure is also given. The measured results show that the insertion loss of better than 0.7 dB from 86 to 97 GHz can be achieved.", "title": "" }, { "docid": "061a5beeac4a5794e9a04c537b3045ce", "text": "Surgical replacement with artificial devices has revolutionised the care of patients with severe valvular diseases. Mechanical valves are very durable, but require long-term anticoagulation. Bioprosthetic heart valves (BHVs), devices manufactured from glutaraldehyde-fixed animal tissues, do not need long-term anticoagulation, but their long-term durability is limited to 15 - 20 years, mainly because of mechanical failure and tissue calcification. 
Although mechanisms of BHV calcification are not fully understood, major determinants are glutaraldehyde fixation, presence of devitalised cells and alteration of specific extracellular matrix components. Treatments targeted at the prevention of calcification include those that target neutralisation of the effects of glutaraldehyde, removal of cells, and modifications of matrix components. Several existing calcification-prevention treatments are in clinical use at present, and there are excellent mid-term clinical follow-up reports available. The purpose of this review is to appraise basic knowledge acquired in the field of prevention of BHV calcification, and to provide directions for future research and development.", "title": "" }, { "docid": "6d380dc3fe08d117c090120b3398157b", "text": "Conversational interfaces are likely to become more efficient, intuitive and engaging way for human-computer interaction than today’s text or touch-based interfaces. Current research efforts concerning conversational interfaces focus primarily on question answering functionality, thereby neglecting support for search activities beyond targeted information lookup. Users engage in exploratory search when they are unfamiliar with the domain of their goal, unsure about the ways to achieve their goals, or unsure about their goals in the first place. Exploratory search is often supported by approaches from information visualization. However, such approaches cannot be directly translated to the setting of conversational search. In this paper we investigate the affordances of interactive storytelling as a tool to enable exploratory search within the framework of a conversational interface. Interactive storytelling provides a way to navigate a document collection in the pace and order a user prefers. In our vision, interactive storytelling is to be coupled with a dialogue-based system that provides verbal explanations and responsive design. We discuss challenges and sketch the research agenda required to put this vision into life.", "title": "" }, { "docid": "ca1059da91a6b6e008983b8bbf57b57f", "text": "Immersive augmented reality (AR) technologies are becoming a reality. Prior works have identified security and privacy risks raised by these technologies, primarily considering individual users or AR devices. However, we make two key observations: (1) users will not always use AR in isolation, but also in ecosystems of other users, and (2) since immersive AR devices have only recently become available, the risks of AR have been largely hypothetical to date. To provide a foundation for understanding and addressing the security and privacy challenges of emerging AR technologies, grounded in the experiences of real users, we conduct a qualitative lab study with an immersive AR headset, the Microsoft HoloLens. We conduct our study in pairs - 22 participants across 11 pairs - wherein participants engage in paired and individual (but physically co-located) HoloLens activities. Through semi-structured interviews, we explore participants' security, privacy, and other concerns, raising key findings. For example, we find that despite the HoloLens's limitations, participants were easily immersed, treating virtual objects as real (e.g., stepping around them for fear of tripping). 
We also uncover numerous security, privacy, and safety concerns unique to AR (e.g., deceptive virtual objects misleading users about the real world), and a need for access control among users to manage shared physical spaces and virtual content embedded in those spaces. Our findings give us the opportunity to identify broader lessons and key challenges to inform the design of emerging single-and multi-user AR technologies.", "title": "" }, { "docid": "591438f31d3f7b8093f8d10874a17d5b", "text": "Previously, techniques such as class hierarchy analysis and profile-guided receiver class prediction have been demonstrated to greatly improve the performance of applications written in pure object-oriented languages, but the degree to which these results are transferable to applications written in hybrid languages has been unclear. In part to answer this question, we have developed the Vortex compiler infrastructure, a language-independent optimizing compiler for object-oriented languages, with front-ends for Cecil, C++, Java, and Modula-3. In this paper, we describe the Vortex compiler's intermediate language, internal structure, and optimization suite, and then we report the results of experiments assessing the effectiveness of different combinations of optimizations on sizable applications across these four languages. We characterize the benchmark programs in terms of a collection of static and dynamic metrics, intended to quantify aspects of the \"object-orientedness\" of a program.", "title": "" }, { "docid": "49ee2dafe659cfb82c623a3e3e093f12", "text": "This study examined the naturally occurring dimensions of the dentogingival junction in 10 adult human cadaver jaws. The connective tissue attachment, epithelial attachment, loss of attachment, and sulcus depth were measured histomorphometrically for 171 tooth surfaces. Mean measurements were 1.34 +/- 0.84 mm for sulcus depth; 1.14 +/- 0.49 mm for epithelial attachment; 0.77 +/- 0.32 mm for connective tissue attachment; and 2.92 +/- 1.69 mm for loss of attachment. These dimensions, as measured in this study, support the concept that the connective tissue attachment is a variable width within a more narrow distribution and range than the epithelial attachment, sulcus depth, or loss of attachment. The level of the loss of attachment was not predictive of the connective tissue attachment length.", "title": "" }, { "docid": "52ec5766be25da53c39c4fd347145636", "text": "In the context of sparse principal component detection, we bring evidence towards the existence of a statistical price to pay for computational efficiency. We measure the performance of a test by the smallest signal strength that it can detect and we propose a computationally efficient method based on semidefinite programming. We also prove that the statistical performance of this test cannot be strictly improved by any computationally efficient method. Our results can be viewed as complexity theoretic lower bounds conditionally on the assumptions that some instances of the planted clique problem cannot be solved in randomized polynomial time.", "title": "" }, { "docid": "1d5aee3a22f540f6bb8ae619cdc9935d", "text": "In emergency situations, actions that save lives and limit the impact of hazards are crucial. In order to act, situational awareness is needed to decide what to do. Geolocalized photos and video of the situations as they evolve can be crucial in better understanding them and making decisions faster. 
Cameras are almost everywhere these days, either in terms of smartphones, installed CCTV cameras, UAVs or others. However, this poses challenges in big data and information overflow. Moreover, most of the time there are no disasters at any given location, so humans aiming to detect sudden situations may not be as alert as needed at any point in time. Consequently, computer vision tools can be an excellent decision support. The number of emergencies where computer vision tools has been considered or used is very wide, and there is a great overlap across related emergency research. Researchers tend to focus on state-of-the-art systems that cover the same emergency as they are studying, obviating important research in other fields. In order to unveil this overlap, the survey is divided along four main axes: the types of emergencies that have been studied in computer vision, the objective that the algorithms can address, the type of hardware needed and the algorithms used. Therefore, this review provides a broad overview of the progress of computer vision covering all sorts of emergencies.", "title": "" }, { "docid": "73545ef815fb22fa048fed3e0bc2cc8b", "text": "Redox-based resistive switching devices (ReRAM) are an emerging class of nonvolatile storage elements suited for nanoscale memory applications. In terms of logic operations, ReRAM devices were suggested to be used as programmable interconnects, large-scale look-up tables or for sequential logic operations. However, without additional selector devices these approaches are not suited for use in large scale nanocrossbar memory arrays, which is the preferred architecture for ReRAM devices due to the minimum area consumption. To overcome this issue for the sequential logic approach, we recently introduced a novel concept, which is suited for passive crossbar arrays using complementary resistive switches (CRSs). CRS cells offer two high resistive storage states, and thus, parasitic “sneak” currents are efficiently avoided. However, until now the CRS-based logic-in-memory approach was only shown to be able to perform basic Boolean logic operations using a single CRS cell. In this paper, we introduce two multi-bit adder schemes using the CRS-based logic-in-memory approach. We proof the concepts by means of SPICE simulations using a dynamical memristive device model of a ReRAM cell. Finally, we show the advantages of our novel adder concept in terms of step count and number of devices in comparison to a recently published adder approach, which applies the conventional ReRAM-based sequential logic concept introduced by Borghetti et al.", "title": "" }, { "docid": "cc52bb9210f400a42b0b8374dde374ab", "text": "It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. 
With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on COCO testing set.", "title": "" }, { "docid": "6784e31e2ec313698a622a7e78288f68", "text": "Web-based technology is often the technology of choice for distance education given the ease of use of the tools to browse the resources on the Web, the relative affordability of accessing the ubiquitous Web, and the simplicity of deploying and maintaining resources on the WorldWide Web. Many sophisticated web-based learning environments have been developed and are in use around the world. The same technology is being used for electronic commerce and has become extremely popular. However, while there are clever tools developed to understand on-line customer’s behaviours in order to increase sales and profit, there is very little done to automatically discover access patterns to understand learners’ behaviour on web-based distance learning. Educators, using on-line learning environments and tools, have very little support to evaluate learners’ activities and discriminate between different learners’ on-line behaviours. In this paper, we discuss some data mining and machine learning techniques that could be used to enhance web-based learning environments for the educator to better evaluate the leaning process, as well as for the learners to help them in their learning endeavour.", "title": "" }, { "docid": "ac6344574ced223d007bd3b352b4b1b0", "text": "Mobile personal devices, such as smartphones, USB thumb drives, and sensors, are becoming essential elements of our modern lives. Their large-scale pervasive deployment within the population has already attracted many malware authors, cybercriminals, and even governments. Since the first demonstration of mobile malware by Marcos Velasco, millions of these have been developed with very sophisticated capabilities. They infiltrate highly secure networks using air-gap jumping capability (e.g., “Hammer Drill” and “Brutal Kangaroo”) and spread through heterogeneous computing and communication platforms. Some of these cross-platform malware attacks are capable of infiltrating isolated control systems which might be running a variety of operating systems, such as Windows, Mac OS X, Solaris, and Linux. This paper investigates cross-platform/heterogeneous mobile malware that uses removable media, such as USB connection, to spread between incompatible computing platforms and operating systems. Deep analysis and modeling of cross-platform mobile malware are conducted at the micro (infection) and macro (spread) levels. The micro-level analysis aims to understand the cross-platform malware states and transitions between these states during node-to-node infection. The micro-level analysis helps derive the parameters essential for macro-level analysis, which are also crucial for the elaboration of suitable detection and prevention solutions. The macro-level analysis aims to identify the most important factors affecting cross-platform mobile malware spread within a digitized population. Through simulation, we show that identifying these factors helps to mitigate any outbreaks.", "title": "" } ]
scidocsrr
e8a476642b92af7ec1227c40e553662f
Anomaly intrusion detection using one class SVM
[ { "docid": "b50efa7b82d929c1b8767e23e8359a06", "text": "Intrusion detection (ID) is an important component of infrastructure protection mechanisms. Intrusion detection systems (IDSs) need to be accurate, adaptive, and extensible. Given these requirements and the complexities of today's network environments, we need a more systematic and automated IDS development process rather that the pure knowledge encoding and engineering approaches. This article describes a novel framework, MADAM ID, for Mining Audit Data for Automated Models for Instrusion Detection. This framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns. It then applies machine learning algorithms to the audit records taht are processed according to the feature definitions to generate intrusion detection rules. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best performing of all the participating systems. We also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing IDSs.", "title": "" }, { "docid": "94c49272a71126ed8410650327f2770a", "text": "By employing fault tolerance, embedded systems can withstand both intentional and unintentional faults. Many fault-tolerance mechanisms are invoked only after a fault has been detected by whatever fault-detection mechanism is used, hence the process of fault detection must itself be dependable if the system is expected to be fault tolerant. Many faults are detectable only indirectly, as a result of performance disorders that manifest as anomalies in monitored system or sensor data. Anomaly detection, therefore, is often the primary means of providing early indications of faults. As with any other kind of detector, one seeks full coverage of the detection space with the anomaly detector being used. Even if coverage of a particular anomaly detector falls short of 100%, detectors can be composed to e ect broader coverage, once their respective sweet spots and blind regions are known. This paper provides a framework and a fault-injection methodology for mapping an anomaly detector's e ective operating space, and shows that two detectors, each designed to detect the same phenomenon, may not perform similarly, even when the event to be detected is unequivocally anomalous, and should be detected by either detector. Both synthetic and real-world data are used.", "title": "" } ]
[ { "docid": "5b341604b207e80ef444d11a9de82f72", "text": "Digital deformities continue to be a common ailment among many patients who present to foot and ankle specialists. When conservative treatment fails to eliminate patient complaints, surgical correction remains a viable treatment option. Proximal interphalangeal joint arthrodesis remains the standard procedure among most foot and ankle surgeons. With continued advances in fixation technology and techniques, surgeons continue to have better options for the achievement of excellent digital surgery outcomes. This article reviews current trends in fixation of digital deformities while highlighting pertinent aspects of the physical examination, radiographic examination, and surgical technique.", "title": "" }, { "docid": "82866d253fda63fd7a1e70e9a0f4252e", "text": "We introduce a new class of maximization-expectation (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.", "title": "" }, { "docid": "af961b3b977b37f69156c4d653b745e7", "text": "The move to Internet news publishing is the latest in a series of technological shifts which have required journalists not merely to adapt their daily practice but which have also at least in the view of some – recast their role in society. For over a decade, proponents of the networked society as a new way of life have argued that responsibility for news selection and production will shift from publishers, editors and reporters to individual consumers, as in the scenario offered by Nicholas Negroponte:", "title": "" }, { "docid": "fd62cb306e6e39e7ead79696591746b2", "text": "Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopted term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase)-based approaches should perform better than the term-based ones, but many experiments do not support this hypothesis. This paper presents an innovative and effective pattern discovery technique which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.", "title": "" }, { "docid": "153b5c38978c54391bd5ec097416883c", "text": "Applying simple natural language processing methods on social media data have shown to be able to reveal insights of specific mental disorders. 
However, few studies have employed fine-grained sentiment or emotion related analysis approaches in the detection of mental health conditions from social media messages. This work, for the first time, employed fine-grained emotions as features and examined five popular machine learning classifiers in the task of identifying users with selfreported mental health conditions (i.e. Bipolar, Depression, PTSD, and SAD) from the general public. We demonstrated that the support vector machines and the random forests classifiers with emotion-based features and combined features showed promising improvements to the performance on this task.", "title": "" }, { "docid": "086269223c00209787310ee9f0bcf875", "text": "The availability of large annotated datasets and affordable computation power have led to impressive improvements in the performance of CNNs on various object detection and recognition benchmarks. These, along with a better understanding of deep learning methods, have also led to improved capabilities of machine understanding of faces. CNNs are able to detect faces, locate facial landmarks, estimate pose, and recognize faces in unconstrained images and videos. In this paper, we describe the details of a deep learning pipeline for unconstrained face identification and verification which achieves state-of-the-art performance on several benchmark datasets. We propose a novel face detector, Deep Pyramid Single Shot Face Detector (DPSSD), which is fast and capable of detecting faces with large scale variations (especially tiny faces). We give design details of the various modules involved in automatic face recognition: face detection, landmark localization and alignment, and face identification/verification. We provide evaluation results of the proposed face detector on challenging unconstrained face detection datasets. Then, we present experimental results for IARPA Janus Benchmarks A, B and C (IJB-A, IJB-B, IJB-C), and the Janus Challenge Set 5 (CS5).", "title": "" }, { "docid": "031ffcb89efb12d0e0d0d351751d1532", "text": "Radiographic image assessment is the most common method used to measure physical maturity and diagnose growth disorders, hereditary diseases and rheumatoid arthritis, with hand radiography being one of the most frequently used techniques due to its simplicity and minimal exposure to radiation. Finger joints are considered as especially important factors in hand skeleton examination. Although several automation methods for finger joint detection have been proposed, low accuracy and reliability are hindering full-scale adoption into clinical fields. In this paper, we propose FingerNet, a novel approach for the detection of all finger joints from hand radiograph images based on convolutional neural networks, which requires little user intervention. The system achieved 98.02% average detection accuracy for 130 test data sets containing over 1,950 joints. Further analysis was performed to verify the system robustness against factors such as epiphysis and metaphysis in different age groups.", "title": "" }, { "docid": "55fc836c8b0f10486aa6d969d0cae14d", "text": "In this manuscript we explore the ways in which the marketplace metaphor resonates with online dating participants and how this conceptual framework influences how they assess themselves, assess others, and make decisions about whom to pursue. Taking a metaphor approach enables us to highlight the ways in which participants’ language shapes their self-concept and interactions with potential partners. 
Qualitative analysis of in-depth interviews with 34 participants from a large online dating site revealed that the marketplace metaphor was salient for participants, who employed several strategies that reflected the assumptions underlying the marketplace perspective (including resisting the metaphor). We explore the implications of this metaphor for romantic relationship development, such as the objectification of potential partners. Journal of Social and Personal Relationships © The Author(s), 2010. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav, Vol. 27(4): 427–447. DOI: 10.1177/0265407510361614 This research was funded by Affirmative Action Grant 111579 from the Office of Research and Sponsored Programs at California State University, Stanislaus. An earlier version of this paper was presented at the International Communication Association, 2005. We would like to thank Jack Bratich, Art Ramirez, Lamar Reinsch, Jeanine Turner, and three anonymous reviewers for their helpful comments. All correspondence concerning this article should be addressed to Rebecca D. Heino, Georgetown University, McDonough School of Business, Washington D.C. 20057, USA [e-mail: rdh26@georgetown.edu]. Larry Erbert was the Action Editor on this article. at MICHIGAN STATE UNIV LIBRARIES on June 9, 2010 http://spr.sagepub.com Downloaded from", "title": "" }, { "docid": "274a9094764edd249f1682fbca93a866", "text": "Visual saliency detection is a challenging problem in computer vision, but one of great importance and numerous applications. In this paper, we propose a novel model for bottom-up saliency within the Bayesian framework by exploiting low and mid level cues. In contrast to most existing methods that operate directly on low level cues, we propose an algorithm in which a coarse saliency region is first obtained via a convex hull of interest points. We also analyze the saliency information with mid level visual cues via superpixels. We present a Laplacian sparse subspace clustering method to group superpixels with local features, and analyze the results with respect to the coarse saliency region to compute the prior saliency map. We use the low level visual cues based on the convex hull to compute the observation likelihood, thereby facilitating inference of Bayesian saliency at each pixel. Extensive experiments on a large data set show that our Bayesian saliency model performs favorably against the state-of-the-art algorithms.", "title": "" }, { "docid": "ccafd3340850c5c1a4dfbedd411f1d62", "text": "The paper predicts changes in global and regional incidences of armed conflict for the 2010–2050 period. The predictions are based on a dynamic multinomial logit model estimation on a 1970–2009 cross-sectional dataset of changes between no armed conflict, minor conflict, and major conflict. Core exogenous predictors are population size, infant mortality rates, demographic composition, education levels, oil dependence, ethnic cleavages, and neighborhood characteristics. Predictions are obtained through simulating the behavior of the conflict variable implied by the estimates from this model. We use projections for the 2011–2050 period for the predictors from the UN World Population Prospects and the International Institute for Applied Systems Analysis. We treat conflicts, recent conflict history, and neighboring conflicts as endogenous variables. 
Out-of-sample validation of predictions for 2007–2009 (based on estimates for the 1970–2000 period) indicates that the model predicts well, with an AUC of 0.937. Using a p > 0.30 threshold for positive prediction, the True Positive Rate 7–9 years into the future is 0.79 and the False Positive Rate 0.085. We predict a continued decline in the proportion of the world’s countries that have internal armed conflict, from about 15% in 2009 to 7% in 2050. The decline is particularly strong in the Western Asia and North Africa region, and less clear in Africa South of Sahara. The remaining conflict countries will increasingly be concentrated in East, Central, and Southern Africa and in East and South Asia. ∗An earlier version of this paper was presented to the ISA Annual Convention 2009, New York, 15–18 Feb. The research was funded by the Norwegian Research Council grant no. 163115/V10. Thanks to Ken Benoit, Mike Colaresi, Scott Gates, Nils Petter Gleditsch, Joe Hewitt, Bjørn Høyland, Andy Mack, Näıma Mouhleb, Gerald Schneider, and Phil Schrodt for valuable comments.", "title": "" }, { "docid": "326493520ccb5c8db07362f412f57e62", "text": "This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.", "title": "" }, { "docid": "00f7fb960e1cfc1a4382a55d1038135a", "text": "Cyber-physical systems, used in domains such as avionics or medical devices, perform critical functions where a fault might have catastrophic consequences (mission failure, severe injuries, etc.). Their development is guided by rigorous practice standards that prescribe safety analysis methods in order to verify that failure have been correctly evaluated and/or mitigated. This laborintensive practice typically focuses system safety analysis on system engineering activities.\n As reliance on software for system operation grows, embedded software systems have become a major source of hazard contributors. Studies show that late discovery of errors in embedded software system have resulted in costly rework, making up as much as 50% of the total software system cost. Automation of the safety analysis process is key to extending safety analysis to the software system and to accommodate system evolution.\n In this paper we discuss three elements that are key to safety analysis automation in the context of fault tree analysis (FTA). First, generation of fault trees from annotated architecture models consistently reflects architecture changes in safety analysis results. Second, use of a taxonomy of failure effects ensures coverage of potential hazard contributors is achieved. Third, common cause failures are identified based on architecture information and reflected appropriately in probabilistic fault tree analysis. 
The approach utilizes the SAE Architecture Analysis & Design Language (AADL) standard and the recently published revised Error Model Annex V2 (EMV2) standard to represent annotated architecture models of systems and embedded software systems.\n The approach takes into account error sources specified with an EMV2 error propagation type taxonomy and occurrence probabilities as well as direct and indirect propagation paths between system components identified in the architecture model to generate a fault graph and apply transformations into a fault tree representation to support common mode analysis, cut set determination and probabilistic analysis.", "title": "" }, { "docid": "fd208ec9a2d74306495ac8c6d454bfd6", "text": "This qualitative study investigates the perceptions of suburban middle school students’ on academic motivation and student engagement. Ten students, grades 6-8, were randomly selected by the researcher from school counselors’ caseloads and the primary data collection techniques included two types of interviews; individual interviews and focus group interviews. Findings indicate students’ motivation and engagement in middle school is strongly influenced by the social relationships in their lives. The interpersonal factors identified by students were peer influence, teacher support and teacher characteristics, and parental behaviors. Each of these factors consisted of academic and social-emotional support which hindered and/or encouraged motivation and engagement. Students identified socializing with their friends as a means to want to be in school and to engage in learning. Also, students are more engaged and motivated if they believe their teachers care about their academic success and value their job. Lastly, parental involvement in academics appeared to be more crucial for younger students than older students in order to encourage motivation and engagement in school. MIDDLE SCHOOL STUDENTS’ PERCEPTIONS 5 Middle School Students’ Perceptions on Student Engagement and Academic Motivation Middle School Students’ Perceptions on Student Engagement and Academic Motivation Early adolescence marks a time for change for students academically and socially. Students are challenged academically in the sense that there is greater emphasis on developing specific intellectual and cognitive capabilities in school, while at the same time they are attempting to develop social skills and meaningful relationships. It is often easy to overlook the social and interpersonal challenges faced by students in the classroom when there is a large focus on grades in education, especially since teachers’ competencies are often assessed on their students’ academic performance. When schools do not consider psychosocial needs of students, there is a decrease in academic motivation and interest, lower levels of student engagement and poorer academic performance (i.e. grades) for middle school students (Wang & Eccles, 2013). In fact, students who report high levels of engagement in school are 75% more likely to have higher grades and higher attendance rates. Disengaged students tend to have lower grades and are more likely to drop out of school (Klem & Connell, 2004). 
Therefore, this research has focused on understanding the connections between certain interpersonal influences and academic motivation and engagement.", "title": "" }, { "docid": "44dbbc80c05cbbd95bacdf2f0a724db2", "text": "Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. In this paper, we propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm which benefits from the idea of sparsity and morphological diversity. This entails building data-driven dictionaries for neutral and expressive components. The DCS algorithm then uses these dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition. Experiments on publicly available expression and face data sets show the effectiveness of our method.", "title": "" }, { "docid": "5a40dc82635b3e9905b652da114eb3f4", "text": "Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go. However, contemporary DRL systems inherit a number of shortcomings from the current generation of deep learning techniques. For example, they require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings. As proof-ofconcept, we present a preliminary implementation of the architecture and apply it to several variants of a simple video game. We show that the resulting system – though just a prototype – learns effectively, and, by acquiring a set of symbolic rules that are easily comprehensible to humans, dramatically outperforms a conventional, fully neural DRL system on a stochastic variant of the game.", "title": "" }, { "docid": "68c1a1fdd476d04b936eafa1f0bc6d22", "text": "Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. 
We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage.", "title": "" }, { "docid": "a7eec693523207e6a9547000c1fbf306", "text": "Articulated hand tracking systems have been commonly used in virtual reality applications, including systems with human-computer interaction or interaction with game consoles. However, building an effective real-time hand pose tracker remains challenging. In this paper, we present a simple and efficient methodology for tracking and reconstructing 3d hand poses using a markered optical motion capture system. Markers were positioned at strategic points, and an inverse kinematics solver was incorporated to fit the rest of the joints to the hand model. The model is highly constrained with rotational and orientational constraints, allowing motion only within a feasible set. The method is real-time implementable and the results are promising, even with a low frame rate.", "title": "" }, { "docid": "6afaf6c8059d9a8b4201af3ab0e9c2ba", "text": "\" Memory is made up of a number of interrelated systems, organized structures of operating components consisting of neural substrates and their behavioral and cognitive correlates. A ternary classificatory scheme of memory is proposed in which procedural, semantic, and episodic memory constitute a \"monohierarchical\" arrangement: Episodic memory is a specialized subsystem of semantic memory, and semantic memory is a specialized subsystem of procedural memory. The three memory systems differ from one another in a number of ways, including the kind of consciousness that characterizes their operations. The ternary scheme overlaps with dichotomies and trichotomies of memory proposed by others. Evidence for multiple systems is derived from many sources. Illustrative data are provided by experiments in which direct priming effects are found to be both functionally and stochastically independent of recognition memory. Solving puzzles in science has much in common with solving puzzles for amusement, but the two differ in important respects. Consider, for instance, the jigsaw puzzle that scientific activity frequently imitates. The everyday version of the puzzle is determinate: It consists of a target picture and jigsaw pieces that, when properly assembled, are guaranteed to match the picture. Scientific puzzles are indeterminate: The number of pieces required to complete a picture is unpredictable; a particular piece may fit many pictures or none; it may fit only one picture, but the picture itself may be unknown; or the hypothetical picture may be imagined, but its component pieces may remain undiscovered. This article is about a current puzzle in the science of memory. It entails an imaginary picture and a search for pieces that fit it. The picture, or the hypothesis, depicts memory as consisting of a number of systems, each system serving somewhat different purposes and operating according to somewhat different principles. Together they form the marvelous capacity that we call by the single name of memory, the capacity that permits organisms to benefit from their past experiences. Such a picture is at variance with conventional wisdom that holds memory to be essentially a single system, the idea that \"memory is memory.\" The article consists of three main sections. 
In the first, I present some pretheoretical reasons for hypothesizing the existence of multiple memory systems and briefly discuss the concept of memory system. In the second, I describe a ternary classificatory scheme of memory--consisting of procedural, semantic, and episodic memory--and briefly compare this scheme with those proposed by others. In the third, I discuss the nature and logic of evidence for multiple systems and describe some experiments that have yielded data revealing independent effects of one and the same act of learning, effects seemingly at variance with the idea of a single system. I answer the question posed in the title of the article in the short concluding section. Pretheoretical Considerations Why Multiple Memory Systems? It is possible to identify several a priori reasons why we should break with long tradition (Tulving, 1984a) and entertain thoughts about multiple memory systems. I mention five here. The first reason in many ways is perhaps the most compelling: No profound generalizations can be made about memory as a whole, but general statements about particular kinds of memory are perfectly possible. Thus, many questionable claims about memory in the literature, claims that give rise to needless and futile arguments, would become noncontroversial if their domain was restricted to parts of memory. Second, memory, like everything else in our world, has become what it is through a very long evolutionary process. Such a process seldom forms a continuous smooth line, but is characterized by sudden twists, jumps, shifts, and turns. One might expect, therefore, that the brain structures and mechanisms that (together with their behavioral and mental correlates) go to make up memory will also reflect such evolutionary quirks (Oakley, 1983). The third reason is suggested by comparisons with other psychological functions. Consider, for instance, the interesting phenomenon of blindsight: People with damage to the visual cortex are blind in a part of their visual field in that they do not see objects in that part, yet they can accurately point to and discriminate these objects in a forced-choice situation (e.g., Weiskrantz, 1980; Weiskrantz, Warrington, Sanders, & Marshall, 1974). Such facts imply that different brain mechanisms exist for picking up information about the visual environment. Or consider the massive evidence for the existence of two separate cortical pathways involved in vision, one mediating recognition of objects, the other their location in space (e.g., Mishkin, Ungerleider, & Macko, 1983; Ungerleider & Mishkin, 1982). If \"seeing\" things--something that phenomenal experience tells us is clearly unitary--is subserved by separable neural-cognitive systems, it is possible that learning and remembering, too, appear to be unitary only because of the absence of contrary evidence. The fourth general reason derives from what I think is an unassailable assumption that most, if not all, of our currently held ideas and theories about mental processes are wrong and that sooner or later in the future they will be replaced with more adequate concepts, concepts that fit nature better (Tulving, 1979). Our task, therefore, should be to hasten the arrival of such a future. 
Among other things, we should be willing to contemplate the possibility that the \"memoryi s -memory\" view is wrong and look for a better alternative. The fifth reason lies in a kind of failure of imagination: It is difficult to think how varieties of learning and memory that appear to be so different on inspection can reflect the workings of one and the same underlying set of structures and processes. It is difficult to imagine, for instance, that perceptualEditor's note. This article is based on a Distinguished Scientific Contribution Award address presented at the meeting of the American Psychological Association, Toronto, Canada, August 26, 1984. Award addresses, submitted by award recipients, are published as received except for minor editorial changes designed to maintain American Psychologist format. This reflects a policy of recognizing distinguished award recipients by eliminating the usual editorial review process to provide a forum consistent with that employed in delivering the award address. Author's note. This work was supported by the Natural Sciences and Engineering Research Council of Canada (Grant No. A8632) and by a Special Research Program Grant from the Connaught Fund, University of Toronto. I would like to thank Fergus-Craik and Daniel Schacter for their comments on the article and Janine Law for help with library research and the preparation of the manuscript. Requests for reprints should be sent to Endel Tulving, Department of Psychology, University of Toronto, Toronto, Canada, M5S IA1. motor adaptations to distorting lenses and their aftereffects (e.g., Kohler, 1962) are mediated by the same memory system that enables an individual to answer affirmatively when asked whether Abraham Lincoln is dead. It is equally difficult to imagine that the improved ability to make visual acuity judgments, resulting from many sessions of practice without reinforcement or feedback (e.g., Tulving, 1958), has much in common with a person's ability to remember the funeral of a close friend. If we reflect on the limits of generalizations about memory, think about the twists and turns of evolution, examine possible analogies with other biological and psychological systems, believe that most current ideas we have about the human mind are wrong, and have great difficulty apprehending sameness in different varieties of learning and memory, we might be ready to imagine the possibility that memory consists of a number of interrelated systems. But what exactly do we mean by a memory", "title": "" }, { "docid": "174406f7c5dabb3007158987d35d6de2", "text": "In this paper, we propose a toolkit for efficient and privacy-preserving outsourced calculation under multiple encrypted keys (EPOM). Using EPOM, a large scale of users can securely outsource their data to a cloud server for storage. Moreover, encrypted data belonging to multiple users can be processed without compromising on the security of the individual user's (original) data and the final computed results. To reduce the associated key management cost and private key exposure risk in EPOM, we present a distributed two-trapdoor public-key cryptosystem, the core cryptographic primitive. We also present the toolkit to ensure that the commonly used integer operations can be securely handled across different encrypted domains. We then prove that the proposed EPOM achieves the goal of secure integer number processing without resulting in privacy leakage of data to unauthorized parties. 
Last, we demonstrate the utility and the efficiency of EPOM using simulations.", "title": "" }, { "docid": "165195f20110158a26bc62b74821dc46", "text": "Prior studies on knowledge contribution started with the motivating role of social capital to predict knowledge contribution but did not specifically examine how they can be built in the first place. Our research addresses this gap by highlighting the role technology plays in supporting the development of social capital and eventual knowledge sharing intention. Herein, we propose four technology-based social capital builders – identity profiling, sub-community building, feedback mechanism, and regulatory practice – and theorize that individuals’ use of these IT artifacts determine the formation of social capital, which in turn, motivate knowledge contribution in online communities. Data collected from 253 online community users provide support for the proposed structural model. The results show that use of IT artifacts facilitates the formation of social capital (network ties, shared language, identification, trust in online community, and norms of cooperation) and their effects on knowledge contribution operate indirectly through social capital.", "title": "" } ]
scidocsrr
ab47a5ffcffe3e581a2e7a751a70024b
Learning to Search on Manifolds for 3D Pose Estimation of Articulated Objects
[ { "docid": "5f77e21de8f68cba79fc85e8c0e7725e", "text": "We introduce structured prediction energy networks (SPENs), a flexible framework for structured prediction. A deep architecture is used to define an energy function of candidate labels, and then predictions are produced by using backpropagation to iteratively optimize the energy with respect to the labels. This deep architecture captures dependencies between labels that would lead to intractable graphical models, and performs structure learning by automatically learning discriminative features of the structured output. One natural application of our technique is multi-label classification, which traditionally has required strict prior assumptions about the interactions between labels to ensure tractable learning and prediction problems. We are able to apply SPENs to multi-label problems with substantially larger label sets than previous applications of structured prediction, while modeling high-order interactions using minimal structural assumptions. Overall, deep learning provides remarkable tools for learning features of the inputs to a prediction problem, and this work extends these techniques to learning features of structured outputs. Our experiments provide impressive performance on a variety of benchmark multi-label classification tasks, demonstrate that our technique can be used to provide interpretable structure learning, and illuminate fundamental trade-offs between feed-forward and iterative structured prediction techniques.", "title": "" }, { "docid": "61ae61d0950610ee2ad5e07f64f9b983", "text": "We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.", "title": "" } ]
[ { "docid": "a4d89f698e3049adc70bcd51b26878cc", "text": "The design and measured results of a 2 times 2 microstrip line fed U-slot rectangular antenna array are presented. The U-slot patches and the feeding network are placed on the same layer, resulting in a very simple structure. The advantage of the microstrip line fed U-slot patch is that it is easy to form the array. An impedance bandwidth (VSWR < 2) of 18% ranging from 5.65 GHz to 6.78 GHz is achieved. The radiation performance including radiation pattern, cross polarization, and gain is also satisfactory within this bandwidth. The measured peak gain of the array is 11.5 dBi. The agreement between simulated results and the measurement ones is good. The 2 times 2 array may be used as a module to form larger array.", "title": "" }, { "docid": "b9da9cc9d7583c5b72daf8a25a3145f5", "text": "The purpose of this article is to review literature that is relevant to the social scientific study of ethics and leadership, as well as outline areas for future study. We first discuss ethical leadership and then draw from emerging research on \"dark side\" organizational behavior to widen the boundaries of the review to include ««ethical leadership. Next, three emerging trends within the organizational behavior literature are proposed for a leadership and ethics research agenda: 1 ) emotions, 2) fit/congruence, and 3) identity/ identification. We believe each shows promise in extending current thinking. The review closes with discussion of important issues that are relevant to the advancement of research on leadership and ethics. T IMPORTANCE OF LEADERSHIP in promoting ethical conduct in organizations has long been understood. Within a work environment, leaders set the tone for organizational goals and behavior. Indeed, leaders are often in a position to control many outcomes that affect employees (e.g., strategies, goal-setting, promotions, appraisals, resources). What leaders incentivize communicates what they value and motivates employees to act in ways to achieve such rewards. It is not surprising, then, that employees rely on their leaders for guidance when faced with ethical questions or problems (Treviño, 1986). Research supports this contention, and shows that employees conform to the ethical values of their leaders (Schminke, Wells, Peyrefitte, & Sabora, 2002). Furthermore, leaders who are perceived as ethically positive influence productive employee work behavior (Mayer, Kuenzi, Greenbaum, Bardes, & Salvador, 2009) and negatively influence counterproductive work behavior (Brown & Treviño, 2006b; Mayer et al., 2009). Recently, there has been a surge of empirical research seeking to understand the influence of leaders on building ethical work practices and employee behaviors (see Brown & Treviño, 2006a for a review). Initial theory and research (Bass & Steidlemeier, 1999; Brown, Treviño, & Harrison, 2005; Ciulla, 2004; Treviño, Brown, & Hartman, 2003; Treviño, Hartman, & Brown, 2000) sought to define ethical leadership from both normative and social scientific (descriptive) approaches to business ethics. The normative perspective is rooted in philosophy and is concerned with prescribing how individuals \"ought\" or \"should\" behave in the workplace. 
For example, normative scholarship on ethical leadership (Bass & Steidlemeier, 1999; Ciulla, 2004) examines ethical decision making from particular philosophical frameworks, evaluates the ethicality of particular leaders, and considers the degree to which certain styles of leadership or influence tactics are ethical. In contrast, our article emphasizes a social scientific approach to ethical leadership (e.g., Brown et al., 2005; Treviño et al., 2000; Treviño et al., 2003). This approach is rooted in disciplines such as psychology, sociology, and organization science, and it attempts to understand how people perceive ethical leadership and investigates the antecedents, outcomes, and potential boundary conditions of those perceptions. This research has focused on investigating research questions such as: What is ethical leadership (Brown et al., 2005; Treviño et al., 2003)? What traits are associated with perceived ethical leadership (Walumbwa & Schaubroeck, 2009)? How does ethical leadership flow through various levels of management within organizations (Mayer et al., 2009)? And, does ethical leadership help or hurt a leader's promotability within organizations (Rubin, Dierdorff, & Brown, 2010)? The purpose of our article is to review literature that is relevant to the descriptive study of ethics and leadership, as well as outline areas for future empirical study. We first discuss ethical leadership and then draw from emerging research on what often is called \"dark\" (destructive) organizational behavior, so as to widen the boundaries of our review to also include unethical leadership. Next, we discuss three emerging trends within the organizational behavior literature—1) emotions, 2) fit/congruence, and 3) identity/identification—that we believe show promise in extending current thinking on the influence of leadership (both positive and negative) on organizational ethics. We conclude with a discussion of important issues that are relevant to the advancement of research in this domain. A REVIEW OF SOCIAL SCIENTIFIC ETHICAL LEADERSHIP RESEARCH The Concept of Ethical Leadership Although the topic of ethical leadership has long been considered by scholars, descriptive research on ethical leadership is relatively new. Some of the first formal investigations focused on defining ethical leadership from a descriptive perspective and were conducted by Treviño and colleagues (Treviño et al., 2000, 2003). Their qualitative research revealed that ethical leaders were best described along two related dimensions: moral person and moral manager. The moral person dimension refers to the qualities of the ethical leader as a person. Strong moral persons are honest and trustworthy. They demonstrate a concern for other people and are also seen as approachable. Employees can come to these individuals with problems and concerns, knowing that they will be heard. Moral persons have a reputation for being fair and principled. Lastly, moral persons are seen as consistently moral in both their personal and professional lives. The moral manager dimension refers to how the leader uses the tools of the position of leadership to promote ethical conduct at work. Strong moral managers see themselves as role models in the workplace. They make ethics salient by modeling ethical conduct to their employees. 
Moral managers set and communicate ethical standards and use rewards and punishments to ensure those standards are followed. In sum, leaders who are moral managers \"walk the talk\" and \"talk the walk,\" patterning their behavior and organizational processes to meet moral standards. ETHICAL AND UNETHICAL LEADERSHIP 585 Treviño and colleagues (Treviño et al., 2000, 2003) argued that individuals in power must be both strong moral persons and moral managers in order to be seen as ethical leaders by those around them. Strong moral managers who are weak moral persons are likely to be seen as hypocrites, failing to practice what they preach. Hypocritical leaders talk about the importance of ethics, but their actions show them to be dishonest and unprincipled. Conversely, a strong moral person who is a weak moral manager runs the risk of being seen as an ethically \"neutral\" leader. That is, the leader is perceived as being silent on ethical issues, suggesting to employees that the leader does not really care about ethics. Subsequent research by Brown, Treviño, and Harrison (2005:120) further clarified the construct and provided a formal definition of ethical leadership as \"the demonstration of normatively appropriate conduct through personal actions and interpersonal relationships, and the promotion of such conduct to followers through two-way communication, reinforcement, and decision-making.\" They noted that \"the term normatively appropriate is 'deliberately vague'\" (Brown et al., 2005: 120) because norms vary across organizations, industries, and cultures. Brown et al. (2005) ground their conceptualization of ethical leadership in social learning theory (Bandura, 1977, 1986). This theory suggests individuals can learn standards of appropriate behavior by observing how role models (like teachers, parents, and leaders) behave. Accordingly, ethical leaders \"teach\" ethical conduct to employees through their own behavior. Ethical leaders are relevant role models because they occupy powerful and visible positions in organizational hierarchies that allow them to capture their follower's attention. They communicate ethical expectations through formal processes (e.g., rewards, policies) and personal example (e.g., interpersonal treatment of others). Effective \"ethical\" modeling, however, requires more than power and visibility. For social learning of ethical behavior to take place, role models must be credible in terms of moral behavior. By treating others fairly, honestly, and considerately, leaders become worthy of emulation by others. Otherwise, followers might ignore a leader whose behavior is inconsistent with his/her ethical pronouncements or who fails to interact with followers in a caring, nurturing style (Yussen & Levy, 1975). Outcomes of Ethical Leadership Researchers have used both social learning theory (Bandura, 1977,1986) and social exchange theory (Blau, 1964) to explain the effects of ethical leadership on important outcomes (Brown et al., 2005; Brown & Treviño, 2006b; Mayer et al , 2009; Walumbwa & Schaubroeck, 2009). According to principles of reciprocity in social exchange theory (Blau, 1964; Gouldner, 1960), individuals feel obligated to return beneficial behaviors when they believe another has been good and fair to them. 
In line with this reasoning, researchers argue and find that employees feel indebted to ethical leaders because of their trustworthy and fair nature; consequently, they reciprocate with beneficial work behavior (e.g., higher levels of ethical behavior and citizenship behaviors) and refrain from engaging in destructive behavior (e.g., lower levels of workplace deviance). Emerging research has found that ethical leadership is related to important follower outcomes, such as employees' job satisfaction, organizational commitment, willingness to report problems to supervisors, willingness to put in extra effort on the job, voice behavior (i.e., expression of constructive suggestions intended to improve standard procedures), and perceptions of organizational culture and ethical climate (Brown et al., 2005; Neubert, Carlson, Kacmar, Roberts,", "title": "" }, { "docid": "cb16e3091aa29f0c6e50e3d556822df9", "text": "A considerable amount of effort has been devoted to design a classifier in practical situations. In this paper, a simple nonparametric classifier based on the local mean vectors is proposed. The proposed classifier is compared with the 1-NN, k-NN, Euclidean distance (ED), Parzen, and artificial neural network (ANN) classifiers in terms of the error rate on the unknown patterns, particularly in small training sample size situations. Experimental results show that the proposed classifier is promising even in practical situations. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "545de0009c9bba3538df2d9061c3ecb8", "text": "Attendance is one of the work ethics which is valued by most employers. In educational institutions also, attendance and academic success are directly related. Therefore, proper attendance management systems must be in place. Most of the educational institutions and government organizations in developing countries still use paper based attendance method to monitor the attendance. There is a need to replace these traditional methods of attendance recording with a more secure and robust system. Fingerprint based automated identification systems are gaining popularity due to unique nature of fingerprints. In this paper, a novel approach for fingerprint based attendance system using LabVIEW and GSM technology is proposed. Optical fingerprint module is used for capturing and processing fingerprints. Features such as recording of attendance in a text file along with the date and time of attendance are also incorporated in the system. GSM technology is used to intimate the parents about student’s attendance. The proposed system is implemented in the university and its performance is evaluated based upon user friendliness, accuracy, speed, security and cost.", "title": "" }, { "docid": "51fec678a2e901fdf109d4836ef1bf34", "text": "BACKGROUND\nFoot-and-mouth disease (FMD) is an acute, highly contagious disease that infects cloven-hoofed animals. Vaccination is an effective means of preventing and controlling FMD. Compared to conventional inactivated FMDV vaccines, the format of FMDV virus-like particles (VLPs) as a non-replicating particulate vaccine candidate is a promising alternative.\n\n\nRESULTS\nIn this study, we have developed a co-expression system in E. coli, which drove the expression of FMDV capsid proteins (VP0, VP1, and VP3) in tandem by a single plasmid.
The co-expressed FMDV capsid proteins (VP0, VP1, and VP3) were produced in large scale by fermentation at 10 L scale and the chromatographic purified capsid proteins were auto-assembled as VLPs in vitro. Cattle vaccinated with a single dose of the subunit vaccine, comprising in vitro assembled FMDV VLP and adjuvant, developed FMDV-specific antibody response (ELISA antibodies and neutralizing antibodies) with the persistent period of 6 months. Moreover, cattle vaccinated with the subunit vaccine showed the high protection potency with the 50 % bovine protective dose (PD50) reaching 11.75 PD50 per dose.\n\n\nCONCLUSIONS\nOur data strongly suggest that in vitro assembled recombinant FMDV VLPs produced from E. coli could function as a potent FMDV vaccine candidate against FMDV Asia1 infection. Furthermore, the robust protein expression and purification approaches described here could lead to the development of industrial level large-scale production of E. coli-based VLPs against FMDV infections with different serotypes.", "title": "" }, { "docid": "df354ff3f0524d960af7beff4ec0a68b", "text": "The paper presents digital beamforming for Passive Coherent Location (PCL) radar. The considered circular antenna array is a part of a passive system developed at Warsaw University of Technology. The system is based on FM radio transmitters. The array consists of eight half-wave dipoles arranged in a circular array covering 360deg with multiple beams. The digital beamforming procedure is presented, including mutual coupling correction and antenna pattern optimization. The results of field calibration and measurements are also shown.", "title": "" }, { "docid": "5e14a79e4634445291d67c3d7f4ea617", "text": "A a new type of word-of-mouth information, online consumer product review is an emerging market phenomenon that is playing an increasingly important role in consumers’ purchase decisions. This paper argues that online consumer review, a type of product information created by users based on personal usage experience, can serve as a new element in the marketing communications mix and work as free “sales assistants” to help consumers identify the products that best match their idiosyncratic usage conditions. This paper develops a normative model to address several important strategic issues related to consumer reviews. First, we show when and how the seller should adjust its own marketing communication strategy in response to consumer reviews. Our results reveal that if the review information is sufficiently informative, the two types of product information, i.e., the seller-created product attribute information and buyer-created review information, will interact with each other. For example, when the product cost is low and/or there are sufficient expert (more sophisticated) product users, the two types of information are complements, and the seller’s best response is to increase the amount of product attribute information conveyed via its marketing communications after the reviews become available. However, when the product cost is high and there are sufficient novice (less sophisticated) product users, the two types of information are substitutes, and the seller’s best response is to reduce the amount of product attribute information it offers, even if it is cost-free to provide such information. We also derive precise conditions under which the seller can increase its profit by adopting a proactive strategy, i.e., adjusting its marketing strategies even before consumer reviews become available. 
Second, we identify product/market conditions under which the seller benefits from facilitating such buyer-created information (e.g., by allowing consumers to post user-based product reviews on the seller’s website). Finally, we illustrate the importance of the timing of the introduction of consumer reviews available as a strategic variable and show that delaying the availability of consumer reviews for a given product can be beneficial if the number of expert (more sophisticated) product users is relatively large and cost of the product is low.", "title": "" }, { "docid": "c638fe67f5d4b6e04a37e216edb849fa", "text": "An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging design problems. However, due to the computational cost of these high fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques have become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.", "title": "" }, { "docid": "901fa78a4d06c365d13169859caeae69", "text": "Although the number of cloud projects has dramatically increased over the last few years, ensuring the availability and security of project data, services, and resources is still a crucial and challenging research issue. Distributed denial of service (DDoS) attacks are the second most prevalent cybercrime attacks after information theft. DDoS TCP flood attacks can exhaust the cloud’s resources, consume most of its bandwidth, and damage an entire cloud project within a short period of time. The timely detection and prevention of such attacks in cloud projects are therefore vital, especially for eHealth clouds. In this paper, we present a new classifier system for detecting and preventing DDoS TCP flood attacks (CS_DDoS) in public clouds. The proposed CS_DDoS system offers a solution to securing stored records by classifying the incoming packets and making a decision based on the classification results. During the detection phase, the CS_DDOS identifies and determines whether a packet is normal or originates from an attacker. During the prevention phase, packets, which are classified as malicious, will be denied to access the cloud service and the source IP will be blacklisted. The performance of the CS_DDoS system is compared using the different classifiers of the least squares support vector machine (LS-SVM), naïve Bayes, K-nearest, and multilayer perceptron. The results show that CS_DDoS yields the best performance when the LS-SVM classifier is adopted. 
It can detect DDoS TCP flood attacks with about 97% accuracy and with a Kappa coefficient of 0.89 when under attack from a single source, and 94% accuracy with a Kappa coefficient of 0.9 when under attack from multiple attackers. Finally, the results are discussed in terms of accuracy and time complexity, and validated using a K-fold cross-validation model.", "title": "" }, { "docid": "72d863c7e323cd9b3ab4368a51743319", "text": "STUDY DESIGN\nThis study is a retrospective review of the initial enrollment data from a prospective multicentered study of adult spinal deformity.\n\n\nOBJECTIVES\nThe purpose of this study is to correlate radiographic measures of deformity with patient-based outcome measures in adult scoliosis.\n\n\nSUMMARY OF BACKGROUND DATA\nPrior studies of adult scoliosis have attempted to correlate radiographic appearance and clinical symptoms, but it has proven difficult to predict health status based on radiographic measures of deformity alone. The ability to correlate radiographic measures of deformity with symptoms would be useful for decision-making and surgical planning.\n\n\nMETHODS\nThe study correlates radiographic measures of deformity with scores on the Short Form-12, Scoliosis Research Society-29, and Oswestry profiles. Radiographic evaluation was performed according to an established positioning protocol for anteroposterior and lateral 36-inch standing radiographs. Radiographic parameters studied were curve type, curve location, curve magnitude, coronal balance, sagittal balance, apical rotation, and rotatory subluxation.\n\n\nRESULTS\nThe 298 patients studied include 172 with no prior surgery and 126 who had undergone prior spine fusion. Positive sagittal balance was the most reliable predictor of clinical symptoms in both patient groups. Thoracolumbar and lumbar curves generated less favorable scores than thoracic curves in both patient groups. Significant coronal imbalance of greater than 4 cm was associated with deterioration in pain and function scores for unoperated patients but not in patients with previous surgery.\n\n\nCONCLUSIONS\nThis study suggests that restoration of a more normal sagittal balance is the critical goal for any reconstructive spine surgery. The study suggests that magnitude of coronal deformity and extent of coronal correction are less critical parameters.", "title": "" }, { "docid": "542bf63a4c97cbbfe91c39e32fbaf9dd", "text": "Vision is the most versatile and efficient sensory system. So, it is not surprising that images contribute an important role in human perception. This is analogous to machine vision such as shape recognition application which is an important field nowadays. This paper describes implementation of image processing on embedded platform and an embedded application, a robot capable of tracking an object in 3-dimensional environment. It is a real time operating system (RTOS) based embedded system which will run the Digital Image Processing Algorithms to extract the information from the images. The camera connected on USB bus is used to capture images on the ARM9 core running RTOS. Depending upon the information extracted, the locomotion is carried out. The camera is a simple CMOS USB-camera module which has a resolution about 0.3MP. Video4Linux API’s provided by kernel are used to capture the image, and then it is decoded, and the required object location is detected using image processing algorithms. The actuations are made so as to track the object. 
The embedded Linux kernel provides support for multitasking and ensures that the task is performed within the real time constraints. The OS makes system flexible for changes such as interfacing new devices, handling the file system and memory management for storage of data. KeywordsEmbedded Linux, ARM, Video4Linux, YUYV, Embedded C, Object detection, CMOS, USB, SOC, Kerne", "title": "" }, { "docid": "984c41e73e0fc97c1f2ea054bf6cac14", "text": "Security is an integral part of most software systems but it is not considered as an explicit part in the development process yet. Input validation is the most critical part of software security that is not covered in the design phase of software development life-cycle resulting in many security vulnerabilities. Our objective is to extend UML to new integrated framework for model driven security engineering leading to ideal way to design more secure software. Input validation in UML has not been addressed previously, hence we incorporate input validation into UML diagrams such as use case, class, sequence and activity. This approach has some advantages such as preventing from common input tampering attacks, having both security and convenience in software at high level of abstraction and ability of solving the problem of weak security background for developers.", "title": "" }, { "docid": "4f686e9f37ec26070d0d280b98f78673", "text": "State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models. Yet, ImageNet is now nearly ten years old and is by modern standards “small”. Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger. The reasons are obvious: such datasets are difficult to collect and annotate. In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. Our experiments demonstrate that training for large-scale hashtag prediction leads to excellent results. We show improvements on several image classification and object detection tasks, and report the highest ImageNet-1k single-crop, top-1 accuracy to date: 85.4% (97.6% top-5). We also perform extensive experiments that provide novel empirical data on the relationship between large-scale pretraining and transfer learning performance.", "title": "" }, { "docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522", "text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. 
Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.", "title": "" }, { "docid": "fc0470776583df8b25114abc8709b045", "text": "Certified Registered Nurse Anesthetists (CRNAs) have been providing anesthesia care in the United States (US) for nearly 150 years. Historically, anesthesia care for surgical patients was mainly provided by trained nurses under the supervision of surgeons until the establishment of anesthesiology as a medical specialty in the US. Currently, all 50 US states utilize CRNAs to perform various kinds of anesthesia care, either under the medical supervision of anesthesiologists in most states, or independently without medical supervision in 16 states; the latter has become an on-going source of conflict between anesthesiologists and CRNAs. Understanding the history and current conditions of anesthesia practice in the US is crucial for countries in which the shortage of anesthesia care providers has become a national issue.", "title": "" }, { "docid": "712ec4f533c252baeaa3fc4aec3900eb", "text": "This paper is a deep investigation of cross-language plagiarism detection methods on a new recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.", "title": "" }, { "docid": "c37266bccbd26b149f759e8323ed77ff", "text": "Transformer bushings are one of the important components in the construction of high voltage power transmission. The structure and materials used in electrical bushings play a vital role in the durability and lifespan of the bushing. Although the cost of transformer bushing is very minimal in the transmission system extensive damages are caused in case of failure. The main reason of failure in the insulation materials are caused due to moisture ingress and formation of voids, cavities. When the appropriate material is used along with an optimized structure, the electric field is distributed evenly which reduces the stress and increases the longevity of the bushing. In this paper, finite element analysis (FEA) is used to calculate The electric field distribution and thermal (temperature) analysis is done for 765 kV transformer bushing by using different insulating materials such as epoxy resin, porcelain, oil impregnated paper, resin impregnated paper and polymer. Henceforth, by using the above insulating materials the total stress level of the bushings corresponding to the net electric field distribution is in turn calculated using ANSYS 13.0 software. The results are compared and analysis is done for the different insulating materials. Based on the electric field distribution over bushing material, it is used to identify optimization of bushing design.", "title": "" }, { "docid": "325b97e73ea0a50d2413757e95628163", "text": "Due to the recent advancement in procedural generation techniques, games are presenting players with ever growing cities and terrains to explore. However most sandbox-style games situated in cities, do not allow players to wander into buildings. In past research, space planning techniques have already been utilized to generate suitable layouts for both building floor plans and room layouts. 
We introduce a novel rule-based layout solving approach, especially suited for use in conjunction with procedural generation methods. We show how this solving approach can be used for procedural generation by providing the solver with a userdefined plan. In this plan, users can specify objects to be placed as instances of classes, which in turn contain rules about how instances should be placed. This approach gives us the opportunity to use our generic solver in different procedural generation scenarios. In this paper, we will illustrate mainly with interior generation examples.", "title": "" }, { "docid": "f53dc3977a9e8c960e0232ef59c0e7fd", "text": "The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.", "title": "" }, { "docid": "88c1ab7e817118ee01fb28bf32ed2e23", "text": "Field experiment was conducted on fodder maize to explore the potential of integrated use of chemical, organic and biofertilizers for improving maize growth, beneficial microflora in the rhizosphere and the economic returns. The treatments were designed to make comparison of NPK fertilizer with different combinations of half dose of NP with organic and biofertilizers viz. biological potassium fertilizer (BPF), Biopower, effective microorganisms (EM) and green force compost (GFC). Data reflected maximum crop growth in terms of plant height, leaf area and fresh biomass with the treatment of full NPK; and it was followed by BPF+full NP. The highest uptake of NPK nutrients by crop was recorded as: N under half NP+Biopower; P in BPF+full NP; and K from full NPK. The rhizosphere microflora enumeration revealed that Biopower+EM applied along with half dose of GFC soil conditioner (SC) or NP fertilizer gave the highest count of N-fixing bacteria (Azotobacter, Azospirillum, Azoarcus andZoogloea). Regarding the P-solubilizing bacteria,Bacillus was having maximum population with Biopower+BPF+half NP, andPseudomonas under Biopower+EM+half NP treatment. It was concluded that integration of half dose of NP fertilizer with Biopower+BPF / EM can give similar crop yield as with full rate of NP fertilizer; and through reduced use of fertilizers the production cost is minimized and the net return maximized. However, the integration of half dose of NP fertilizer with biofertilizers and compost did not give maize fodder growth and yield comparable to that from full dose of NPK fertilizers.", "title": "" } ]
scidocsrr
19a3d40c417bcc3b03b59a7cbc2e8ef3
An empirical analysis of the optimization of deep network loss surfaces
[ { "docid": "f31f45176e89163d27b065a52b429973", "text": "Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state of the art neural networks never encounter any significant obstacles.", "title": "" } ]
[ { "docid": "24c49ac0ed56f27982cfdad18054e466", "text": "This paper examines two alternative approaches to supporting code scheduling for multiple-instruction-issue processors. One is to provide a set of non-trapping instructions so that the compiler can perform aggressive static code scheduling. The application of this approach to existing commercial architectures typically requires extending the instruction set. The other approach is to support out-of-order execution in the microarchitecture so that the hardware can perform aggressive dynamic code scheduling. This approach usually does not require modifying the instruction set but requires complex hardware support. In this paper, we analyze the performance of the two alternative approaches using a set of important nonnumerical C benchmark programs. A distinguishing feature of the experiment is that the code for the dynamic approach has been optimized and scheduled as much as allowed by the architecture. The hardware is only responsible for the additional reordering that cannot be performed by the compiler. The overall result is that the clynamic and static approaches are comparable in performance. When applied to a four-instruction-issue processor, both methods achieve more than two times speedup over a high performance single-instruction-issue processor. However, the performance of each scheme varies among the benchmark programs. To explain this variation, we have identified the conditions in these programs that make one approach perform better than the other.", "title": "" }, { "docid": "c77d76834c3aa8ace82cb15b6f882365", "text": "A multidatabase system provides integrated access to heterogeneous, autonomous local databases in a distributed system. An important problem in current multidatabase systems is identification of semantically similar data in different local databases. The Summary Schemas Model (SSM) is proposed as an extension to multidatabase systems to aid in semantic identification. The SSM uses a global data structure to abstract the information available in a multidatabase system. This abstracted form allows users to use their own terms (imprecise queries) when accessing data rather than being forced to use system-specified terms. The system uses the global data structure to match the user's terms to the semantically closest available system terms. A simulation of the SSM is presented to compare imprecise-query processing with corresponding query-processing costs in a standard multidatabase system. The costs and benefits of the SSM are discussed, and future research directions are presented.", "title": "" }, { "docid": "a6dfe7d715ff1243912d21bb75d0e8a3", "text": "When evaluating the accessibility of a website, we usually resort to sampling methods to reduce the cost of evaluation. In this kind of approaches, a small subset of pages in a website are chosen for evaluating the accessibility value of the whole website. Good sampling quality means the selected subset can represent well the accessibility level of the whole website, i.e. minimizing the accessibility evaluation difference between the whole site and the sampled subset. As existing studies show the accuracy of sampling methods depends heavily on the metric, we propose in this paper a specific sampling method OPS-WAQM that is optimized for Web Accessibility Quantitative Metric (WAQM). OPS-WAQM minimizes the sampling error by choosing the optimal sample numbers in different page depth layers. 
A greedy algorithm is proposed to approximately solve the optimization problem in an efficient way. We use a dataset of 20 websites, 365780 web pages to validate our method. Experimental results show that our sampling method is effective for web accessibility evaluation.", "title": "" }, { "docid": "c7502c4fe6d06993c3075043c0e6a3e7", "text": "Wireless communication applications have driven the development of high-resolution A/D converters (ADCs) with high sample rates, good AC performance and IF sampling capability to enable wider cellular coverage, more carriers, and to simplify the system design. We describe a 16b ADC with a sample rate up to 250MS/s that employs background calibration of the residue amplifier (RA) gain errors. The ADC has an integrated input buffer and is fabricated on a 0.18µm BiCMOS process. When the input buffer is bypassed, the SNR is 77.5dB and the SFDR is 90dB at 10MHz input frequency. With the input buffer, the SNR is 76dB and the SFDR is 95dB. The ADC consumes 850mW from a 1.8V supply, and the input buffer consumes 150mW from a 3V supply. The input span is 2.6Vp-p and the jitter is 60fs.", "title": "" }, { "docid": "c6961a90d470bdcbc547636690365a75", "text": "Seven studies reveal that nostalgia, a sentimental affection for the past, offers a window to the intrinsic self-concept-who people think they truly are. In Study 1, state nostalgia was associated with higher authenticity and lower extrinsic self-focus (concern with meeting extrinsic value standards). In Study 2, experimentally primed nostalgia increased perceived authenticity of the past self, which in turn predicted reduced current extrinsic self-focus. Study 3 showed that nostalgia increased the accessibility of the intrinsic self-concept but not the everyday self-concept. Study 4 provided evidence for a moderator suggested by our theoretical analysis: Recalling a nostalgic event increased felt nostalgia and positive affect, but this effect was attenuated if participants were prompted to recognize external factors controlling their behavior during that event. Next we treated nostalgia as an outcome variable and a moderator to test whether nostalgia is triggered by, and buffers against, threats to the intrinsic self. Using a mediation approach, Study 5 showed that participants primed to feel blocked in intrinsic self-expression responded with increased nostalgia. In Study 6, intrinsic self-threat reduced intrinsic self-expression and subjective well-being for participants who were not given an opportunity to respond with nostalgia but not for participants who were allowed to reflect on a nostalgic memory. In line with the experimental findings, correlational data from Study 7 indicated that dispositional nostalgia positively predicted intrinsic self-expression and well-being. Understanding nostalgia as a window to the intrinsic self points to new directions for research on nostalgia's antecedents, moderators, and consequences for well-being.", "title": "" }, { "docid": "25fdc5ac66a915f2bd4f552218dcf396", "text": "Piaget has suffered a great deal of criticism that his theory of psychological development neglects the social nature of human development. Much of this criticism has come from researchers following a Vygotskian approach and comparing Piaget's approach unfavorably with that of Vygotsky.
Smith (1995) refers us to Piaget's collected articles on sociology (Piaget, 1995) to argue convincingly that it is oversimplification and misunderstanding to assume Piaget's neglect of the social nature of human development. We want to offer our own critique of both Piaget and Vygotsky from a new, sociocultural perspective, recently emerging in several disciplines of social sciences (Heath, 1983; Latour, 1987; Lave & Wenger, 1991; McDermott, 1993; Rogoff, 1990). We do not consider ourselves followers of Vygotsky's theory but of a sociocultural approach, despite the fact that the sociocultural approach is itself heavily built on and influenced by Vygotsky's work. For a while, a sociocultural approach was an invisible by-product of efforts by mainly US psychologists like Cole (1978), Wertsch (1985), Scribner (1984), Rogoff and Wertsch (1984) and others to reconstruct and continue Vygotsky's paradigm. Just as the medieval endeavor to bring a renaissance of ancient Greek art and culture gave birth to a new art and a new culture, we argue that the renaissance of Vygotsky has gradually produced a new theoretical approach, namely the sociocultural. Initial critiques of Piaget from a Vygotskyian perspective came when the sociocultural approach was into the earlier phases of development, when \"psychologists\" were \"increasingly interested in the effects of the social context of individuals' cognitive development\" (Tudge & Rogoff, 1989, p. 17). In contrast, from a current sociocultural perspective, cognitive development is embedded in social contexts and their separation is considered impossible and, thus, cannot have \"effects\". Like Smith, we claim that there is an overlooked similarity between Piaget and Vygotsky. However, from a recent sociocultural perspective, we associate these similarities with a shared failure to recognize the unity of cognition and social context. Our paper is primarily", "title": "" }, { "docid": "f08c6829b353c45b6a9a6473b4f9a201", "text": "In this paper, we study the Symmetric Regularized Long Wave (SRLW) equations by finite difference method. We design some numerical schemes which preserve the original conservative properties for the equations. The first scheme is two-level and nonlinear-implicit. Existence of its difference solutions are proved by Brouwer fixed point theorem. It is proved by the discrete energy method that the scheme is uniquely solvable, unconditionally stable and second-order convergent for U in L1 norm, and for N in L2 norm on the basis of the priori estimates. The second scheme is three-level and linear-implicit. Its stability and second-order convergence are proved. Both of the two schemes are conservative so can be used for long time computation. However, they are coupled in computing so need more CPU time. Thus we propose another three-level linear scheme which is not only conservative but also uncoupled in computation, and give the numerical analysis on it. Numerical experiments demonstrate that the schemes are accurate and efficient. 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "e92523a656b96996d72db0c8697a46aa", "text": "For many of the world’s languages, the Bible is the only significant bilingual, or even monolingual, text, making it a unique training resource for tasks such as translation, named entity analysis, and transliteration. Given the Bible’s small size, however, the output of standard word alignment tools can be extremely noisy, making downstream tasks difficult.
In this work, we develop and release a novel resource of 1129 aligned Bible person and place names across 591 languages, which was constructed and improved using several approaches including weighted edit distance, machine-translation-based transliteration models, and affixal induction and transformation models. Our models outperform a widely used word aligner on 97% of test words, showing the particular efficacy of our approach on the impactful task of broadly multilingual named-entity alignment and translation across a remarkably large number of world languages. We further illustrate the utility of our translation matrix for the multilingual learning of name-related affixes and their semantics as well as transliteration of named entities.", "title": "" }, { "docid": "bf44cc7e8e664f930edabf20ca06dd29", "text": "Nowadays, our living environment is rich in radio-frequency energy suitable for harvesting. This energy can be used for supplying low-power consumption devices. In this paper, we analyze a new type of a Koch-like antenna which was designed for energy harvesting specifically. The designed antenna covers two different frequency bands (GSM 900 and Wi-Fi). Functionality of the antenna is verified by simulations and measurements.", "title": "" }, { "docid": "8318d49318f442749bfe3a33a3394f42", "text": "Driving Scene understanding is a key ingredient for intelligent transportation systems. To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes. We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments. The dataset includes 104 hours of real human driving in the San Francisco Bay Area collected using an instrumented vehicle equipped with different sensors. We provide a detailed analysis of HDD with a comparison to other driving datasets. A novel annotation methodology is introduced to enable research on driver behavior understanding from untrimmed data sequences. As the first step, baseline algorithms for driver behavior detection are trained and tested to demonstrate the feasibility of the proposed task.", "title": "" }, { "docid": "31404322fb03246ba2efe451191e29fa", "text": "OBJECTIVES\nThe aim of this study is to report an unusual form of penile cancer presentation associated with myiasis infestation, treatment options and outcomes.\n\n\nMATERIALS AND METHODS\nWe studied 10 patients with suspected malignant neoplasm of the penis associated with genital myiasis infestation. Diagnostic assessment was conducted through clinical history, physical examination, penile biopsy, larvae identification and computerized tomography scan of the chest, abdomen and pelvis. Clinical and pathological staging was done according to 2002 TNM classification system. Radical inguinal lymphadenectomy was conducted according to the primary penile tumor pathology and clinical lymph nodes status.\n\n\nRESULTS\nPatients age ranged from 41 to 77 years (mean=62.4). All patients presented squamous cell carcinoma of the penis in association with myiasis infestation caused by Psychoda albipennis. Tumor size ranged from 4cm to 12cm (mean=5.3). Circumcision was conducted in 1 (10%) patient, while penile partial penectomy was performed in 5 (50%). Total penectomy was conducted in 2 (20%) patients, while emasculation was the treatment option for 2 (20%). All patients underwent radical inguinal lymphadenectomy. 
Prophylactic lymphadenectomy was performed on 3 (30%) patients, therapeutic on 5 (50%), and palliative lymphadenectomy on 2 (20%) patients. Time elapsed from primary tumor treatment to radical inguinal lymphadenectomy was 2 to 6 weeks. The mean follow-up was 34.3 months.\n\n\nCONCLUSION\nThe occurrence of myiasis in the genitalia is more common in patients with precarious hygienic practices and low socio-economic level. The treatment option varied according to the primary tumor presentation and clinical lymph node status.", "title": "" }, { "docid": "e43056aad827cd5eea146418aa89ec09", "text": "The detection and analysis of clusters has become commonplace within geographic information science and has been applied in epidemiology, crime prevention, ecology, demography and other fields. One of the many methods for detecting and analyzing these clusters involves searching the dataset with a flock of boids (bird objects). While boids are effective at searching the dataset once their behaviors are properly configured, it can be difficult to find the proper configuration. Since genetic algorithms have been successfully used to configure neural networks, they may also be useful for configuring parameters guiding boid behaviors. In this paper, we develop a genetic algorithm to evolve the ideal boid behaviors. Preliminary results indicate that, even though the genetic algorithm does not return the same configuration each time, it does converge on configurations that improve over the parameters used when boids were initially proposed for geographic cluster detection. Also, once configured, the boids perform as well as other cluster detection methods. Continued work with this system could determine which parameters have a greater effect on the results of the boid system and could also discover rules for configuring a flock of boids directly from properties of the dataset, such as point density, rather than requiring the time-consuming process of optimizing the parameters for each new dataset.", "title": "" }, { "docid": "87da90ee583f5aa1777199f67bdefc83", "text": "The rapid development of computer networks in the past decades has created many security problems related to intrusions on computer and network systems. Intrusion Detection Systems IDSs incorporate methods that help to detect and identify intrusive and non-intrusive network packets. Most of the existing intrusion detection systems rely heavily on human analysts to analyze system logs or network traffic to differentiate between intrusive and non-intrusive network traffic. With the increase in data of network traffic, involvement of human in the detection system is a non-trivial problem. IDS’s ability to perform based on human expertise brings limitations to the system’s capability to perform autonomously over exponentially increasing data in the network. However, human expertise and their ability to analyze the system can be efficiently modeled using soft-computing techniques. Intrusion detection techniques based on machine learning and softcomputing techniques enable autonomous packet detections. They have the potential to analyze the data packets, autonomously. These techniques are heavily based on statistical analysis of data. The ability of the algorithms that handle these data-sets can use patterns found in previous data to make decisions for the new evolving data-patterns in the network traffic. 
In this paper, we present a rigorous survey study that envisages various soft-computing and machine learning techniques used to build autonomous IDSs. A robust IDSs system lays a foundation to build an efficient Intrusion Detection and Prevention System IDPS.", "title": "" }, { "docid": "212536baf7f5bd2635046774436e0dbf", "text": "Mobile devices have already been widely used to access the Web. However, because most available web pages are designed for desktop PC in mind, it is inconvenient to browse these large web pages on a mobile device with a small screen. In this paper, we propose a new browsing convention to facilitate navigation and reading on a small-form-factor device. A web page is organized into a two level hierarchy with a thumbnail representation at the top level for providing a global view and index to a set of sub-pages at the bottom level for detail information. A page adaptation technique is also developed to analyze the structure of an existing web page and split it into small and logically related units that fit into the screen of a mobile device. For a web page not suitable for splitting, auto-positioning or scrolling-by-block is used to assist the browsing as an alterative. Our experimental results show that our proposed browsing convention and developed page adaptation scheme greatly improve the user's browsing experiences on a device with a small display.", "title": "" }, { "docid": "f733125d8cd3d90ac7bf463ae93ca24a", "text": "Various online, networked systems offer a lightweight process for obtaining identities (e.g., confirming a valid e-mail address), so that users can easily join them. Such convenience comes with a price, however: with minimum effort, an attacker can subvert the identity management scheme in place, obtain a multitude of fake accounts, and use them for malicious purposes. In this work, we approach the issue of fake accounts in large-scale, distributed systems, by proposing a framework for adaptive identity management. Instead of relying on users' personal information as a requirement for granting identities (unlike existing proposals), our key idea is to estimate a trust score for identity requests, and price them accordingly using a proof of work strategy. The research agenda that guided the development of this framework comprised three main items: (i) investigation of a candidate trust score function, based on an analysis of users' identity request patterns, (ii) combination of trust scores and proof of work strategies (e.g. cryptograhic puzzles) for adaptively pricing identity requests, and (iii) reshaping of traditional proof of work strategies, in order to make them more resource-efficient, without compromising their effectiveness (in stopping attackers).", "title": "" }, { "docid": "bc8b531442f155d8311b10585135eb9f", "text": "CONTEXT\nAlthough mathematical models have been developed for the bony movement occurring during chiropractic manipulation, such models are not available for soft tissue motion.\n\n\nOBJECTIVE\nTo develop a three-dimensional mathematical model for exploring the relationship between mechanical forces and deformation of human fasciae in manual therapy using a finite deformation theory.\n\n\nMETHODS\nThe predicted stresses required to produce plastic deformation were evaluated for a volunteer subject's fascia lata, plantar fascia, and superficial nasal fascia. These stresses were then compared with previous experimental findings for plastic deformation in dense connective tissues. 
Using the three-dimensional mathematical model, the authors determined the changing amounts of compression and shear produced in fascial tissue during 20 seconds of manual therapy.\n\n\nRESULTS\nThe three-dimensional model's equations revealed that very large forces, outside the normal physiologic range, are required to produce even 1% compression and 1% shear in fascia lata and plantar fascia. Such large forces are not required to produce substantial compression and shear in superficial nasal fascia, however.\n\n\nCONCLUSION\nThe palpable sensations of tissue release that are often reported by osteopathic physicians and other manual therapists cannot be due to deformations produced in the firm tissues of plantar fascia and fascia lata. However, palpable tissue release could result from deformation in softer tissues, such as superficial nasal fascia.", "title": "" }, { "docid": "bb88a929b1ac6565c7d31abb65813b29", "text": "Esophagitis dissecans superficialis and eosinophilic esophagitis are distinct esophageal pathologies with characteristic clinical and histologic findings. Esophagitis dissecans superficialis is a rare finding on endoscopy consisting of the peeling of large fragments of esophageal mucosa. Histology shows sloughing of the epithelium and parakeratosis. Eosinophilic esophagitis is an allergic disease of the esophagus characterized by eosinophilic inflammation of the epithelium and symptoms of esophageal dysfunction. Both of these esophageal processes have been associated with other diseases, but there is no known association between them. We describe a case of esophagitis dissecans superficialis and eosinophilic esophagitis in an adolescent patient. To our knowledge, this is the first case describing an association between esophageal dissecans superficialis and eosinophilic esophagitis. Citation: Guerra MR, Vahabnezhad E, Swanson E, Naini BV, Wozniak LJ (2015) Esophagitis dissecans associated with eosinophilic esophagitis in an adolescent. Adv Pediatr Res 2:8. doi:10.12715/apr.2015.2.8 Received: January 27, 2015; Accepted: February 19, 2015; Published: March 19, 2015 Copyright: © 2015 Guerra et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Competing interests: The authors have declared that no competing interests exist. * Email: marjorieanneguerra@mednet.ucla.edu", "title": "" }, { "docid": "31190e66cb9bff91359f4594623880ad", "text": "This paper reports an ultra-thin MEMS capacitive pressure sensor with high pressure sensitivity of better than 150aF/Pa, and small die size of 1.0mm × 1.0mm × 60µm. It is able to detect ambient pressure change with a resolution of 0.025% in a pressure range +/−3.5KPa. This capacitive pressure sensor decouples the pressure sensing from its capacitance sensing by using a hermetically sealed capacitor that is electrically isolated but mechanically coupled with a pressure sensing diaphragm such that a large dynamic range and high pressure sensitivity can be readily achieved. Because the capacitor is hermetically sealed in a cavity, this capacitive pressure sensor is also immune to measurement media and EMI (Electromagnetic Interference) effects.", "title": "" }, { "docid": "aee28b8f04acf83abb1441134411690a", "text": "This paper reports a new novel low-cost, wireless communication network system, called the \"Electronic Shepherd\" (ES). 
The system is innovative in the way that it supports flock behavior, meaning that a flock leader monitors the state of the other elements in the flock using low-cost radio communication equipment. The paper addresses both details of the terminal devices and communication protocols, as well as testing of the system in a real environment. The ES system was originally made to address special needs for sheep and reindeer farmers who were seeking a system to keep track of their animals during the grazing season. The system, including GPS receivers, UHF radio communication transceivers and GPRS modems, contributes a new approach for low-cost networking and service implementation, not only for the purpose of animal tracking, but also for other applications where objects are to be monitored at a low cost.", "title": "" }, { "docid": "40ca62f4c792d8381a3ab3c1193fcc4f", "text": "There has been a rapid increase in the volume of research on data-driven dependency parsers in the past five years. This increase has been driven by the availability of treebanks in a wide variety of languages—due in large part to the CoNLL shared tasks—as well as the straightforward mechanisms by which dependency theories of syntax can encode complex phenomena in free word order languages. In this article, our aim is to take a step back and analyze the progress that has been made through an analysis of the two predominant paradigms for data-driven dependency parsing, which are often called graph-based and transition-based dependency parsing. Our analysis covers both theoretical and empirical aspects and sheds light on the kinds of errors each type of parser makes and how they relate to theoretical expectations. Using these observations, we present an integrated system based on a stacking learning framework and show that such a system can learn to overcome the shortcomings of each non-integrated system.", "title": "" } ]
scidocsrr
e6902ff3ecf86586b575283bcaae5dcf
Relative entropy and free energy dualities: Connections to Path Integral and KL control
[ { "docid": "9fdba452394ba0a8ed3b75f222de9590", "text": "We present a theory of compositionality in stochastic optimal control, showing how task-optimal controllers can be constructed from certain primitives. The primitives are themselves feedback controllers pursuing their own agendas. They are mixed in proportion to how much progress they are making towards their agendas and how compatible their agendas are with the present task. The resulting composite control law is provably optimal when the problem belongs to a certain class. This class is rather general and yet has a number of unique properties – one of which is that the Bellman equation can be made linear even for non-linear or discrete dynamics. This gives rise to the compositionality developed here. In the special case of linear dynamics and Gaussian noise our framework yields analytical solutions (i.e. non-linear mixtures of LQG controllers) without requiring the final cost to be quadratic. More generally, a natural set of control primitives can be constructed by applying SVD to Green’s function of the Bellman equation. We illustrate the theory in the context of human arm movements. The ideas of optimality and compositionality are both very prominent in the field of motor control, yet they have been difficult to reconcile. Our work makes this possible.", "title": "" } ]
[ { "docid": "6dd440495dacfa43e1926fcdaa063aab", "text": "In this paper we revise the state of the art on personality-aware recommender systems, identifying main research trends and achievements up to date, and discussing open issues that may be addressed in the future.", "title": "" }, { "docid": "7a24f978a349c897c1ae91de66b2cdc6", "text": "Synthetic biology is a research field that combines the investigative nature of biology with the constructive nature of engineering. Efforts in synthetic biology have largely focused on the creation and perfection of genetic devices and small modules that are constructed from these devices. But to view cells as true 'programmable' entities, it is now essential to develop effective strategies for assembling devices and modules into intricate, customizable larger scale systems. The ability to create such systems will result in innovative approaches to a wide range of applications, such as bioremediation, sustainable energy production and biomedical therapies.", "title": "" }, { "docid": "fec345f9a3b2b31bd76507607dd713d4", "text": "E-government is a relatively new branch of study within the Information Systems (IS) field. This paper examines the factors influencing adoption of e-government services by citizens. Factors that have been explored in the extant literature present inadequate understanding of the relationship that exists between ‘adopter characteristics’ and ‘behavioral intention’ to use e-government services. These inadequacies have been identified through a systematic and thorough review of empirical studies that have considered adoption of government to citizen (G2C) electronic services by citizens. This paper critically assesses key factors that influence e-government service adoption; reviews limitations of the research methodologies; discusses the importance of 'citizen characteristics' and 'organizational factors' in adoption of e-government services; and argues for the need to examine e-government service adoption in the developing world.", "title": "" }, { "docid": "32f3396d7e843f75c504cd99b00944a0", "text": "This paper aims to address the very challenging problem of efficient and accurate hand tracking from depth sequences, meanwhile to deform a high-resolution 3D hand model with geometric details. We propose an integrated regression framework to infer articulated hand pose, and regress high-frequency details from sparse high-resolution 3D hand model examples. Specifically, our proposed method mainly consists of four components: skeleton embedding, hand joint regression, skeleton alignment, and high-resolution details integration. Skeleton embedding is optimized via a wrinkle-based skeleton refinement method for faithful hand models with fine geometric details. Hand joint regression is based on a deep convolutional network, from which 3D hand joint locations are predicted from a single depth map, then a skeleton alignment stage is performed to recover fully articulated hand poses. Deformable fine-scale details are estimated from a nonlinear mapping between the hand joints and per-vertex displacements. Experiments on two challenging datasets show that our proposed approach can achieve accurate, robust, and real-time hand tracking, while preserve most high-frequency details when deforming a virtual hand.", "title": "" }, { "docid": "89039f8d247b3f178c0be6a1f30004b8", "text": "We study the property of the Fused Lasso Signal Approximator (FLSA) for estimating a blocky signal sequence with additive noise. 
We transform the FLSA to an ordinary Lasso problem, and find that in general the resulting design matrix does not satisfy the irrepresentable condition that is known as an almost necessary and sufficient condition for exact pattern recovery. We give necessary and sufficient conditions on the expected signal pattern such that the irrepresentable condition holds in the transformed Lasso problem. However, these conditions turn out to be very restrictive. We apply the newly developed preconditioning method — Puffer Transformation (Jia and Rohe, 2015) to the transformed Lasso and call the new procedure the preconditioned fused Lasso. We give nonasymptotic results for this method, showing that as long as the signal-to-noise ratio is not too small, our preconditioned fused Lasso estimator always recovers the correct pattern with high probability. Theoretical results give insight into what controls the ability of recovering the pattern — it is the noise level instead of the length of the signal sequence. Simulations further confirm our theorems and visualize the significant improvement of the preconditioned fused Lasso estimator over the vanilla FLSA in exact pattern recovery. © 2015 Published by Elsevier B.V.", "title": "" }, { "docid": "de3f00cbd1a907423b73b712fe592785", "text": "The Internet of Things (IoT) has introduced a myriad of ways in which devices can interact with each other. The IoT concept provides opportunities for novel and useful applications but at the same time, concerns have been raised over potential security issues caused by buggy IoT software. It is therefore imperative to detect and fix these bugs in order to minimise the risk of IoT devices becoming the target or source of attacks. In this paper, we focus our investigation on the underlying IoT operating system (OS), which is critical for the overall security of IoT devices. We picked Contiki as our case study since it is a very popular IoT OS and we have access to part of the development team, allowing us to discuss potential vulnerabilities with them so that fixes can be implemented quickly. Using static program analysis tools and techniques, we are able to scan the source code of the Contiki OS systematically in order to identify, analyse and patch vulnerabilities. Our main contribution is a holistic and systematic analysis of Contiki, starting with an exploration of its metrics, fundamental architecture, and finally some of its vulnerabilities. Our analysis produced relevant data on the number of unsafe functions in use, as well as the bug density; both of which provide an indication of the overall security of the inspected system. Our effort led to the finding of two major issues, described in two Common Vulnerabilities and Exposures (CVE) reports.", "title": "" }, { "docid": "a0640bbfa22020e216d4ab5dfefa9bc0", "text": "Clozapine has demonstrated superior efficacy in relieving positive and negative symptoms in treatment-resistant schizophrenic patients; unlike other antipsychotics, it causes minimal extrapyramidal side effects (EPS) and has little effect on serum prolactin. Despite these benefits, the use of clozapine has been limited because of infrequent but serious side effects, the most notable being agranulocytosis. In recent years, however, mandatory blood monitoring has significantly reduced both the incidence of agranulocytosis and its associated mortality. The occurrence of seizures appears to be dose-related and can generally be managed by reduction in clozapine dosage. 
Less serious and more common side effects of clozapine including sedation, hypersalivation, tachycardia, hypotension, hypertension, weight gain, constipation, urinary incontinence, and fever can often be managed medically and are generally tolerated by the patient. Appropriate management of clozapine side effects facilitates a maximization of the benefits of clozapine treatment, and physicians and patients alike should be aware that there is a range of benefits to clozapine use that is wider than its risks.", "title": "" }, { "docid": "e0debf78f4b9a8addeeaa298c08a68d9", "text": "The paper outlines a pilot test on the UCSB Wearable Computer developed in conjunction with Project Battuta. The UCSB Wearable incorporates geographic context information (location and orientation) to display an electronic digital map to a user via a Heads-Up Display (HUD). Features of the custom designed Geographic Referenced Graphic Environment (GEORGE) software allow the user to switch between types of maps, and automatically center and rotate the map. The pilot study used pre and post-experiment questionnaires, performance records and interaction logs to evaluate which aspects of the system were most useful both in terms of user attitudes and objective performance on a navigation task. Zoom, pan and auto-rotate features were shown to be well used, while subjects showed indifference to map type and perspective view options. Performance on the wayfinding task generally improved over each segment of the trial, with subjects approaching normal walking speeds at the end of the trial.", "title": "" }, { "docid": "654d078b7aa669ea730630e7e37b64b5", "text": "Cancers are believed to arise from cancer stem cells (CSCs), but it is not known if these cells remain dependent upon the niche microenvironments that regulate normal stem cells. We show that endothelial cells interact closely with self-renewing brain tumor cells and secrete factors that maintain these cells in a stem cell-like state. Increasing the number of endothelial cells or blood vessels in orthotopic brain tumor xenografts expanded the fraction of self-renewing cells and accelerated the initiation and growth of tumors. Conversely, depletion of blood vessels from xenografts ablated self-renewing cells from tumors and arrested tumor growth. We propose that brain CSCs are maintained within vascular niches that are important targets for therapeutic approaches.", "title": "" }, { "docid": "e1b6de27518c1c17965a891a8d14a1e1", "text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. 
Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.", "title": "" }, { "docid": "9fc869c7e7d901e418b1b69d636cbd33", "text": "Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published which parameters and design choices should be evaluated or selected making the correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50.000 different setups and found, that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well among different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available.1 This publication explains in detail the experimental setup and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).2", "title": "" }, { "docid": "94a59f1c20a6476035a00d86c222a08b", "text": "Lateral transshipments within an inventory system are stock movements between locations of the same echelon. These transshipments can be conducted periodically at predetermined points in time to proactively redistribute stock, or they can be used reactively as a method of meeting demand which cannot be satisfied from stock on hand. The elements of an inventory system considered, e.g. size, cost structures and service level definition, all influence the best method of transshipping. Models of many different systems have been considered. This paper provides a literature review which categorizes the research to date on lateral transshipments, so that these differences can be understood and gaps within the literature can be identified.", "title": "" }, { "docid": "c229a2ebe7ce4d8088b1decf596053c7", "text": "We study the infinitely many-armed bandit problem with budget constraints, where the number of arms can be infinite and much larger than the number of possible experiments. The player aims at maximizing his/her total expected reward under a budget constraint B for the cost of pulling arms. We introduce a weak stochastic assumption on the ratio of expected-reward to expected-cost of a newly pulled arm which characterizes its probability of being a near-optimal arm. We propose an algorithm named RCB-I to this new problem, in which the player first randomly picks K arms, whose order is sub-linear in terms of B, and then runs the algorithm for the finite-arm setting on the selected arms. Theoretical analysis shows that this simple algorithm enjoys a sub-linear regret in term of the budget B. We also provide a lower bound of any algorithm under Bernoulli setting. The regret bound of RCB-I matches the lower bound up to a logarithmic factor. 
We further extend this algorithm to the any-budget setting (i.e., the budget is unknown in advance) and conduct corresponding theoretical analysis.", "title": "" }, { "docid": "fc172716fe01852d53d0ae5d477f3afc", "text": "Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from the noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in the knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists of two base networks leveraging the text corpus and the knowledge graph, respectively, and a cooperative module involving their mutual learning by the adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.", "title": "" }, { "docid": "906b785365a27e5d9c7f0a622996264b", "text": "In this paper, we put forward a new pre-processing scheme for automatic analysis of dermoscopic images. Our contributions are two-fold. First, we present a procedure, an extension of previous approaches, which succeeds in removing confounding factors from dermoscopic images: these include shading induced by imaging non-flat skin surfaces and the effect of light-intensity falloff toward the edges of the dermoscopic image. This procedure is shown to facilitate the detection and removal of artifacts such as hairs as well. Second, we present a novel simple yet effective greyscale conversion approach that is based on the physics and biology of human skin. Our proposed greyscale image provides high separability between a pigmented lesion and the normal skin surrounding it. Finally, using our pre-processing scheme, we perform segmentation based on simple grey-level thresholding, with results outperforming the state of the art.", "title": "" }, { "docid": "e91e1a2bdd90cec352cb566f8c556c68", "text": "This paper deals with a new MRAM technology whose writing scheme relies on the Spin Orbit Torque (SOT). Compared to Spin Transfer Torque (STT) MRAM, it offers very fast switching and a quasi-infinite endurance, and improves the reliability by solving the issue of “read disturb”, thanks to separate reading and writing paths. These properties allow introducing SOT at all levels of the memory hierarchy of systems and addressing applications which could not be easily implemented by STT-MRAM. We present this emerging technology and a full design framework, allowing the design and simulation of hybrid CMOS/SOT complex circuits at any level of abstraction, from device to system. The results obtained are very promising and show that this technology leads to a reduced power consumption of circuits without notable penalty in terms of performance.", "title": "" }, { "docid": "d68cab42b5f69e16238da749c95bbfd3", "text": "Does trait self-control (TSC) predict affective well-being and life satisfaction--positively, negatively, or not? We conducted three studies (Study 1: N = 414, 64% female, Mage = 35.0 years; Study 2: N = 208, 66% female, Mage = 25.24 years; Study 3: N = 234, 61% female, Mage = 34.53 years). The key predictor was TSC, with affective well-being and life satisfaction ratings as key outcomes. 
Potential explanatory constructs including goal conflict, goal balancing, and emotional distress also were investigated. TSC is positively related to affective well-being and life satisfaction, and managing goal conflict is a key as to why. All studies, moreover, showed that the effect of TSC on life satisfaction is at least partially mediated by affect. Study 1's correlational study established the effect. Study 2's experience sampling approach demonstrated that compared to those low in TSC, those high in TSC experience higher levels of momentary affect even as they experience desire, an effect partially mediated through experiencing lower conflict and emotional distress. Study 3 found evidence for the proposed mechanism--that TSC may boost well-being by helping people avoid frequent conflict and balance vice-virtue conflicts by favoring virtues. Self-control positively contributes to happiness through avoiding and dealing with motivational conflict.", "title": "" }, { "docid": "9ac8ce316225509a0fb644001d960535", "text": "The display of statistical information is ubiquitous in all fields of visualization. Whether aided by graphs, tables, plots, or integrated into the visualizations themselves, understanding the best way to convey statistical information is important. Highlighting the box plot, a survey of traditional methods for expressing specific statistical characteristics of data is presented. Reviewing techniques for the expression of statistical measures will be increasingly important as data quality, confidence and uncertainty are becoming influential characteristics to integrate into visualizations.", "title": "" }, { "docid": "598744a94cbff466c42e6788d5e23a79", "text": "The energy consumption of DRAM is a critical concern in modern computing systems. Improvements in manufacturing process technology have allowed DRAM vendors to lower the DRAM supply voltage conservatively, which reduces some of the DRAM energy consumption. We would like to reduce the DRAM supply voltage more aggressively, to further reduce energy. Aggressive supply voltage reduction requires a thorough understanding of the effect voltage scaling has on DRAM access latency and DRAM reliability.\n In this paper, we take a comprehensive approach to understanding and exploiting the latency and reliability characteristics of modern DRAM when the supply voltage is lowered below the nominal voltage level specified by DRAM standards. Using an FPGA-based testing platform, we perform an experimental study of 124 real DDR3L (low-voltage) DRAM chips manufactured recently by three major DRAM vendors. We find that reducing the supply voltage below a certain point introduces bit errors in the data, and we comprehensively characterize the behavior of these errors. We discover that these errors can be avoided by increasing the latency of three major DRAM operations (activation, restoration, and precharge). We perform detailed DRAM circuit simulations to validate and explain our experimental findings. We also characterize the various relationships between reduced supply voltage and error locations, stored data patterns, DRAM temperature, and data retention.\n Based on our observations, we propose a new DRAM energy reduction mechanism, called Voltron. The key idea of Voltron is to use a performance model to determine by how much we can reduce the supply voltage without introducing errors and without exceeding a user-specified threshold for performance loss. 
Our evaluations show that Voltron reduces the average DRAM and system energy consumption by 10.5% and 7.3%, respectively, while limiting the average system performance loss to only 1.8%, for a variety of memory-intensive quad-core workloads. We also show that Voltron significantly outperforms prior dynamic voltage and frequency scaling mechanisms for DRAM.", "title": "" }, { "docid": "7691fba64da5d36d57d11d7319f742a4", "text": "The design of flow control systems remains a challenge due to the nonlinear nature of the equations that govern fluid flow. However, recent advances in computational fluid dynamics (CFD) have enabled the simulation of complex fluid flows with high accuracy, opening the possibility of using learning-based approaches to facilitate controller design. We present a method for learning the forced and unforced dynamics of airflow over a cylinder directly from CFD data. The proposed approach, grounded in Koopman theory, is shown to produce stable dynamical models that can predict the time evolution of the cylinder system over extended time horizons. Finally, by performing model predictive control with the learned dynamical models, we are able to find a straightforward, interpretable control law for suppressing vortex shedding in the wake of the cylinder.", "title": "" } ]
scidocsrr
66ce6f79c0101eba98e49736ff026edd
A study to support agile methods more effectively through traceability
[ { "docid": "6b9d8ff2c31b672832e2a81fbbcde583", "text": "ion in Rationale Models. The design goal of KBSA-ADM was to offer a coherent series of rationale models based on results of the REMAP project (Ramesh and Dhar 1992) for maintaining rationale at different levels of detail. Figure 19: Simple Rationale Model The model sketched in Figure 19 is used for capturing rationale at a simple level of detail. It links an OBJECT with its RATIONALE. The model in Figure 19 also provides for the explicit representation of ASSUMPTIONS and DEPENDENCIES among them. Thus, using this model, the assumptions providing justifications to the creation of objects can be explicitly identified and reasoned with. As changes in such assumptions are a primary factor in the", "title": "" } ]
[ { "docid": "f73881fdb6b732e7a6a79cd13618e649", "text": "Information exchange among coalition command and control (C2) systems in network-enabled environments requires ensuring that each recipient system understands and interprets messages exactly as the source system intended. The Semantic Interoperability Logical Framework (SILF) aims at meeting NATO's needs for semantically correct interoperability between C2 systems, as well as the need to adapt quickly to new missions and new combinations of coalition partners and systems. This paper presents an overview of the SILF framework and performs a detailed analysis of a case study for implementing SILF in a real-world military scenario.", "title": "" }, { "docid": "56a35139eefd215fe83811281e4e2279", "text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d9387322d796059173c704194a090304", "text": "Emotional and neutral sounds rated for valence and arousal were used to investigate the influence of emotions on timing in reproduction and verbal estimation tasks with durations from 2 s to 6 s. Results revealed an effect of emotion on temporal judgment, with emotional stimuli judged to be longer than neutral ones for a similar arousal level. Within scalar expectancy theory (J. Gibbon, R. Church, & W. Meck, 1984), this suggests that emotion-induced activation generates an increase in pacemaker rate, leading to a longer perceived duration. A further exploration of self-assessed emotional dimensions showed an effect of valence and arousal. Negative sounds were judged to be longer than positive ones, indicating that negative stimuli generate a greater increase of activation. High-arousing stimuli were perceived to be shorter than low-arousing ones. Consistent with attentional models of timing, this seems to reflect a decrease of attention devoted to time, leading to a shorter perceived duration. These effects, robust across the 2 tasks, are limited to short intervals and overall suggest that both activation and attentional processes modulate the timing of emotional events.", "title": "" }, { "docid": "ac08d20a1430ee10c7ff761cae9d9ada", "text": "OBJECTIVES\nTo evaluate the clinical response at 12 month in a cohort of patients with rheumatoid arthritis treated with Etanar (rhTNFR:Fc), and to register the occurrence of adverse effects.\n\n\nMETHODS\nThis is a multicentre observational cohort study. 
It included patients over 18 years of age with an active rheumatoid arthritis diagnosis for which the treating physician had begun a treatment scheme of 25 mg of subcutaneous etanercept (Etanar ® 25 mg: biologic type rhTNFR:Fc), twice per week. Follow-up was done during 12 months, with assessments at weeks 12, 24, 36 and 48. Evaluated outcomes included tender joint count, swollen joint count, ACR20, ACR50, ACR70, HAQ and DAS28.\n\n\nRESULTS\nOne-hundred and five (105) subjects were entered into the cohort. The median of tender and swollen joint count, ranged from 19 and 14, respectively at onset to 1 at the 12th month. By month 12, 90.5% of the subjects reached ACR20, 86% ACR50, and 65% ACR70. The median of DAS28 went from 4.7 to 2, and the median HAQ went from 1.3 to 0.2. The rate of adverse effects was 14 for every 100 persons per year. No serious adverse effects were reported. The most frequent were pruritus (5 cases), and rhinitis (3 cases).\n\n\nCONCLUSIONS\nAfter a year of following up a patient cohort treated with etanercept 25 mg twice per week, significant clinical results were observed, resulting in adequate disease control in a high percentage of patients with an adequate level of safety.", "title": "" }, { "docid": "9446421ed0c69e8e0eadc39674283625", "text": "The paper presents the main results of a previously developed methodology to better evaluate new technologies in Smart Cities, using a tool to evaluate different systems and technologies regarding their usefulness, considering each application and how technologies can impact the physical space and natural environment. Technologies have also been evaluated according to how they are used by citizens, who must be the main concern of all urban development. Through a survey conducted among the Smart City Spanish network (RECI) we found that the ICT’s that change our cities everyday must be reviewed, developing an innovative methodology in order to find an analysis matrix to assess and score all the technologies that affect a Smart City strategy. The paper provides the results of this methodology regarding the three main aspects to be considered in urban developments: mobility, energy efficiency, and quality of life after obtaining the final score for every analyzed technology. This methodology fulfills an identified need to study how new technologies could affect urban scenarios before being applied, developing an analysis system to be used by urban planners and policy-makers to decide how best to use them, and this paper tries to show, in a simple way, how they can appreciate the variances between different solutions.", "title": "" }, { "docid": "7f9b9bef62aed80a918ef78dcd15fb2a", "text": "Transferring image-based object detectors to domain of videos remains a challenging problem. Previous efforts mostly exploit optical flow to propagate features across frames, aiming to achieve a good trade-off between performance and computational complexity. However, introducing an extra model to estimate optical flow would significantly increase the overall model size. The gap between optical flow and high-level features can hinder it from establishing the spatial correspondence accurately. Instead of relying on optical flow, this paper proposes a novel module called Progressive Sparse Local Attention (PSLA), which establishes the spatial correspondence between features across frames in a local region with progressive sparse strides and uses the correspondence to propagate features. 
Based on PSLA, Recursive Feature Updating (RFU) and Dense Feature Transforming (DFT) are introduced to model temporal appearance and enrich feature representation, respectively. Finally, a novel framework for video object detection is proposed. Experiments on ImageNet VID are conducted. Our framework achieves a state-of-the-art speed-accuracy trade-off with significantly reduced model capacity.", "title": "" }, { "docid": "aa52a5764fc0b95e11d3088f7cdc7448", "text": "Generative Adversarial Networks (GANs) have received wide attention in the machine learning field for their potential to learn high-dimensional, complex real data distributions. Specifically, they do not rely on any assumptions about the distribution and can generate real-like samples from latent space in a simple manner. This powerful property allows GANs to be applied to various applications such as image synthesis, image attribute editing, image translation, domain adaptation, and other academic fields. In this article, we discuss the details of GANs for those readers who are familiar with, but do not comprehend GANs deeply, or who wish to view GANs from various perspectives. In addition, we explain how GANs operate and the fundamental meaning of various objective functions that have been suggested recently. We then focus on how the GAN can be combined with an autoencoder framework. Finally, we enumerate the GAN variants that are applied to various tasks and other fields for those who are interested in exploiting GANs for their research.", "title": "" }, { "docid": "edda7891a323b5c23b3f2f1519309c40", "text": "As digital technologies proliferate in the home, the Human Computer Interaction (HCI) community has turned its attention from the workplace and productivity tools towards domestic design environments and non-utilitarian activities. In the workplace, applications tend to focus on productivity and efficiency and involve relatively well-understood requirements and methodologies, but in domestic design environments we are faced with the need to support new classes of activities. While usability is still central to the field, HCI is beginning to address considerations such as pleasure, fun, emotional effect, aesthetics, the experience of use, and the social and cultural impacts of new technologies. These considerations are particularly relevant to the home, where technologies are situated or embedded within an ecology that is rich with meaning and nuance. The aim of this workshop is to explore ways of designing domestic technology by incorporating an awareness of cultural context, accrued social meanings, and user experience.", "title": "" }, { "docid": "e6ff00b275f28864fb98af7f9643beca", "text": "Although the distributed file system is a widely used technology in local area networks, it has seen less use on the wide area networks that connect clusters, clouds, and grids. One reason for this is access control: existing file system technologies require either the client machine to be fully trusted, or the client process to hold a high-value user credential, neither of which is practical in large-scale systems. To address this problem, we have designed a system for fine-grained access control which dramatically reduces the amount of trust required of a batch job accessing a distributed file system. We have implemented this system in the context of the Chirp user-level distributed file system used in clusters, clouds, and grids, but the concepts can be applied to almost any other storage system. 
The system is evaluated to show that performance and scalability are similar to other authentication methods. The paper concludes with a discussion of integrating the authentication system into workflow systems.", "title": "" }, { "docid": "161e4dabbe73fa86605f8070d8cc1855", "text": "There has been much recent interest in adapting data mining algorithms to time series databases. Many of these algorithms need to compare time series. Typically, some variation or extension of Euclidean distance is used. However, as we demonstrate in this paper, Euclidean distance can be an extremely brittle distance measure. Dynamic time warping (DTW) has been suggested as a technique to allow more robust distance calculations; however, it is computationally expensive. In this paper we introduce a modification of DTW which operates on a higher-level abstraction of the data, in particular, a piecewise linear representation. We demonstrate that our approach allows us to outperform DTW by one to three orders of magnitude. We experimentally evaluate our approach on medical, astronomical and sign language data.", "title": "" }, { "docid": "43af3570e8eeee6cf113991e6c0994cf", "text": "The main goal of modeling human conversation is to create agents which can interact with people in both open-ended and goal-oriented scenarios. End-to-end trained neural dialog systems are an important line of research for such generalized dialog models as they do not resort to any situation-specific handcrafting of rules. However, incorporating personalization into such systems is a largely unexplored topic as there are no existing corpora to facilitate such work. In this paper, we present a new dataset of goal-oriented dialogs which are influenced by speaker profiles attached to them. We analyze the shortcomings of an existing end-to-end dialog system based on Memory Networks and propose modifications to the architecture which enable personalization. We also investigate personalization in dialog as a multi-task learning problem, and show that a single model which shares features among various profiles outperforms separate models for each profile.", "title": "" }, { "docid": "0daa43669ae68a81e5eb71db900976c6", "text": "Fertilizer plays an important role in maintaining soil fertility, increasing yields and improving harvest quality. However, a significant portion of fertilizers are lost, increasing agricultural cost, wasting energy and polluting the environment, which are challenges for the sustainability of modern agriculture. To meet the demands of improving yields without compromising the environment, environmentally friendly fertilizers (EFFs) have been developed. EFFs are fertilizers that can reduce environmental pollution from nutrient loss by retarding, or even controlling, the release of nutrients into soil. Most EFFs are employed in the form of coated fertilizers. The application of degradable natural materials as a coating when amending soils is the focus of EFF research. Here, we review recent studies on materials used in EFFs and their effects on the environment. 
The major findings covered in this review are as follows: 1) EFF coatings can prevent urea exposure in water and soil by serving as a physical barrier, thereby reducing the urea hydrolysis rate and decreasing nitrogen oxide (NOx) and dinitrogen (N2) emissions, 2) EFFs can increase the soil organic matter content, 3) hydrogel/superabsorbent coated EFFs can buffer soil acidity or alkalinity and lead to an optimal pH for plants, and 4) hydrogel/superabsorbent coated EFFs can improve water-retention and water-holding capacity of soil. In conclusion, EFFs play an important role in enhancing nutrients efficiency and reducing environmental pollution.", "title": "" }, { "docid": "2c5e280525168d71d1a48fec047b5a23", "text": "This paper presents the implementation of four channel Electromyography (EMG) signal acquisition system for acquiring the EMG signal of the lower limb muscles during ankle joint movements. Furthermore, some post processing and statistical analysis for the recorded signal were presented. Four channels were implemented using instrumentation amplifier (INA114) for pre-amplification stage then the amplified signal subjected to the band pass filter to eliminate the unwanted signals. Operational amplifier (OPA2604) was involved for the main amplification stage to get the output signal in volts. The EMG signals were detected during movement of the ankle joint of a healthy subject. Then the signal was sampled at the rate of 2 kHz using NI6009 DAQ and Labview used for displaying and storing the acquired signal. For EMG temporal representation, mean absolute value (MAV) analysis algorithm is used to investigate the level of the muscles activity. This data will be used in future as a control input signal to drive the ankle joint exoskeleton robot.", "title": "" }, { "docid": "07b362c7f6e941513cfbafce1ba87db1", "text": "ResearchGate is increasingly used by scholars to upload the full-text of their articles and make them freely available for everyone. This study aims to investigate the extent to which ResearchGate members as authors of journal articles comply with publishers’ copyright policies when they self-archive full-text of their articles on ResearchGate. A random sample of 500 English journal articles available as full-text on ResearchGate were investigated. 108 articles (21.6%) were open access (OA) published in OA journals or hybrid journals. Of the remaining 392 articles, 61 (15.6%) were preprint, 24 (6.1%) were post-print and 307 (78.3%) were published (publisher) PDF. The key finding was that 201 (51.3%) out of 392 non-OA articles infringed the copyright and were non-compliant with publishers’ policy. While 88.3% of journals allowed some form of self-archiving (SHERPA/RoMEO green, blue or yellow journals), the majority of non-compliant cases (97.5%) occurred when authors self-archived publishers’ PDF files (final published version). This indicates that authors infringe copyright most of the time not because they are not allowed to self-archive, but because they use the wrong version, which might imply their lack of understanding of copyright policies and/or complexity and diversity of policies.", "title": "" }, { "docid": "59970fa92db7948a5fa51fcfdefbc86e", "text": "In this article we propose to facilitate local peer-to-peer communication by a Device-to-Device (D2D) radio that operates as an underlay network to an IMT-Advanced cellular network. 
It is expected that local services may utilize mobile peer-to-peer communication instead of central server based communication for rich multimedia services. The main challenge of the underlay radio in a multi-cell environment is to limit the interference to the cellular network while achieving a reasonable link budget for the D2D radio. We propose a novel power control mechanism for D2D connections that share cellular uplink resources. The mechanism limits the maximum D2D transmit power utilizing cellular power control information of the devices in D2D communication. Thereby it enables underlaying D2D communication even in interference-limited networks with full load and without degrading the performance of the cellular network. Secondly, we study a single cell scenario consisting of a device communicating with the base station and two devices that communicate with each other. The results demonstrate that the D2D radio, sharing the same resources as the cellular network, can provide higher capacity (sum rate) compared to pure cellular communication where all the data is transmitted through the base station.", "title": "" }, { "docid": "39838881287fd15b29c20f18b7e1d1eb", "text": "In the software industry, a challenge firms often face is how to effectively commercialize innovations. An emerging business model increasingly embraced by entrepreneurs, called freemium, combines “free” and “premium” consumption in association with a product or service. In a nutshell, this model involves giving away for free a certain level or type of consumption while making money on premium consumption. We develop a unifying multi-period microeconomic framework with network externalities embedded into consumer learning in order to capture the essence of conventional for-fee models, several key freemium business models such as feature-limited or time-limited, and uniform market seeding models. Under moderate informativeness of word-of-mouth signals, we fully characterize conditions under which firms prefer freemium models, depending on consumer priors on the value of individual software modules, perceptions of crossmodule synergies, and overall value distribution across modules. Within our framework, we show that uniform seeding is always dominated by either freemium models or conventional for-fee models. We further discuss managerial and policy implications based on our analysis. Interestingly, we show that freemium, in one form or another, is always preferred from the social welfare perspective, and we provide guidance on when the firms need to be incentivized to align their interests with the society’s. Finally, we discuss how relaxing some of the assumptions of our model regarding costs or informativeness and heterogeneity of word of mouth may reduce the profit gap between seeding and the other models, and potentially lead to seeding becoming the preferred approach for the firm.", "title": "" }, { "docid": "6d80c1d1435f016b124b2d61ef4437a5", "text": "Recent high profile developments of autonomous learning thermostats by companies such as Nest Labs and Honeywell have brought to the fore the possibility of ever greater numbers of intelligent devices permeating our homes and working environments into the future. However, the specific learning approaches and methodologies utilised by these devices have never been made public. In fact little information is known as to the specifics of how these devices operate and learn about their environments or the users who use them. 
This paper proposes a suitable learning architecture for such an intelligent thermostat in the hope that it will benefit further investigation by the research community. Our architecture comprises a number of different learning methods each of which contributes to create a complete autonomous thermostat capable of controlling a HVAC system. A novel state action space formalism is proposed to enable a Reinforcement Learning agent to successfully control the HVAC system by optimising both occupant comfort and energy costs. Our results show that the learning thermostat can achieve cost savings of 10% over a programmable thermostat, whilst maintaining high occupant comfort standards.", "title": "" }, { "docid": "89f85a4a20735222867c5f0b4623f0a1", "text": "Arabic is one of the major languages in the world. Unfortunately not so much research in Arabic speaker recognition has been done. One main reason for this lack of research is the unavailability of rich Arabic speech databases. In this paper, we present a rich and comprehensive Arabic speech database that we developed for the Arabic speaker / speech recognition research and/or applications. The database is rich in different aspects: (a) it has 752 speakers; (b) the speakers are from different ethnic groups: Saudis, Arabs, and non-Arabs; (c) utterances are both read text and spontaneous; (d) scripts are of different dimensions, such as, isolated words, digits, phonetically rich words, sentences, phonetically balanced sentences, paragraphs, etc.; (e) different sets of microphones with medium and high quality; (f) telephony and non-telephony speech; (g) three different recording environments: office, sound proof room, and cafeteria; (h) three different sessions, where the recording sessions are scheduled at least with 2 weeks interval. Because of the richness of this database, it can be used in many Arabic, and non-Arabic, speech processing researches, such as speaker / speech recognition, speech analysis, accent identification, ethnic groups / nationality recognition, etc. The richness of the database makes it a valuable resource for research in Arabic speech processing in particular and for research in speech processing in general. The database was carefully manually verified. The manual verification was complemented with automatic verification. Validation was performed on a subset of the database where the recognition rate reached 100% for Saudi speakers and 96% for non-Saudi speakers by using a system with 12 Mel frequency Cepstral coefficients, and 32 Gaussian mixtures.", "title": "" }, { "docid": "235fc12dc2f741dacede5f501b028cd3", "text": "Self-adaptive software is capable of evaluating and changing its own behavior, whenever the evaluation shows that the software is not accomplishing what it was intended to do, or when better functionality or performance may be possible. The topic of system adaptivity has been widely studied since the mid-60s and, over the past decade, several application areas and technologies relating to self-adaptivity have assumed greater importance. In all these initiatives, software has become the common element that introduces self-adaptability. Thus, the investigation of systematic software engineering approaches is necessary, in order to develop self-adaptive systems that may ideally be applied across multiple domains. 
The main goal of this study is to review recent progress on self-adaptivity from the standpoint of computer sciences and cybernetics, based on the analysis of state-of-the-art approaches reported in the literature. This review provides an over-arching, integrated view of computer science and software engineering foundations. Moreover, various methods and techniques currently applied in the design of self-adaptive systems are analyzed, as well as some European research initiatives and projects. Finally, the main bottlenecks for the effective application of self-adaptive technology, as well as a set of key research issues on this topic, are precisely identified, in order to overcome current constraints on the effective application of self-adaptivity in its emerging areas of application. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "472519682e5b086732b31e558ec7934d", "text": "As networks become ubiquitous in people's lives, users depend on networks a lot for sufficient communication and convenient information access. However, networks suffer from security issues. Network security becomes a challenging topic since numerous new network attacks have appeared increasingly sophisticated and caused vast loss to network resources. Game theoretic approaches have been introduced as a useful tool to handle those tricky network attacks. In this paper, we review the existing game-theory based solutions for network security problems, classifying their application scenarios under two categories, attack-defense analysis and security measurement. Moreover, we present a brief view of the game models in those solutions and summarize them into two categories, cooperative game models and non-cooperative game models with the latter category consisting of subcategories. In addition to the introduction to the state of the art, we discuss the limitations of those game theoretic approaches and propose future research directions.", "title": "" } ]
scidocsrr
3aae221d963c15d6122008e238c537cc
Automatic Summarization of Open-Domain Multiparty Dialogues in Diverse Genres
[ { "docid": "fb81f9419861a20b2e6e45ba04bb0ce1", "text": "It has been said for decades (if not centuries) that more and more information is becoming available and that tools are needed to handle it. Only recently, however, does it seem that a sufficient quantity of this information is electronically available to produce a widespread need for automatic summarization. Consequently, this research area has enjoyed a resurgence of interest in the past few years, as illustrated by a 1997 ACL Workshop, a 1998 AAAI Spring Symposium and in the same year SUMMAC: a TREC-like TIPSTER-funded summarization evaluation conference. Not unexpectedly, there is now a book to add to this list: Advances in Automatic Summarization, a collection of papers edited by Inderjeet Mani and Mark T. Maybury and published by The MIT Press. Half of it is a historical record: thirteen previously published papers, including classics such as Luhn’s 1958 word-counting sentence-extraction paper, Edmundson’s 1969 use of cue words and phrases, and Kupiec, Pedersen, and Chen’s 1995 trained summarizer. The other half of the book holds new papers, which attempt to cover current issues and point to future trends. It starts with a paper by Karen Spärck Jones, which acts as an overall introduction. In it, the summarization process and the uses of summaries are broken down into their constituent parts and each of these is discussed (it reminded me of a much earlier Spärck Jones paper on categorization [1970]). Despite its comprehensiveness and authority, I must confess to finding this opener heavy going at times. The rest of the papers are grouped into six sections, each of which is prefaced with two or three well-written pages from the editors. These introductions contain valuable commentary on the coming papers—even pointing out a possible flaw in the evaluation part of one. The opening section holds three papers on so-called classical approaches. Here one finds the oft-cited papers of Luhn, Edmundson, and Pollock and Zamora. As a package, these papers provide a novice with a good idea of how basic summarization works. My only quibble was in their reproduction. In Luhn’s paper, an article from Scientific American is summarized and it would have been beneficial to have this included in the book as well. Some of the figures in another paper contained very small fonts and were hard to read; fixing this for a future print run is probably worth thinking about. The next section holds papers on corpus-based approaches to summarization, starting with Kupiec et al.’s paper about a summarizer trained on an existing corpus of manually abstracted documents. Two new papers building upon the Kupiec et al. work follow this. Exploiting the discourse structure of a document is the topic of the next section. Of the five papers here, I thought Daniel Marcu’s was the best, nicely describing summarization work so far and then clearly explaining his system, which is based on Rhetorical Structure Theory. The following section on knowledge-rich approaches to summarization covers such things as Wendy Lehnert’s work on breaking", "title": "" } ]
[ { "docid": "380492dfcbd6da60cdc0c02b6957c587", "text": "The New Yorker publishes a weekly captionless cartoon. More than 5,000 readers submit captions for it. The editors select three of them and ask the readers to pick the funniest one. We describe an experiment that compares a dozen automatic methods for selecting the funniest caption. We show that negative sentiment, human-centeredness, and lexical centrality most strongly match the funniest captions, followed by positive sentiment. These results are useful for understanding humor and also in the design of more engaging conversational agents in text and multimodal (vision+text) systems. As part of this work, a large set of cartoons and captions is being made available to the community.", "title": "" }, { "docid": "1d73817f8b1b54a82308106ee526a62b", "text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.", "title": "" }, { "docid": "f26b35d1d8cc326c7e6baebc895df9fa", "text": "Correspondence: michelle.edwards@adelaide.edu.au University of Adelaide, Adelaide, SA 5005, Australia Full list of author information is available at the end of the article Abstract Analysing narratives through their social networks is an expanding field in quantitative literary studies. Manually extracting a social network from any narrative can be time consuming, so automatic extraction methods of varying complexity have been developed. However, the effect of different extraction methods on the analysis is unknown. Here we model and compare three extraction methods for social networks in narratives: manual extraction, co-occurrence automated extraction and automated extraction using machine learning. Although the manual extraction method produces more precise results in the network analysis, it is much more time consuming and the automatic extraction methods yield comparable conclusions for density, centrality measures and edge weights. Our results provide evidence that social networks extracted automatically are reliable for many analyses. We also describe which aspects of analysis are not reliable with such a social network. 
We anticipate that our findings will make it easier to analyse more narratives, which help us improve our understanding of how stories are written and evolve, and how people interact with each other.", "title": "" }, { "docid": "f391d24622a123cf35c56693ac3b0044", "text": "Web users are confronted with the daunting challenges of creating, remembering, and using more and more strong passwords than ever before in order to protect their valuable assets on different websites. Password manager is one of the most popular approaches designed to address these challenges by saving users' passwords and later automatically filling the login forms on behalf of users. Fortunately, all the five most popular Web browsers have provided password managers as a useful built-in feature. Unfortunately, the designs of all those Browser-based Password Managers (BPMs) have severe security vulnerabilities. In this paper, we uncover the vulnerabilities of existing BPMs and analyze how they can be exploited by attackers to crack users' saved passwords. Moreover, we propose a novel Cloud-based Storage-Free BPM (CSF-BPM) design to achieve a high level of security with the desired confidentiality, integrity, and availability properties. We have implemented a CSF-BPM system into Firefox and evaluated its correctness and performance. We believe CSF-BPM is a rational design that can also be integrated into other popular Web browsers.", "title": "" }, { "docid": "64a77ec55d5b0a729206d9af6d5c7094", "text": "In this paper, we propose an Internet of Things (IoT) virtualization framework to support connected objects sensor event processing and reasoning by providing a semantic overlay of underlying IoT cloud. The framework uses the sensor-as-aservice notion to expose IoT cloud's connected objects functional aspects in the form of web services. The framework uses an adapter oriented approach to address the issue of connectivity with various types of sensor nodes. We employ semantic enhanced access polices to ensure that only authorized parties can access the IoT framework services, which result in enhancing overall security of the proposed framework. Furthermore, the use of event-driven service oriented architecture (e-SOA) paradigm assists the framework to leverage the monitoring process by dynamically sensing and responding to different connected objects sensor events. We present our design principles, implementations, and demonstrate the development of IoT application with reasoning capability by using a green school motorcycle (GSMC) case study. Our exploration shows that amalgamation of e-SOA, semantic web technologies and virtualization paves the way to address the connectivity, security and monitoring issues of IoT domain.", "title": "" }, { "docid": "ee72e994f10f848c992f4e03e60d6cb3", "text": "Autonomous urban driving navigation with complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. The traditional modular pipeline heavily relies on hand-designed rules and the pre-processing perception system while the supervised learning-based models are limited by the accessibility of extensive human experience. We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach which successfully makes the driving agent achieve higher success rates based on only vision inputs in a high-fidelity car simulator. 
To alleviate the low exploration efficiency for large continuous action space that often prohibits the use of classical RL on challenging real tasks, our CIRL explores over a reasonably constrained action space guided by encoded experiences that imitate human demonstrations, building upon Deep Deterministic Policy Gradient (DDPG). Moreover, we propose to specialize adaptive policies and steering-angle reward designs for different control signals (i.e. follow, straight, turn right, turn left) based on the shared representations to improve the model capability in tackling with diverse cases. Extensive experiments on CARLA driving benchmark demonstrate that CIRL substantially outperforms all previous methods in terms of the percentage of successfully completed episodes on a variety of goal-directed driving tasks. We also show its superior generalization capability in unseen environments. To our knowledge, this is the first successful case of the learned driving policy by reinforcement learning in the high-fidelity simulator, which performs better than supervised imitation learning.", "title": "" }, { "docid": "d4820344d9c229ac15d002b667c07084", "text": "In this paper, we propose to integrate semantic similarity assessment in an edit distance algorithm, seeking to amend similarity judgments when comparing XML-based legal documents[3].", "title": "" }, { "docid": "a7a43e9f206f65a5b4f48ab4d6d59fdb", "text": "A study of more than nineteen hundred U.S. hotels for the years 2002 and 2003 found that a hotel’s net operating income percentage is most closely tied to its occupancy, although average daily rate (ADR) has a strong influence, as does market segment (also known as chain scale), the age of the property, and brand affiliation. A hotel’s size (that is, number of rooms) and location (e.g., urban or highway) also influence net operating income (NOI), but a hotel’s region does not significantly affect NOI percentage. The year 2002 data particularly show the importance of heads in beds. Hoteliers cut ADR heavily in that recession year, and those hotels that maintained strong occupancy were the ones that enjoyed strong NOI. While resorts and urban hotels generated the highest NOI in raw dollar volume, economy hotels had the highest NOI percentage and midscale hotels with food and beverage service (F&B) had the lowest NOI percentage.", "title": "" }, { "docid": "2321a11afd8a9f4da42a092ea43b544b", "text": "This paper proposes a method for recognizing postures and gestures using foot pressure sensors, and we investigate optimal positions for pressure sensors on soles are the best for motion recognition. In experiments, the recognition accuracies of 22 kinds of daily postures and gestures were evaluated from foot-pressure sensor values. Furthermore, the optimum measurement points for high recognition accuracy were examined by evaluating combinations of two foot pressure measurement areas on a round-robin basis. As a result, when selecting the optimum two points for a user, the recognition accuracy was about 93.6% on average. Although individual differences were seen, the best combinations of areas for each subject were largely divided into two major patterns. When two points were chosen, combinations of the near thenar, which is located near the thumb ball, and near the heel or point of the outside of the middle of the foot were highly recognized. Of the best two points, one was commonly the near thenar for subjects. 
By taking three points of data and covering these two combinations, it will be possible to cope with individual differences. The recognition accuracy of the averaged combinations of the best two combinations for all subjects was classified with an accuracy of about 91.0% on average. On the basis of these results, two types of pressure sensing shoes were developed.", "title": "" }, { "docid": "8b38fd43c9d418b356ef009e9612e564", "text": "English. This work aims at evaluating and comparing two different frameworks for the unsupervised topic modelling of the CompWHoB Corpus, namely our political-linguistic dataset. The first approach is represented by the application of the latent DirichLet Allocation (henceforth LDA), defining the evaluation of this model as baseline of comparison. The second framework employs Word2Vec technique to learn the word vector representations to be later used to topic-model our data. Compared to the previously defined LDA baseline, results show that the use of Word2Vec word embeddings significantly improves topic modelling performance but only when an accurate and taskoriented linguistic pre-processing step is carried out. Italiano. L’obiettivo di questo contributo è di valutare e confrontare due differenti framework per l’apprendimento automatico del topic sul CompWHoB Corpus, la nostra risorsa testuale. Dopo aver implementato il modello della latent DirichLet Allocation, abbiamo definito come standard di riferimento la valutazione di questo stesso approccio. Come secondo framework, abbiamo utilizzato il modello Word2Vec per apprendere le rappresentazioni vettoriali dei termini successivamente impiegati come input per la fase di apprendimento automatico del topic. I risulati mostrano che utilizzando i ‘word embeddings’ generati da Word2Vec, le prestazioni del modello aumentano significativamente ma solo se supportati da una accurata fase di ‘pre-processing’ linguisti-", "title": "" }, { "docid": "6f77e74cd8667b270fae0ccc673b49a5", "text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. 
This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" }, { "docid": "261ef8b449727b615f8cd5bd458afa91", "text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophelia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophelic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophelia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.", "title": "" }, { "docid": "cbf32934e275e8d95a584762b270a5c2", "text": "Online telemedicine systems are useful due to the possibility of timely and efficient healthcare services. These systems are based on advanced wireless and wearable sensor technologies. The rapid growth in technology has remarkably enhanced the scope of remote health monitoring systems. In this paper, a real-time heart monitoring system is developed considering the cost, ease of application, accuracy, and data security. The system is conceptualized to provide an interface between the doctor and the patients for two-way communication. The main purpose of this study is to facilitate the remote cardiac patients in getting latest healthcare services which might not be possible otherwise due to low doctor-to-patient ratio. 
The developed monitoring system is then evaluated for 40 individuals (aged between 18 and 66 years) using wearable sensors while holding an Android device (i.e., smartphone under supervision of the experts). The performance analysis shows that the proposed system is reliable and helpful due to high speed. The analyses showed that the proposed system is convenient and reliable and ensures data security at low cost. In addition, the developed system is equipped to generate warning messages to the doctor and patient under critical circumstances.", "title": "" }, { "docid": "b16bb73155af7f141127617a7e9fdde1", "text": "Organizing code into coherent programs and relating different programs to each other represents an underlying requirement for scaling genetic programming to more difficult task domains. Assuming a model in which policies are defined by teams of programs, in which team and program are represented using independent populations and coevolved, has previously been shown to support the development of variable sized teams. In this work, we generalize the approach to provide a complete framework for organizing multiple teams into arbitrarily deep/wide structures through a process of continuous evolution; hereafter the Tangled Program Graph (TPG). Benchmarking is conducted using a subset of 20 games from the Arcade Learning Environment (ALE), an Atari 2600 video game emulator. The games considered here correspond to those in which deep learning was unable to reach a threshold of play consistent with that of a human. Information provided to the learning agent is limited to that which a human would experience. That is, screen capture sensory input, Atari joystick actions, and game score. The performance of the proposed approach exceeds that of deep learning in 15 of the 20 games, with 7 of the 15 also exceeding that associated with a human level of competence. Moreover, in contrast to solutions from deep learning, solutions discovered by TPG are also very ‘sparse’. Rather than assuming that all of the state space contributes to every decision, each action in TPG is resolved following execution of a subset of an individual’s graph. This results in significantly lower computational requirements for model building than presently the case for deep learning.", "title": "" }, { "docid": "74e15be321ec4e2d207f3331397f0399", "text": "Interoperability has been a basic requirement for the modern information systems environment for over two decades. How have key requirements for interoperability changed over that time? How can we understand the full scope of interoperability issues? What has shaped research on information system interoperability? What key progress has been made? This chapter provides some of the answers to these questions. In particular, it looks at different levels of information system interoperability, while reviewing the changing focus of interoperability research themes, past achievements and new challenges in the emerging global information infrastructure (GII). It divides the research into three generations, and discusses some of achievements of the past. Finally, as we move from managing data to information, and in future knowledge, the need for achieving semantic interoperability is discussed and key components of solutions are introduced. 
Data and information interoperability has gained increasing attention for several reasons, including: • excellent progress in interconnection afforded by the Internet, Web and distributed computing infrastructures, leading to easy access to a large number of independently created and managed information sources of broad variety;", "title": "" }, { "docid": "8a3e49797223800cb644fe2b819f9950", "text": "In this paper, we present machine learning approaches for characterizing and forecasting the short-term demand for on-demand ride-hailing services. We propose the spatio-temporal estimation of the demand that is a function of variable effects related to traffic, pricing and weather conditions. With respect to the methodology, a single decision tree, bootstrap-aggregated (bagged) decision trees, random forest, boosted decision trees, and artificial neural network for regression have been adapted and systematically compared using various statistics, e.g. R-square, Root Mean Square Error (RMSE), and slope. To better assess the quality of the models, they have been tested on a real case study using the data of DiDi Chuxing, the main on-demand ride-hailing service provider in China. In the current study, 199,584 time-slots describing the spatio-temporal ride-hailing demand has been extracted with an aggregated-time interval of 10 mins. All the methods are trained and validated on the basis of two independent samples from this dataset. The results revealed that boosted decision trees provide the best prediction accuracy (RMSE=16.41), while avoiding the risk of over-fitting, followed by artificial neural network (20.09), random forest (23.50), bagged decision trees (24.29) and single decision tree (33.55). ∗Currently under review for publication †Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium, Email: ismail.saadi@ulg.ac.be ‡Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: melvin.wong@polymtl.ca §Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: bilal.farooq@polymtl.ca ¶Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium ‖Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium ar X iv :1 70 3. 02 43 3v 1 [ cs .L G ] 7 M ar 2 01 7", "title": "" }, { "docid": "eec9bd3e2c187c23f3d99fd3b98433ce", "text": "Optimum sample size is an essential component of any research. The main purpose of the sample size calculation is to determine the number of samples needed to detect significant changes in clinical parameters, treatment effects or associations after data gathering. It is not uncommon for studies to be underpowered and thereby fail to detect the existing treatment effects due to inadequate sample size. 
In this paper, we explain briefly the basic principles of sample size calculations in medical studies.", "title": "" }, { "docid": "dfcf58ee43773271d01cd5121c60fde0", "text": "Semantic segmentation as a pixel-wise segmentation task provides rich object information, and it has been widely applied in many fields ranging from autonomous driving to medical image analysis. There are two main challenges on existing approaches: the first one is the obfuscation between objects resulted from the prediction of the network and the second one is the lack of localization accuracy. Hence, to tackle these challenges, we proposed global encoding module (GEModule) and dilated decoder module (DDModule). Specifically, the GEModule that integrated traditional dictionary learning and global semantic context information is to select discriminative features and improve performance. DDModule that combined dilated convolution and dense connection is used to decoder module and to refine the prediction results. We evaluated our proposed architecture on two public benchmarks, Cityscapes and CamVid data set. We conducted a series of ablation studies to exploit the effectiveness of each module, and our approach has achieved an intersection-over-union scores of 71.3% on the Cityscapes data set and 60.4% on the CamVid data set.", "title": "" } ]
scidocsrr
a0c462a9fa509e98de597a1e579d703d
Automatic visual quality assessment in optical fundus images
[ { "docid": "e42357ff2f957f6964bab00de4722d52", "text": "We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.", "title": "" } ]
[ { "docid": "21daaa29b6ff00af028f3f794b0f04b7", "text": "During the last years, we are experiencing the mushrooming and increased use of web tools enabling Internet users to both create and distribute content (multimedia information). These tools referred to as Web 2.0 technologies-applications can be considered as the tools of mass collaboration, since they empower Internet users to actively participate and simultaneously collaborate with other Internet users for producing, consuming and diffusing the information and knowledge being distributed through the Internet. In other words, Web 2.0 tools do nothing more than realising and exploiting the full potential of the genuine concept and role of the Internet (i.e. the network of the networks that is created and exists for its users). The content and information generated by users of Web 2.0 technologies are having a tremendous impact not only on the profile, expectations and decision making behaviour of Internet users, but also on e-business model that businesses need to develop and/or adapt. The tourism industry is not an exception from such developments. On the contrary, as information is the lifeblood of the tourism industry the use and diffusion of Web 2.0 technologies have a substantial impact of both tourism demand and supply. Indeed, many new types of tourism cyber-intermediaries have been created that are nowadays challenging the e-business model of existing cyberintermediaries that only few years ago have been threatening the existence of intermediaries!. In this vein, the purpose of this article is to analyse the major applications of Web 2.0 technologies in the tourism and hospitality industry by presenting their impact on both demand and supply.", "title": "" }, { "docid": "0dd0f44e59c1ee1e04d1e675dfd0fd9c", "text": "An important first step to successful global marketing is to understand the similarities and dissimilarities of values between cultures. This task is particularly daunting for companies trying to do business with China because of the scarcity of research-based information. This study uses updated values of Hofstede’s (1980) cultural model to compare the effectiveness of Pollay’s advertising appeals between the U.S. and China. Nine of the twenty hypotheses predicting effective appeals based on cultural dimensions were supported. An additional hypothesis was significant, but in the opposite direction as predicted. These findings suggest that it would be unwise to use Hofstede’s cultural dimensions as a sole predictor for effective advertising appeals. The Hofstede dimensions may lack the currency and fine grain necessary to effectively predict the success of the various advertising appeals. Further, the effectiveness of advertising appeals may be moderated by other factors, such as age, societal trends, political-legal environment and product usage.", "title": "" }, { "docid": "db7a4ab8d233119806e7edf2a34fffd1", "text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. 
Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.", "title": "" }, { "docid": "829269b599968a7d3e472d72cea9ab74", "text": "We present measures, models and link prediction algorithms based on the structural balance in signed social networks. Certain social networks contain, in addition to the usual friend links, enemy links. These networks are called signed social networks. A classical and major concept for signed social networks is that of structural balance, i.e., the tendency of triangles to be balanced towards including an even number of negative edges, such as friend-friend-friend and friend-enemy-enemy triangles. In this article, we introduce several new signed network analysis methods that exploit structural balance for measuring partial balance, for finding communities of people based on balance, for drawing signed social networks, and for solving the problem of link prediction. Notably, the introduced methods are based on the signed graph Laplacian and on the concept of signed resistance distances. We evaluate our methods on a collection of four signed social network datasets.", "title": "" }, { "docid": "460d0167679542a7aeac2eb24c81907d", "text": "The 1 € filter (\"one Euro filter\") is a simple algorithm to filter noisy signals for high precision and responsiveness. It uses a first order low-pass filter with an adaptive cutoff frequency: at low speeds, a low cutoff stabilizes the signal by reducing jitter, but as speed increases, the cutoff is increased to reduce lag. The algorithm is easy to implement, uses very few resources, and with two easily understood parameters, it is easy to tune. In a comparison with other filters, the 1 € filter has less lag using a reference amount of jitter reduction.", "title": "" }, { "docid": "714b5db0d1f146c5dde6e4c01de59be9", "text": "Coilgun electromagnetic launchers have capability for low and high speed applications. Through the development of four guns having projectiles ranging from 10 g to 5 kg and speeds up to 1 km/s, Sandia National Laboratories has succeeded in coilgun design and operations, validating the computational codes and basis for gun system control. Coilguns developed at Sandia consist of many coils stacked end-to-end forming a barrel, with each coil energized in sequence to create a traveling magnetic wave that accelerates a projectile. Active tracking of the projectile location during launch provides precise feedback to control when the coils arc triggered to create this wave. However, optimum performance depends also on selection of coil parameters. This paper discusses issues related to coilgun design and control such as tradeoffs in geometry and circuit parameters to achieve the necessary current risetime to establish the energy in the coils. The impact of switch jitter on gun performance is also assessed for high-speed applications.", "title": "" }, { "docid": "8c51ce7809ca0ed47d106e2a3c82f0c2", "text": "Three dimensional city models are presently being used in many sectors. The potentiality of three dimensional data has been exploited by many researchers and applications. It has been realized that 3D data are not only for visualization or navigation but to support solving more complex problems in urban planning, disaster management, facility management etc. In this paper a 3D city model is used to perform a solar energy potentiality analysis. 
In contrast to existing methods shadowing effects on the roof surface and façades are taken into account to achieve better simulation results. Of course, geometric details of the building geometry directly effect the calculation of shadows. In principle, 3D city models or point clouds, which contain roof structure, vegetation, thematically differentiated surface and texture, are suitable to simulate exact real-time shadow. Solar radiation data during the whole day and around the year and photovoltaic cells response to the radiation can be modeled using available simulation environment. However, the real impact on geometric details has to be investigated in further research. ings and shadow, which are the important parameters for predicting the energy production from the photovoltaic cells, can also be derived from these 3D models. Among all these factors shadow is most difficult to determine. People spend a lot of money for photovoltaic cells and if it is placed at a wrong place where due to shadow, the production is much lower than it was measured from potentiality analysis, they will lose money. Therefore it is of great importance to be investigated is to measure exact shadow effect and sunlight intensity on each surface. The computation should also include the direct and diffuse component of light after reflection and absorption by surrounding objects and at real time automatically from the 3D city models. During this research, the quality of the current city models will be checked. It will also be determined how detailed data is the minimum for calculating exact shadows. Cloud, Air quality, humidity, vegetation, color, material and other weather related and geographical aspects which causes, effects and controls shadow will also be investigated. This paper has been organized with a brief introduction at the beginning. Then some related works has been mentioned and drawback has been used to identify the gap in the literature and the good things have been integrated. A case study area has been selected for availability of data. The types of data suitable for the research have been discussed. Then the system architecture has been presented. The effect of shadow on photovoltaic cells and a methodology for detecting shadow caused by blocking of direct beam radiation have been explained. Finally the result has been shown applying the methodology on a small sample city model to have a idea of the final result.", "title": "" }, { "docid": "2b61a16b47d865197c6c735cefc8e3ec", "text": "The present study investigated the relationship between trauma symptoms and a history of child sexual abuse, adult sexual assault, and physical abuse by a partner as an adult. While there has been some research examining the correlation between individual victimization experiences and traumatic stress, the cumulative impact of multiple victimization experiences has not been addressed. Subjects were recruited from psychological clinics and community advocacy agencies. Additionally, a nonclinical undergraduate student sample was evaluated. The results of this study indicate not only that victimization and revictimization experiences are frequent, but also that the level of trauma specific symptoms are significantly related to the number of different types of reported victimization experiences. 
The research and clinical implications of these findings are discussed.", "title": "" }, { "docid": "3e727d70f141f52fb9c432afa3747ceb", "text": "In this paper, we propose an improvement of Adversarial Transformation Networks(ATN) [1]to generate adversarial examples, which can fool white-box models and blackbox models with a state of the art performance and won the SECOND place in the non-target task in CAAD 2018. In this section, we first introduce the whole architecture about our method, then we present our improvement on loss functions to generate adversarial examples satisfying the L∞ norm restriction in the non-targeted attack problem. Then we illustrate how to use a robust-enhance module to make our adversarial examples more robust and have better transfer-ability. At last we will show our method on how to attack an ensemble of models.", "title": "" }, { "docid": "75c1fa342d6f30d68b0aba906a54dd69", "text": "The Constrained Application Protocol (CoAP) is a promising candidate for future smart city applications that run on resource-constrained devices. However, additional security means are mandatory to cope with the high security requirements of smart city applications. We present a framework to evaluate lightweight intrusion detection techniques for CoAP applications. This framework combines an OMNeT++ simulation with C/C++ application code that also runs on real hardware. As the result of our work, we used our framework to evaluate intrusion detection techniques for a smart public transport application that uses CoAP. Our first evaluations indicate that a hybrid IDS approach is a favorable choice for smart city applications.", "title": "" }, { "docid": "911545273424b27832310d9869ccb55f", "text": "Current people detectors operate either by scanning an image in a sliding window fashion or by classifying a discrete set of proposals. We propose a model that is based on decoding an image into a set of people detections. Our system takes an image as input and directly outputs a set of distinct detection hypotheses. Because we generate predictions jointly, common post-processing steps such as nonmaximum suppression are unnecessary. We use a recurrent LSTM layer for sequence generation and train our model end-to-end with a new loss function that operates on sets of detections. We demonstrate the effectiveness of our approach on the challenging task of detecting people in crowded scenes1.", "title": "" }, { "docid": "57ccc061377399b669d5ece668b7e030", "text": "We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. 
A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.", "title": "" }, { "docid": "63d944058c683a57cdc531738b097466", "text": "These Human facial expressions convey a lot of information visually rather than articulately. Facial expression recognition plays a crucial role in the area of human-machine interaction. Recognition of facial expression by computer with high recognition rate is still a challenging task. Facial Expression Recognition usually performed in three-stages consisting of face detection, feature extraction, and expression classification. This paper presents a survey of the current work done in the field of facial expression recognition techniques with various face detection, feature extraction and classification methods used by them and their performance.", "title": "" }, { "docid": "2d6d5c8b1ac843687db99ccf50a0baff", "text": "This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. When compared to six other terrain modelling techniques, this approach has minimal error between the sensed data and the representation; and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% sharper compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier’s behaviour.", "title": "" }, { "docid": "6db5de1bb37513c3c251624947ee4e8f", "text": "The proliferation of Ambient Intelligence (AmI) devices and services and their integration in smart environments creates the need for a simple yet effective way of controlling and communicating with them. Towards that direction, the application of the Trigger -- Action model has attracted a lot of research with many systems and applications having been developed following that approach. This work introduces ParlAmI, a multimodal conversational interface aiming to give its users the ability to determine the behavior of AmI environments, by creating rules using natural language as well as a GUI. The paper describes ParlAmI, its requirements and functionality, and presents the findings of a user-based evaluation which was conducted.", "title": "" }, { "docid": "70fafdedd05a40db5af1eabdf07d431c", "text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. 
In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.", "title": "" }, { "docid": "20deb56f6d004a8e33d1e1a4f579c1ba", "text": "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.", "title": "" }, { "docid": "7892a17a84d54bb6975cb7b8229242a9", "text": "The way people conceptualize space is an important consideration for the design of geographic information systems, because a better match with peopleÕs thinking is expected to lead to easier-touse information systems. Everyday space, the basis to geographic information systems (GISs), has been characterized in the literature as being either small-scale (from table-top to room-size spaces) or large-scale (inside-of-building spaces to city-size space). While this dichotomy of space is grounded in the view from psychology that peopleÕs perception of space, spatial cognition, and spatial behavior are experience-based, it is in contrast to current GISs, which enable us to interact with large-scale spaces as though they were small-scale or manipulable. We analyze different approaches to characterizing spaces and propose a unified view in which space is based on the physical properties of manipulability, locomotion, and size of space. 
Within the structure of our framework, we distinguish six types of spaces: manipulable object space (smaller than the human body), non-manipulable object space (greater than the human body, but less than the size of a building), environmental space (from inside building spaces to city-size spaces), geographic space (state, country, and continent-size spaces), panoramic space (spaces perceived via scanning the landscape), and map space. Such a categorization is an important part of Naive Geography, a set of theories of how people intuitively or spontaneously conceptualize geographic space and time, because it has implications for various theoretical and methodological questions concerning the design and use of spatial information tools. Of particular concern is the design of effective spatial information tools that lead to better communication.", "title": "" }, { "docid": "5f2b4caef605ab07ca070552e308d6e6", "text": "The objective of CLEF is to promote research in the field of multilingual system development. This is done through the organisation of annual evaluation campaigns in which a series of tracks designed to test different aspects of monoand cross-language information retrieval (IR) are offered. The intention is to encourage experimentation with all kinds of multilingual information access – from the development of systems for monolingual retrieval operating on many languages to the implementation of complete multilingual multimedia search services. This has been achieved by offering an increasingly complex and varied set of evaluation tasks over the years. The aim is not only to meet but also to anticipate the emerging needs of the R&D community and to encourage the development of next generation multilingual IR systems. These Working Notes contain descriptions of the experiments conducted within CLEF 2006 – the sixth in a series of annual system evaluation campaigns. The results of the experiments will be presented and discussed in the CLEF 2006 Workshop, 20-22 September, Alicante, Spain. The final papers revised and extended as a result of the discussions at the Workshop together with a comparative analysis of the results will appear in the CLEF 2006 Proceedings, to be published by Springer in their Lecture Notes for Computer Science series. As from CLEF 2005, the Working Notes are published in electronic format only and are distributed to participants at the Workshop on CD-ROM together with the Book of Abstracts in printed form. All reports included in the Working Notes will also be inserted in the DELOS Digital Library, accessible at http://delos-dl.isti.cnr.it. Both Working Notes and Book of Abstracts are divided into eight sections, corresponding to the CLEF 2006 evaluation tracks. In addition appendices are included containing run statistics for the Ad Hoc, Domain-Specific, GeoCLEF and QA tracks, plus a list of all participating groups showing in which track they took part. The main features of the 2006 campaign are briefly outlined here below in order to provide the necessary background to the experiments reported in the rest of the Working Notes.", "title": "" }, { "docid": "cc9ff40f0c210ad0669bce44b5043e48", "text": "Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. 
In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).", "title": "" } ]
scidocsrr
e4a9fcde70759b6895be822c56777f09
Supervised Discrete Hashing With Relaxation
[ { "docid": "404fdd6f2d7f1bf69f2f010909969fa9", "text": "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.", "title": "" } ]
[ { "docid": "b8c46afde4c09049f7018a4503a8c027", "text": "Digital signal processing has completely changed the way optical communication systems work during recent years. In combination with coherent demodulation, it enables compensation of optical distortions that seemed impossible only a few years ago. However, at high bit rates, this comes at the price of complex processing circuits and high power consumption. In order to translate theoretic concepts into economically viable products, careful design of the digital signal processing algorithms is needed. In this paper, we give an overview of digital equalization algorithms for coherent receivers and derive expressions for their complexity. We compare single-carrier and multicarrier approaches, and investigate blind equalizer adaptation as well as training-symbol-based algorithms. We examine tradeoffs between parameters like sampling rate and tracking speed that are important for algorithm design and practical implementation.", "title": "" }, { "docid": "52c9ee7e057ff9ade5daf44ea713e889", "text": "In this work, we present a novel peak-piloted deep network (PPDN) that uses a sample with peak expression (easy sample) to supervise the intermediate feature responses for a sample of non-peak expression (hard sample) of the same type and from the same subject. The expression evolving process from nonpeak expression to peak expression can thus be implicitly embedded in the network to achieve the invariance to expression intensities.", "title": "" }, { "docid": "01f9384b33a84c3ece4db5337e708e24", "text": "Broken rails are the leading cause of major derailments in North America. Class I freight railroads average 84 mainline broken-rail derailments per year with an average track and equipment cost of approximately $525,000 per incident. The number of mainline broken-railcaused derailments has increased from 77 in 1997, to 91 in 2006; therefore, efforts to reduce their occurrence remain important. We conducted an analysis of the factors that influence the occurrence of broken rails and developed a quantitative model to predict locations where they are most likely to occur. Among the factors considered were track and rail characteristics, maintenance activities and frequency, and on-track testing results. Analysis of these factors involved the use of logistic regression techniques to develop a statistical model for the prediction of broken rail locations. For such a model to have value for railroads it must be feasible to use and provide information in a useful manner. Consequently, an optimal prediction model containing only the top eight factors related to broken rails was developed. The economic impact of broken rail events was also studied. This included the costs associated with broken rail derailments and service failures, as well as the cost of typical prevention measures. A train delay calculator was also developed based on industry operating averages. Overall, the information presented here can assist railroads to more effectively allocate resources to prevent the occurrence of broken rails. INTRODUCTION Understanding the factors related to broken rails is an important topic for U.S. freight railroads and is becoming more so because of the increase in their occurrence in recent years. This increase is due to several factors, but the combination of increased traffic and heavier axle loads are probably the most important. Broken rails are generally caused by the undetected growth of either internal or surface defects in the rail (1). 
Previous research has focused on both mechanistic analyses (2-8) and statistical analyses (9-13) in order to understand the factors that cause crack growth in rails and ultimately broken rails. The first objective of this analysis was to develop a predictive tool that will enable railroads to identify locations with a high probability of broken rail. The possible predictive factors that were evaluated included rail characteristics, infrastructure data, maintenance activity, operational information, and rail testing results. The second objective was to study the economic impact of broken rails based on industry operating averages. Our analysis on this topic incorporates previous work that developed a framework for the cost of broken rails (14). The purpose of this paper is to provide information to enable more efficient evaluation of options to reduce the occurence of broken rails. DEVELOPMENT OF SERVICE FAILURE PREDICTION MODEL The first objective of this paper was to develop a model to identify locations in the rail network with a high probability of broken rail occurrence based on broken rail service failure data and possible influence factors. All of the factors that might affect service failure occurrence and for which we had data were considered in this analysis. Several broken rail predictive models were developed and evaluated using logistic regression techniques. Data Available for Study In order to develop a predictive tool, it is desirable to initially consider as many factors as possible that might affect the occurrence of broken rails. From the standpoint of rail maintenance planning it is important to determine which factors are and are not correlated with broken rail occurence. Therefore the analysis included a wide-range of possible variables for which data were available. This included track and rail characteristics such as rail age, rail curvature, track speed, grade, and rail weight. Also, changes in track modulus due to the presence of infrastructure features such as bridges and turnouts have a potential effect on rail defect growth and were examined as well. Additionally, maintenance activities were included that can reduce the likelihood of broken rail occurrence, such as rail grinding and tie replacement. Finally, track geometry and ultrasonic testing for rail defects were used by railroads to assess the condition of track and therefore the results of these tests are included as they may provide predictive information about broken rail occurrence. The BNSF Railway provided data on the location of service failures and a variety of other infrastructure, inspection and operational parameters. In this study a “service failure” was defined as an incident where a track was taken out of service due to a broken rail. A database was developed from approximately 23,000 miles of mainline track maintained by the BNSF Railway covering the four-year period, 2003 through 2006. BNSF’s network was divided into 0.01-mile-long segments (approximately 53 feet each) and the location of each reported service failure was recorded. BNSF experienced 12,685 service failures during the four-year study period. For the case of modeling rare events it is common to sample all of the rare events and compare these with a similar sized sample of instances where the event did not occur (15). Therefore an additional 12,685 0.01-mile segments that did not experience a service failure during the four-year period were randomly selected from the same network. 
Each non-failure location was also assigned a random date within the four-year time period for use in evaluating certain temporal variables that might be factors. Thus, the dataset used in this analysis included a total of 25,370 segment locations and dates when a service failure did or did not occur in the railroad’s network during the study period. All available rail characteristics, infrastructure data, maintenance activity, operational information, and track testing results were linked to each of these locations, for a total of 28 unique input variables. Evaluation of Previous Service Failure Model In a previous study Dick developed a predictive model of service failures based on relevant track and traffic data for a two-year period (10, 11). The outcome of that study was a multivariate statistical model that could quantify the probability of a service failure at any particular location based on a number of track and traffic related variables. Dick‘s model used 11 possible predictor factors for broken rails and could correctly classify failure locations with 87.4% accuracy using the dataset provided to him. Our first step was to test this model using data from a more recent two-year period. From 2005 through 2006, the BNSF experienced 6,613 service failures and data on these, along with 6,613 randomly selected non-failure locations, were analyzed. 7,247 of the 13,226 cases were classified correctly (54.8%), considerably lower than in the earlier study causing us to ask why the predictive power seemed to have declined. Examination of the service failure dataset used previously revealed that it may not have included all the trackage from the network. This resulted in a dataset that generated the particular model and accuracy levels reported in the earlier study (10, 11). Therefore a new, updated statistical model was developed to predict service failure locations. Development of Updated Statistical Classification Model The updated model that was developed to predict service failure locations used similar logistic regression techniques. Logistic regression was selected because it is a discrete choice model that calculates the probability of failure based on available input variables. These probabilities are used to classify each case as either failure or non-failure. A statistical regression equation was developed based on the significant input parameters to determine the probability of failure. To find the best classification model, the input parameters were evaluated with and without multiple-term interactions allowed. Logistic Regression Methodology and Techniques The model was developed as a discrete choice classification problem of either failure or non-failure using the new dataset described above. The objective was to find the best combination of variables and mathematical relationships among the 28 available input variables to predict the occurrence of broken rails. The service failure probability model was developed using Statistical Analysis Software (SAS) and the LOGISTIC procedure (16). This procedure fits a discrete choice logistic regression model to the input data. The output of this model is an index value between zero and one corresponding to the probability of a service failure occurrence. Four commonly used variable selection techniques were evaluated in this analysis to find the best model. The simplest method is referred to as “full-model”, or variable selection type “none” in SAS. 
The full-model method uses every available input variable to determine the best regression model. The next technique examined was selection type “forward”, which evaluates each input variable and systematically adds the most significant variables to the model. The forward selection process continues adding the most significant variable until no additional variables meet a defined significance level for inclusion in the model. The entry and removal level used in this analysis for all variable selection techniques was a 0.05 significance threshold. The “backward” variable selection technique was also used. This method starts with all input variables included in the model. In the first step, the model determines the least significant variable that does not meet the defined significance level and removes it from the model. This process continues until no other variables included in the model meet the defined criteria for removal. The final logistic regression selection technique used was “step-wise” selection. The step-wise selection method is s", "title": "" }, { "docid": "c56c45405e0a943e63ab035b11b9fd93", "text": "We present a simple, but expressive type system that supports strong updates—updating a memory cell to hold values of unrelated types at different points in time. Our formulation is based upon a standard linear lambda calculus and, as a result, enjoys a simple semantic interpretation for types that is closely related to models for spatial logics. The typing interpretation is strong enough that, in spite of the fact that our core programming language supports shared, mutable references and cyclic graphs, every well-typed program terminates. We then consider extensions needed to model ML-style references, where the capability to access a reference cell is unrestricted, but strong updates are disallowed. Our extensions include a thaw primitive for re-gaining the capability to perform strong updates on unrestricted references. The thaw primitive is closely related to other mechanisms that support strong updates, such as CQUAL’s restrict.", "title": "" }, { "docid": "c8386ae4ee017290c4ed5816d03f4864", "text": "In this article, a compact tutorial of ANC techniques was presented with a review of their application in reducing undesired noise inside automobiles. Some of the recent advances have demonstrated significant improvements in the noise reduction levels as well as the cost and implementation complexity. While the aforementioned techniques discussed may individually focus on a particular noise field (e.g., road noise only, engine noise only), it is proven through research and commercial products that a combination of these strategies can deliver significant benefits in realistic conditions.", "title": "" }, { "docid": "61c6d49c3cdafe4366d231ebad676077", "text": "Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. 
This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.", "title": "" }, { "docid": "71c2478c1eb50681fe0793976ffc24fe", "text": "Background subtraction is a common first step in the field of video processing and it is used to reduce the effective image size in subsequent processing steps by segmenting the mostly static background from the moving or changing foreground. In this paper previous approaches towards background modeling are extended to handle videos accompanied by information gained from a novel 2D/3D camera. This camera contains a color and a PMD chip which operates on the Time-of-Flight operating principle. The background is estimated using the widely spread Gaussian mixture model in color as well as in depth and amplitude modulation. A new matching function is presented that allows for better treatment of shadows and noise and reduces block artifacts. Problems and limitations to overcome the problem of fusing high resolution color information with low resolution depth data are addressed and the approach is tested with different parameters on several scenes and the results are compared to common and widely accepted methods.", "title": "" }, { "docid": "e724d4405f50fd74a2184187dcc52401", "text": "This paper presents security of Internet of things. In the Internet of Things vision, every physical object has a virtual component that can produce and consume services Such extreme interconnection will bring unprecedented convenience and economy, but it will also require novel approaches to ensure its safe and ethical use. The Internet and its users are already under continual attack, and a growing economy-replete with business models that undermine the Internet's ethical use-is fully focused on exploiting the current version's foundational weaknesses.", "title": "" }, { "docid": "fa6ffdd24549135ac8c6efce5eb238c5", "text": "Search-based approaches to software design are investigated. Software design is considered from a wide view, including topics that can also be categorized under software maintenance or re-engineering. Search-based approaches have been used in research from high architecture level design to software clustering and finally software refactoring. Enhancing and predicting software quality with search-based methods is also taken into account as a part of the design process. The choices regarding fundamental decisions, such as representation and fitness function, when using in meta-heuristic search algorithms, are emphasized and discussed in detail. Ideas for future research directions are also given.", "title": "" }, { "docid": "80947cea68851bc522d5ebf8a74e28ab", "text": "Advertising is key to the business model of many online services. Personalization aims to make ads more relevant for users and more effective for advertisers. However, relatively few studies into user attitudes towards personalized ads are available. We present a San Francisco Bay Area survey (N=296) and in-depth interviews (N=24) with teens and adults. 
People are divided and often either (strongly) agreed or disagreed about utility or invasiveness of personalized ads and associated data collection. Mobile ads were reported to be less relevant than those on desktop. Participants explained ad personalization based on their personal previous behaviors and guesses about demographic targeting. We describe both metrics improvements as well as opportunities for improving online advertising by focusing on positive ad interactions reported by our participants, such as personalization focused not just on product categories but specific brands and styles, awareness of life events, and situations in which ads were useful or even inspirational.", "title": "" }, { "docid": "e71bd8a43806651b412d00848821a517", "text": "Techniques for procedural generation of the graphics content have seen widespread use in multimedia over the past thirty years. It is still an active area of research with many applications in 3D modeling software, video games, and films. This thesis focuses on algorithmic generation of virtual terrains in real-time and their real-time visualization. We provide an overview of available approaches and present an extendable library for procedural terrain synthesis.", "title": "" }, { "docid": "3a0d38ba7d29358e511d5eef24360713", "text": "In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available.", "title": "" }, { "docid": "f8d01364ff29ad18480dfe5d164bbebf", "text": "With companies such as Netflix and YouTube accounting for more than 50% of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. 
We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.", "title": "" }, { "docid": "31abdea5ff0fc543ddfd382249602cda", "text": "Named Entity Recognition (NER), an information extraction task, is typically applied to spoken documents by cascading a large vocabulary continuous speech recognizer (LVCSR) and a named entity tagger. Recognizing named entities in automatically decoded speech is difficult since LVCSR errors can confuse the tagger. This is especially true of out-of-vocabulary (OOV) words, which are often named entities and always produce transcription errors. In this work, we improve speech NER by including features indicative of OOVs based on an OOV detector, allowing for the identification of regions of speech containing named entities, even if they are incorrectly transcribed. We construct a new speech NER data set and demonstrate significant improvements for this task.", "title": "" }, { "docid": "2af0ef7c117ace38f44a52379c639e78", "text": "Examination of a child with genital or anal disease may give rise to suspicion of sexual abuse. Dermatologic, traumatic, infectious, and congenital disorders may be confused with sexual abuse. Seven children referred to us are representative of such confusion.", "title": "" }, { "docid": "09d7bb1b4b976e6d398f20dc34fc7678", "text": "A compact wideband quarter-wave transformer using microstrip lines is presented. The design relies on replacing a uniform microstrip line with a multi-stage equivalent circuit. The equivalent circuit is a cascade of either T or π networks. Design equations for both types of equivalent circuits have been derived. A quarter-wave transformer operating at 1 GHz is implemented. Simulation results indicate a −15 dB impedance bandwidth exceeding 64% for a 3-stage network with less than 0.25 dB of attenuation within the bandwidth. Both types of equivalent circuits provide more than 40% compaction with proper selection of components. Measured results for the fabricated unit deviate within acceptable limits. The designed quarter-wave transformer may be used to replace 90° transmission lines in various passive microwave components.", "title": "" }, { "docid": "f6ae47c4b53a3d5493405e8c2095d928", "text": "Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest, which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of nodes similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable, upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users’ ratings of a list of movies). 
In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated.", "title": "" }, { "docid": "928e127f60953c896d35462215731777", "text": "Detection of objects of a known class is a fundamental problem of computer vision. The appearance of objects can change greatly due to illumination, view point, and articulation. For object classes with large intra-class variation, some divide-and-conquer strategy is necessary. Tree structured classifier models have been used for multi-view multi-pose object detection in previous work. This paper proposes a boosting based learning method, called Cluster Boosted Tree (CBT), to automatically construct tree structured object detectors. Instead of using predefined intra-class sub-categorization based on domain knowledge, we divide the sample space by unsupervised clustering based on discriminative image features selected by the boosting algorithm. The sub-categorization information of the leaf nodes is sent back to refine their ancestors' classification functions. We compare our approach with previous related methods on several public data sets. The results show that our approach outperforms the state-of-the-art methods.", "title": "" }, { "docid": "9cd85689d30771a8b11a1d8c9d9d1785", "text": "Plug-in electric vehicles (PEVs) can behave either as loads or as distributed energy sources in a concept known as vehicle-to-grid (V2G). The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability. A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural and technical obstacles. Although V2G operation can reduce the lifetime of PEVs, it is projected to be more economical for vehicle owners and grid operators. This paper reviews these benefits and challenges of V2G technology for both individual vehicles and vehicle fleets.", "title": "" }, { "docid": "d9064c1b9bd8d0d11d91eaaaa520e322", "text": "This paper proposes a waveguide-stripline series–corporate hybrid feed technique to ease the feed-network design for dual-polarized antenna arrays. The hybrid feed network consists of a stripline series feed network and a waveguide-stripline corporate feed network, incorporating both of their advantages in one application. Furthermore, the design can efficiently simplify the manufacturing and packaging process without any bonding postprocess. The proposed technique is realized on an $8 \times 8$ dual-polarized antenna array, and the expected performances are verified by measurement. Experimental results show that the antenna array can operate in a wide bandwidth of 13.2% with more than 40 dB isolation for dual polarization. 
Cross polarizations at the center frequency of 15.2 GHz are below −27 dB, and the achievable aperture efficiency is 80% for both polarizations. This hybrid feed approach can be considered a promising solution for efficient feed-network applications.", "title": "" } ]
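One of the passages above describes statistically-validated projections of bipartite networks: two nodes of the same layer are linked only when the number of neighbours they share is significant under a null model. The sketch below is an illustration under our own assumptions; a plain hypergeometric null and a Bonferroni correction stand in for the paper's four exponential-random-graph null models and its multiple-testing procedure.

```python
import numpy as np
from scipy.stats import hypergeom

def cooccurrence_pvalue(shared, deg_i, deg_j, n_other_layer):
    # P(X >= shared) when deg_j neighbours are drawn uniformly at random
    # from n_other_layer nodes, deg_i of which are neighbours of node i.
    return hypergeom.sf(shared - 1, n_other_layer, deg_i, deg_j)

def validated_projection(bipartite_adj, alpha=0.01):
    """Validated links between row-nodes of a 0/1 bipartite adjacency matrix."""
    adj = np.asarray(bipartite_adj)
    n_rows, n_cols = adj.shape
    deg = adj.sum(axis=1)
    n_tests = n_rows * (n_rows - 1) / 2
    links = []
    for i in range(n_rows):
        for j in range(i + 1, n_rows):
            shared = int(adj[i] @ adj[j])     # number of common neighbours
            if shared == 0:
                continue
            p = cooccurrence_pvalue(shared, int(deg[i]), int(deg[j]), n_cols)
            if p < alpha / n_tests:           # Bonferroni-corrected threshold
                links.append((i, j, p))
    return links
```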
scidocsrr
4b36351a442b3c49da11822134b963b4
Survey Paper on Phishing Detection : Identification of Malicious URL Using Bayesian Classification on Social Network Sites
[ { "docid": "e2b95200b6da4d2ff8c69b55f023638e", "text": "Phishing is the third cyber-security threat globally and the first cyber-security threat in China. There were 61.69 million phishing victims in China alone from June 2011 to June 2012, with the total annual monetary loss more than 4.64 billion US dollars. These phishing attacks were highly concentrated in targeting at a few major Websites. Many phishing Webpages had a very short life span. In this paper, we assume the Websites to protect against phishing attacks are known, and study the effectiveness of machine learning based phishing detection using only lexical and domain features, which are available even when the phishing Webpages are inaccessible. We propose several novel highly effective features, and use the real phishing attack data against Taobao and Tencent, two main phishing targets in China, in studying the effectiveness of each feature, and each group of features. We then select an optimal set of features in our phishing detector, which has achieved a detection rate better than 98%, with a false positive rate of 0.64% or less. The detector is still effective when the distribution of phishing URLs changes.", "title": "" }, { "docid": "61d8aa943a3cce1821eb12909c659bb9", "text": "Detecting phishing attacks (identifying fake vs. real websites) and heeding security warnings represent classical user-centered security tasks subjected to a series of prior investigations. However, our understanding of user behavior underlying these tasks is still not fully mature, motivating further work concentrating at the neuro-physiological level governing the human processing of such tasks.\n We pursue a comprehensive three-dimensional study of phishing detection and malware warnings, focusing not only on what users' task performance is but also on how users process these tasks based on: (1) neural activity captured using Electroencephalogram (EEG) cognitive metrics, and (2) eye gaze patterns captured using an eye-tracker. Our primary novelty lies in employing multi-modal neuro-physiological measures in a single study and providing a near realistic set-up (in contrast to a recent neuro-study conducted inside an fMRI scanner). Our work serves to advance, extend and support prior knowledge in several significant ways. Specifically, in the context of phishing detection, we show that users do not spend enough time analyzing key phishing indicators and often fail at detecting these attacks, although they may be mentally engaged in the task and subconsciously processing real sites differently from fake sites. In the malware warning tasks, in contrast, we show that users are frequently reading, possibly comprehending, and eventually heeding the message embedded in the warning.\n Our study provides an initial foundation for building future mechanisms based on the studied real-time neural and eye gaze features, that can automatically infer a user's \"alertness\" state, and determine whether or not the user's response should be relied upon.", "title": "" } ]
[ { "docid": "733b998017da30fe24521158a6aaa749", "text": "Memristor crossbars were fabricated at 40 nm half-pitch, using nanoimprint lithography on the same substrate with Si metal-oxide-semiconductor field effect transistor (MOS FET) arrays to form fully integrated hybrid memory resistor (memristor)/transistor circuits. The digitally configured memristor crossbars were used to perform logic functions, to serve as a routing fabric for interconnecting the FETs and as the target for storing information. As an illustrative demonstration, the compound Boolean logic operation (A AND B) OR (C AND D) was performed with kilohertz frequency inputs, using resistor-based logic in a memristor crossbar with FET inverter/amplifier outputs. By routing the output signal of a logic operation back onto a target memristor inside the array, the crossbar was conditionally configured by setting the state of a nonvolatile switch. Such conditional programming illuminates the way for a variety of self-programmed logic arrays, and for electronic synaptic computing.", "title": "" }, { "docid": "c4fa73bd2d6b06f4655eeacaddf3b3a7", "text": "In recent years, the robotic research area has become extremely prolific in terms of wearable active exoskeletons for human body motion assistance, with the presentation of many novel devices, for upper limbs, lower limbs, and the hand. The hand shows a complex morphology, a high intersubject variability, and offers limited space for physical interaction with a robot: as a result, hand exoskeletons usually are heavy, cumbersome, and poorly usable. This paper introduces a novel device designed on the basis of human kinematic compatibility, wearability, and portability criteria. This hand exoskeleton, briefly HX, embeds several features as underactuated joints, passive degrees of freedom ensuring adaptability and compliance toward the hand anthropometric variability, and an ad hoc design of self-alignment mechanisms to absorb human/robot joint axes misplacement, and proposes a novel mechanism for the thumb opposition. The HX kinematic design and actuation are discussed together with theoretical and experimental data validating its adaptability performances. Results suggest that HX matches the self-alignment design goal and is then suited for close human-robot interaction.", "title": "" }, { "docid": "b59a2c49364f3e95a2c030d800d5f9ce", "text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.", "title": "" }, { "docid": "e70c6ccc129f602bd18a49d816ee02a9", "text": "This purpose of this paper is to show how prevalent features of successful human tutoring interactions can be integrated into a pedagogical agent, AutoTutor. AutoTutor is a fully automated computer tutor that responds to learner input by simulating the dialog moves of effective, normal human tutors. 
AutoTutor’s delivery of dialog moves is organized within a 5step framework that is unique to normal human tutoring interactions. We assessed AutoTutor’s performance as an effective tutor and conversational partner during tutoring sessions with virtual students of varying ability levels. Results from three evaluation cycles indicate the following: (1) AutoTutor is capable of delivering pedagogically effective dialog moves that mimic the dialog move choices of human tutors, and (2) AutoTutor is a reasonably effective conversational partner. INTRODUCTION AND BACKGROUND Over the last decade a number of researchers have attempted to uncover the mechanisms of human tutoring that are responsible for student learning gains. Many of the informative findings have been reported in studies that have systematically analyzed the collaborative discourse that occurs between tutors and students (Fox, 1993; Graesser & Person, 1994; Graesser, Person, & Magliano, 1995; Hume, Michael, Rovick, & Evens, 1996; McArthur, Stasz, & Zmuidzinas, 1990; Merrill, Reiser, Ranney, & Trafton, 1992; Moore, 1995; Person & Graesser, 1999; Person, Graesser, Magliano, & Kreuz, 1994; Person, Kreuz, Zwaan, & Graesser, 1995; Putnam, 1987). For example, we have learned that the tutorial session is predominately controlled by the tutor. That is, tutors, not students, typically determine when and what topics will be covered in the session. Further, we know that human tutors rarely employ sophisticated or “ideal” tutoring models that are often incorporated into intelligent tutoring systems. Instead, human tutors are more likely to rely on localized strategies that are embedded within conversational turns. Although many findings such as these have illuminated the tutoring process, they present formidable challenges for designers of intelligent tutoring systems. After all, building a knowledgeable conversational partner is no small feat. However, if designers of future tutoring systems wish to capitalize on the knowledge gained from human tutoring studies, the next generation of tutoring systems will incorporate pedagogical agents that engage in learning dialogs with students. The purpose of this paper is twofold. First, we will describe how prevalent features of successful human tutoring interactions can be incorporated into a pedagogical agent, AutoTutor. Second, we will provide data from several preliminary performance evaluations in which AutoTutor interacts with virtual students of varying ability levels. Person, Graesser, Kreuz, Pomeroy, and the Tutoring Research Group AutoTutor is a fully automated computer tutor that is currently being developed by the Tutoring Research Group (TRG). AutoTutor is a working system that attempts to comprehend students’ natural language contributions and then respond to the student input by simulating the dialogue moves of human tutors. AutoTutor differs from other natural language tutors in several ways. First, AutoTutor does not restrict the natural language input of the student like other systems (e.g., Adele (Shaw, Johnson, & Ganeshan, 1999); the Ymir agents (Cassell & Thórisson, 1999); Cirscim-Tutor (Hume, Michael, Rovick, & Evens, 1996; Zhou et al., 1999); Atlas (Freedman, 1999); and Basic Electricity and Electronics (Moore, 1995; Rose, Di Eugenio, & Moore, 1999)). These systems tend to limit student input to a small subset of judiciously worded speech acts. 
Second, AutoTutor does not allow the user to substitute natural language contributions with GUI menu options like those in the Atlas and Adele systems. The third difference involves the open-world nature of AutoTutor’s content domain (i.e., computer literacy). The previously mentioned tutoring systems are relatively more closed-world in nature, and therefore, constrain the scope of student contributions. The current version of AutoTutor simulates the tutorial dialog moves of normal, untrained tutors; however, plans for subsequent versions include the integration of more sophisticated ideal tutoring strategies. AutoTutor is currently designed to assist college students learn about topics covered in an introductory computer literacy course. In a typical tutoring session with AutoTutor, students will learn the fundamentals of computer hardware, the operating system, and the Internet. A Brief Sketch of AutoTutor AutoTutor is an animated pedagogical agent that serves as a conversational partner with the student. AutoTutor’s interface is comprised of four features: a two-dimensional, talking head, a text box for typed student input, a text box that displays the problem/question being discussed, and a graphics box that displays pictures and animations that are related to the topic at hand. AutoTutor begins the session by introducing himself and then presents the student with a question or problem that is selected from a curriculum script. The question/problem remains in a text box at the top of the screen until AutoTutor moves on to the next topic. For some questions and problems, there are graphical displays and animations that appear in a specially designated box on the screen. Once AutoTutor has presented the student with a problem or question, a multi-turn tutorial dialog occurs between AutoTutor and the learner. All student contributions are typed into the keyboard and appear in a text box at the bottom of the screen. AutoTutor responds to each student contribution with one or a combination of pedagogically appropriate dialog moves. These dialog moves are conveyed via synthesized speech, appropriate intonation, facial expressions, and gestures and do not appear in text form on the screen. In the future, we hope to have AutoTutor handle speech recognition, so students can speak their contributions. However, current speech recognition packages require time-consuming training that is not optimal for systems that interact with multiple users. The various modules that enable AutoTutor to interact with the learner will be described in subsequent sections of the paper. For now, however, it is important to note that our initial goals for building AutoTutor have been achieved. That is, we have designed a computer tutor that participates in a conversation with the learner while simulating the dialog moves of normal human tutors. WHY SIMULATE NORMAL HUMAN TUTORS? It has been well documented that normal, untrained human tutors are effective. Effect sizes ranging between .5 and 2.3 have been reported in studies where student learning gains were measured (Bloom, 1984; Cohen, Kulik, & Kulik, 1982). For quite a while, these rather large effect sizes were somewhat puzzling. That is, normal tutors typically do not have expert domain knowledge nor do they have knowledge about sophisticated tutoring strategies. 
In order to gain a better understanding of the primary mechanisms that are responsible for student learning gains, a handful of researchers have systematically analyzed the dialogue that occurs between normal, untrained tutors and students (Graesser & Person, 1994; Graesser et al., 1995; Person & Graesser, 1999; Person et al., 1994; Person et al., 1995). Graesser, Person, and colleagues analyzed over 100 hours of tutoring interactions and identified two prominent features of human tutoring dialogs: (1) a five-step dialog frame that is unique to tutoring interactions, and (2) a set of tutor-initiated dialog moves that serve specific pedagogical functions. We believe these two features are responsible for the positive learning outcomes that occur in typical tutoring settings, and further, these features can be implemented in a tutoring system more easily than the sophisticated methods and strategies that have been advocated by other educational researchers and ITS developers. Five-step Dialog Frame The structure of human tutorial dialogs differs from learning dialogs that often occur in classrooms. Mehan (1979) and others have reported a 3-step pattern that is prevalent in classroom interactions. This pattern is often referred to as IRE, which stands for Initiation (a question or claim articulated by the teacher), Response (an answer or comment provided by the student), and Evaluation (teacher evaluates the student contribution). In tutoring, however, the dialog is managed by a 5-step dialog frame (Graesser & Person, 1994; Graesser et al., 1995). The five steps in this frame are presented below. Step 1: Tutor asks question (or presents problem). Step 2: Learner answers question (or begins to solve problem). Step 3: Tutor gives short immediate feedback on the quality of the answer (or solution). Step 4: Tutor and learner collaboratively improve the quality of the answer. Step 5: Tutor assesses learner’s understanding of the answer. This 5-step dialog frame in tutoring is a significant augmentation over the 3-step dialog frame in classrooms. We believe that the advantage of tutoring over classroom settings lies primarily in Step 4. Typically, Step 4 is a lengthy multi-turn dialog in which the tutor and student collaboratively contribute to the explanation that answers the question or solves the problem. At a macro-level, the dialog that occurs between AutoTutor and the learner conforms to Steps 1 through 4 of the 5-step frame. For example, at the beginning of each new topic, AutoTutor presents the learner with a problem or asks the learner a question (Step 1). The learner then attempts to solve the problem or answer the question (Step 2). Next, AutoTutor provides some type of short, evaluative feedback (Step 3). During Step 4, AutoTutor employs a variety of dialog moves (see next section) that encourage learner participation. Thus, ins", "title": "" }, { "docid": "2f138f030565d85e4dcd9f90585aecb0", "text": "One of the central questions in neuroscience is how particular tasks, or computations, are implemented by neural networks to generate behavior. The prevailing view has been that information processing in neural networks results primarily from the properties of synapses and the connectivity of neurons within the network, with the intrinsic excitability of single neurons playing a lesser role. As a consequence, the contribution of single neurons to computation in the brain has long been underestimated. 
Here we review recent work showing that neuronal dendrites exhibit a range of linear and nonlinear mechanisms that allow them to implement elementary computations. We discuss why these dendritic properties may be essential for the computations performed by the neuron and the network and provide theoretical and experimental examples to support this view.", "title": "" }, { "docid": "6226b650540d812b6c40939a582331ef", "text": "With an increasingly mobile society and the worldwide deployment of mobile and wireless networks, the wireless infrastructure can support many current and emerging healthcare applications. This could fulfill the vision of “Pervasive Healthcare” or healthcare to anyone, anytime, and anywhere by removing locational, time and other restraints while increasing both the coverage and the quality. In this paper, we present applications and requirements of pervasive healthcare, wireless networking solutions and several important research problems. The pervasive healthcare applications include pervasive health monitoring, intelligent emergency management system, pervasive healthcare data access, and ubiquitous mobile telemedicine. One major application in pervasive healthcare, termed comprehensive health monitoring, is presented in significant detail using wireless networking solutions of wireless LANs, ad hoc wireless networks, and cellular/GSM/3G infrastructure-oriented networks. Many interesting challenges of comprehensive wireless health monitoring, including context-awareness, reliability, and autonomous and adaptable operation are also presented along with several high-level solutions. Several interesting research problems have been identified and presented for future research.", "title": "" }, { "docid": "a0f4b7f3f9f2a5d430a3b8acead2b746", "text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. 
(Figure: example captions such as 'Fred wearing a red hat is walking in the living room' and 'Pebbles is sitting at a table in a room watching the television', processed through the Retrieve, Compose, and Fuse stages.)", "title": "" }, { "docid": "df56d2914cdfbc31dff9ecd9a3093379", "text": "In this paper, a square slot (SS) backed by a substrate integrated waveguide (SIW) cavity is presented. A simple 50 Ω microstrip line is employed to feed this cavity. The slot-matched cavity modes are then coupled to the slot and radiated efficiently. The proposed antenna features the following structural advantages: compact size, light weight, and easy low-cost fabrication. Concerning the electrical performance, it exhibits 15% impedance bandwidth for a reflection coefficient less than -10 dB, and the realized gain reaches 8.5 dB.", "title": "" }, { "docid": "9055e0d3bc4747f34b662e99efb2ff69", "text": "BACKGROUND\nOmohyoid muscle syndrome (OMS) (not omohyoid syndrome) is a rare clinical condition that has a characteristic feature of a protruding lateral neck mass during swallowing. The use of endoscopic surgery on the neck is now pretty well established for thyroid and parathyroid glands. Patients with OMS usually undergo simple surgical transection of the omohyoid muscle. The procedure leaves operative scars on the neck, and most patients worry about the cosmetic problems. We report here the first use of an endoscopic procedure instead of traditional surgery for treatment of OMS.\n\n\nMATERIALS AND METHODS\nWe present a rare case of a 26-year-old Chinese man who noted a protruding mass involving the right side of his neck during the past 10 years. OMS was diagnosed. Laparoscopic simple transection of the omohyoid muscle by an ultrasonically activated scalpel was performed.\n\n\nRESULTS\nAfter laparoscopic transection of the omohyoid muscle, the neck mass completely disappeared during swallowing, and there were no operative scars on the neck.\n\n\nCONCLUSIONS\nTo our knowledge, this is the first report of laparoscopy for treatment of OMS. We believe that the laparoscopic procedure is made acceptable for this unusual disease because of the cosmetic result.", "title": "" }, { "docid": "b6bd380108803bec62dae716d9e0a83e", "text": "With the advent of statistical modeling in sports, predicting the outcome of a game has been established as a fundamental problem. Cricket is one of the most popular team games in the world. With this article, we embark on predicting the outcome of a One Day International (ODI) cricket match using a supervised learning approach from a team composition perspective. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual player’s batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Player independent factors have also been considered in order to predict the outcome of a match. We show that the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers.", "title": "" }, { "docid": "7604fdb727d378f9a63e6c5f43772236", "text": "In this paper, we propose a novel graph kernel specifically to address a challenging problem in the field of cyber-security, namely, malware detection. 
Previous research has revealed the following: (1) Graph representations of programs are ideally suited for malware detection as they are robust against several attacks, (2) Besides capturing topological neighbourhoods (i.e., structural information) from these graphs it is important to capture the context under which the neighbourhoods are reachable to accurately detect malicious neighbourhoods. We observe that state-of-the-art graph kernels, such as Weisfeiler-Lehman kernel (WLK) capture the structural information well but fail to capture contextual information. To address this, we develop the Contextual Weisfeiler-Lehman kernel (CWLK) which is capable of capturing both these types of information. We show that for the malware detection problem, CWLK is more expressive and hence more accurate than WLK while maintaining comparable efficiency. Through our largescale experiments with more than 50,000 real-world Android apps, we demonstrate that CWLK outperforms two state-of-the-art graph kernels (including WLK) and three malware detection techniques by more than 5.27% and 4.87% F-measure, respectively, while maintaining high efficiency. This high accuracy and efficiency make CWLK suitable for large-scale real-world malware detection.", "title": "" }, { "docid": "b27d9ddc450ed71497d70ebb7f31d7a8", "text": "Cores in a chip-multiprocessor (CMP) system share multiple hardware resources in the memory subsystem. If resource sharing is unfair, some applications can be delayed significantly while others are unfairly prioritized. Previous research proposed separate fairness mechanisms in each individual resource. Such resource-based fairness mechanisms implemented independently in each resource can make contradictory decisions, leading to low fairness and loss of performance. Therefore, a coordinated mechanism that provides fairness in the entire shared memory system is desirable.\n This paper proposes a new approach that provides fairness in the entire shared memory system, thereby eliminating the need for and complexity of developing fairness mechanisms for each individual resource. Our technique, Fairness via Source Throttling (FST), estimates the unfairness in the entire shared memory system. If the estimated unfairness is above a threshold set by system software, FST throttles down cores causing unfairness by limiting the number of requests they can inject into the system and the frequency at which they do. As such, our source-based fairness control ensures fairness decisions are made in tandem in the entire memory system. FST also enforces thread priorities/weights, and enables system software to enforce different fairness objectives and fairness-performance tradeoffs in the memory system.\n Our evaluations show that FST provides the best system fairness and performance compared to four systems with no fairness control and with state-of-the-art fairness mechanisms implemented in both shared caches and memory controllers.", "title": "" }, { "docid": "9c780c4d37326ce2a5e2838481f48456", "text": "A maximum power point tracker has been previously developed for the single high performance triple junction solar cell for hybrid and electric vehicle applications. The maximum power point tracking (MPPT) control method is based on the incremental conductance (IncCond) but removes the need for current sensors. This paper presents the hardware implementation of the maximum power point tracker. 
Significant efforts have been made to reduce the size to 18 mm × 21 mm (0.71 in × 0.83 in) and the cost to close to $5 US. This allows the MPPT hardware to be integrable with a single solar cell. Precision calorimetry measurements are employed to establish the converter power loss and confirm that an efficiency of 96.2% has been achieved for the 650-mW converter with 20-kHz switching frequency. Finally, both the static and the dynamic tests are conducted to evaluate the tracking performances of the MPPT hardware. The experimental results verify a tracking efficiency higher than 95% under three different insolation levels and a power loss less than 5% of the available cell power under instantaneous step changes between three insolation levels.", "title": "" }, { "docid": "7017281605b9d7d649656d1485326138", "text": "Network Coding is a routing technique where each node may actively modify the received packets before transmitting them. While this departure from passive networks improves throughput and resilience to packet loss, it renders transmission susceptible to pollution attacks where nodes can misbehave and maliciously change the messages transmitted. Nodes cannot use standard signature schemes to authenticate the modified packets: this would require knowledge of the original sender’s signing key. Network coding signature schemes offer a cryptographic solution to this problem. Very roughly, such signatures allow signing vector spaces (or rather bases of such spaces). Furthermore, these signatures are homomorphic: given signatures on a set of vectors it is possible to create signatures for any linear combination of these vectors. Designing such schemes is a difficult task, and the few existing constructions either rely on random oracles or are rather inefficient. In this paper we introduce two new network coding signature schemes. Both of our schemes are provably secure in the standard model, rely on standard assumptions, and are in the same efficiency class as previous solutions based on random oracles.", "title": "" }, { "docid": "4cd9c7d6018920c5275c63e7bce663b9", "text": "Bullying of lesbian, gay, bisexual, and transgender (LGBT) youth is prevalent in the United States, and represents LGBT stigma when tied to sexual orientation and/or gender identity or expression. LGBT youth commonly report verbal, relational, and physical bullying, and damage to property. Bullying undermines the well-being of LGBT youth, with implications for risky health behaviors, poor mental health, and poor physical health that may last into adulthood. Pediatricians can play a vital role in preventing and identifying bullying, providing counseling to youth and their parents, and advocating for programs and policies to address LGBT bullying.", "title": "" }, { "docid": "8bb0a1b97222c065fe1e3c4738ca969d", "text": "\"Explicit concurrency should be abolished from all higher-level programming languages (i.e. everything except - perhaps - plain machine code).\" Dijkstra [1] (paraphrased). A promising class of concurrency abstractions replaces explicit concurrency mechanisms with a single linguistic mechanism that combines state and control and uses asynchronous messages for communications, e.g. active objects or actors, but that doesn't remove the hurdle of understanding non-local control transfer. 
What if the programming model enabled programmers to simply do what they do best, that is, to describe a system in terms of its modular structure and write sequential code to implement the operations of those modules and handles details of concurrency? In a recently sponsored NSF project we are developing such a model that we call capsule-oriented programming and its realization in the Panini project. This model favors modularity over explicit concurrency, encourages concurrency correctness by construction, and exploits modular structure of programs to expose implicit concurrency.", "title": "" }, { "docid": "db2ffe163bd044a2265341e2cba4b057", "text": "analyzing computer security a threat or vulnerability or countermeasure approach What to say and what to do when mostly your friends love reading? Are you the one that don't have such hobby? So, it's important for you to start having that hobby. You know, reading is not the force. We're sure that reading will lead you to join in better concept of life. Reading will be a positive activity to do every time. And do you know our friends become fans of analyzing computer security a threat or vulnerability or countermeasure approach as the best book to read? Yeah, it's neither an obligation nor order. It is the referred book that will not make you feel disappointed.", "title": "" }, { "docid": "ff59d1ec0c3eb11b3201e5708a585ca4", "text": "In this paper, we described our system for Knowledge Base Acceleration (KBA) Track at TREC 2013. The KBA Track has two tasks, CCR and SSF. Our approach consists of two major steps: selecting documents and extracting slot values. Selecting documents is to look for and save the documents that mention the entities of interest. The second step involves with generating seed patterns to extract the slot values and computing confidence score.", "title": "" }, { "docid": "f90a4bbfbe4c6ea98457639a65dd84af", "text": "People in different cultures have strikingly different construals of the self, of others, and of the interdependence of the 2. These construals can influence, and in many cases determine, the very nature of individual experience, including cognition, emotion, and motivation. Many Asian cultures have distinct conceptions of individuality that insist on the fundamental relatedness of individuals to each other. The emphasis is on attending to others, fitting in, and harmonious interdependence with them. American culture neither assumes nor values such an overt connectedness among individuals. In contrast, individuals seek to maintain their independence from others by attending to the self and by discovering and expressing their unique inner attributes. As proposed herein, these construals are even more powerful than previously imagined. Theories of the self from both psychology and anthropology are integrated to define in detail the difference between a construal of the self as independent and a construal of the self as interdependent. Each of these divergent construals should have a set of specific consequences for cognition, emotion, and motivation; these consequences are proposed and relevant empirical literature is reviewed. Focusing on differences in self-construals enables apparently inconsistent empirical findings to be reconciled, and raises questions about what have been thought to be culture-free aspects of cognition, emotion, and motivation.", "title": "" } ]
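Among the negative passages above, the maximum power point tracker entry is built around the incremental conductance (IncCond) rule: at the maximum power point dI/dV = -I/V, to the left of it dP/dV > 0, and to the right dP/dV < 0. The sketch below shows one IncCond update as a voltage-reference adjustment; the step size, the formulation, and the example numbers are illustrative assumptions, not details of the cited 650-mW hardware.

```python
def incond_update(v, i, v_prev, i_prev, v_ref, dv_ref=0.05):
    """One incremental-conductance step: return an updated voltage reference."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v_ref                  # operating point unchanged
        return v_ref + dv_ref if di > 0 else v_ref - dv_ref
    if di / dv == -i / v:
        return v_ref                      # already at the maximum power point
    # dP/dV = I + V*dI/dV, so dI/dV > -I/V means we are left of the MPP: raise V.
    return v_ref + dv_ref if di / dv > -i / v else v_ref - dv_ref

# Example call with made-up cell measurements (volts, amps):
new_ref = incond_update(v=0.52, i=1.18, v_prev=0.50, i_prev=1.21, v_ref=0.51)
```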
scidocsrr
a0af785cef5cc7e0022dca18731cd32b
Visualizing User Story Requirements at Multiple Granularity Levels via Semantic Relatedness
[ { "docid": "d9aadb86785057ae5445dc894b1ef7a7", "text": "This paper presents Circe, an environment for the analysis of natural language requirements. Circe is first presented in terms of its architecture, based on a transformational paradigm. Details are then given for the various transformation steps, including (i) a novel technique for parsing natural language requirements, and (ii) an expert system based on modular agents, embodying intensional knowledge about software systems in general. The result of all the transformations is a set of models for the requirements document, for the system described by the requirements, and for the requirements writing process. These models can be inspected, measured, and validated against a given set of criteria. Some of the features of the environment are shown by means of an example. Various stages of requirements analysis are covered, from initial sketches to pseudo-code and UML models.", "title": "" }, { "docid": "75e794b731685064820c79f4d68ed79b", "text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to implicitly indicate groups. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. We discuss results from evaluations of those techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.", "title": "" }, { "docid": "b76f6011edb583c2e0ff21cdbb35aba9", "text": "User stories are a widely adopted requirements notation in agile development. Yet, user stories are too often poorly written in practice and exhibit inherent quality defects. Triggered by this observation, we propose the Quality User Story (QUS) framework, a set of 13 quality criteria that user story writers should strive to conform to. Based on QUS, we present the Automatic Quality User Story Artisan (AQUSA) software tool. Relying on natural language processing (NLP) techniques, AQUSA detects quality defects and suggest possible remedies. We describe the architecture of AQUSA, its implementation, and we report on an evaluation that analyzes 1023 user stories obtained from 18 software companies. Our tool does not yet reach the ambitious 100 % recall that Daniel Berry and colleagues require NLP tools for RE to achieve. However, we obtain promising results and we identify some improvements that will substantially improve recall and precision.", "title": "" } ]
[ { "docid": "8851824732fff7b160c7479b41cc423f", "text": "The current generation of Massive Open Online Courses (MOOCs) attract a diverse student audience from all age groups and over 196 countries around the world. Researchers, educators, and the general public have recently become interested in how the learning experience in MOOCs differs from that in traditional courses. A major component of the learning experience is how students navigate through course content.\n This paper presents an empirical study of how students navigate through MOOCs, and is, to our knowledge, the first to investigate how navigation strategies differ by demographics such as age and country of origin. We performed data analysis on the activities of 140,546 students in four edX MOOCs and found that certificate earners skip on average 22% of the course content, that they frequently employ non-linear navigation by jumping backward to earlier lecture sequences, and that older students and those from countries with lower student-teacher ratios are more comprehensive and non-linear when navigating through the course.\n From these findings, we suggest design recommendations such as for MOOC platforms to develop more detailed forms of certification that incentivize students to deeply engage with the content rather than just doing the minimum necessary to earn a passing grade. Finally, to enable other researchers to reproduce and build upon our findings, we have made our data set and analysis scripts publicly available.", "title": "" }, { "docid": "77d11e0b66f3543fadf91d0de4c928c9", "text": "In the United States, the number of people over 65 will double between ow and 2030 to 69.4 million. Providing care for this increasing population becomes increasingly difficult as the cognitive and physical health of elders deteriorates. This survey article describes ome of the factors that contribute to the institutionalization of elders, and then presents some of the work done towards providing technological support for this vulnerable community.", "title": "" }, { "docid": "2ac908a3c7bebc52327be70eb34153c5", "text": "A simple and innovative method for designing a spiral folded printed quadrifilar helix antenna (S-FPQHA) for dual-band operations is presented. The axial length of a conventional PQHA is miniaturized of about 43% by meandering and turning the helix arms into the form of square spirals. Parametric studies are performed to explore the performance improvements. Based on the studies a dual band antenna working in L1/L5 GPS application is realized with a good gain and a good circular polarization. Measured results are presented to validate the concept.", "title": "" }, { "docid": "4eafe7f60154fa2bed78530735a08878", "text": "Although Android's permission system is intended to allow users to make informed decisions about their privacy, it is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. We present an alternate approach to providing users the knowledge needed to make informed decisions about the applications they install. First, we create a knowledge base of mappings between API calls and fine-grained privacy-related behaviors. We then use this knowledge base to produce, through static analysis, high-level behavior profiles of application behavior. We have analyzed almost 80,000 applications to date and have made the resulting behavior profiles available both through an Android application and online. 
Nearly 1500 users have used this application to date. Based on 2782 pieces of application-specific feedback, we analyze users' opinions about how applications affect their privacy and demonstrate that these profiles have had a substantial impact on their understanding of those applications. We also show the benefit of these profiles in understanding large-scale trends in how applications behave and the implications for user privacy.", "title": "" }, { "docid": "c588af91f9a0c1ae59a355ce2145c424", "text": "Negative correlation learning (NCL) aims to produce ensembles with sound generalization capability through controlling the disagreement among base learners’ outputs. Such a learning scheme is usually implemented by using feed-forward neural networks with error back-propagation algorithms (BPNNs). However, it suffers from slow convergence, the local minima problem, and model uncertainties caused by the initial weights and the setting of learning parameters. To achieve a better solution, this paper employs the random vector functional link (RVFL) networks as base components, and incorporates the NCL strategy for building neural network ensembles. The basis functions of the base models are generated randomly and the parameters of the RVFL networks can be determined by solving a linear equation system. An analytical solution is derived for these parameters, where a cost function defined for NCL and the well-known least squares method are used. To examine the merits of our proposed algorithm, a comparative study is carried out with nine benchmark datasets. Results indicate that our approach outperforms other ensembling techniques on the testing datasets in terms of both effectiveness and efficiency.", "title": "" }, { "docid": "91cf217b2c5fa968bc4e893366ec53e1", "text": "Importance\nPostpartum hypertension complicates approximately 2% of pregnancies and, similar to antepartum severe hypertension, can have devastating consequences including maternal death.\n\n\nObjective\nThis review aims to increase the knowledge and skills of women's health care providers in understanding, diagnosing, and managing hypertension in the postpartum period.\n\n\nResults\nHypertension complicating pregnancy, including postpartum, is defined as systolic blood pressure 140 mm Hg or greater and/or diastolic blood pressure 90 mm Hg or greater on 2 or more occasions at least 4 hours apart. Severe hypertension is defined as systolic blood pressure 160 mm Hg or greater and/or diastolic blood pressure 110 mm Hg or greater on 2 or more occasions repeated at a short interval (minutes). Workup for secondary causes of hypertension should be pursued, especially in patients with severe or resistant hypertension, hypokalemia, abnormal creatinine, or a strong family history of renal disease. Because severe hypertension is known to cause maternal stroke, women with severe hypertension sustained over 15 minutes during pregnancy or in the postpartum period should be treated with fast-acting antihypertensive medication. Labetalol, hydralazine, and nifedipine are all effective for acute management, although nifedipine may work the fastest. For persistent postpartum hypertension, a long-acting antihypertensive agent should be started. 
Labetalol and nifedipine are also both effective, but labetalol may achieve control at a lower dose with fewer adverse effects.\n\n\nConclusions and Relevance\nProviders must be aware of the risks associated with postpartum hypertension and educate women about the symptoms of postpartum preeclampsia. Severe acute hypertension should be treated in a timely fashion to avoid morbidity and mortality. Women with persistent postpartum hypertension should be administered a long-acting antihypertensive agent.\n\n\nTarget Audience\nObstetricians and gynecologists, family physicians.\n\n\nLearning Objectives\nAfter completing this activity, the learner should be better able to assist patients and providers in identifying postpartum hypertension; provide a framework for the evaluation of new-onset postpartum hypertension; and provide instructions for the management of acute severe and persistent postpartum hypertension.", "title": "" }, { "docid": "7a076d150ecc4382c20a6ce08f3a0699", "text": "Cyber-physical system (CPS) is a new trend in the Internet-of-Things related research works, where physical systems act as the sensors to collect real-world information and communicate them to the computation modules (i.e. cyber layer), which further analyze and notify the findings to the corresponding physical systems through a feedback loop. Contemporary researchers recommend integrating cloud technologies in the CPS cyber layer to ensure the scalability of storage, computation, and cross domain communication capabilities. Though there exist a few descriptive models of the cloud-based CPS architecture, it is important to analytically describe the key CPS properties: computation, control, and communication. In this paper, we present a digital twin architecture reference model for the cloud-based CPS, C2PS, where we analytically describe the key properties of the C2PS. The model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm. We have designed C2PS smart interaction controller using a Bayesian belief network, so that the system dynamically considers current contexts. The composition of fuzzy rule base with the Bayes network further enables the system with reconfiguration capability. We also describe analytically, how C2PS subsystem communications can generate even more complex system-of-systems. Later, we present a telematics-based prototype driving assistance application for the vehicular domain of C2PS, VCPS, to demonstrate the efficacy of the architecture reference model.", "title": "" }, { "docid": "2ae9bb2d81968eabe257a855f127194d", "text": "Machine learning is a computational process. To that end, it is inextricably tied to computational power the tangible material of chips and semiconductors that the algorithms of machine intelligence operate on. Most obviously, computational power and computing architectures shape the speed of training and inference in machine learning, and therefore influence the rate of progress in the technology. But, these relationships are more nuanced than that: hardware shapes the methods used by researchers and engineers in the design and development of machine learning models. 
Characteristics such as the power consumption of chips also define where and how machine learning can be used in the real world.", "title": "" }, { "docid": "25a94dbd1c02a6183df945d4684a0f31", "text": "The success of applying policy gradient reinforcement learning (RL) to difficult control tasks hinges crucially on the ability to determine a sensible initialization for the policy. Transfer learning methods tackle this problem by reusing knowledge gleaned from solving other related tasks. In the case of multiple task domains, these algorithms require an inter-task mapping to facilitate knowledge transfer across domains. However, there are currently no general methods to learn an inter-task mapping without requiring either background knowledge that is not typically present in RL settings, or an expensive analysis of an exponential number of inter-task mappings in the size of the state and action spaces. This paper introduces an autonomous framework that uses unsupervised manifold alignment to learn inter-task mappings and effectively transfer samples between different task domains. Empirical results on diverse dynamical systems, including an application to quadrotor control, demonstrate its effectiveness for cross-domain transfer in the context of policy gradient RL. Introduction Policy gradient reinforcement learning (RL) algorithms have been applied with considerable success to solve high-dimensional control problems, such as those arising in robotic control and coordination (Peters & Schaal 2008). These algorithms use gradient ascent to tune the parameters of a policy to maximize its expected performance. Unfortunately, this gradient ascent procedure is prone to becoming trapped in local maxima, and thus it has been widely recognized that initializing the policy in a sensible manner is crucial for achieving optimal performance. For instance, one typical strategy is to initialize the policy using human demonstrations (Peters & Schaal 2006), which may be infeasible when the task cannot be easily solved by a human. This paper explores a different approach: instead of initializing the policy at random (i.e., tabula rasa) or via human demonstrations, we instead use transfer learning (TL) to initialize the policy for a new target domain based on knowledge from one or more source tasks. In RL transfer, the source and target tasks may differ in their formulations (Taylor & Stone 2009). In particular, when the source and target tasks have different state and/or action spaces, an inter-task mapping (Taylor et al. 2007a) that describes the relationship between the two tasks is typically needed. This paper introduces a framework for autonomously learning an inter-task mapping for cross-domain transfer in policy gradient RL. First, we learn an inter-state mapping (i.e., a mapping between states in two tasks) using unsupervised manifold alignment. Manifold alignment provides a powerful and general framework that can discover a shared latent representation to capture intrinsic relations between different tasks, irrespective of their dimensionality. The alignment also yields an implicit inter-action mapping that is generated by mapping tracking states from the source to the target. Given the mapping between task domains, source task trajectories are then used to initialize a policy in the target task, significantly improving the speed of subsequent learning over an uninformed initialization. 
This paper provides the following contributions. First, we introduce a novel unsupervised method for learning interstate mappings using manifold alignment. Second, we show that the discovered subspace can be used to initialize the target policy. Third, our empirical validation conducted on four dissimilar and dynamically chaotic task domains (e.g., controlling a three-link cart-pole and a quadrotor aerial vehicle) shows that our approach can a) automatically learn an inter-state mapping across MDPs from the same domain, b) automatically learn an inter-state mapping across MDPs from very different domains, and c) transfer informative initial policies to achieve higher initial performance and reduce the time needed for convergence to near-optimal behavior.", "title": "" }, { "docid": "3cfdf87f53d4340287fa92194afe355e", "text": "With the rise of e-commerce, people are accustomed to writing their reviews after receiving the goods. These comments are so important that a bad review can have a direct impact on others buying. Besides, the abundant information within user reviews is very useful for extracting user preferences and item properties. In this paper, we investigate the approach to effectively utilize review information for recommender systems. The proposed model is named LSTM-Topic matrix factorization (LTMF) which integrates both LSTM and Topic Modeling for review understanding. In the experiments on popular review dataset Amazon , our LTMF model outperforms previous proposed HFT model and ConvMF model in rating prediction. Furthermore, LTMF shows the better ability on making topic clustering than traditional topic model based method, which implies integrating the information from deep learning and topic modeling is a meaningful approach to make a better understanding of reviews.", "title": "" }, { "docid": "356144b7bd59e7609ec426965b73ce37", "text": "Sentiment analysis of citations in scientific papers and articles is a new and interesting problem due to the many linguistic differences between scientific texts and other genres. In this paper, we focus on the problem of automatic identification of positive and negative sentiment polarity in citations to scientific papers. Using a newly constructed annotated citation sentiment corpus, we explore the effectiveness of existing and novel features, including n-grams, specialised science-specific lexical features, dependency relations, sentence splitting and negation features. Our results show that 3-grams and dependencies perform best in this task; they outperform the sentence splitting, science lexicon and negation based features.", "title": "" }, { "docid": "0713b8668b5faf037b4553517151f9ab", "text": "Deep learning is currently an extremely active research area in machine learning and pattern recognition society. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. 
In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends.", "title": "" }, { "docid": "ad1dde10286d4c43f4783fc727e5e820", "text": "A fast method of handwritten word recognition suitable for real time applications is presented in this paper. Preprocessing, segmentation and feature extraction are implemented using a chain code representation of the word contour. Dynamic matching between characters of a lexicon entry and segment(s) of the input word image is used to rank the lexicon entries in order of best match. Variable duration for each character is defined and used during the matching. Experimental results prove that our approach using the variable duration outperforms the method using fixed duration in terms of both accuracy and speed. Speed of the entire recognition process is about 200 msec on a single SPARC-10 platform, and a recognition accuracy of 96.8 percent is achieved for a lexicon size of 10, on a database of postal words captured at 212 dpi.", "title": "" }, { "docid": "be35c342291d4805d2a5333e31ee26d6", "text": "• We study efficient exploration in reinforcement learning. • Most provably-efficient learning algorithms introduce optimism about poorly understood states and actions. • Motivated by potential advantages relative to optimistic algorithms, we study an alternative approach: posterior sampling for reinforcement learning (PSRL). • This is the extension of the Thompson sampling algorithm for multi-armed bandit problems to reinforcement learning. • We establish the first regret bounds for this algorithm. Conceptually simple, separates algorithm from analysis: • PSRL selects policies according to the probability they are optimal without need for explicit construction of confidence sets. • UCRL2 bounds error in each s, a separately, which allows for worst-case mis-estimation to occur simultaneously in every s, a. • We believe this will make PSRL more statistically efficient.", "title": "" }, { "docid": "0fcd4fcc743010415db27cc8201f8416", "text": " A model is presented that allows prediction of the probability for the formation of appositions between the axons and dendrites of any two neurons based only on their morphological statistics and relative separation. Statistics of axonal and dendritic morphologies of single neurons are obtained from 3D reconstructions of biocytin-filled cells, and a statistical representation of the same cell type is obtained by averaging across neurons according to the model. A simple mathematical formulation is applied to the axonal and dendritic statistical representations to yield the probability for close appositions. The model is validated by a mathematical proof and by comparison of predicted appositions made by layer 5 pyramidal neurons in the rat somatosensory cortex with real anatomical data. The model could be useful for studying microcircuit connectivity and for designing artificial neural networks.", "title": "" }, { "docid": "c9ee0f9d3a8fb12eadfe177b8552eab8", "text": "In rock climbing, discussing climbing techniques with others to master a specific route and getting practical advice from more experienced climbers is an inherent part of the culture and tradition of the sport. Spatial information, such as the position of holds, as well as learning complex body postures plays a major role in this process.
A typical problem that occurs during advising is an alignment effect when trying to picture orientation-specific knowledge, e.g. explaining how to perform a certain self-climbed move to others. We propose betaCube, a self-calibrating camera-projection unit that features 3D tracking and distortion-free projection. The system enables a life-sized video replay and climbing route creation using augmented reality. We contribute an interface for automatic setup of mobile distortion-free projection, blob detection for climbing holds, as well as an automatic method for extracting planar trackables from artificial climbing walls.", "title": "" }, { "docid": "e33a0f367410c543eace7158f0f4f0c9", "text": "This paper presents novel software techniques for binocular eye tracking within Virtual Reality and discusses their application to aircraft inspection training. The aesthetic appearance of the environment is driven by standard graphical techniques augmented by realistic texture maps of the physical environment. The user's gaze direction, as well as head position and orientation, are tracked to allow recording of the user's fixations within the environment. Methods are given for (1) integration of the eye tracker into a Virtual Reality framework, (2) stereo calculation of the user's 3D gaze vector, (3) a new 3D calibration technique developed to estimate the user's inter-pupillary distance post-facto, and (4) a new technique for eye movement analysis in 3-space. The 3D eye movement analysis technique is an improvement over traditional 2D approaches since it takes into account the 6 degrees of freedom of head movements and is resolution independent. Results indicate that although the current signal analysis approach is somewhat noisy and tends to underestimate the identified number of fixations, recorded eye movements provide valuable human factors process measures complementing performance statistics used to gauge training effectiveness.", "title": "" }, { "docid": "7cd8dee294d751ec6c703d628e0db988", "text": "A major component of secondary education is learning to write effectively, a skill which is bolstered by repeated practice with formative guidance. However, providing focused feedback to every student on multiple drafts of each essay throughout the school year is a challenge for even the most dedicated of teachers. This paper first establishes a new ordinal essay scoring model and its state of the art performance compared to recent results in the Automated Essay Scoring field. Extending this model, we describe a method for using prediction on realistic essay variants to give rubric-specific formative feedback to writers. This method is used in Revision Assistant, a deployed data-driven educational product that provides immediate, rubric-specific, sentence-level feedback to students to supplement teacher guidance. We present initial evaluations of this feedback generation, both offline and in deployment.", "title": "" }, { "docid": "23989e6276ad8e60b0a451e3e9d5fe50", "text": "The significant benefits associated with microgrids have led to vast efforts to expand their penetration in electric power systems. Although their deployment is rapidly growing, there are still many challenges to efficiently design, control, and operate microgrids when connected to the grid, and also when in islanded mode, where extensive research activities are underway to tackle these issues. It is necessary to have an across-the-board view of the microgrid integration in power systems. 
This paper presents a review of issues concerning microgrids and provides an account of research in areas related to microgrids, including distributed generation, microgrid value propositions, applications of power electronics, economic issues, microgrid operation and control, microgrid clusters, and protection and communications issues.", "title": "" }, { "docid": "125a65c489bbb8541577e65015a33fe9", "text": "Users of the TIMESAT program are welcome to contact the authors in order to receive the most updated version of the program. The authors are also happy to answer questions on optimal parameter settings.", "title": "" } ]
scidocsrr
4640211701dd9e1c4bd980c17d726d1f
Design of patch array antennas for future 5G applications
[ { "docid": "e541be7c81576fdef564fd7eba5d67dd", "text": "As the cost of massively broadband® semiconductors continue to be driven down at millimeter wave (mm-wave) frequencies, there is great potential to use LMDS spectrum (in the 28-38 GHz bands) and the 60 GHz band for cellular/mobile and peer-to-peer wireless networks. This work presents urban cellular and peer-to-peer RF wideband channel measurements using a broadband sliding correlator channel sounder and steerable antennas at carrier frequencies of 38 GHz and 60 GHz, and presents measurements showing the propagation time delay spread and path loss as a function of separation distance and antenna pointing angles for many types of real-world environments. The data presented here show that at 38 GHz, unobstructed Line of Site (LOS) channels obey free space propagation path loss while non-LOS (NLOS) channels have large multipath delay spreads and can exploit many different pointing angles to provide propagation links. At 60 GHz, there is notably more path loss, smaller delay spreads, and fewer unique antenna angles for creating a link. For both 38 GHz and 60 GHz, we demonstrate empirical relationships between the RMS delay spread and antenna pointing angles, and observe that excess path loss (above free space) has an inverse relationship with transmitter-to-receiver separation distance.", "title": "" }, { "docid": "136fadcc21143fd356b48789de5fb2b0", "text": "Cost-effective and scalable wireless backhaul solutions are essential for realizing the 5G vision of providing gigabits per second anywhere. Not only is wireless backhaul essential to support network densification based on small cell deployments, but also for supporting very low latency inter-BS communication to deal with intercell interference. Multiplexing backhaul and access on the same frequency band (in-band wireless backhaul) has obvious cost benefits from the hardware and frequency reuse perspective, but poses significant technology challenges. We consider an in-band solution to meet the backhaul and inter-BS coordination challenges that accompany network densification. Here, we present an analysis to persuade the readers of the feasibility of in-band wireless backhaul, discuss realistic deployment and system assumptions, and present a scheduling scheme for inter- BS communications that can be used as a baseline for further improvement. We show that an inband wireless backhaul for data backhauling and inter-BS coordination is feasible without significantly hurting the cell access capacities.", "title": "" }, { "docid": "c8a27aecd6f356bfdaeb7c33558843df", "text": "Wireless communications today enables us to connect devices and people for an unprecedented exchange of multimedia and data content. The data rates of wireless communications continue to increase, mainly driven by innovation in electronics. Once the latency of communication systems becomes low enough to enable a round-trip delay from terminals through the network back to terminals of approximately 1 ms, an overlooked breakthrough?human tactile to visual feedback control?will change how humans communicate around the world. Using these controls, wireless communications can be the platform for enabling the control and direction of real and virtual objects in many situations of our life. Almost no area of the economy will be left untouched, as this new technology will change health care, mobility, education, manufacturing, smart grids, and much more. 
The Tactile Internet will become a driver for economic growth and innovation and will help bring a new level of sophistication to societies.", "title": "" } ]
[ { "docid": "5f17fc08df06a614c981a979ce9c36e1", "text": "Performing smart computations in a context of cloud computing and big data is highly appreciated today. It allows customers to fully benefit from cloud computing capacities (such as processing or storage) without losing confidentiality of sensitive data. Fully homomorphic encryption (FHE) is a smart category of encryption schemes that enables working with the data in its encrypted form. It permits us to preserve confidentiality of our sensible data and to benefit from cloud computing capabilities. While FHE is combined with verifiable computation, it offers efficient procedures for outsourcing computations over encrypted data to a remote, but non-trusted, cloud server. The resulting scheme is called Verifiable Fully Homomorphic Encryption (VFHE). Currently, it has been demonstrated by many existing schemes that the theory is feasible but the efficiency needs to be dramatically improved in order to make it usable for real applications. One subtle difficulty is how to efficiently handle the noise. This paper aims to introduce an efficient and symmetric verifiable FHE based on a new mathematic structure that is noise free. In our encryption scheme, the noise is constant and does not depend on homomorphic evaluation of ciphertexts. The homomorphy of our scheme is obtained from simple matrix operations (addition and multiplication). The running time of the multiplication operation of our encryption scheme in a cloud environment has an order of a few milliseconds.", "title": "" }, { "docid": "1421fb35904ce187fb7f98faab8f5fcc", "text": "Although the lung is the most common site of extrahepatic metastases from hepatocellular carcinoma (HCC), the optimal treatment for such metastases has’nt been established. External beam radiotherapy (EBRT) is becoming a useful local control therapy for lung cancer. To evaluated the efficacy of EBRT treatment for such metastases, we retrospectively studied 13 patients (11 men and 2 women; mean age, 52.6 years) with symptomatic pulmonary metastases from HCC who had been treated with EBRT in our institution. The palliative radiation dose delivered to the lung lesions ranged from 47 to 60 Gy (median 50) in conventional fractions, while the intrahepatic lesions were treated with surgery or transarterial chemoembolization, and/or EBRT. Follow-up period from radiotherapy ranged from 3.7 to 49.1 months (median, 16.7). Among the 13 patients, 23 out of a total of 31 pulmonary metastatic lesions received EBRT. In 12/13(92.3%) patients, significant symptoms were completely or partially relieved. An objective response was observed in 10/13(76.9%) of the subjects by computed tomography imaging. The median progression-free survival for all patients was 13.4 months. The 2-year survival rate from pulmonary metastasis was 70.7%. Adverse effects were mild and consisted of bone marrow suppression in three patients and pleural effusion in one patient (all CTCAE Grade II). In conclusion, EBRT with ≤60 Gy appears to be a good palliative therapy with reasonable safety for patients with pulmonary metastases from HCC. However, large-scale randomized clinical trials will be necessary to confirm the therapeutic role of this method.", "title": "" }, { "docid": "5238ae08b15854af54274e1c2b118d54", "text": "One-dimensional fractional anomalous sub-diffusion equations on an unbounded domain are considered in our work. 
Beginning with the derivation of the exact artificial boundary conditions, the original problem on an unbounded domain is converted into mainly solving an initial-boundary value problem on a finite computational domain. The main contribution of our work, as compared with the previous work, lies in the reduction of fractional differential equations on an unbounded domain by using artificial boundary conditions and construction of the corresponding finite difference scheme with the help of method of order reduction. The difficulty is the treatment of Neumann condition on the artificial boundary, which involves the time-fractional derivative operator. The stability and convergence of the scheme are proven using the discrete energy method. Two numerical examples clarify the effectiveness and accuracy of the proposed method. 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "682ac189fe3fdcb602e1a361f957220a", "text": "Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems. While numerous technologies have been developed for supporting event-based interactions over local-area networks, these technologies do not scale well to wide-area networks such as the Internet. Wide-area networks pose new challenges that have to be attacked with solutions that specifically address issues of scalability. This paper presents Siena, a scalable event notification service that is based on a distributed architecture of event servers. We first present a formally defined interface that is based on an extension to the publish/subscribe protocol. We then describe and compare several different server topologies and routing algorithms. We conclude by briefly discussing related work, our experience with an initial implementation of Siena, and a framework for evaluating the scalability of event notification services such as Siena.", "title": "" }, { "docid": "2657e5090896cc7dc01f3b66d2d97a94", "text": "In this article, we review gas sensor application of one-dimensional (1D) metal-oxide nanostructures with major emphases on the types of device structure and issues for realizing practical sensors. One of the most important steps in fabricating 1D-nanostructure devices is manipulation and making electrical contacts of the nanostructures. Gas sensors based on individual 1D nanostructure, which were usually fabricated using electron-beam lithography, have been a platform technology for fundamental research. Recently, gas sensors with practical applicability were proposed, which were fabricated with an array of 1D nanostructures using scalable micro-fabrication tools. In the second part of the paper, some critical issues are pointed out including long-term stability, gas selectivity, and room-temperature operation of 1D-nanostructure-based metal-oxide gas sensors.", "title": "" }, { "docid": "e2009f56982f709671dcfe43048a8919", "text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. 
This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.", "title": "" }, { "docid": "98a65cca7217dfa720dd4ed2972c3bdd", "text": "Intramuscular fat percentage (IMF%) has been shown to have a positive influence on the eating quality of red meat. Selection of Australian lambs for increased lean tissue and reduced carcass fatness using Australian Sheep Breeding Values has been shown to decrease IMF% of the Muscularis longissimus lumborum. The impact this selection has on the IMF% of other muscle depots is unknown. This study examined IMF% in five different muscles from 400 lambs (M. longissimus lumborum, Muscularis semimembranosus, Muscularis semitendinosus, Muscularis supraspinatus, Muscularis infraspinatus). The sires of these lambs had a broad range in carcass breeding values for post-weaning weight, eye muscle depth and fat depth over the 12th rib (c-site fat depth). Results showed IMF% to be highest in the M. supraspinatus (4.87 ± 0.1, P<0.01) and lowest in the M. semimembranosus (3.58 ± 0.1, P<0.01). Hot carcass weight was positively associated with IMF% of all muscles. Selection for decreasing c-site fat depth reduced IMF% in the M. longissimus lumborum, M. semimembranosus and M. semitendinosus. Higher breeding values for post-weaning weight and eye muscle depth increased and decreased IMF%, respectively, but only in the lambs born as multiples and raised as singles. For each per cent increase in lean meat yield percentage (LMY%), there was a reduction in IMF% of 0.16 in all five muscles examined. Given the drive within the lamb industry to improve LMY%, our results indicate the importance of continued monitoring of IMF% throughout the different carcass regions, given its importance for eating quality.", "title": "" }, { "docid": "375ff8dcd4e29eef317ee0838820c944", "text": "Due to continuous concerns about environmental pollution and a possible energy shortage, renewable energy systems, based mainly on wind power, solar energy, small hydro-electric power, etc have been implemented. Wind energy seems certain to play a major part in the world's energy future. In spite of sudden wind speed variations, wind farm generators should always be capable of extracting the maximum possible mechanical power from the wind and turning it into electrical power. Nowadays, most of the installed wind turbines are based on doubly-fed induction generators (DFIGs), wound rotor synchronous generators (WRSG) and permanent magnet synchronous generators (PMSGs). The DFIG equipped wind turbine has several advantages over others. One of which, the power converter in such wind turbines only deals with rotor power, hence the converter rating can run at reduced power rating. 
However DFIG has the famous disadvantage of the presence of slip rings which leads to increased maintenance costs and reduced life-time. Hence, brushless doubly fed induction machines (BDFIMs) can be considered as a viable alternative. In this paper, the brushless doubly fed twin stator induction generator (BDFTSIG) is modeled in details. A wind energy conversion system (WECS) utilizing a proposed indirect vector controlled BDFTSIG is presented. The proposed controller performance is investigated under various loading conditions showing enhanced transient and minimal steady state oscillations in addition to complete active/reactive power decoupling.", "title": "" }, { "docid": "e3461568f90b10dcbe05f1228b4a8614", "text": "A 2.4 GHz band high-efficiency RF rectifier and high sensitive dc voltage sensing circuit is implemented. A passive RF to DC rectifier of multiplier voltage type has no current consumption. This rectifier is using native threshold voltage diode-connected NMOS transistors to avoid the power loss due to the threshold voltage. It consumes only 900nA with 1.5V supply voltage adopting ultra low power DC sensing circuit using subthreshold current reference. These block incorporates a digital demodulation logic blocks. It can recognize OOK digital information and existence of RF input signal above sensitivity level or not. A low power RF rectifier and DC sensing circuit was fabricated in 0.18um CMOS technology with native threshold voltage NMOS; This RF wake up receiver has -28dBm sensitivity at 2.4 GHz band.", "title": "" }, { "docid": "3a7657130cb165682cc2e688a7e7195b", "text": "The functional simulator Simics provides a co-simulation integration path with a SystemC simulation environment to create Virtual Platforms. With increasing complexity of the SystemC models, this platform suffers from performance degradation due to the single threaded nature of the integrated Virtual Platform. In this paper, we present a multi-threaded Simics SystemC platform solution that significantly improves performance over the existing single threaded solution. The two schedulers run independently, only communicating in a thread safe manner through a message interface. Simics based logging and checkpointing are preserved within SystemC and tied to the corresponding Simics' APIs for a seamless experience. The solution also scales to multiple SystemC models within the platform, each running its own thread with an instantiation of the SystemC kernel. A second multi-cell solution is proposed providing comparable performance with the multi-thread solution, but reducing the burden of integration on the SystemC model. Empirical data is presented showing performance gains over the legacy single threaded solution.", "title": "" }, { "docid": "fb048df280c08a4d80eb18bafb36e6c7", "text": "There are very few reported cases of traumatic amputation of the male genitalia due to animal bite. The management involves thorough washout of the wounds, debridement, antibiotic prophylaxis, tetanus and rabies immunization followed by immediate reconstruction or primary wound closure with delayed reconstruction, when immediate reconstruction is not feasible. When immediate reconstruction is not feasible, long-term good functional and cosmetic results are still possible in the majority of cases by performing total phallic reconstruction. 
In particular, it is now possible to fashion a cosmetically acceptable sensate phallus with incorporated neourethra, to allow the patient to void while standing and to ejaculate, and with enough bulk to allow the insertion of a penile prosthesis to guarantee the rigidity necessary to engage in penetrative sexual intercourse.", "title": "" }, { "docid": "c47fde74be75b5e909d7657bb64bf23d", "text": "As the primary stakeholder for the Enterprise Architecture, the Chief Information Officer (CIO) is responsible for the evolution of the enterprise IT system. An important part of the CIO role is therefore to make decisions about strategic and complex IT matters. This paper presents a cost effective and scenariobased approach for providing the CIO with an accurate basis for decision making. Scenarios are analyzed and compared against each other by using a number of problem-specific easily measured system properties identified in literature. In order to test the usefulness of the approach, a case study has been carried out. A CIO needed guidance on how to assign functionality and data within four overlapping systems. The results are quantifiable and can be presented graphically, thus providing a cost-efficient and easily understood basis for decision making. The study shows that the scenario-based approach can make complex Enterprise Architecture decisions understandable for CIOs and other business-orientated stakeholders", "title": "" }, { "docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94", "text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.", "title": "" }, { "docid": "55285f99e1783bcba47ab41e56171026", "text": "Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. 
To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.", "title": "" }, { "docid": "06a1d90991c5a9039c6758a66205e446", "text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.", "title": "" }, { "docid": "b876e62db8a45ab17d3a9d217e223eb7", "text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.", "title": "" }, { "docid": "412e10ae26c0abcb37379c6b37ea022a", "text": "This paper presents the Gavagai Living Lexicon, which is an online distributional semantic model currently available in 20 different languages. We describe the underlying distributional semantic model, and how we have solved some of the challenges in applying such a model to large amounts of streaming data. We also describe the architecture of our implementation, and discuss how we deal with continuous quality assurance of the lexicon.", "title": "" }, { "docid": "5c716fbdc209d5d9f703af1e88f0d088", "text": "Protecting visual secrets is an important problem due to the prevalence of cameras that continuously monitor our surroundings. Any viable solution to this problem should also minimize the impact on the utility of applications that use images. In this work, we build on the existing work of adversarial learning to design a perturbation mechanism that jointly optimizes privacy and utility objectives. We provide a feasibility study of the proposed mechanism and present ideas on developing a privacy framework based on the adversarial perturbation mechanism.", "title": "" }, { "docid": "080c1666b7324bef25347496db11fb28", "text": "As the technical skills and costs associated with the deployment of phishing attacks decrease, we are witnessing an unprecedented level of scams that push the need for better methods to proactively detect phishing threats. 
In this work, we explored the use of URLs as input for machine learning models applied for phishing site prediction. In this way, we compared a feature-engineering approach followed by a random forest classifier against a novel method based on recurrent neural networks. We determined that the recurrent neural network approach provides an accuracy rate of 98.7% even without the need of manual feature creation, beating by 5% the random forest method. This means it is a scalable and fast-acting proactive detection system that does not require full content analysis.", "title": "" }, { "docid": "f281b48aba953acc8778aecf35ab310d", "text": "This paper presents a new deep learning architecture for Natural Language Inference (NLI). Firstly, we introduce a new architecture where alignment pairs are compared, compressed and then propagated to upper layers for enhanced representation learning. Secondly, we adopt factorization layers for efficient and expressive compression of alignment vectors into scalar features, which are then used to augment the base word representations. The design of our approach is aimed to be conceptually simple, compact and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving competitive performance on all. A lightweight parameterization of our model also enjoys a≈ 3 times reduction in parameter size compared to the existing state-of-the-art models, e.g., ESIM and DIIN, while maintaining competitive performance. Additionally, visual analysis shows that our propagated features are highly interpretable.", "title": "" } ]
scidocsrr
6587bb0346c0a5cf7e802580b6671f89
Robust and Discriminative Self-Taught Learning
[ { "docid": "2c30b761ec425c6bd8fff97a9ce4868c", "text": "We propose a joint representation and classification framework that achieves the dual goal of finding the most discriminative sparse overcomplete encoding and optimal classifier parameters. Formulating an optimization problem that combines the objective function of the classification with the representation error of both labeled and unlabeled data, constrained by sparsity, we propose an algorithm that alternates between solving for subsets of parameters, whilst preserving the sparsity. The method is then evaluated over two important classification problems in computer vision: object categorization of natural images using the Caltech 101 database and face recognition using the Extended Yale B face database. The results show that the proposed method is competitive against other recently proposed sparse overcomplete counterparts and considerably outperforms many recently proposed face recognition techniques when the number training samples is small.", "title": "" } ]
[ { "docid": "80759a5c2e60b444ed96c9efd515cbdf", "text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.", "title": "" }, { "docid": "58f6247a0958bf0087620921c99103b1", "text": "This paper addresses an information-theoretic aspect of k-means and spectral clustering. First, we revisit the k-means clustering and show that its objective function is approximately derived from the minimum entropy principle when the Renyi's quadratic entropy is used. Then we present a maximum within-clustering association that is derived using a quadratic distance measure in the framework of minimum entropy principle, which is very similar to a class of spectral clustering algorithms that is based on the eigen-decomposition method.", "title": "" }, { "docid": "19a73e2e729fa115a89c64058eafc9ca", "text": "This paper aims to present a framework for describing Customer Knowledge Management in online purchase process using two models from literature including consumer online purchase process and ECKM. Since CKM is a recent concept and little empirical research is available, we will first present the theories from which CKM derives. In the first stage we discuss about e-commerce trend and increasing importance of customer loyalty in today’s business environment. Then some related concepts about Knowledge Management, Customer Relationship Management and CKM are presented, in order to provide the reader with a better understanding and clear picture regarding CKM. Finally, providing models representing e-CKM and online purchasing process, we propose a comprehensive procedure to manage customer data and knowledge in e-commerce.", "title": "" }, { "docid": "c8f3b235811dd64b9b1d35d596ff22f5", "text": "Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm, prototypethen-edit for response generation, that first retrieves a prototype response from a pre-defined index and then edits the prototype response according to the differences between the prototype context and current context. 
Our motivation is that the retrieved prototype provides a good start-point for generation because it is grammatical and informative, and the post-editing process further improves the relevance and coherence of the prototype. In practice, we design a context-aware editing model that is built upon an encoder-decoder framework augmented with an editing vector. We first generate an edit vector by considering lexical differences between a prototype context and current context. After that, the edit vector and the prototype response representation are fed to a decoder to generate a new response. Experiment results on a large scale dataset demonstrate that our new paradigm significantly increases the relevance, diversity and originality of generation results, compared to traditional generative models. Furthermore, our model outperforms retrieval-based methods in terms of relevance and originality.", "title": "" }, { "docid": "9d2859ee4e5968237078933e117475f8", "text": "This paper reports on an interview-based study of 18 authors of different chapters of the two-volume book \"Architecture of Open-Source Applications\". The main contributions are a synthesis of the process of authoring essay-style documents (ESDs) on software architecture, a series of observations on important factors that influence the content and presentation of architectural knowledge in this documentation form, and a set of recommendations for readers and writers of ESDs on software architecture. We analyzed the influence of three factors in particular: the evolution of a system, the community involvement in the project, and the personal characteristics of the author. This study provides the first systematic investigation of the creation of ESDs on software architecture. The observations we collected have implications for both readers and writers of ESDs, and for architecture documentation in general.", "title": "" }, { "docid": "b93446bab637abd4394338615a5ef6e9", "text": "Genetic programming is a methodology inspired by biological evolution. By using computational analogs to biological crossover and mutation, new versions of a program are generated automatically. This population of new programs is then evaluated by a user-defined fitness function to only select the programs that show an improved behavior as compared to the original program. In this case the desired behavior is to retain all original functionality and additionally fixing bugs found in the program code.", "title": "" }, { "docid": "e42a1faf3d983bac59c0bfdd79212093", "text": "Leadership matters, according to prominent leadership scholars (see also Bennis, 2007). But what is leadership? That turns out to be a challenging question to answer. Leadership is a complex and diverse topic, and trying to make sense of leadership research can be an intimidating endeavor. One comprehensive handbook of leadership (Bass, 2008), covering more than a century of scientific study, comprises more than 1,200 pages of text and more than 200 additional pages of references! There is clearly a substantial scholarly body of leadership theory and research that continues to grow each year. Given the sheer volume of leadership scholarship that is available, our purpose is not to try to review it all. That is why our focus is on the nature or essence of leadership as we and our chapter authors see it.
But to fully understand and appreciate the nature of leadership, it is essential that readers have some background knowledge of the history of leadership research, the various theoretical streams that have evolved over the years, and emerging issues that are pushing the boundaries of the leadership frontier. Further complicating our task is that more than one hundred years of leadership research have led to several paradigm shifts and a voluminous body of knowledge. On several occasions, scholars of leadership became quite frustrated by the large amount of false starts, incremental theoretical advances, and contradictory findings. As stated more than five decades ago by Warren Bennis (1959, pp. 259–260), “Of all the hazy and confounding areas in social psychology, leadership theory undoubtedly contends for Leadership: Past, Present, and Future", "title": "" }, { "docid": "ce2a19f9f3ee13978845f1ede238e5b2", "text": "Optimised allocation of system architectures is a well researched area as it can greatly reduce the developmental cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems, both in terms of software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case using the method wherein the timing characteristics of a system were evaluated, and, the method applied to simultaneously derive a system architecture, and, an optimised allocation of the system architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets precedence for future research and development, as well as future applications of the method in both industry and academia.", "title": "" }, { "docid": "1d9361cffd8240f3b691c887def8e2f5", "text": "Twenty seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC(100)) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. 
vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.", "title": "" }, { "docid": "082517b83d9a9cdce3caef62a579bf2e", "text": "To enable autonomous driving, a semantic knowledge of the environment is unavoidable. We therefore introduce a multiclass classifier to determine the classes of an object relying solely on radar data. This is a challenging problem as objects of the same category have often a diverse appearance in radar data. As classification methods a random forest classifier and a deep convolutional neural network are evaluated. To get good results despite the limited training data available, we introduce a hybrid approach using an ensemble consisting of the two classifiers. Further we show that the accuracy can be improved significantly by allowing a lower detection rate.", "title": "" }, { "docid": "137fd50e270703682b7233214c18803e", "text": "As a representative of NO-SQL database, MongoDB is widely preferred for its automatic load-balancing to some extent, which including distributing read load to secondary node to reduce the load of primary one and auto-sharding to reduce the load onspecific node through automatically split data and migrate some ofthem to other nodes. However, on one hand, this process is storage-load -- Cbased, which can't meet the demand due to the facts that some particular data are accessed much more frequently than others and the 'heat' is not constant as time going on, thus the load on a node keeps changing even if with unchanged data. On the other hand, data migration will bring out too much cost to affect performance of system. In this paper, we will focus on the mechanism of automatic load balancing of MongoDB and proposean heat-based dynamic load balancing mechanism with much less cost.", "title": "" }, { "docid": "b4fa57fec99131cdf0cb6fc4795fce43", "text": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "title": "" }, { "docid": "3500278940baaf6f510ad47463cbf5ed", "text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. 
Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.", "title": "" }, { "docid": "3b27d6bae4600236fea1e44367a58edf", "text": "We present a general framework for incorporating sequential data and arbitrary features into language modeling. The general framework consists of two parts: a hidden Markov component and a recursive neural network component. We demonstrate the effectiveness of our model by applying it to a specific application: predicting topics and sentiments in dialogues. Experiments on real data demonstrate that our method is substantially more accurate than previ-", "title": "" }, { "docid": "a6287828106cdfa0360607504016eff1", "text": "Predicting emotion categories, such as anger, joy, and anxiety, expressed by a sentence is challenging due to its inherent multi-label classification difficulty and data sparseness. In this paper, we address above two challenges by incorporating the label dependence among the emotion labels and the context dependence among the contextual instances into a factor graph model. Specifically, we recast sentence-level emotion classification as a factor graph inferring problem in which the label and context dependence are modeled as various factor functions. Empirical evaluation demonstrates the great potential and effectiveness of our proposed approach to sentencelevel emotion classification. 1", "title": "" }, { "docid": "7a6fcfbcfafa96b8e0e52f7356049f6f", "text": "This paper shows that decision trees can be used to improve the performance of case-based learning (CBL) systems. We introduce a performance task for machine learning systems called semi-exible prediction that lies between the classiication task performed by decision tree algorithms and the exible prediction task performed by conceptual clustering systems. In semi-exible prediction, learning should improve prediction of a spe-ciic set of features known a priori rather than a single known feature (as in classii-cation) or an arbitrary set of features (as in conceptual clustering). We describe one such task from natural language processing and present experiments that compare solutions to the problem using decision trees, CBL, and a hybrid approach that combines the two. In the hybrid approach, decision trees are used to specify the features to be included in k-nearest neighbor case retrieval. Results from the experiments show that the hybrid approach outperforms both the decision tree and case-based approaches as well as two case-based systems that incorporate expert knowledge into their case retrieval algorithms. Results clearly indicate that decision trees can be used to improve the performance of CBL systems and do so without reliance on potentially expensive expert knowledge.", "title": "" }, { "docid": "868c64332ae433159a45c1cfbe283341", "text": "The term \"artificial intelligence\" is a buzzword today and is heavily used to market products, services, research, conferences, and more. 
It is scientifically disputed which types of products and services do actually qualify as \"artificial intelligence\" versus simply advanced computer technologies mimicking aspects of natural intelligence.\n Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts.\n Several technological advances enable what is commonly referred to as \"artificial intelligence\". It includes connected computers and the Internet of Things (IoT), open and big data, low cost computing and storage, and many more. Yet regardless of the definition of the term artificial intelligence, technological advancements in this area provide immense potential, especially for people with disabilities.\n In this paper we explore some of these potential in the context of web accessibility. We review some existing products and services, and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward, to accelerate the uptake of artificial intelligence, to improve web accessibility.", "title": "" }, { "docid": "2fcaccc147377b4f59998d703bed5733", "text": "We present a multi-species model for the simulation of gravity driven landslides and debris flows with porous sand and water interactions. We use continuum mixture theory to describe individual phases where each species individually obeys conservation of mass and momentum and they are coupled through a momentum exchange term. Water is modeled as a weakly compressible fluid and sand is modeled with an elastoplastic law whose cohesion varies with water saturation. We use a two-grid Material Point Method to discretize the governing equations. The momentum exchange term in the mixture theory is relatively stiff and we use semi-implicit time stepping to avoid associated small time steps. Our semi-implicit treatment is explicit in plasticity and preserves symmetry of force linearizations. We develop a novel regularization of the elastic part of the sand constitutive model that better mimics plasticity during the implicit solve to prevent numerical cohesion artifacts that would otherwise have occurred. Lastly, we develop an improved return mapping for sand plasticity that prevents volume gain artifacts in the traditional Drucker-Prager model.", "title": "" }, { "docid": "bda419b065c53853f86f7fdbf0e330f2", "text": "In current e-learning studies, one of the main challenges is to keep learners motivated in performing desirable learning behaviours and achieving learning goals. Towards tackling this challenge, social e-learning contributes favourably, but it requires solutions that can reduce side effects, such as abusing social interaction tools for ‘chitchat’, and further enhance learner motivation. In this paper, we propose a set of contextual gamification strategies, which apply flow and self-determination theory for increasing intrinsic motivation in social e-learning environments. 
This paper also presents a social e-learning environment that applies these strategies, followed by a user case study, which indicates increased learners’ perceived intrinsic motivation.", "title": "" }, { "docid": "65405e7f9b510f3a15d826e9969426f2", "text": "Human concept learning is particularly impressive in two respects: the internal structure of concepts can be representationally rich, and yet the very same concepts can also be learned from just a few examples. Several decades of research have dramatically advanced our understanding of these two aspects of concepts. While the richness and speed of concept learning are most often studied in isolation, the power of human concepts may be best explained through their synthesis. This paper presents a large-scale empirical study of one-shot concept learning, suggesting that rich generative knowledge in the form of a motor program can be induced from just a single example of a novel concept. Participants were asked to draw novel handwritten characters given a reference form, and we recorded the motor data used for production. Multiple drawers of the same character not only produced visually similar drawings, but they also showed a striking correspondence in their strokes, as measured by their number, shape, order, and direction. This suggests that participants can infer a rich motorbased concept from a single example. We also show that the motor programs induced by individual subjects provide a powerful basis for one-shot classification, yielding far higher accuracy than state-of-the-art pattern recognition methods based on just the visual form.", "title": "" } ]
scidocsrr
80548003f403743e8b768531b1051350
Optimizing NoSQL DB on Flash: A Case Study of RocksDB
[ { "docid": "f10660b168700e38e24110a575b5aafa", "text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.", "title": "" } ]
[ { "docid": "b1394b4534d1a2d62767f885c180903b", "text": "OBJECTIVE\nTo determine the value of measuring fetal femur and humerus length at 11-14 weeks of gestation in screening for chromosomal defects.\n\n\nMETHODS\nFemur and humerus lengths were measured using transabdominal ultrasound in 1018 fetuses immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In the group of chromosomally normal fetuses, regression analysis was used to determine the association between long bone length and crown-rump length (CRL). Femur and humerus lengths in fetuses with trisomy 21 were compared with those of normal fetuses.\n\n\nRESULTS\nThe median gestation was 12 (range, 11-14) weeks. The karyotype was normal in 920 fetuses and abnormal in 98, including 65 cases of trisomy 21. In the chromosomally normal group the fetal femur and humerus lengths increased significantly with CRL (femur length = - 6.330 + 0.215 x CRL in mm, r = 0.874, P < 0.0001; humerus length = - 6.240 + 0.220 x CRL in mm, r = 0.871, P < 0.0001). In the Bland-Altman plot the mean difference between paired measurements of femur length was 0.21 mm (95% limits of agreement - 0.52 to 0.48 mm) and of humerus length was 0.23 mm (95% limits of agreement - 0.57 to 0.55 mm). In the trisomy 21 fetuses the median femur and humerus lengths were significantly below the appropriate normal mean for CRL by 0.4 and 0.3 mm, respectively (P = 0.002), but they were below the respective 5th centile of the normal range in only six (9.2%) and three (4.6%) of the cases, respectively.\n\n\nCONCLUSION\nAt 11-14 weeks of gestation the femur and humerus lengths in trisomy 21 fetuses are significantly reduced but the degree of deviation from normal is too small for these measurements to be useful in screening for trisomy 21.", "title": "" }, { "docid": "89e0687a467c2e026e40b6bd5633e09a", "text": "Secure two-party computation enables two parties to evaluate a function cooperatively without revealing to either party anything beyond the function’s output. The garbled-circuit technique, a generic approach to secure two-party computation for semi-honest participants, was developed by Yao in the 1980s, but has been viewed as being of limited practical significance due to its inefficiency. We demonstrate several techniques for improving the running time and memory requirements of the garbled-circuit technique, resulting in an implementation of generic secure two-party computation that is significantly faster than any previously reported while also scaling to arbitrarily large circuits. We validate our approach by demonstrating secure computation of circuits with over 109 gates at a rate of roughly 10 μs per garbled gate, and showing order-of-magnitude improvements over the best previous privacy-preserving protocols for computing Hamming distance, Levenshtein distance, Smith-Waterman genome alignment, and AES.", "title": "" }, { "docid": "d84a4c4b678329ddb3a81cc1e55150ab", "text": "This paper describes a Robot-Audition based Car Human Machine Interface (RA-CHMI). A RA-CHMI, like a car navigation system, has difficulty dealing with voice commands, since there are many noise sources in a car, including road noise, air-conditioner, music, and passengers. Microphone array processing developed in robot audition, may overcome this problem. 
Robot audition techniques, including sound source localization, Voice Activity Detection (VAD), sound source separation, and barge-in-able processing, were introduced by considering the characteristics of RA-CHMI. Automatic Speech Recognition (ASR), based on a Deep Neural Network (DNN), improved recognition performance and robustness in a noisy environment. In addition, as an integrated framework, HARK-Dialog was developed to build a multi-party and multi-modal dialog system, enabling the seamless use of cloud and local services with pluggable modular architecture. The constructed multi-party and multimodal RA-CHMI system did not require a push-to-talk button, nor did it require reducing the audio volume or air-conditioner when issuing speech commands. It could also control a four-DOF robot agent to make the system's responses more understandable. The proposed RA-CHMI was validated by evaluating essential techniques in the system, such as VAD and DNN-ASR, using real speech data recorded during driving. The entire design of the RA-CHMI system, including the system response time and the proper use of cloud/local services, are also discussed.", "title": "" }, { "docid": "fbec9e1a860b41575bbe07e3ce27c8bf", "text": "Two different antennas constructed using a new concept, the slot meander patch (SMP) design, are presented in this study. SMP antennas are designed for fourth-generation long-term evolution (4G LTE) handheld devices. These antennas are used for different target specifications: LTE-Time Division Duplex and LTE-Frequency Division Duplex (LTE TDD and LTE FDD). The first antenna is designed to operate in a wideband of 1.68-3.88 GHz to cover eight LTE TDD application frequency bands. Investigations have shown that the antenna designed with unequal meander widths has a higher efficiency compared to its equivalent antenna design with equal meander widths. The second antenna was configured as a multiband SMP antenna, which operates at three distinct frequency bands (0.5-0.75, 1.1-2.7, and 3.3-3.9 GHz), to cover eight LTE FDD application bands including the lowest and the highest bands. There is a good agreement between the measurement and simulation results for both antennas. Moreover, parametric studies have been carried out to investigate the flexible multiband antenna. Results have shown that the bandwidths can be improved through adjusting the meander widths without changing the SMP length and all other parameters.", "title": "" }, { "docid": "7dcc565c03660fbc1da90164a5cba448", "text": "Do continuous word embeddings encode any useful information for constituency parsing? We isolate three ways in which word embeddings might augment a stateof-the-art statistical parser: by connecting out-of-vocabulary words to known ones, by encouraging common behavior among related in-vocabulary words, and by directly providing features for the lexicon. We test each of these hypotheses with a targeted change to a state-of-the-art baseline. Despite small gains on extremely small supervised training sets, we find that extra information from embeddings appears to make little or no difference to a parser with adequate training data. Our results support an overall hypothesis that word embeddings import syntactic information that is ultimately redundant with distinctions learned from treebanks in other ways.", "title": "" }, { "docid": "fabc65effd31f3bb394406abfa215b3e", "text": "Statistical learning theory was introduced in the late 1960's. 
Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).", "title": "" }, { "docid": "3ea5607d04419aae36592b6dcce25304", "text": "Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions.", "title": "" }, { "docid": "e49dcbcb0bb8963d4f724513d66dd3a0", "text": "To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents’ policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. 
Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.", "title": "" }, { "docid": "c4ca4238a0b923820dcc509a6f75849b", "text": "1", "title": "" }, { "docid": "076ab7223de2d7eee7b3875bc2bb82e4", "text": "Firewalls are network devices which enforce an organization’s security policy. Since their development, various methods have been used to implement firewalls. These methods filter network traffic at one or more of the seven layers of the ISO network model, most commonly at the application, transport, and network, and data-link levels. In addition, researchers have developed some newer methods, such as protocol normalization and distributed firewalls, which have not yet been widely adopted. Firewalls involve more than the technology to implement them. Specifying a set of filtering rules, known as a policy, is typically complicated and error-prone. High-level languages have been developed to simplify the task of correctly defining a firewall’s policy. Once a policy has been specified, the firewall needs to be tested to determine if it actually implements the policy correctly. Little work exists in the area of firewall theory; however, this article summarizes what exists. Because some data must be able to pass in and out of a firewall, in order for the protected network to be useful, not all attacks can be stopped by firewalls. Some emerging technologies, such as Virtual Private Networks (VPN) and peer-to-peer networking pose new challenges for firewalls.", "title": "" }, { "docid": "ad9f3510ffaf7d0bdcf811a839401b83", "text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.", "title": "" }, { "docid": "e86ee868324e80910d57093c30c5c3f7", "text": "These notes are based on a series of lectures I gave at the Tokyo Institute of Technology from April to July 2005. They constituted a course entitled “An introduction to geometric group theory” totalling about 20 hours. The audience consisted of fourth year students, graduate students as well as several staff members. I therefore tried to present a logically coherent introduction to the subject, tailored to the background of the students, as well as including a number of diversions into more sophisticated applications of these ideas. There are many statements left as exercises. I believe that those essential to the logical developments will be fairly routine. Those related to examples or diversions may be more challenging. The notes assume a basic knowledge of group theory, and metric and topological spaces. 
We describe some of the fundamental notions of geometric group theory, such as quasi-isometries, and aim for a basic overview of hyperbolic groups. We describe group presentations from first principles. We give an outline description of fundamental groups and covering spaces, sufficient to allow us to illustrate various results with more explicit examples. We also give a crash course on hyperbolic geometry. Again the presentation is rather informal, and aimed at providing a source of examples of hyperbolic groups. This is not logically essential to most of what follows. In principle, the basic theory of hyperbolic groups can be developed with no reference to hyperbolic geometry, but interesting examples would be rather sparse. In order not to interupt the exposition, I have not given references in the main text. We give sources and background material as notes in the final section. I am very grateful for the generous support offered by the Tokyo Insititute of Technology, which allowed me to complete these notes, as well as giving me the freedom to pursue my own research interests. I am indebted to Sadayoshi Kojima for his invitation to spend six months there, and for many interesting conversations. I thank Toshiko Higashi for her constant help in making my stay a very comfortable and enjoyable one. My then PhD student Ken Shackleton accompanied me on my visit, and provided some tutorial assistance. Shigeru Mizushima and Hiroshi Ooyama helped with some matters of translatation etc.", "title": "" }, { "docid": "420659637302d82c616bf719968f2f81", "text": "PURPOSE\nTo update previously summarized estimates of diagnostic accuracy for acute cholecystitis and to obtain summary estimates for more recently introduced modalities.\n\n\nMATERIALS AND METHODS\nA systematic search was performed in MEDLINE, EMBASE, Cochrane Library, and CINAHL databases up to March 2011 to identify studies about evaluation of imaging modalities in patients who were suspected of having acute cholecystitis. Inclusion criteria were explicit criteria for a positive test result, surgery and/or follow-up as the reference standard, and sufficient data to construct a 2 × 2 table. Studies about evaluation of predominantly acalculous cholecystitis in intensive care unit patients were excluded. Bivariate random-effects modeling was used to obtain summary estimates of sensitivity and specificity.\n\n\nRESULTS\nFifty-seven studies were included, with evaluation of 5859 patients. Sensitivity of cholescintigraphy (96%; 95% confidence interval [CI]: 94%, 97%) was significantly higher than sensitivity of ultrasonography (US) (81%; 95% CI: 75%, 87%) and magnetic resonance (MR) imaging (85%; 95% CI: 66%, 95%). There were no significant differences in specificity among cholescintigraphy (90%; 95% CI: 86%, 93%), US (83%; 95% CI: 74%, 89%) and MR imaging (81%; 95% CI: 69%, 90%). Only one study about evaluation of computed tomography (CT) met the inclusion criteria; the reported sensitivity was 94% (95% CI: 73%, 99%) at a specificity of 59% (95% CI: 42%, 74%).\n\n\nCONCLUSION\nCholescintigraphy has the highest diagnostic accuracy of all imaging modalities in detection of acute cholecystitis. 
The diagnostic accuracy of US has a substantial margin of error, comparable to that of MR imaging, while CT is still underevaluated.", "title": "" }, { "docid": "7a8fb7b1383b7f7562dd319a6f43fcab", "text": "An important problem that online work marketplaces face is grouping clients into clusters, so that in each cluster clients are similar with respect to their hiring criteria. Such a separation allows the marketplace to \"learn\" more accurately the hiring criteria in each cluster and recommend the right contractor to each client, for a successful collaboration. We propose a Maximum Likelihood definition of the \"optimal\" client clustering along with an efficient Expectation-Maximization clustering algorithm that can be applied in large marketplaces. Our results on the job hirings at oDesk over a seven-month period show that our client-clustering approach yields significant gains compared to \"learning\" the same hiring criteria for all clients. In addition, we analyze the clustering results to find interesting differences between the hiring criteria in the different groups of clients.", "title": "" }, { "docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf", "text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.", "title": "" }, { "docid": "7ac1412d56f00fd2defb4220938d9346", "text": "Coingestion of protein with carbohydrate (CHO) during recovery from exercise can affect muscle glycogen synthesis, particularly if CHO intake is suboptimal. Another potential benefit of protein feeding is an increased synthesis rate of muscle proteins, as is well documented after resistance exercise. In contrast, the effect of nutrient manipulation on muscle protein kinetics after aerobic exercise remains largely unexplored. We tested the hypothesis that ingesting protein with CHO after a standardized 2-h bout of cycle exercise would increase mixed muscle fractional synthetic rate (FSR) and whole body net protein balance (WBNB) vs. trials matched for total CHO or total energy intake. We also examined whether postexercise glycogen synthesis could be enhanced by adding protein or additional CHO to a feeding protocol that provided 1.2 g CHO x kg(-1) x h(-1), which is the rate generally recommended to maximize this process. Six active men ingested drinks during the first 3 h of recovery that provided either 1.2 g CHO.kg(-1).h(-1) (L-CHO), 1.2 g CHO + 0.4 g protein x kg(-1) x h(-1) (PRO-CHO), or 1.6 g CHO x kg(-1) x h(-1) (H-CHO) in random order. Based on a primed constant infusion of l-[ring-(2)H(5)]phenylalanine, analysis of biopsies (vastus lateralis) obtained at 0 and 4 h of recovery showed that muscle FSR was higher (P < 0.05) in PRO-CHO (0.09 +/- 0.01%/h) vs. both L-CHO (0.07 +/- 0.01%/h) and H-CHO (0.06 +/- 0.01%/h). WBNB assessed using [1-(13)C]leucine was positive only during PRO-CHO, and this was mainly attributable to a reduced rate of protein breakdown. 
Glycogen synthesis rate was not different between trials. We conclude that ingesting protein with CHO during recovery from aerobic exercise increased muscle FSR and improved WBNB, compared with feeding strategies that provided CHO only and were matched for total CHO or total energy intake. However, adding protein or additional CHO to a feeding strategy that provided 1.2 g CHO x kg(-1) x h(-1) did not further enhance glycogen resynthesis during recovery.", "title": "" }, { "docid": "70331b25d31da354c14612df08fda33b", "text": "Today, Sales forecasting plays a key role for each business in this competitive environment. The forecasting of sales data in automobile industry has become a primary concern to predict the accuracy in future sales. This work addresses the problem of monthly sales forecasting in automobile industry (maruti car). The data set is based on monthly sales (past 5 year data from 2008 to 2012). Primarily, we used two forecasting methods namely Moving Average and Exponential smoothing to forecast the past data set and then we use these forecasted values as a input for ANFIS (Adaptive Neuro Fuzzy Inference System). Here, MA and ES forecasted values used as input variable for ANFIS to obtain the final accurate sales forecast. Finally we compare our model with two other forecasting models: ANN (Artificial Neural Network) and Linear Regression. Empirical results demonstrate that the ANFIS model gives better results out than other two models.", "title": "" }, { "docid": "d9edc458cee2261b78214132c2e4b811", "text": "Since its discovery, the asymmetric Fano resonance has been a characteristic feature of interacting quantum systems. The shape of this resonance is distinctively different from that of conventional symmetric resonance curves. Recently, the Fano resonance has been found in plasmonic nanoparticles, photonic crystals, and electromagnetic metamaterials. The steep dispersion of the Fano resonance profile promises applications in sensors, lasing, switching, and nonlinear and slow-light devices.", "title": "" }, { "docid": "0dfba09dc9a01e4ebca16eb5688c81aa", "text": "Machine-to-Machine (M2M) refers to technologies with various applications. In order to provide the vision and goals of M2M, an M2M ecosystem with a service platform must be established by the key players in industrial domains so as to substantially reduce development costs and improve time to market of M2M devices and services. The service platform must be supported by M2M enabling technologies and standardization. In this paper, we present a survey of existing M2M service platforms and explore the various research issues and challenges involved in enabling an M2M service platform. We first classify M2M nodes according to their characteristics and required functions, and we then highlight the features of M2M traffic. With these in mind, we discuss the necessity of M2M platforms. By comparing and analyzing the existing approaches and solutions of M2M platforms, we identify the requirements and functionalities of the ideal M2M service platform. Based on these, we propose an M2M service platform (M2SP) architecture and its functionalities, and present the M2M ecosystem with this platform. Different application scenarios are given to illustrate the interaction between the components of the proposed platform. 
In addition, we discuss the issues and challenges of enabling technologies and standardization activities, and outline future research directions for the M2M network.", "title": "" }, { "docid": "2372c664173be9aa8c2497b42703a80e", "text": "Medical devices have a great impact but rigorous production and quality norms to meet, which pushes manufacturing technology to its limits in several fields, such as electronics, optics, communications, among others. This paper briefly explores how the medical industry is absorbing many of the technological developments from other industries, and making an effort to translate them into the healthcare requirements. An example is discussed in depth: implantable neural microsystems used for brain circuits mapping and modulation. Conventionally, light sources and electrical recording points are placed on silicon neural probes for optogenetic applications. The active sites of the probe must provide enough light power to modulate connectivity between neural networks, and simultaneously ensure reliable recordings of action potentials and local field activity. These devices aim at being a flexible and scalable technology capable of acquiring knowledge about neural mechanisms. Moreover, this paper presents a fabrication method for 2-D LED-based microsystems with high aspect-ratio shafts, capable of reaching up to 20 mm deep neural structures. In addition, PDMS $\\mu $ lenses on LEDs top surface are presented for focusing and increasing light intensity on target structures.", "title": "" } ]
scidocsrr
515c9d140703ff8c9111583f92249697
Semi-supervised truth discovery
[ { "docid": "a15f80b0a0ce17ec03fa58c33c57d251", "text": "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google’s general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own “schema” of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on cooccurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links. ∗Work done while all authors were at Google, Inc. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commer cial advantage, the VLDB copyright notice and the title of the publication an d its date appear, and notice is given that copying is by permission of the Very L arge Data Base Endowment. To copy otherwise, or to republish, to post o n servers or to redistribute to lists, requires a fee and/or special pe rmission from the publisher, ACM. VLDB ’08 Auckland, New Zealand Copyright 2008 VLDB Endowment, ACM 000-0-00000-000-0/00/ 00.", "title": "" }, { "docid": "6c784fc34cf7a8e700c67235e05d8cb0", "text": "Fully automatic methods that extract lists of objects from the Web have been studied extensively. Record extraction, the first step of this object extraction process, identifies a set of Web page segments, each of which represents an individual object (e.g., a product). State-of-the-art methods suffice for simple search, but they often fail to handle more complicated or noisy Web page structures due to a key limitation -- their greedy manner of identifying a list of records through pairwise comparison (i.e., similarity match) of consecutive segments. This paper introduces a new method for record extraction that captures a list of objects in a more robust way based on a holistic analysis of a Web page. The method focuses on how a distinct tag path appears repeatedly in the DOM tree of the Web document. Instead of comparing a pair of individual segments, it compares a pair of tag path occurrence patterns (called visual signals) to estimate how likely these two tag paths represent the same list of objects. The paper introduces a similarity measure that captures how closely the visual signals appear and interleave. 
Clustering of tag paths is then performed based on this similarity measure, and sets of tag paths that form the structure of data records are extracted. Experiments show that this method achieves higher accuracy than previous methods.", "title": "" } ]
[ { "docid": "c6baff0d600c76fac0be9a71b4238990", "text": "Nature has provided rich models for computational problem solving, including optimizations based on the swarm intelligence exhibited by fireflies, bats, and ants. These models can stimulate computer scientists to think nontraditionally in creating tools to address application design challenges.", "title": "" }, { "docid": "b7b01049a4cc9cfd2dd951ee1302bfbc", "text": "This article describes the design, implementation, and results of the latest installment of the dermoscopic image analysis benchmark challenge. The goal is to support research and development of algorithms for automated diagnosis of melanoma, the most lethal skin cancer. The challenge was divided into 3 tasks: lesion segmentation, feature detection, and disease classification. Participation involved 593 registrations, 81 pre-submissions, 46 finalized submissions (including a 4-page manuscript), and approximately 50 attendees, making this the largest standardized and comparative study in this field to date. While the official challenge duration and ranking of participants has concluded, the dataset snapshots remain available for further research and development.", "title": "" }, { "docid": "e754c7c7821703ad298d591a3f7a3105", "text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. The results show that the proposed system is more scalable and efficient.", "title": "" }, { "docid": "e1e77ff5d0fc9b21003b90c49700badc", "text": "Large intermittent generations have grown the influence on the grid security, system operation, and market economics. Although wind energy may not be dispatched, the cost impacts of wind can be substantially reduced if the wind energy can be scheduled using accurate wind forecasting. In other words, the improvement of the performance of wind power forecasting tool has significant technology and economic impact on the system operation with increased wind power penetration. 
Forecasting has been a vital part of business planning in today's competitive environment, especially in areas characterized by a high concentration of wind generation and a limited capacity of network. The target of this paper is to present a critical literature review and an up-to-date bibliography on wind forecasting technologies over the world. Various forecasting aspects concerning the wind speed and power have been highlighted. These technologies based on numeric weather prediction (NWP) methods, statistical methods, methods based upon artificial neural networks (ANNs), and hybrid forecasting approaches will be discussed. Furthermore, the difference between wind speed and power forecasting, the lead time of forecasting, and the further research will also be discussed in this paper.", "title": "" }, { "docid": "1b71fe29cd2808b623cd42cf1bb71f6a", "text": "Lannea barteri (Oliv.) Engl (Anacardiaceae) is a medicinal plant used in west African countries such as Côte d’Ivoire for the treatment of various diseases (wound, rheumatic, diarrhoea). Dichloromethane and methanol extracts from the roots and stem bark of L. barteri were screened for their antibacterial, antifungal, radical scavenging and acetylcholinesterase inhibitory activities. TLC bioautography and agar overlay assay for antifungal activity were run with Cladosporium cucumerinum, Fusarium oxysporum f. sp. vasinfectum, Fusarium oxysporum f. sp. lycopersici and Candida albicans respectively. Also extracts were tested on bacteria (Staphylococcus aureus, Staphylococcus epidermis, Enterococcus faecalis, Proteus mirabilis, Pseudomonas aeruginosa and Escherichia coli), some of which were multidrug resistant bacteria. DPPH and Acetylcholinesterase solutions sprayed on TLC plates were used for radical scavengers and acetylcholinesterase inhibitors. L. barteri gave high positive responses in all four tests, exhibiting activity against bacteria, fungi, free radicals and acetycholinesterase. The phytochemical screening showed that all the extracts contained at least trace amount of steroids, terpenoïds, saponins, quinones, tannins and flavonoïds. This study which is the first report on the biological activities and phytochemicals of Lannea barteri, supports its traditional uses in the treatment of infectious and non infectious diseases.", "title": "" }, { "docid": "76a97740682bbe00f7919ff5a396fb9c", "text": "Graph Isomorphism has been proved as most crucial and very difficult process in Pattern Matching. The Graph Isomorphism problem is to check if two graphs are similar or not based on different properties like degree, vertex, edges etc. Two graphs are Isomorphic if they satisfy above properties. A Novel Approach is proposed for Graph Isomorphism Detection Problem (GIDP) based on two different methods. First method is to match an Input Graph with a Model Graph and second method is to match an input graph to the set of Model Graphs (Database of Model Graphs). This Novel Approach is used to solve Isomorphic Problems in an efficient way. Numbers of experiments are performed on large graphs and compared its performance with well-established algorithms like Ullman and VF2.", "title": "" }, { "docid": "ab3ec842ab5296e873d624732da6ee6b", "text": "In many computer applications involving the recording and processing of personal data there is a need to allow for variations in surname spelling, caused for example by transcription errors. A number of algorithms have been developed for name matching, i.e. 
which attempt to identify name spelling variations, one of the best known of which is the Soundex algorithm. This paper describes a comparative analysis of a number of these algorithms and, based on an analysis of their comparative strengths and weaknesses, proposes a new and improved name matching algorithm, which we call the Phonex algorithm. The analysis takes advantage of the recent creation of a large list of “equivalent surnames”, published in the book Family History Knowledge UK [Park1992]. This list is based on data supplied by some thousands of individual genealogists, and can be presumed to be representative of British surnames and their variations over the last two or three centuries. It thus made it possible to perform what we would argue were objective tests of name matching, the results of which provide a solid basis for the analysis that we have performed, and for our claims for the merits of the new algorithm, though these are unlikely to hold fully for surnames emanating largely from other countries.", "title": "" }, { "docid": "15a37341901e410e2754ae46d7ba11e7", "text": "Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. Usually, these processes must be completed in a certain time window; thus, it is necessary to optimize their execution time. In this paper, we delve into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide algorithms towards the minimization of the execution cost of an ETL workflow.", "title": "" }, { "docid": "326f87b785d14181baf0711ea2f7b0af", "text": "Last years research gave some preliminary results in approaches to customer online purchase prediction. However, it still remains unclear what exact set of features of data instances should be incorporated in a model and is enough for prediction, what is the best data mining method (algorithm) to use, how stable over time could be such a model, whether a model is transferable from one online store to another. This study is focused on a heuristic approach to dealing with the problem under conditions of such theoretical and methodological diversity in order to find a quick and inexpensive first approximation to the solution or at least to find useful patterns and facts in the data.", "title": "" }, { "docid": "7b7b0c7ef54255839f9ff9d09669fe11", "text": "Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista’s news recommender system, and Docear’s research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst performing approach. 
We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithms’ user model depended on users’ age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach’s performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.", "title": "" }, { "docid": "033792460507de261ee77c96dae3a6f7", "text": "Being happy and finding life meaningful overlap, but there are important differences. A large survey revealed multiple differing predictors of happiness (controlling for meaning) and meaningfulness (controlling for happiness). Satisfying one’s needs and wants increased happiness but was largely irrelevant to meaningfulness. Happiness was largely present oriented, whereas meaningfulness involves integrating past, present, and future. For example, thinking about future and past was associated with high meaningfulness but low happiness. Happiness was linked to being a taker rather than a giver, whereas meaningfulness went with being a giver rather than a taker. Higher levels of worry, stress, and anxiety were linked to higher meaningfulness but lower happiness. Concerns with personal identity and expressing the self contributed to meaning but not happiness. We offer brief composite sketches of the unhappy but meaningful life and of the happy but meaningless life.", "title": "" }, { "docid": "ab36fe1484f2ad3c9ffc6514bf1c56c5", "text": "The design of array antenna is vital study for today’s Wireless communication system to achieve higher gain, highly directional beam and also to counteract the effect of fading while signal propagates through various corrupted environments. In this paper, the design and analysis of a 2x4 microstrip patch antenna array is introduced and a rat-race coupler is incorporated. The antenna array is designed to function in the C-band and is used to receive signals from the telemetry link of an Unmanned Air Vehicle. The transmitter in the aircraft radiates two other directional beams adjacent to the main lobe, called the left lobe (L) and the right lobe (R). The rat race coupler generates the sum and difference patterns by adding and subtracting the left lobe signals with the right lobe signals respectively to generate L+R and L-R signals. The array of square patch antenna provides frequency close to the designed operating frequency with an acceptable Directivity and Gain. The proposed antenna array is a high gain, low-cost, low weight Ground Control Station (GCS) antenna. This paper, aims at a VSWR less than 2 and bandwidth greater than 50 MHz and a high antenna gain. The simulation has been done by using Advanced Design System (A.D.S) software. 
Keywords— 2x4 microstrip patch antenna, Rat-race coupler, Inset feed, Square patch antenna", "title": "" }, { "docid": "62bcc6459e60e0c4ecdb798da4dd2e31", "text": "In this paper, we show how DEA may be used to identify component profiles as well as overall indices of performance in the context of an application to assessments of basketball players. We go beyond the usual uses of DEA to provide only overall indices of performance. Our focus is, instead, on the multiplier values for the efficiently rated players. For this purpose we use a procedure that we recently developed that guarantees a full profile of non-zero weights, or \"multipliers.\" We demonstrate how these values can be used to identify relative strengths and weaknesses in individual players. Here we also utilize the flexibility of DEA by introducing bounds on the allowable values to reflect the views of coaches, trainers and other experts on the basketball team for which evaluations are being conducted. Finally we show how these combinations can be extended by taking account of team as well as individual considerations. Published by Elsevier B.V.", "title": "" }, { "docid": "3092e0006fd965034352e04ba9933a46", "text": "In classification, it is often difficult or expensive to obtain completely accurate and reliable labels. Indeed, labels may be polluted by label noise, due to e.g. insufficient information, expert mistakes, and encoding errors. The problem is that errors in training labels that are not properly handled may deteriorate the accuracy of subsequent predictions, among other effects. Many works have been devoted to label noise, and this paper provides a concise and comprehensive introduction to this research topic. In particular, it reviews the types of label noise, their consequences and a number of state-of-the-art approaches to deal with label noise.", "title": "" }, { "docid": "aaec79a58537f180aba451ea825ed013", "text": "In my March 2006 CACM article I used the term \"computational thinking\" to articulate a vision that everyone, not just those who major in computer science, can benefit from thinking like a computer scientist [Wing06]. So, what is computational thinking? Here is a definition that Jan Cuny, Larry Snyder, and I use; it is inspired by an email exchange I had with Al Aho of Columbia University: Computational Thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent [CunySnyderWing10]. Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution. The solution can be carried out by a human or machine, or more generally, by combinations of humans and machines. When I use the term computational thinking, my interpretation of the words \"problem\" and \"solution\" is broad; in particular, I mean not just mathematically well-defined problems whose solutions are completely analyzable, e.g., a proof, an algorithm, or a program, but also real-world problems whose solutions might be in the form of large, complex software systems. Thus, computational thinking overlaps with logical thinking and systems thinking. It includes algorithmic thinking and parallel thinking, which in turn engage other kinds of thought processes, e.g., compositional reasoning, pattern matching, procedural thinking, and recursive thinking. Computational thinking is used in the design and analysis of problems and their solutions, broadly interpreted. 
The most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, generalizing from instances, and parameterization. It is used to let one object stand for many. It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types. Abstraction gives us the power to scale and deal with complexity. Recursively applying abstraction gives us the ability to build larger and larger systems, with the base case (at least for computer science) being bits (0's …", "title": "" }, { "docid": "df3d9037bff693c574a03875e7f4f0ea", "text": "We study the problem of imitation learning from demonstrations of multiple coordinating agents. One key challenge in this setting is that learning a good model of coordination can be difficult, since coordination is often implicit in the demonstrations and must be inferred as a latent variable. We propose a joint approach that simultaneously learns a latent coordination model along with the individual policies. In particular, our method integrates unsupervised structure learning with conventional imitation learning. We illustrate the power of our approach on a difficult problem of learning multiple policies for finegrained behavior modeling in team sports, where different players occupy different roles in the coordinated team strategy. We show that having a coordination model to infer the roles of players yields substantially improved imitation loss compared to conventional baselines.", "title": "" }, { "docid": "2269c84a2725605242790cf493425e0c", "text": "Tissue engineering aims to improve the function of diseased or damaged organs by creating biological substitutes. To fabricate a functional tissue, the engineered construct should mimic the physiological environment including its structural, topographical, and mechanical properties. Moreover, the construct should facilitate nutrients and oxygen diffusion as well as removal of metabolic waste during tissue regeneration. In the last decade, fiber-based techniques such as weaving, knitting, braiding, as well as electrospinning, and direct writing have emerged as promising platforms for making 3D tissue constructs that can address the abovementioned challenges. Here, we critically review the techniques used to form cell-free and cell-laden fibers and to assemble them into scaffolds. We compare their mechanical properties, morphological features and biological activity. We discuss current challenges and future opportunities of fiber-based tissue engineering (FBTE) for use in research and clinical practice.", "title": "" }, { "docid": "06c0ee8d139afd11aab1cc0883a57a68", "text": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. 
The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.", "title": "" }, { "docid": "4c290421dc42c3a5a56c7a4b373063e5", "text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tracking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigid-by-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.", "title": "" }, { "docid": "581e3373ecfbc6c012df7c166636cc50", "text": "The deep convolutional neural network (CNN) has significantly raised the performance of image classification and face recognition. Softmax is usually used as supervision, but it only penalizes the classification loss. In this paper, we propose a novel auxiliary supervision signal called contrastive-center loss, which can further enhance the discriminative power of the features, for it learns a class center for each class. The proposed contrastive-center loss simultaneously considers intra-class compactness and inter-class separability, by penalizing the contrastive values between: (1) the distances of training samples to their corresponding class centers, and (2) the sum of the distances of training samples to their non-corresponding class centers. Experiments on different datasets demonstrate the effectiveness of contrastive-center loss.", "title": "" } ]
scidocsrr
0bccb289026a661cc1f5c6d4613f96c1
Digitizing paper forms with mobile imaging technologies
[ { "docid": "84758d6a24a3b380d1df9f305683c4aa", "text": "CAM is a user interface toolkit that allows a camera-equipped mobile phone to interact with paper documents. It is designed to automate inefficient, paper-intensive information processes in the developing world. In this paper we present a usability evaluation of an application built using CAM for collecting data from microfinance groups in rural India. This application serves an important and immediate need in the microfinance industry. Our quantitative results show that the user interface is efficient, accurate and can quickly be learned by rural users. The results were competitive with an equivalent PC-based UI. Qualitatively, the interface was found easy to use by almost all users. This shows that, with a properly designed user interface, mobile phones can be a preferred platform for many rural computing applications. Voice feedback and numeric data entry were particularly well-received by users. We are conducting a pilot of this application with 400 microfinance groups in India.", "title": "" } ]
[ { "docid": "8fcc9f13f34b03d68f59409b2e3b007a", "text": "Despite defensive advances, malicious software (malware) remains an ever present cyber-security threat. Cloud environments are far from malware immune, in that: i) they innately support the execution of remotely supplied code, and ii) escaping their virtual machine (VM) confines has proven relatively easy to achieve in practice. The growing interest in clouds by industries and governments is also creating a core need to be able to formally address cloud security and privacy issues. VM introspection provides one of the core cyber-security tools for analyzing the run-time behaviors of code. Traditionally, introspection approaches have required close integration with the underlying hypervisors and substantial re-engineering when OS updates and patches are applied. Such heavy-weight introspection techniques, therefore, are too invasive to fit well within modern commercial clouds. Instead, lighter-weight introspection techniques are required that provide the same levels of within-VM observability but without the tight hypervisor and OS patch-level integration. This work introduces Maitland as a prototype proof-of-concept implementation a lighter-weight introspection tool, which exploits paravirtualization to meet these end-goals. The work assesses Maitland's performance, highlights its use to perform packer-independent malware detection, and assesses whether, with further optimizations, Maitland could provide a viable approach for introspection in commercial clouds.", "title": "" }, { "docid": "6a9e30fd08b568ef6607158cab4f82b2", "text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.", "title": "" }, { "docid": "889dd22fcead3ce546e760bda8ef4980", "text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12% error reduction in precision over a state-of-the-art weakly supervised baseline.", "title": "" }, { "docid": "bba5b47ba3c7cca54f0da2d1734ee448", "text": "On-Off Keying (OOK) and Pulse Position Modulation (PPM) are the most commonly used modulation techniques in Free Space Optical (FSO) communications. 
In this paper, the performance of an FSO system with OOK and various PPM schemes has been analysed. The log-normal turbulence model for weak atmospheric turbulence and Avalanche Photo Diode (APD) receiver are considered for the system performance evaluation. The bit-error-rate (BER) performance for various schemes have been analysed and compared graphically. It was found that the performance of Differential Amplitude Pulse Position Modulation (DAPPM) is better than that of other schemes for the same peak power.", "title": "" }, { "docid": "06e6704699652849e745df7c472fdc7b", "text": "Despite extensive research, many methods in software quality prediction still exhibit some degree of uncertainty in their results. Rather than treating this as a problem, this paper asks if this uncertainty is a resource that can simplify software quality prediction. For example, Deb’s principle of ε-dominance states that if there exists some ε value below which it is useless or impossible to distinguish results, then it is superfluous to explore anything less than ε . We say that for “large ε problems”, the results space of learning effectively contains just a few regions. If many learners are then applied to such large ε problems, they would exhibit a “many roads lead to Rome” property; i.e., many different software quality prediction methods would generate a small set of very similar results. This paper explores DART, an algorithm especially selected to succeed for large ε software quality prediction problems. DART is remarkable simple yet, on experimentation, it dramatically outperforms three sets of state-of-the-art defect prediction methods. The success of DART for defect prediction begs the questions: how many other domains in software quality predictors can also be radically simplified? This will be a fruitful direction for future work.", "title": "" }, { "docid": "e6aab5125ddb5beb83c31874cf4119d0", "text": "Contextual information and word orders are proved valuable for text classification task. To make use of local word order information, n-grams are commonly used features in several models, such as linear models. However, these models commonly suffer the data sparsity problem and are difficult to represent large size region. The discrete or distributed representations of n-grams can be regarded as region embeddings, which are representations of fixed size regions. In this paper, we propose two novel text classification models that learn task specific region embeddings without hand crafted features, hence the drawbacks of n-grams can be overcome. In our model, each word has two attributes, a commonly used word embedding, and an additional local context unit which is used to interact with the word’s local context. Both the units and word embeddings are used to generate representations of regions, and are learned as model parameters. Finally, bag of region embeddings of a document is fed to a linear classifier. Experimental results show that our proposed methods achieve state-of-the-art performance on several benchmark datasets. 
We provide visualizations and analysis illustrating that our proposed local context unit can capture the syntactic and semantic information.", "title": "" }, { "docid": "a5b71d7162abd4408e2ec821302c0431", "text": "The Army Digital Array Radar (DAR) project's goal is to demonstrate how wide-bandgap semiconductor technology, highly-integrated transceivers, and the ever-increasing capabilities of commercial digital components can be leveraged to provide new capabilities and enhanced performance in future low-cost phased array systems. A 16-element, S-band subarray has been developed with panel-integrated, plastic-packaged gallium-nitride (GaN) amplifiers, multi-channel transceiver ICs, and digitization at the element level. In addition to full digital beamforming on transmit and receive, the DAR subarray has demonstrated efficient RF power generation exceeding 25 Watts per element, in-situ, element-level calibration monitoring and self-correction capabilities, simultaneous transmit and receive operation through subarray partitioning for an indoor target tracker, and more. An overview is given of these results and capabilities.", "title": "" }, { "docid": "de6e139d0b5dc295769b5ddb9abcc4c6", "text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.", "title": "" }, { "docid": "08731e24a7ea5e8829b03d79ef801384", "text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event. The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.", "title": "" }, { "docid": "36b1972e3a1f8c8f192b80c8f49ef406", "text": "Twitter, with its rising popularity as a micro-blogging website, has inevitably attracted the attention of spammers. Spammers use myriad of techniques to evade security mechanisms and post spam messages, which are either unwelcome advertisements for the victim or lure victims in to clicking malicious URLs embedded in spam tweets. In this paper, we propose several novel features capable of distinguishing spam accounts from legitimate accounts. The features analyze the behavioral and content entropy, bait-techniques, and profile vectors characterizing spammers, which are then fed into supervised learning algorithms to generate models for our tool, CATS. Using our system on two real-world Twitter data sets, we observe a 96% detection rate with about 0.8% false positive rate beating state of the art detection approach. Our analysis reveals detection of more than 90% of spammers with less than five tweets and about half of the spammers detected with only a single tweet. Our feature computation has low latency and resource requirement making fast detection feasible. 
Additionally, we cluster the unknown spammers to identify and understand the prevalent spam campaigns on Twitter.", "title": "" }, { "docid": "d9fcfc15c1c310aef6eec96e230074d1", "text": "There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a “balanced” representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.", "title": "" }, { "docid": "26d0f9ea9e939cd09d1572965127e030", "text": "The emergence of “Fake News” and misinformation via online news and social media has spurred an interest in computational tools to combat this phenomenon. In this paper we present a new “Related Fact Checks” service, which can help a reader critically evaluate an article and make a judgment on its veracity by bringing up fact checks that are relevant to the article. We describe the core technical problems that need to be solved in building a “Related Fact Checks” service, and present results from an evaluation of an implementation.", "title": "" }, { "docid": "3d90ebf4a0c7afcca90ba884901486ea", "text": "Cuckoo search algorithm via L'evy flights by Xin-She Yang and Saush Deb [1] for optimizing a nonlinear function uses generation of random numbers with symmetric L'evy distribution obtained by Mantegna's algorithm. However, instead of using the original algorithm, they have used a simplified version to generate L'evy flights during the Cuckoo search algorithm [2]. Also, apaper by MatteoLeccardi [3] describes three algorithms to generate such random numbers and claims that McCulloch's algorithm is outperforming the other two, namely, Mantegna's algorithm and rejection algorithm. The idea in this paper is to compare and see if the Cuckoo Search algorithm shows any improvement in the performance in three cases when the simplified version algorithm, Mantegna's algorithm and McCulloch's algorithm each of them is included in Cuckoo Search algorithm to generate L'evy flights.", "title": "" }, { "docid": "af4d583cf45d13c09e59a927905a7794", "text": "Background and aims: Addiction to internet and mobile phone may be affecting all aspect of student’s life. Knowledge about prevalence and related factors of internet and mobile phone addiction is necessary for planning for prevention and treatment. This study was conducted to evaluate the prevalence of internet and mobile phone addiction among Iranian students. Methods: This cross sectional study conducted from Jun to April 2015 in Rasht Iran. 
Using a stratified sampling method, 581 high school students from two regions of Rasht in the north of Iran were recruited as the subjects for this study. Data were collected using a demographics questionnaire, the Cell phone Overuse Scale (COS), and the Internet Addiction Test (IAT). Analysis was performed using the Statistical Package for Social Sciences (SPSS), version 21. Results: Of the 581 students who participated in the present study, 53.5% were female and the rest were male. The mean age of the students was 16.28±1.01 years. The mean score of IAT was 42.03±18.22. Of the 581 students, 312 (53.7%), 218 (37.5%) and 51 (8.8%) showed normal, mild and moderate levels of internet addiction. The mean score of COS was 55.10±19.86. Of the 581 students, 27 (4.6%), 451 (77.6%) and 103 (17.7%) showed low, moderate and high levels of mobile phone addiction. Conclusion: According to the findings of the present study, rates of mobile phone and internet addiction were high among Iranian students. Health care authorities should pay more attention to these problems.", "title": "" }, { "docid": "780095276d7ac3cae1b95b7a1ceee8b3", "text": "This work presents a systematic study toward the design and first demonstration of high-performance n-type monolayer tungsten diselenide (WSe2) field effect transistors (FET) by selecting the contact metal based on understanding the physics of contact between metal and monolayer WSe2. Device measurements supported by ab initio density functional theory (DFT) calculations indicate that the d-orbitals of the contact metal play a key role in forming low resistance ohmic contacts with monolayer WSe2. On the basis of this understanding, indium (In) leads to small ohmic contact resistance with WSe2 and consequently, back-gated In-WSe2 FETs attained a record ON-current of 210 μA/μm, which is the highest value achieved in any monolayer transition-metal dichalcogenide- (TMD) based FET to date. An electron mobility of 142 cm(2)/V·s (with an ON/OFF current ratio exceeding 10(6)) is also achieved with In-WSe2 FETs at room temperature. This is the highest electron mobility reported for any back gated monolayer TMD material till date. The performance of n-type monolayer WSe2 FET was further improved by Al2O3 deposition on top of WSe2 to suppress the Coulomb scattering. Under the high-κ dielectric environment, electron mobility of Ag-WSe2 FET reached ~202 cm(2)/V·s with an ON/OFF ratio of over 10(6) and a high ON-current of 205 μA/μm. In tandem with a recent report of p-type monolayer WSe2 FET ( Fang , H . et al. Nano Lett. 2012 , 12 , ( 7 ), 3788 - 3792 ), this demonstration of a high-performance n-type monolayer WSe2 FET corroborates the superb potential of WSe2 for complementary digital logic applications.", "title": "" }, { "docid": "c581d1300bf07663fcfd8c704450db09", "text": "This research aimed at the case of customers’ default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification credible or not credible clients. Because the real probability of default is unknown, this study presented the novel ‘‘Sorting Smoothing Method” to estimate the real probability of default. 
With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. Therefore, among the six data mining techniques, artificial neural network is the only one that can accurately estimate the real probability of default. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "51abd97e099479ad1c4c508632dc1913", "text": "Internet addiction (IA) has become a serious mental health condition in many countries. To better understand the clinical implications of IA, this study tested statistically a new theoretical model illustrating underlying cognitive mechanisms contributing to development and maintenance of the disorder. The model differentiates between a generalized Internet addiction (GIA) and specific forms. This study tested the model on GIA on a population of general Internet users. The findings from 1019 users show that the hypothesized structural equation model explained 63.5% of the variance of GIA symptoms, as measured by the short version of the Internet Addiction Test. Using psychological and personality testing, the results show that a person's specific cognitions (poor coping and cognitive expectations) increased the risk for GIA. These two factors mediated the symptoms of GIA if other risk factors were present such as depression, social anxiety, low self-esteem, low self-efficacy, and high stress vulnerability to name a few areas that were measured in the study. The model shows that individuals with high coping skills and no expectancies that the Internet can be used to increase positive or reduce negative mood are less likely to engage in problematic Internet use, even when other personality or psychological vulnerabilities are present. The implications for treatment include a clear cognitive component to the development of GIA and the need to assess a patient's coping style and cognitions and improve faulty thinking to reduce symptoms and engage in recovery.", "title": "" }, { "docid": "cc5516333c3ed4773eec4dab874b31e9", "text": "Communities, whose reliance on critical cyber infrastructures is growing, are threatened by a wide range of cyber events that can adversely affect these systems and networks. The development of computer security taxonomies to classify computer and network vulnerabilities and attacks has led to a greater insight into the causes, effects, mitigation, and remediation of cyber attacks. In developing these taxonomies researchers are better able to understand and address the many different attacks that can occur. No current taxonomy, however, has been developed that takes into account the community aspects of cyber attacks or other cyber events affecting communities. We present a new taxonomy that considers the motivation, methodology, and effects of cyber events that can affect communities. We include a discussion on how our taxonomy is useful to e-government, industry, and security researchers.", "title": "" }, { "docid": "94af221c857462b51e14f527010fccde", "text": "The immunology of the hygiene hypothesis of allergy is complex and involves the loss of cellular and humoral immunoregulatory pathways as a result of the adoption of a Western lifestyle and the disappearance of chronic infectious diseases. 
The influence of diet and reduced microbiome diversity now forms the foundation of scientific thinking on how the allergy epidemic occurred, although clear mechanistic insights into the process in humans are still lacking. Here we propose that barrier epithelial cells are heavily influenced by environmental factors and by microbiome-derived danger signals and metabolites, and thus act as important rheostats for immunoregulation, particularly during early postnatal development. Preventive strategies based on this new knowledge could exploit the diversity of the microbial world and the way humans react to it, and possibly restore old symbiotic relationships that have been lost in recent times, without causing disease or requiring a return to an unhygienic life style.", "title": "" }, { "docid": "e3f3bd7c65d9669fa8be06506094672c", "text": "Spatial clustering deals with the unsupervised grouping of places into clusters and finds important applications in urban planning and marketing. Current spatial clustering models disregard information about the people who are related to the clustered places. In this paper, we show how the density-based clustering paradigm can be extended to apply on places which are visited by users of a geo-social network. Our model considers both spatial information and the social relationships between users who visit the clustered places. After formally defining the model and the distance measure it relies on, we present efficient algorithms for its implementation, based on spatial indexing. We evaluate the effectiveness of our model via a case study on real data; in addition, we design two quantitative measures, called social entropy and community score to evaluate the quality of the discovered clusters. The results show that geo-social clusters have special properties and cannot be found by applying simple spatial clustering approaches. The efficiency of our index-based implementation is also evaluated experimentally.", "title": "" } ]
scidocsrr
3df264b4fa3ccc7e7e3aafe8506ad4af
Promotional Marketing or Word-of-Mouth? Evidence from Online Restaurant Reviews
[ { "docid": "1993b540ff91922d381128e9c8592163", "text": "The use of the WWW as a venue for voicing opinions, complaints and recommendations on products and firms has been widely reported in the popular media. However little is known how consumers use these reviews and if they subsequently have any influence on evaluations and purchase intentions of products and retailers. This study examines the effect of negative reviews on retailer evaluation and patronage intention given that the consumer has already made a product/brand decision. Our results indicate that the extent of WOM search depends on the consumer’s reasons for choosing an online retailer. Further the influence of negative WOM information on perceived reliability and purchase intentions is determined largely by familiarity with the retailer and differs based on whether the retailer is a pure-Internet or clicks-and-mortar firm. Managerial implications for positioning strategies to minimize the effect of negative word-ofmouth have been discussed.", "title": "" }, { "docid": "c57cbe432fdab3f415d2c923bea905ff", "text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.", "title": "" }, { "docid": "b445de6f864c345d90162cb8b2527240", "text": "he growing popularity of online product review forums invites the development of models and metrics that allow firms to harness these new sources of information for decision support. Our work contributes in this direction by proposing a novel family of diffusion models that capture some of the unique aspects of the entertainment industry and testing their performance in the context of very early postrelease motion picture revenue forecasting. We show that the addition of online product review metrics to a benchmark model that includes prerelease marketing, theater availability and professional critic reviews substantially increases its forecasting accuracy; the forecasting accuracy of our best model outperforms that of several previously published models. In addition to its contributions in diffusion theory, our study reconciles some inconsistencies among previous studies with respect to what online review metrics are statistically significant in forecasting entertainment good sales. CHRYSANTHOS DELLAROCAS, XIAOQUAN (MICHAEL) ZHANG, AND NEVEEN F. AWAD", "title": "" } ]
[ { "docid": "b2470a98c3b278e18b8c4852858ffd98", "text": "Object recognition is an important problem with a wide range of applications. It is also a challenging problem, especially for animal categorization as the differences among breeds can be subtle. In this paper, based on statistical techniques for landmark-based shape representation, we propose to model the shape of dog breed as points on the Grassmann manifold. We consider the dog breed categorization as the classification problem on this manifold. The proposed scheme is tested on a dataset including 8,351 images of 133 different breeds. Experimental results demonstrate the advocated scheme outperforms state of the art approaches by nearly 20%.", "title": "" }, { "docid": "57fa4164381d9d9691b9ba5c506addbd", "text": "The aim of this study was to evaluate the acute effects of unilateral ankle plantar flexors static-stretching (SS) on the passive range of movement (ROM) of the stretched limb, surface electromyography (sEMG) and single-leg bounce drop jump (SBDJ) performance measures of the ipsilateral stretched and contralateral non-stretched lower limbs. Seventeen young men (24 ± 5 years) performed SBDJ before and after (stretched limb: immediately post-stretch, 10 and 20 minutes and non-stretched limb: immediately post-stretch) unilateral ankle plantar flexor SS (6 sets of 45s/15s, 70-90% point of discomfort). SBDJ performance measures included jump height, impulse, time to reach peak force, contact time as well as the sEMG integral (IEMG) and pre-activation (IEMGpre-activation) of the gastrocnemius lateralis. Ankle dorsiflexion passive ROM increased in the stretched limb after the SS (pre-test: 21 ± 4° and post-test: 26.5 ± 5°, p < 0.001). Post-stretching decreases were observed with peak force (p = 0.029), IEMG (P<0.001), and IEMGpre-activation (p = 0.015) in the stretched limb; as well as impulse (p = 0.03), and jump height (p = 0.032) in the non-stretched limb. In conclusion, SS effectively increased passive ankle ROM of the stretched limb, and transiently (less than 10 minutes) decreased muscle peak force and pre-activation. The decrease of jump height and impulse for the non-stretched limb suggests a SS-induced central nervous system inhibitory effect. Key pointsWhen considering whether or not to SS prior to athletic activities, one must consider the potential positive effects of increased ankle dorsiflexion motion with the potential deleterious effects of power and muscle activity during a simple jumping task or as part of the rehabilitation process.Since decreased jump performance measures can persist for 10 minutes in the stretched leg, the timing of SS prior to performance must be taken into consideration.Athletes, fitness enthusiasts and therapists should also keep in mind that SS one limb has generalized effects upon contralateral limbs as well.", "title": "" }, { "docid": "ef7b6c2b0254535e9dbf85a4af596080", "text": "African swine fever virus (ASFV) is a highly virulent swine pathogen that has spread across Eastern Europe since 2007 and for which there is no effective vaccine or treatment available. The dynamics of shedding and excretion is not well known for this currently circulating ASFV strain. Therefore, susceptible pigs were exposed to pigs intramuscularly infected with the Georgia 2007/1 ASFV strain to measure those dynamics through within- and between-pen transmission scenarios. 
Blood, oral, nasal and rectal fluid samples were tested for the presence of ASFV by virus titration (VT) and quantitative real-time polymerase chain reaction (qPCR). Serum was tested for the presence of ASFV-specific antibodies. Both intramuscular inoculation and contact transmission resulted in development of acute disease in all pigs although the experiments indicated that the pathogenesis of the disease might be different, depending on the route of infection. Infectious ASFV was first isolated in blood among the inoculated pigs by day 3, and then chronologically among the direct and indirect contact pigs, by day 10 and 13, respectively. Close to the onset of clinical signs, higher ASFV titres were found in blood compared with nasal and rectal fluid samples among all pigs. No infectious ASFV was isolated in oral fluid samples although ASFV genome copies were detected. Only one animal developed antibodies starting after 12 days post-inoculation. The results provide quantitative data on shedding and excretion of the Georgia 2007/1 ASFV strain among domestic pigs and suggest a limited potential of this isolate to cause persistent infection.", "title": "" }, { "docid": "ef9b5b0fbfd71c8d939bfe947c60292d", "text": "OBJECTIVE\nSome prolonged and turbulent grief reactions include symptoms that differ from the DSM-IV criteria for major depressive disorder. The authors investigated a new diagnosis that would include these symptoms.\n\n\nMETHOD\nThey developed observer-based definitions of 30 symptoms noted clinically in previous longitudinal interviews of bereaved persons and then designed a plan to investigate whether any combination of these would serve as criteria for a possible new diagnosis of complicated grief disorder. Using a structured diagnostic interview, they assessed 70 subjects whose spouses had died. Latent class model analyses and signal detection procedures were used to calibrate the data against global clinical ratings and self-report measures of grief-specific distress.\n\n\nRESULTS\nComplicated grief disorder was found to be characterized by a smaller set of the assessed symptoms. Subjects elected by an algorithm for these symptoms patterns did not significantly overlap with subjects who received a diagnosis of major depressive disorder.\n\n\nCONCLUSIONS\nA new diagnosis of complicated grief disorder may be indicated. Its criteria would include the current experience (more than a year after a loss) of intense intrusive thoughts, pangs of severe emotion, distressing yearnings, feeling excessively alone and empty, excessively avoiding tasks reminiscent of the deceased, unusual sleep disturbances, and maladaptive levels of loss of interest in personal activities.", "title": "" }, { "docid": "940df82b743d99cb3f6dff903920482f", "text": "Online publishing, social networks, and web search have dramatically lowered the costs to produce, distribute, and discover news articles. Some scholars argue that such technological changes increase exposure to diverse perspectives, while others worry they increase ideological segregation. We address the issue by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that social networks and search engines increase the mean ideological distance between individuals. However, somewhat counterintuitively, we also find these same channels increase an individual’s exposure to material from his or her less preferred side of the political spectrum. 
Finally, we show that the vast majority of online news consumption is accounted for by individuals simply visiting the home pages of their favorite, typically mainstream, news outlets, tempering the consequences—both positive and negative—of recent technological changes. We thus uncover evidence for both sides of the debate, while also finding that the magnitude of the effects are relatively modest. WORD COUNT: 5,762 words", "title": "" }, { "docid": "4bf485a218fca405a4d8655bc2a2be86", "text": "In today’s competitive business environment, companies are facing challenges in dealing with big data issues for rapid decision making for improved productivity. Many manufacturing systems are not ready to manage big data due to the lack of smart analytics tools. Germany is leading a transformation toward 4th Generation Industrial Revolution (Industry 4.0) based on Cyber-Physical System based manufacturing and service innovation. As more software and embedded intelligence are integrated in industrial products and systems, predictive technologies can further intertwine intelligent algorithms with electronics and tether-free intelligence to predict product performance degradation and autonomously manage and optimize product service needs. This article addresses the trends of industrial transformation in big data environment as well as the readiness of smart predictive informatics tools to manage big data to achieve transparency and productivity. Keywords—Industry 4.0; Cyber Physical Systems; Prognostics and Health Management; Big Data;", "title": "" }, { "docid": "03bddfeabe8f9a6e9f333659d028c038", "text": "This paper presents a methodology for the evaluation of table understanding algorithms for PDF documents. The evaluation takes into account three major tasks: table detection, table structure recognition and functional analysis. We provide a general and flexible output model for each task along with corresponding evaluation metrics and methods. We also present a methodology for collecting and ground-truthing PDF documents based on consensus-reaching principles and provide a publicly available ground-truthed dataset.", "title": "" }, { "docid": "b4e3d2f5e4bb1238cb6f4dad5c952c4c", "text": "Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure of one style, scenario effects obscure any explanation effects. 
Our results suggests there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.", "title": "" }, { "docid": "70c8caf1bdbdaf29072903e20c432854", "text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.", "title": "" }, { "docid": "3bbdf7e9ab4181d16b8f308c59da81c9", "text": "This letter presents a dexterous soft robotic hand, BCL-13, with 4 fingers and 13 independently actuated joints capable of in-hand manipulation. The iconic dexterity is enabled by a novel soft robotic finger design with three degrees of freedom (DOFs), significantly improving over existing soft actuator dexterity and realizing human-finger-like workspace. The palm is also equipped with a dedicated rotational DOF to enable opposition of fingers. Investigations on human hand model reduction, in-hand manipulation principles, as well as the fabrication procedures of the soft robotic fingers and hand were presented in detail. Dedicated experiments using the fabricated prototypes were conducted to evaluate the effectiveness of the proposed robotic anthropomorphic system via a series of workspace, grasping, and in-hand manipulation tasks. The proposed BCL-13 hand offers a promising design solution to a lightweight, dexterous, affordable, and highly anthropomorphic robotic hand design.", "title": "" }, { "docid": "40577d34e714b9b15eabcea5fd5dabdc", "text": "This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93% in dense clutter. This is a 20% improvement compared to our prior work.", "title": "" }, { "docid": "95f81c1063b9965213061238f4cca2f1", "text": "The poisoned child presents unique considerations in circumstances of exposure, clinical effects, diagnostic approach, and therapeutic interventions. The emergency provider must be aware of the pathophysiologic vulnerabilities of infants and children and substances that are especially toxic. Awareness is essential for situations in which the risk of morbidity and mortality is increased, such as child abuse by poisoning. 
Considerations in treatment include the need for attentive supportive care, pediatric implications for antidotal therapy, and extracorporeal removal methods such as hemodialysis in children. In this article, each of these issues and emerging poison hazards are discussed.", "title": "" }, { "docid": "260527c2cd3c7942ccd2d57a77d64780", "text": "Sensor networks are distributed event-based systems that differ from traditional communication networks in several ways: sensor networks have severe energy constraints, redundant low-rate data, and many-to-one flows. Datacentric mechanisms that perform in-network aggregation of data are needed in this setting for energy-efficient information flow. In this paper we model data-centric routing and compare its performance with traditional end-toend routing schemes. We examine the impact of sourcedestination placement and communication network density on the energy costs and delay associated with data aggregation. We show that data-centric routing offers significant performance gains across a wide range of operational scenarios. We also examine the complexity of optimal data aggregation, showing that although it is an NP-hard problem in general, there exist useful polynomial-time special cases.", "title": "" }, { "docid": "41b745c7958ca8576b4cd7394ad47f44", "text": "We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. We leverage the natural synchronization between vision and sound to learn an acoustic representation using two-million unlabeled videos. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene/object classification. Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels.", "title": "" }, { "docid": "5e896b2d47853088dc51323507f2f23a", "text": "A number of Learning Management Systems (LMSs) exist on the market today. A subset of a LMS is the component in which student assessment is managed. In some forms of assessment, such as open questions, the LMS is incapable of evaluating the students’ responses and therefore human intervention is necessary. In order to assess at higher levels of Bloom’s (1956) taxonomy, it is necessary to include open-style questions in which the student is given the task as well as the freedom to arrive at a response without the comfort of recall words and/or phrases. Automating the assessment process of open questions is an area of research that has been ongoing since the 1960s. Earlier work focused on statistical or probabilistic approaches based primarily on conceptual understanding. Recent gains in Natural Language Processing have resulted in a shift in the way in which free text can be evaluated. This has allowed for a more linguistic approach which focuses heavily on factual understanding. 
This study will leverage the research conducted in recent studies in the area of Natural Language Processing, Information Extraction and Information Retrieval in order to provide a fair, timely and accurate assessment of student responses to open questions based on the semantic meaning of those responses.", "title": "" }, { "docid": "6bc08fac6363e9aaadc1937a95c99795", "text": "In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite-horizon discrete-time nonlinear systems with finite approximation errors. The idea is to use an iterative ADP algorithm to obtain the iterative control law that makes the iterative performance index function reach the optimum. When the iterative control law and the iterative performance index function in each iteration cannot be accurately obtained, the convergence conditions of the iterative ADP algorithm are obtained. When convergence conditions are satisfied, it is shown that the iterative performance index functions can converge to a finite neighborhood of the greatest lower bound of all performance index functions under some mild assumptions. Neural networks are used to approximate the performance index function and compute the optimal control policy, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.", "title": "" }, { "docid": "a3e7a0cd6c0e79dee289c5b31c3dac76", "text": "Silicone is one of the most widely used filler for facial cosmetic correction and soft tissue augmentation. Although initially it was considered to be a biologically inert material, many local and generalized adverse effects have been reported after silicone usage for cosmetic purposes. We present a previously healthy woman who developed progressive and persistent generalized livedo reticularis after cosmetic surgery for volume augmentation of buttocks. Histopathologic study demonstrated dermal presence of interstitial vacuoles and cystic spaces of different sizes between the collagen bundles, which corresponded to the silicone particles implanted years ago. These vacuoles were clustered around vascular spaces and surrounded by a few foamy macrophages. General examination and laboratory investigations failed to show any evidence of connective tissue disease or other systemic disorder. Therefore, we believe that the silicone implanted may have induced some kind of blood dermal perturbation resulting in the characteristic violet reticular discoloration of livedo reticularis.", "title": "" }, { "docid": "43233ce6805a50ed931ce319245e4f6b", "text": "Currently the use of three-phase induction machines is widespread in industrial applications due to several methods available to control the speed and torque of the motor. Many applications require that the same torque be available at all revolutions up to the nominal value. In this paper two control methods are compared: scalar control and vector control. Scalar control is a relatively simple method. The purpose of the technique is to control the magnitude of the chosen control quantities. At the induction motor the technique is used as Volts/Hertz constant control. Vector control is a more complex control technique, the evolution of which was inevitable, too, since scalar control cannot be applied for controlling systems with dynamic behaviour. 
The vector control technique works with vector quantities, controlling the desired values by using space phasors which contain all the three phase quantities in one phasor. It is also known as field-oriented control because in the course of implementation the identification of the field flux of the motor is required. This paper reports on the changing possibilities of the revolution – torque characteristic curve, and demonstrates the results of the two control methods with simulations. The simulations and the applied equivalent circuit parameters are based on real measurements done with no load, with direct current and with locked-rotor.", "title": "" }, { "docid": "fc7b3aa8c3314ae228659739664dcdee", "text": "Myhre syndrome is a rare, distinctive syndrome due to specific gain-of-function mutations in SMAD4. The characteristic phenotype includes short stature, dysmorphic facial features, hearing loss, laryngotracheal anomalies, arthropathy, radiographic defects, intellectual disability, and a more recently appreciated spectrum of cardiovascular defects with a striking fibroproliferative response to surgical intervention. We report four newly described patients with typical features of Myhre syndrome who had (i) a mildly narrow descending aorta and restrictive cardiomyopathy; (ii) recurrent pericardial and pleural effusions; (iii) a large persistent ductus arteriosus with juxtaductal aortic coarctation; and (iv) restrictive pericardial disease requiring pericardiectomy. Additional information is provided about a fifth previously reported patient with fatal pericardial disease. A literature review of the cardiovascular features of Myhre syndrome was performed on 54 total patients, all with a SMAD4 mutation. Seventy percent had a cardiovascular abnormality including congenital heart defects (63%), pericardial disease (17%), restrictive cardiomyopathy (9%), and systemic hypertension (15%). Pericarditis and restrictive cardiomyopathy are associated with high mortality (three patients each among 10 deaths); one patient with restrictive cardiomyopathy also had epicarditis. Cardiomyopathy and pericardial abnormalities distinguish Myhre syndrome from other disorders caused by mutations in the TGF-β signaling cascade (Marfan, Loeys-Dietz, or Shprintzen-Goldberg syndromes). We hypothesize that the expanded spectrum of cardiovascular abnormalities relates to the ability of the SMAD4 protein to integrate diverse signaling pathways, including canonical TGF-β, BMP, and Activin signaling. The co-occurrence of congenital and acquired phenotypes demonstrates that the gene product of SMAD4 is required for both developmental and postnatal cardiovascular homeostasis. © 2016 Wiley Periodicals, Inc.", "title": "" } ]
scidocsrr
462ad0f689280722d97c4145ad0e7c82
Employing a fully convolutional neural network for road marking detection
[ { "docid": "7228ebec1e9ffddafab50e3ac133ebad", "text": "Building robust low and mid-level image representations, beyond edge primitives, is a long-standing goal in vision. Many existing feature detectors spatially pool edge information which destroys cues such as edge intersections, parallelism and symmetry. We present a learning framework where features that capture these mid-level cues spontaneously emerge from image data. Our approach is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised. By building a hierarchy of such decompositions we can learn rich feature sets that are a robust image representation for both the analysis and synthesis of images.", "title": "" }, { "docid": "884121d37d1b16d7d74878fb6aff9cdb", "text": "All models are wrong, but some are useful. 2 Acknowledgements The authors of this guide would like to thank David Warde-Farley, Guillaume Alain and Caglar Gulcehre for their valuable feedback. Special thanks to Ethan Schoonover, creator of the Solarized color scheme, 1 whose colors were used for the figures. Feedback Your feedback is welcomed! We did our best to be as precise, informative and up to the point as possible, but should there be anything you feel might be an error or could be rephrased to be more precise or comprehensible, please don't refrain from contacting us. Likewise, drop us a line if you think there is something that might fit this technical report and you would like us to discuss – we will make our best effort to update this document. Source code and animations The code used to generate this guide along with its figures is available on GitHub. 2 There the reader can also find an animated version of the figures.", "title": "" }, { "docid": "ca1cc40633a97f557b2c97e135534e27", "text": "This paper presents a real-time long-range lane detection and tracking approach to meet the requirements of the high-speed intelligent vehicles running on highway roads. Based on a linear-parabolic two-lane highway road model and a novel strong lane marking feature named Lane Marking Segmentation, the maximal lane detection distance of this approach is up to 120 meters. Then the lane lines are selected and tracked by estimating the ego vehicle lateral offset with a Kalman filter. Experiment results with test dataset extracted from real traffic scenes on highway roads show that the approaches proposed in this paper can achieve a high detection rate with a low time cost.", "title": "" } ]
[ { "docid": "8b002f094c6979f718426f46766b122b", "text": "Recent developments in smartphones create an ideal platform for robotics and computer vision applications: they are small, powerful, embedded devices with low-power mobile CPUs. However, though the computational power of smartphones has increased substantially in recent years, they are still not capable of performing intense computer vision tasks in real time, at high frame rates and low latency. We present a combination of FPGA and mobile CPU to overcome the computational and latency limitations of mobile CPUs alone. With the FPGA as an additional layer between the image sensor and CPU, the system is capable of accelerating computer vision algorithms to real-time performance. Low latency calculation allows for direct usage within control loops of mobile robots. A stereo camera setup with disparity estimation based on the semi global matching algorithm is implemented as an accelerated example application. The system calculates dense disparity images with 752×480 pixels resolution at 60 frames per second. The overall latency of the disparity estimation is less than 2 milliseconds. The system is suitable for any mobile robot application due to its light weight and low power consumption.", "title": "" }, { "docid": "9003a12f984d2bf2fd84984a994770f0", "text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.", "title": "" }, { "docid": "48c49e1f875978ec4e2c1d4549a98ffd", "text": "Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.", "title": "" }, { "docid": "ff1cc31ab089d5d1d09002866c7dc043", "text": "In almost every scientific field, measurements are performed over time. 
These observations lead to a collection of organized data called time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases, time-series task relies on the same components for implementation, we divide the literature depending on these common aspects, namely representation techniques, distance measures, and indexing methods. The study of the relevant literature has been categorized for each individual aspects. Four types of robustness could then be formalized and any kind of distance could then be classified. Finally, the study submits various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field.", "title": "" }, { "docid": "35a6a9b41273d6064d4daf5f39f621af", "text": "A systematic approach to develop a literature review is attractive because it aims to achieve a repeatable, unbiased and evidence-based outcome. However the existing form of systematic review such as Systematic Literature Review (SLR) and Systematic Mapping Study (SMS) are known to be an effort, time, and intellectual intensive endeavour. To address these issues, this paper proposes a model-based approach to Systematic Review (SR) production. The approach uses a domain-specific language expressed as a meta-model to represent research literature, a meta-model to specify SR constructs in a uniform manner, and an associated development process all of which can benefit from computer-based support. The meta-models and process are validated using real-life case study. We claim that the use of meta-modeling and model synthesis lead to a reduction in time, effort and the current dependence on human expertise.", "title": "" }, { "docid": "1cd0a8b7d12ca5e147408b1aaa4c5957", "text": "OpenMusic is an open source environment dedicated to music composition. The core of this environment is a full-featured visual programming language based on Common Lisp and CLOS (Common Lisp Object System) allowing to design processes for the generation or manipulation of musical material. This language can also be used for general purpose visual programming and other (possibly extra-musical) applications.", "title": "" }, { "docid": "8686ffed021b68574b4c3547d361eac8", "text": "* To whom all correspondence should be addressed. Abstract Face detection is an important prerequisite step for successful face recognition. The performance of previous face detection methods reported in the literature is far from perfect and deteriorates ungracefully where lighting conditions cannot be controlled. We propose a method that outperforms state-of-the-art face detection methods in environments with stable lighting. In addition, our method can potentially perform well in environments with variable lighting conditions. The approach capitalizes upon our near-IR skin detection method reported elsewhere [13][14]. It ascertains the existence of a face within the skin region by finding the eyes and eyebrows. The eyeeyebrow pairs are determined by extracting appropriate features from multiple near-IR bands. 
Very successful feature extraction is achieved by simple algorithmic means like integral projections and template matching. This is because processing is constrained in the skin region and aided by the near-IR phenomenology. The effectiveness of our method is substantiated by comparative experimental results with the Identix face detector [5].", "title": "" }, { "docid": "20acbae6f76e3591c8b696481baffc90", "text": "A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.", "title": "" }, { "docid": "33bd561e2d8e1799d5d5156cbfe3f2e5", "text": "OBJECTIVE\nTo assess the effects of Balint groups on empathy measured by the Consultation And Relational Empathy Measure (CARE) scale rated by standardized patients during objective structured clinical examination and self-rated Jefferson's School Empathy Scale - Medical Student (JSPE-MS©) among fourth-year medical students.\n\n\nMETHODS\nA two-site randomized controlled trial were planned, from October 2015 to December 2015 at Paris Diderot and Paris Descartes University, France. Eligible students were fourth-year students who gave their consent to participate. Participants were allocated in equal proportion to the intervention group or to the control group. Participants in the intervention group received a training of 7 sessions of 1.5-hour Balint groups, over 3months. The main outcomes were CARE and the JSPE-MS© scores at follow-up.\n\n\nRESULTS\nData from 299 out of 352 randomized participants were analyzed: 155 in the intervention group and 144 in the control group, with no differences in baseline measures. There was no significant difference in CARE score at follow-up between the two groups (P=0.49). The intervention group displayed significantly higher JSPE-MS© score at follow-up than the control group [Mean (SD): 111.9 (10.6) versus 107.7 (12.7), P=0.002]. The JSPE-MS© score increased from baseline to follow-up in the intervention group, whereas it decreased in the control group [1.5 (9.1) versus -1.8 (10.8), P=0.006].\n\n\nCONCLUSIONS\nBalint groups may contribute to promote clinical empathy among medical students.\n\n\nTRIAL REGISTRATION\nNCT02681380.", "title": "" }, { "docid": "0072941488ef0e22b06d402d14cbe1be", "text": "This chapter is about computational modelling of the process of musical composition, based on a cognitive model of human behaviour. The idea is to try to study not only the requirements for a computer system which is capable of musical composition, but also to relate it to human behaviour during the same process, so that it may, perhaps, work in the same way as a human composer, but also so that it may, more likely, help us understand how human composers work. Pearce et al. 
(2002) give a fuller discussion of the motivations behind this endeavour.", "title": "" }, { "docid": "24da291ca2590eb614f94f8a910e200d", "text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.", "title": "" }, { "docid": "583623f15d855131d190fcef37839999", "text": "Service providers want to reduce datacenter costs by consolidating workloads onto fewer servers. At the same time, customers have performance goals, such as meeting tail latency Service Level Objectives (SLOs). Consolidating workloads while meeting tail latency goals is challenging, especially since workloads in production environments are often bursty. To limit the congestion when consolidating workloads, customers and service providers often agree upon rate limits. Ideally, rate limits are chosen to maximize the number of workloads that can be co-located while meeting each workload's SLO. In reality, neither the service provider nor customer knows how to choose rate limits. Customers end up selecting rate limits on their own in some ad hoc fashion, and service providers are left to optimize given the chosen rate limits.\n This paper describes WorkloadCompactor, a new system that uses workload traces to automatically choose rate limits simultaneously with selecting onto which server to place workloads. Our system meets customer tail latency SLOs while minimizing datacenter resource costs. Our experiments show that by optimizing the choice of rate limits, WorkloadCompactor reduces the number of required servers by 30--60% as compared to state-of-the-art approaches.", "title": "" }, { "docid": "f21e55c7509124be8fabfb1d706d76aa", "text": "CTCF and BORIS (CTCFL), two paralogous mammalian proteins sharing nearly identical DNA binding domains, are thought to function in a mutually exclusive manner in DNA binding and transcriptional regulation. Here we show that these two proteins co-occupy a specific subset of regulatory elements consisting of clustered CTCF binding motifs (termed 2xCTSes). BORIS occupancy at 2xCTSes is largely invariant in BORIS-positive cancer cells, with the genomic pattern recapitulating the germline-specific BORIS binding to chromatin. 
In contrast to the single-motif CTCF target sites (1xCTSes), the 2xCTS elements are preferentially found at active promoters and enhancers, both in cancer and germ cells. 2xCTSes are also enriched in genomic regions that escape histone to protamine replacement in human and mouse sperm. Depletion of the BORIS gene leads to altered transcription of a large number of genes and the differentiation of K562 cells, while the ectopic expression of this CTCF paralog leads to specific changes in transcription in MCF7 cells. We discover two functionally and structurally different classes of CTCF binding regions, 2xCTSes and 1xCTSes, revealed by their predisposition to bind BORIS. We propose that 2xCTSes play key roles in the transcriptional program of cancer and germ cells.", "title": "" }, { "docid": "03329ce0d0d9cc0582d00310f22366fe", "text": "Wireless personal area network and wireless sensor networks are rapidly gaining popularity, and the IEEE 802.15 Wireless Personal Area Working Group has defined no less than different standards so as to cater to the requirements of different applications. The ubiquitous home network has gained widespread attentions due to its seamless integration into everyday life. This innovative system transparently unifies various home appliances, smart sensors and energy technologies. The smart energy market requires two types of ZigBee networks for device control and energy management. Today, organizations use IEEE 802.15.4 and ZigBee to effectively deliver solutions for a variety of areas including consumer electronic device control, energy management and efficiency, home and commercial building automation as well as industrial plant management. We present the design of a multi-sensing, heating and airconditioning system and actuation application - the home users: a sensor network-based smart light control system for smart home and energy control production. This paper designs smart home device descriptions and standard practices for demand response and load management \"Smart Energy\" applications needed in a smart energy based residential or light commercial environment. The control application domains included in this initial version are sensing device control, pricing and demand response and load control applications. This paper introduces smart home interfaces and device definitions to allow interoperability among ZigBee devices produced by various manufacturers of electrical equipment, meters, and smart energy enabling products. We introduced the proposed home energy control systems design that provides intelligent services for users and we demonstrate its implementation using a real testbad.", "title": "" }, { "docid": "e19743c3b2402090f9647f669a14d554", "text": "To investigate the relation between vocal prosody and change in depression severity over time, 57 participants from a clinical trial for treatment of depression were evaluated at seven-week intervals using a semistructured clinical interview for depression severity (Hamilton Rating Scale for Depression (HRSD)). All participants met criteria for major depressive disorder (MDD) at week one. Using both perceptual judgments by naive listeners and quantitative analyses of vocal timing and fundamental frequency, three hypotheses were tested: 1) Naive listeners can perceive the severity of depression from vocal recordings of depressed participants and interviewers. 2) Quantitative features of vocal prosody in depressed participants reveal change in symptom severity over the course of depression. 
3) Interpersonal effects occur as well; such that vocal prosody in interviewers shows corresponding effects. These hypotheses were strongly supported. Together, participants' and interviewers' vocal prosody accounted for about 60 percent of variation in depression scores, and detected ordinal range of depression severity (low, mild, and moderate-to-severe) in 69 percent of cases (kappa = 0.53). These findings suggest that analysis of vocal prosody could be a powerful tool to assist in depression screening and monitoring over the course of depressive disorder and recovery.", "title": "" }, { "docid": "86429b47cefce29547ee5440a8410b83", "text": "AIM\nThe purpose of the study was to observe the outcome of trans-fistula anorectoplasty (TFARP) in treating female neonates with anorectovestibular fistula (ARVF).\n\n\nMETHODS\nA prospective study was carried out on female neonates with vestibular fistula, admitted into the surgical department of a tertiary level children hospital during the period from January 2009 to June 2011. TFARP without a covering colostomy was performed for definitive correction in the neonatal period in all. Data regarding demographics, clinical presentation, associated anomalies, preoperative findings, preoperative preparations, operative technique, difficulties faced during surgery, duration of surgery, postoperative course including complications, hospital stay, bowel habits and continence was prospectively compiled and analyzed. Anorectal function was measured by the modified Wingspread scoring as, \"excellent\", \"good\", \"fair\" and \"poor\".\n\n\nRESULTS\nThirty-nine neonates with vestibular fistula underwent single stage TFARP. Mean operation time was 81 minutes and mean hospital stay was 6 days. Three (7.7%) patients suffered vaginal tear during separation from the rectal wall. Two patients (5.1%) developed wound infection at neoanal site that resulted in anal stenosis. Eight (20.51%) children in the series are more than 3 years of age and are continent; all have attained \"excellent\" fecal continence score. None had constipation or soiling. Other 31 (79.5%) children less than 3 years of age have satisfactory anocutaneous reflex and anal grip on per rectal digital examination, though occasional soiling was observed in 4 patients.\n\n\nCONCLUSION\nPrimary repair of ARVF in female neonates by TFARP without dividing the perineum is a feasible procedure with good cosmetic appearance and good anal continence. Separation of the rectum from the posterior wall of vagina is the most delicate step of the operation, takes place under direct vision. It is very important to keep the perineal body intact. With meticulous preoperative bowel preparation and post operative wound care and bowel management, single stage reconstruction is possible in neonates with satisfactory results.", "title": "" }, { "docid": "9d98fe5183d53bfaaa42e642bc03b9b3", "text": "Cyber-attacks continue to increase worldwide, leading to significant loss or misuse of information assets. Most of the existing intrusion detection systems rely on per-packet inspection, a resource consuming task in today’s high speed networks. A recent trend is to analyze netflows (or simply flows) instead of packets, a technique performed at a relative low level leading to high false alarm rates. 
Since analyzing raw data extracted from flows lacks the semantic information needed to discover attacks, a novel approach is introduced, which uses contextual information to automatically identify and query possible semantic links between different types of suspicious activities extracted from flows. Time, location, and other contextual information mined from flows is applied to generate semantic links among alerts raised in response to suspicious flows. These semantic links are identified through an inference process on probabilistic semantic link networks (SLNs), which receive an initial prediction from a classifier that analyzes incoming flows. The SLNs are then queried at run-time to retrieve other relevant predictions. We show that our approach can be extended to detect unknown attacks in flows as variations of known attacks. An extensive validation of our approach has been performed with a prototype system on several benchmark datasets yielding very promising results in detecting both known and unknown attacks.", "title": "" }, { "docid": "a0850b5f8b2d994b50bb912d6fca3dfb", "text": "In this paper we describe the development of an accurate, smallfootprint, large vocabulary speech recognizer for mobile devices. To achieve the best recognition accuracy, state-of-the-art deep neural networks (DNNs) are adopted as acoustic models. A variety of speedup techniques for DNN score computation are used to enable real-time operation on mobile devices. To reduce the memory and disk usage, on-the-fly language model (LM) rescoring is performed with a compressed n-gram LM. We were able to build an accurate and compact system that runs well below real-time on a Nexus 4 Android phone.", "title": "" }, { "docid": "40dc7de2a08c07183606235500df3c4f", "text": "Aerial imagery of an urban environment is often characterized by significant occlusions, sharp edges, and textureless regions, leading to poor 3D reconstruction using conventional multi-view stereo methods. In this paper, we propose a novel approach to 3D reconstruction of urban areas from a set of uncalibrated aerial images. A very general structural prior is assumed that urban scenes consist mostly of planar surfaces oriented either in a horizontal or an arbitrary vertical orientation. In addition, most structural edges associated with such surfaces are also horizontal or vertical. These two assumptions provide powerful constraints on the underlying 3D geometry. The main contribution of this paper is to translate the two constraints on 3D structure into intra-image-column and inter-image-column constraints, respectively, and to formulate the dense reconstruction as a 2-pass Dynamic Programming problem, which is solved in complete parallel on a GPU. The result is an accurate cloud of 3D dense points of the underlying urban scene. Our algorithm completes the reconstruction of 1M points with 160 available discrete height levels in under a hundred seconds. Results on multiple datasets show that we are capable of preserving a high level of structural detail and visual quality.", "title": "" }, { "docid": "ca410a7cf7f36fdd145aed738f147d3f", "text": "A range of values of a real function f : Ed + Iw can be used to implicitly define a subset of Euclidean space Ed. Such “implicit functions” have many uses in geometric and solid modeling. This paper focuses on the properties and construction of real functions for the representation of rigid solids (compact, semi-analytic, and regular subsets of Ed). 
We review some known facts about real functions defining compact semi-analytic sets, and their applications. The theory of R-functions developed in (Rvachev, 1982) provides means for constructing real function representations of solids described by the standard (non-regularized) set operations. But solids are not closed under the standard set operations, and such real function representations are rarely available in modern solid modeling systems. More generally, assuring that a real function f represents a regular set may be difficult. Until now, the regularity has either been assumed, or treated in an ad hoc fashion. We show that topological and extremal properties of real functions can be used to test for regularity, and discuss procedures for constructing real functions with desired properties for arbitrary solids.", "title": "" } ]
scidocsrr
07dc0d9fd457a97b6c3aaa946bbd1897
Static analysis of android apps: A systematic literature review
[ { "docid": "d8fc5a8bc075343b2e70a9b441ecf6e5", "text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.", "title": "" }, { "docid": "9a48e31b5911e68b11c846d543f897be", "text": "Today’s smartphone users face a security dilemma: many apps they install operate on privacy-sensitive data, although they might originate from developers whose trustworthiness is hard to judge. Researchers have addressed the problem with more and more sophisticated static and dynamic analysis tools as an aid to assess how apps use private user data. Those tools, however, rely on the manual configuration of lists of sources of sensitive data as well as sinks which might leak data to untrusted observers. Such lists are hard to come by. We thus propose SUSI, a novel machine-learning guided approach for identifying sources and sinks directly from the code of any Android API. Given a training set of hand-annotated sources and sinks, SUSI identifies other sources and sinks in the entire API. To provide more fine-grained information, SUSI further categorizes the sources (e.g., unique identifier, location information, etc.) and sinks (e.g., network, file, etc.). For Android 4.2, SUSI identifies hundreds of sources and sinks with over 92% accuracy, many of which are missed by current information-flow tracking tools. An evaluation of about 11,000 malware samples confirms that many of these sources and sinks are indeed used. We furthermore show that SUSI can reliably classify sources and sinks even in new, previously unseen Android versions and components like Google Glass or", "title": "" }, { "docid": "55a6353fa46146d89c7acd65bee237b5", "text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. 
Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2%) and an acceptable false positive rate (5.15%) for a vetting purpose.", "title": "" } ]
[ { "docid": "4b97e5694dc8f1d2e1b5bf8f28bd9b10", "text": "Poor eating habits are an important public health issue that has large health and economic implications. Many food preferences are established early, but because people make more and more independent eating decisions as they move through adolescence, the transition to independent living during the university days is an important event. To study the phenomenon of food selection, the health belief model was applied to predict the likelihood of healthy eating among university students. Structural equation modeling was used to investigate the validity of the health belief model (HBM) among 194 students, followed by gender-based analyses. The data strongly supported the HBM. Social change campaign implications are discussed.", "title": "" }, { "docid": "f484a6d546f1556cadada6dd38bcf788", "text": "Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (Arnoldi and Lanczos processes), and that is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains and furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is devoted to the computation of transient states of Markov chains.", "title": "" }, { "docid": "1a78e17056cca09250c7cc5f81fb271b", "text": "This paper presents a lightweight stereo vision-based driving lane detection and classification system to achieve the ego-car’s lateral positioning and forward collision warning to aid advanced driver assistance systems (ADAS). For lane detection, we design a self-adaptive traffic lane model in Hough Space with a maximum likelihood angle and dynamic pole detection regions of interest (ROIs), which is robust to road bumpiness, lane structure changes while the ego-car is driving, and interfering markings on the ground. What’s more, this model can be improved with a geographic information system or an electronic map to achieve more accurate results. Besides, the 3-D information acquired by stereo matching is used to generate an obstacle mask to reduce interference from irrelevant objects and to detect the forward collision distance. For lane classification, a convolutional neural network is trained using manually labeled ROIs from the KITTI data set to classify the left/right-side line of the host lane, so that we can provide significant information for lane-changing strategy making in ADAS. Quantitative experimental evaluation shows a good true positive rate on lane detection and classification with a real-time (15 Hz) working speed. Experimental results also demonstrate a certain level of system robustness to variation in the environment.", "title": "" }, { "docid": "e10dbbc6b3381f535ff84a954fcc7c94", "text": "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of Shotton et al. [16] have generated a renewed interest in skeleton-based human action recognition. 
Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×.. .×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "title": "" }, { "docid": "2272325860332d5d41c02f317ab2389e", "text": "For a developing nation, deploying big data (BD) technology and introducing data science in higher education is a challenge. A pessimistic scenario is: Mis-use of data in many possible ways, waste of trained manpower, poor BD certifications from institutes, under-utilization of resources, disgruntled management staff, unhealthy competition in the market, poor integration with existing technical infrastructures. Also, the questions in the minds of students, scientists, engineers, teachers and managers deserve wider attention. Besides the stated perceptions and analyses perhaps ignoring socio-political and scientific temperaments in developing nations, the following questions arise: How did the BD phenomenon naturally occur, post technological developments in Computer and Communications Technology and how did different experts react to it? Are academicians elsewhere agreeing on the fact that BD is a new science? Granted that big data science is a new science what are its foundations as compared to conventional topics in Physics, Chemistry or Biology? Or, is it similar in an esoteric sense to astronomy or nuclear science? What are the technological and engineering implications locally and globally and how these can be advantageously used to augment business intelligence, for example? In other words, will the industry adopt the changes due to tactical advantages? How can BD success stories be faithfully carried over elsewhere? How will BD affect the Computer Science and other curricula? How will BD benefit different segments of our society on a large scale? To answer these, an appreciation of the BD as a science and as a technology is necessary. This paper presents a quick BD overview, relying on the contemporary literature; it addresses: characterizations of BD and the BD people, the background required for the students and teachers to join the BD bandwagon, the management challenges in embracing BD so that the bottomline is clear.", "title": "" }, { "docid": "a4d4a06d3e84183eddf7de6c0fd2721b", "text": "Reinforcement learning (RL) is a powerful paradigm for sequential decision-making under uncertainties, and most RL algorithms aim to maximize some numerical value which represents only one long-term objective. 
However, multiple long-term objectives are exhibited in many real-world decision and control systems, so recently there has been growing interest in solving multiobjective reinforcement learning (MORL) problems where there are multiple conflicting objectives. The aim of this paper is to present a comprehensive overview of MORL. The basic architecture, research topics, and naïve solutions of MORL are introduced at first. Then, several representative MORL approaches and some important directions of recent research are comprehensively reviewed. The relationships between MORL and other related research are also discussed, which include multiobjective optimization, hierarchical RL, and multiagent RL. Moreover, research challenges and open problems of MORL techniques are suggested.", "title": "" }, { "docid": "453386ad4443abf59f6cf98093596c2f", "text": "Maternal depression increases risk of adverse perinatal outcomes, and recent evidence suggests that body image may play an important role in depression. This systematic review identifies studies of body image and perinatal depression with the goal of elucidating the complex role that body image plays in prenatal and postpartum depression, improving measurement, and informing next steps in research. We conducted a literature search of the PubMed database (1996–2014) for English language studies of (1) depression, (2) body image, and (3) pregnancy or postpartum. In total, 19 studies matched these criteria. Cross-sectional studies consistently found a positive association between body image dissatisfaction and perinatal depression. Prospective cohort studies found that body image dissatisfaction predicted incident prenatal and postpartum depression; findings were consistent across different aspects of body image and various pregnancy and postpartum time periods. Prospective studies that examined the reverse association found that depression influenced the onset of some aspects of body image dissatisfaction during pregnancy, but few evaluated the postpartum onset of body image dissatisfaction. The majority of studies found that body image dissatisfaction is consistently but weakly associated with the onset of prenatal and postpartum depression. Findings were less consistent for the association between perinatal depression and subsequent body image dissatisfaction. While published studies provide a foundation for understanding these issues, methodologically rigorous studies that capture the perinatal variation in depression and body image via instruments validated in pregnant women, consistently adjust for important confounders, and include ethnically diverse populations will further elucidate this association.", "title": "" }, { "docid": "e6f5c58910c877ade6594e206ac19e02", "text": "Model-based compression is an effective, facilitating, and expanded model of neural network models with limited computing and low power. However, conventional models of compression techniques utilize crafted features [2,3,12] and explore specialized areas for exploration and design of large spaces in terms of size, speed, and accuracy, which usually have returns Less and time is up. This paper will effectively analyze deep auto compression (ADC) and reinforcement learning strength in an effective sample and space design, and improve the compression quality of the model. The results of compression of the advanced model are obtained without any human effort and in a completely automated way. 
With a 4-fold reduction in FLOPs, accuracy is 2.8% higher than that of the manually compressed model for VGG-16 on ImageNet.", "title": "" }, { "docid": "f6aadc8f79cc1b989d6fd92d048fa253", "text": "Authorship Identification is the task of identifying the author of an article or document whose author is not known. This is made possible by comparing a set of articles or documents of known authorship against the unknown article. This paper presents a comparative approach based on the similarity of unknown documents against known ones, using various features. The main focus of the paper is to show the difference in articles which were written in different time frames, and also to observe how the features are addressed in these different time frames by the same author. All methods show the comparison over several features.", "title": "" }, { "docid": "f4639c2523687aa0d5bfdd840df9cfa4", "text": "This established database of manufacturers and their design specifications determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the parts of the jeepney vehicle using Philippine National Standards and international standards. The study revealed that most jeepney manufacturing firms have varied specifications with regard to the capacity, dimensions and weight of the vehicle and similar specifications on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers and passengers want to improve, change and standardize the parts of the jeepney vehicle. The parts of jeepney vehicles have similar specifications compared to 4 out of 5 mandatory PNS and 22 out of 32 UNECE Regulations applicable to jeepney vehicles. It is concluded that the jeepney vehicle can be standardized in terms of design, safety and environmental concerns.", "title": "" }, { "docid": "065620d1b22634eebf94bb0b33bc8598", "text": "An increasing amount of information is being collected on the ecological and socio-economic value of goods and services provided by natural and semi-natural ecosystems. However, much of this information appears scattered throughout a disciplinary academic literature, unpublished government agency reports, and across the World Wide Web. In addition, data on ecosystem goods and services often appears at incompatible scales of analysis and is classified differently by different authors. In order to make comparative ecological economic analysis possible, a standardized framework for the comprehensive assessment of ecosystem functions, goods and services is needed. In response to this challenge, this paper presents a conceptual framework and typology for describing, classifying and valuing ecosystem functions, goods and services in a clear and consistent manner. In the following analysis, a classification is given for the fullest possible range of 23 ecosystem functions that provide a much larger number of goods and services. In the second part of the paper, a checklist and matrix is provided, linking these ecosystem functions to the main ecological, socio–cultural and economic valuation methods. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "3f644f06778e9a0fe64bebae0ff756af", "text": "Smart contracts are one of the most important applications of the blockchain. Most existing smart contract systems assume that, when executing a contract over a network of decentralized nodes, the outcome in accordance with the majority can be trusted. 
However, we observe that users involved with a smart contract may strategically take actions to manipulate execution of the contract for purpose to increase their own benefits. We propose an agent model, as the underpinning mechanism for contract execution over a network of decentralized nodes and public ledger, to address this problem and discuss the possibility of preventing users from manipulating smart contract execution by applying principles of game theory and agent based analysis.", "title": "" }, { "docid": "c63559ed7971471d7d4b44b85c2917ac", "text": "Vehicle-bicycle accidents with subsequent dragging of the rider over long distances are extremely rare. The case reported here is that of a 16-year-old mentally retarded bike rider who was run over by a truck whose driver failed to notice the accident. The legs of the victim became trapped by the rear axle of the trailer and the body was dragged over 45 km before being discovered under the parked truck. The autopsy revealed that the boy had died from the initial impact and not from the dragging injuries which had caused extensive mutilation. The reports of the technical expert and the forensic pathologist led the prosecutor to drop the case against the truck driver for manslaughter.", "title": "" }, { "docid": "7539a738cad3a36336dc7019e2aabb21", "text": "In this paper a compact antenna for ultrawideband applications is presented. The antenna is based on the biconical antenna design and has two identical elements. Each element is composed of a cone extended with a ring and an inner cylinder. The modification of the well-known biconical structure is made in order to reduce the influence of the radiation of the feeding cable. To obtain the optimum parameters leading to a less impact of the cable effect on the antenna performance, during the optimization process the antenna was coupled with a feeding coaxial cable. The proposed antenna covers the frequency range from 1.5 to 41 GHz with voltage standing wave ratio below 2 and has an omnidirectional radiation pattern. The realized total efficiency is above 85 % which indicates a good performance.", "title": "" }, { "docid": "18c30c601e5f52d5117c04c85f95105b", "text": "Crohn's disease is a relapsing systemic inflammatory disease, mainly affecting the gastrointestinal tract with extraintestinal manifestations and associated immune disorders. Genome wide association studies identified susceptibility loci that--triggered by environmental factors--result in a disturbed innate (ie, disturbed intestinal barrier, Paneth cell dysfunction, endoplasmic reticulum stress, defective unfolded protein response and autophagy, impaired recognition of microbes by pattern recognition receptors, such as nucleotide binding domain and Toll like receptors on dendritic cells and macrophages) and adaptive (ie, imbalance of effector and regulatory T cells and cytokines, migration and retention of leukocytes) immune response towards a diminished diversity of commensal microbiota. 
We discuss the epidemiology, immunobiology, and natural history of Crohn's disease; describe new treatment goals and risk stratification of patients; and provide an evidence-based rational approach to diagnosis (ie, work-up algorithm, new imaging methods [ie, enhanced endoscopy, ultrasound, MRI and CT] and biomarkers), management, evolving therapeutic targets (ie, integrins, chemokine receptors, cell-based and stem-cell-based therapies), prevention, and surveillance.", "title": "" }, { "docid": "5d63c5820cc8035822b86ef5fdaebefd", "text": "As the third most popular social network among millennials, Snapchat is well known for its picture and video messaging system that deletes content after it is viewed. However, the Stories feature of Snapchat offers a different perspective of ephemeral content sharing, with pictures and videos that are available for friends to watch an unlimited number of times for 24 hours. We conducted an in-depth qualitative investigation by interviewing 18 participants and reviewing 14 days of their Stories posts. We identify five themes focused on how participants perceive and use the Stories feature, and apply a Goffmanesque metaphor to our analysis. We relate the Stories medium to other research on self-presentation and identity curation in social media.", "title": "" }, { "docid": "209842e00957d1d1786008d943895dc9", "text": "The impact that urban green spaces have on sustainability and quality of life is phenomenal. This is also true for the local South African environment. However, in reality green spaces in urban environments are decreasing due to growing populations, increasing urbanization and development pressure. This further impacts on the provision of child-friendly spaces, a concept that is already limited in the local context. Child-friendly spaces are described as environments to which people (children) feel intimately connected, influencing the physical, social, emotional, and ecological health of individuals and communities. The benefits of providing such spaces for the youth are well documented in the literature. This research therefore aimed to investigate the concept of child-friendly spaces and its applicability to the South African planning context, in order to guide the planning of such spaces for future communities and use. Child-friendly spaces in the urban environment of the city of Durban were used as a local case study, along with two international case studies, namely the Mullerpier public playground in Rotterdam, the Netherlands, and Kadidjiny Park in Melville, Australia. The aim was to determine how these spaces were planned and developed and to identify tools that were used to accomplish the goal of providing successful child-friendly green spaces within urban areas. The need and significance of planning for such spaces were portrayed within the international case studies. It is confirmed that minimal provision is made for green space planning within the South African context when compared with the international examples. As a result, international examples and principles of providing child-friendly green spaces should direct planning guidelines within the local context. The research concluded that child-friendly green spaces have a positive impact on the urban environment and assist in a child’s development and interaction with the natural environment. Regrettably, the planning of these child-friendly spaces is not given priority within current spatial plans, despite their proven benefits. 
Keywords—Built environment, child-friendly spaces, green spaces. public places, urban area. E. J. Cilliers is a Professor at the North West University, Unit for Environmental Sciences and Management, Urban and Regional Planning, Potchestroom, 2531, South Africa (e-mail: juanee.cilliers@nwu.ac.za). Z. Goosen is a PhD student with the North West University, Unit for Environmental Sciences and Management, Urban and Regional Planning, Potchestroom, 2531, South Africa (e-mail: goosenzhangoosen@gmail.com). This research (or parts thereof) was made possible by the financial contribution of the NRF (National Research Foundation) South Africa. The opinions, findings and conclusions or recommendations expressed in this material are those of the authors and therefore the NRF does not accept any liability in regard thereto.", "title": "" }, { "docid": "4059c52f56810a463e07f7ed0e00e8ce", "text": "Conservation and maintenance of historic buildings have exceptional requirements and need a detailed diagnosis and an accurate as-is documentation. This paper reports the use of Unmanned Aerial Vehicle (UAV) imagery to create an Intelligent Digital Built Heritage Model (IDBHM) based on Building Information Modeling (BIM) technology. Our work outlines a model-driven approach based on UAV data acquisition, photogrammetry, post-processing and segmentation of point clouds to promote partial automation of BIM modeling process. The methodology proposed was applied to a historical building facade located in Brazil. A qualitative and quantitative assessment of the proposed segmentation method was undertaken through the comparison between segmented clusters and as-designed documents, also as between point clouds and ground control points. An accurate and detailed parametric IDBHM was created from high-resolution Dense Surface Model (DSM). This Model can improve conservation and rehabilitation works. The results demonstrate that the proposed approach yields good results in terms of effectiveness in the clusters segmentation, compared to the as-designed", "title": "" }, { "docid": "46f623cea7c1f643403773fc5ed2508d", "text": "The use of machine learning tools has become widespread in medical diagnosis. The main reason for this is the effective results obtained from classification and diagnosis systems developed to help medical professionals in the diagnosis phase of diseases. The primary objective of this study is to improve the accuracy of classification in medical diagnosis problems. To this end, studies were carried out on 3 different datasets. These datasets are heart disease, Parkinson’s disease (PD) and BUPA liver disorders. Key feature of these datasets is that they have a linearly non-separable distribution. A new method entitled k-medoids clustering-based attribute weighting (kmAW) has been proposed as a data preprocessing method. The support vector machine (SVM) was preferred in the classification phase. In the performance evaluation stage, classification accuracy, specificity, sensitivity analysis, f-measure, kappa statistics value and ROC analysis were used. Experimental results showed that the developed hybrid system entitled kmAW + SVM gave better results compared to other methods described in the literature. Consequently, this hybrid intelligent system can be used as a useful medical decision support tool.", "title": "" } ]
scidocsrr
08a7ba1214e610707f69e64b1acdf4d2
CARL: Content-Aware Representation Learning for Heterogeneous Networks
[ { "docid": "a8dba8c05403b8f7dd8756b31c4a3de6", "text": "A heterogeneous information network is an information network\n composed of multiple types of objects. Clustering on such a network may lead to better understanding of both hidden structures of the network and the individual role played by every object in each cluster. However, although clustering on homogeneous networks has been studied over decades, clustering on heterogeneous networks has not been addressed until recently.\n A recent study proposed a new algorithm, RankClus, for clustering on bi-typed heterogeneous networks. However, a real-world network may consist of more than two types, and the interactions among multi-typed objects play a key role at disclosing the rich semantics that a network carries. In this paper, we study clustering of multi-typed heterogeneous networks with a star network schema and propose a novel algorithm, NetClus, that utilizes links across multityped objects to generate high-quality net-clusters. An iterative enhancement method is developed that leads to effective ranking-based clustering in such heterogeneous networks. Our experiments on DBLP data show that NetClus generates more accurate clustering results than the baseline topic model algorithm PLSA and the recently proposed algorithm, RankClus. Further, NetClus generates informative clusters, presenting good ranking and cluster membership information for each attribute object in each net-cluster.", "title": "" } ]
[ { "docid": "270b0cb1cb95d69d34fc5f6e6cc8b5ce", "text": "A critical problem in an intermodal transport chain is the direct meeting at the transhipment nodes. This requires information technology and modern communication facilities as well as close collaboration between all the concerned transport operators in the chain. The TELETRUCK system – currently under development at the German Research Center for Artificial Intelligence (DFKI) – is a dispatch support system that tackles this problem. Intercompany planning, scheduling, and monitoring of intermodal transport chains will be supported by our system. It aims at providing smooth access to railway timetables and rail-based transport services and – much more important – at allowing for the planning of both exclusively road-based and combined journeys and showing their cost-effectiveness. We describe our approach – based on intelligent agent technology – covering both the current state of implementation and our goals for the near future.", "title": "" }, { "docid": "b921d419bd49335d4f5685d9cc679e85", "text": "Google's Android Native Development Kit (NDK) is a toolset that lets you embed components that use native code in your Android applications. It makes it possible for developers to easily compile C/C++ code for the Android development platform. Generally, developers do not consider the relative efficiency of native code and Dalvik Java code, which can lead to poor Android performance. Some studies have benchmarked Java against C/C++, but they do not consider the issues of Dalvik and native code for Android programming, nor do they evaluate them on a real Android device. In this work, we use a more complete approach to benchmark Dalvik Java code and native code on a real Android device. We ran 12 test programs to analyze the performance and found that native code is faster than Dalvik Java code by about 34.2%.", "title": "" }, { "docid": "98ab9279efd8aeee6bb58fe84f5142f3", "text": "BACKGROUND\nBreast hypertrophy presents at puberty or thereafter. It is a condition of abnormal enlargement of the breast tissue in excess of the normal proportion. Gland hypertrophy, excessive fatty tissue or a combination of both may cause this condition. Macromastia can be unilateral or bilateral.\n\n\nOBJECTIVE\nTo present a case of massive bilateral gigantomastia with huge bilateral hypertrophy of the axillary breasts.\n\n\nMETHODS\nReview of the presentation, clinical and investigative findings as well as the outcome of surgical intervention in a young Nigerian woman with bilateral severe breast hypertrophy and severe hypertrophy of the axillary breasts.\n\n\nRESULT\nThe patient was a 26-year-old woman who presented with massive swelling of her breasts and bilateral axillary swellings, both of six years' duration. In addition to the breast pathology, she also suffered significant psychological problems. The breast ultrasonography confirmed only diffuse swellings, with no visible lumps or areas of calcification. She had total bilateral excision of the hypertrophied axillary breasts, and bilateral breast amputation with composite nipple-areola complex graft of the normally located breasts. The total weight of the breast tissues removed was 44.8 kilograms.\n\n\nCONCLUSION\nMacromastia of this size is very rare. This case to date is probably the largest in the world literature. 
Surgical treatment of the condition gives a satisfactory outcome.", "title": "" }, { "docid": "46e38ed8d1191ada2e158115e7e92f0d", "text": "There exists a positive correlation between an economy's exposure to international trade and the size of its government. The correlation holds for most measures of government spending, in low-as well as high-income samples, and is robust to the inclusion of a wide range of controls. One explanation is that government spending plays a risk-reducing role in economies exposed to significant amount of external risk. The paper provides a range of evidence consistent with this hypothesis. In particular, the relationship between openness and government size is strongest when terms-of-trade risk is highest.", "title": "" }, { "docid": "a76b0262c9389df1677b7658ee381613", "text": "Some contemporary theorists and clinicians champion acceptance and mindfulness-based interventions, such as Acceptance and Commitment Therapy (ACT), over cognitive-behavioral therapy (CBT) for the treatment of emotional disorders. The objective of this article is to juxtapose these two treatment approaches, synthesize, and clarify the differences between them. The two treatment modalities can be placed within a larger context of the emotion regulation literature. Accordingly, emotions can be regulated either by manipulating the evaluation of the external or internal emotion cues (antecedent-focused emotion regulation) or by manipulating the emotional responses (response-focused emotion regulation). CBT and ACT both encourage adaptive emotion regulation strategies but target different stages of the generative emotion process: CBT promotes adaptive antecedent-focused emotion regulation strategies, whereas acceptance strategies of ACT counteract maladaptive response-focused emotion regulation strategies, such as suppression. Although there are fundamental differences in the philosophical foundation, ACT techniques are fully compatible with CBT and may lead to improved interventions for some disorders. Areas of future treatment research are discussed.", "title": "" }, { "docid": "2e16758c0f55cd44b88c18b8948ec1cb", "text": "We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays the key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities, our Sintel-trained CNN produces quite reasonable decompositions on the real images of the MIT dataset. 
Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision.", "title": "" }, { "docid": "ff0d818dfd07033fb5eef453ba933914", "text": "Hyperplastic placentas have been reported in several experimental mouse models, including animals produced by somatic cell nuclear transfer, by inter(sub)species hybridization, and by somatic cytoplasm introduction to oocytes followed by intracytoplasmic sperm injection. Of great interest are the gross and histological features common to these placental phenotypes--despite their quite different etiologies--such as the enlargement of the spongiotrophoblast layers. To find morphological clues to the pathways leading to these similar placental phenotypes, we analyzed the ultrastructure of the three different types of hyperplastic placenta. Most cells affected were of trophoblast origin and their subcellular ultrastructural lesions were common to the three groups, e.g., a heavy accumulation of cytoplasmic vacuoles in the trophoblastic cells composing the labyrinthine wall and an increased volume of spongiotrophoblastic cells with extraordinarily dilatated rough endoplasmic reticulum. Although the numbers of trophoblastic glycogen cells were greatly increased, they maintained their normal ultrastructural morphology, including a heavy glycogen deposition throughout the cytoplasm. The fetal endothelium and small vessels were nearly intact. Our ultrastructural study suggests that these three types of placental hyperplasias, with different etiologies, may have common pathological pathways, which probably exclusively affect the development of certain cell types of the trophoblastic lineage during mouse placentation.", "title": "" }, { "docid": "de27fcd170903a761f8eb35a5f98f266", "text": "We develop predictive models of pedestrian dynamics by encoding the coupled nature of multi-pedestrian interaction using game theory, and deep learning-based visual analysis to estimate person-specific behavior parameters. Building predictive models for multi-pedestrian interactions however, is very challenging due to two reasons: (1) the dynamics of interaction are complex interdependent processes, where the predicted behavior of one pedestrian can affect the actions taken by others and (2) dynamics are variable depending on an individuals physical characteristics (e.g., an older person may walk slowly while the younger person may walk faster). To address these challenges, we (1) utilize concepts from game theory to model the interdependent decision making process of multiple pedestrians and (2) use visual classifiers to learn a mapping from pedestrian appearance to behavior parameters. We evaluate our proposed model on several public multiple pedestrian interaction video datasets. Results show that our strategic planning model explains human interactions 25% better when compared to state-of-the-art methods.", "title": "" }, { "docid": "000961818e2e0e619f1fc0464f69a496", "text": "Database query languages can be intimidating to the non-expert, leading to the immense recent popularity for keyword based search in spite of its significant limitations. The holy grail has been the development of a natural language query interface. We present NaLIX, a generic interactive natural language query interface to an XML database. Our system can accept an arbitrary English language sentence as query input, which can include aggregation, nesting, and value joins, among other things. 
This query is translated, potentially after reformulation, into an XQuery expression that can be evaluated against an XML database. The translation is done through mapping grammatical proximity of natural language parsed tokens to proximity of corresponding elements in the result XML. In this demonstration, we show that NaLIX, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed features in NaLIX facilitate the interactive query process and improve the usability of the interface.", "title": "" }, { "docid": "b0bbac53b0a3a0f00a9239fa9e66b6db", "text": "One challenge for maintaining a large-scale software system, especially an online service system, is to quickly respond to customer issues. The issue reports typically have many categorical attributes that reflect the characteristics of the issues. For a commercial system, most of the time the volume of reported issues is relatively constant. Sometimes, there are emerging issues that lead to significant volume increase. It is important for support engineers to efficiently and effectively identify and resolve such emerging issues, since they have impacted a large number of customers. Currently, problem identification for an emerging issue is a tedious and error-prone process, because it requires support engineers to manually identify a particular attribute combination that characterizes the emerging issue among a large number of attribute combinations. We call such an attribute combination effective combination, which is important for issue isolation and diagnosis. In this paper, we propose iDice, an approach that can identify the effective combination for an emerging issue with high quality and performance. We evaluate the effectiveness and efficiency of iDice through experiments. We have also successfully applied iDice to several Microsoft online service systems in production. The results confirm that iDice can help identify emerging issues and reduce maintenance effort.", "title": "" }, { "docid": "6087be6cef33af7d8fbfa55c8125bdb7", "text": "Support Vector Machines (SVM) are the classifiers which were originally designed for binary classification. The classification applications can solve multi-class problems. Decision-tree-based support vector machine which combines support vector machines and decision tree can be an effective way for solving multi-class problems in Intrusion Detection Systems (IDS). This method can decrease the training and testing time of the IDS, increasing the efficiency of the system. The different ways to construct the binary trees divides the data set into two subsets from root to the leaf until every subset consists of only one class. The construction order of binary tree has great influence on the classification performance. In this paper we are studying two decision tree approaches: Hierarchical multiclass SVM and Tree structured multiclass SVM, to construct multiclass intrusion detection system.", "title": "" }, { "docid": "67ca7b4e38b545cd34ef79f305655a45", "text": "Failsafe performance is clarified for electric vehicles (EVs) with the drive structure driven by front and rear wheels independently, i.e., front and rear wheel independent drive type (FRID) EV. A simulator based on the four-wheel vehicle model, which can be applied to various types of drive systems like four in-wheel motor-drive-type EVs, is used for the clarification. 
Yaw rate and skid angle, which are related to drivability and steerability of vehicles and which further influence the safety of vehicles during runs, are analyzed under the condition that one of the motor drive systems fails while cornering on wet roads. In comparison with the four in-wheel motor-drive-type EVs, it is confirmed that the EVs with the structure focused in this paper have little change of the yaw rate and that hardly any dangerous phenomena appear, which would cause an increase in the skid angle of vehicles even if the front or rear wheel drive systems fail when running on wet roads with low friction coefficient. Moreover, the failsafe drive performance of the FRID EVs with the aforementioned structure is verified through experiments using a prototype EV.", "title": "" }, { "docid": "76ef678b28d41317e2409b9fd2109f35", "text": "Conflicting guidelines for excisions about the alar base led us to develop calibrated alar base excision, a modification of Weir's approach. In approximately 20% of 1500 rhinoplasties this technique was utilized as a final step. Of these patients, 95% had lateral wallexcess (“tall nostrils”), 2% had nostril floor excess (“wide nostrils”), 2% had a combination of these (“tall-wide nostrils”), and 1% had thick nostril rims. Lateral wall excess length is corrected by a truncated crescent excision of the lateral wall above the alar crease. Nasal floor excess is improved by an excision of the nasal sill. Combination noses (e.g., tall-wide) are approached with a combination alar base excision. Finally, noses with thick rims are improved with diamond excision. Closure of the excision is accomplished with fine simple external sutures. Electrocautery is unnecessary and deep sutures are utilized only in wide noses. Few complications were noted. Benefits of this approach include straightforward surgical guidelines, a natural-appearing correction, avoidance of notching or obvious scarring, and it is quick and simple.", "title": "" }, { "docid": "bf7305ceee06b3672825032b78c5e22f", "text": "Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.", "title": "" }, { "docid": "ced0328f339248158e8414c3315330c5", "text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. 
Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoughtfully examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f 0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB", "title": "" }, { "docid": "0acbca58270fdbb557906bcdcf2ba2a6", "text": "This work demonstrates the design of a rectenna to operate over wide dynamic input power range. It utilizes an adaptive reconfigurable rectifier to overcome the issue of early breakdown voltage in conventional rectifiers. A depletion-mode field-effect transistor has been introduced to operate as a switch and compensate at low and high input power levels for the rectifier. In addition, a meandered monopole antenna has been exploited to collect RF energy. The rectifier design achieves 40% of RF-DC power conversion efficiency over a wide dynamic input power range from −17 dBm to 27 dBm and the antenna exhibits a directivity of 1.92 dBi as well as a return loss of −33 dB. The rectenna is designed to operate in the 900 MHz ISM band and suitable for Wireless Power Transfer (WPT) applications.", "title": "" }, { "docid": "75acb02c357a97de064242d41e394cb3", "text": "STUDY DESIGN\nA randomized controlled trial, prestest-posttest design, with a 3-, 6-, and 12-month follow-up.\n\n\nOBJECTIVES\nTo investigate the efficacy of a therapeutic exercise approach in a population with chronic low back pain (LBP).\n\n\nBACKGROUND\nTherapeutic approaches developed from the Pilates method are becoming increasingly popular; however, there have been no reports on their efficacy.\n\n\nMETHODS AND MEASURES\nThirty-nine physically active subjects between 20 and 55 years old with chronic LBP were randomly assigned to 1 of 2 groups. The specific-exercise-training group participated in a 4-week program consisting of training on specialized (Pilates) exercise equipment, while the control group received the usual care, defined as consultation with a physician and other specialists and healthcare professionals, as necessary. Treatment sessions were designed to train the activation of specific muscles thought to stabilize the lumbar-pelvic region. Functional disability outcomes were measured with The Roland Morris Disability Questionnaire (RMQ/RMDQ-HK) and average pain intensity using a 101-point numerical rating scale.\n\n\nRESULTS\nThere was a significantly lower level of functional disability (P = .023) and average pain intensity (P = .002) in the specific-exercise-training group than in the control group following the treatment intervention period. The posttest adjusted mean in functional disability level in the specific-exercise-training group was 2.0 (95% CI, 1.3 to 2.7) RMQ/RMDQ-HK points compared to a posttest adjusted mean in the control group of 3.2 (95% CI, 2.5 to 4.0) RMQ/RMDQ-HK points. The posttest adjusted mean in pain intensity in the specific-exercise-training group was 18.3 (95% CI, 11.8 to 24.8), as compared to 33.9 (95% CI, 26.9 to 41.0) in the control group. 
Improved disability scores in the specific-exercise-training group were maintained for up to 12 months following treatment intervention.\n\n\nCONCLUSIONS\nThe individuals in the specific-exercise-training group reported a significant decrease in LBP and disability, which was maintained over a 12-month follow-up period. Treatment with a modified Pilates-based approach was more efficacious than usual care in a population with chronic, unresolved LBP.", "title": "" }, { "docid": "52fca011caec44823513dbfe24389c15", "text": "Learning novel relations from relational databases is an important problem with many applications. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. Nevertheless, the same database may be represented under different schemas for various reasons, such as data quality, efficiency and usability. The output of current relational learning algorithms tends to vary quite substantially over the choice of schema. This variation complicates their off-the-shelf application. We introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de) composition schema transformations. We show that current algorithms are not schema independent. We propose Castor, a relational learning algorithm that achieves schema independence by leveraging data dependencies.", "title": "" }, { "docid": "f4dbd6570f45fa51c914fd6a63a99c23", "text": "We introduce the classified stable matching problem, a problem motivated by academic hiring. Suppose that a number of institutes are hiring faculty members from a pool of applicants. Both institutes and applicants have preferences over the other side. An institute classifies the applicants based on their research areas (or any other criterion), and, for each class, it sets a lower bound and an upper bound on the number of applicants it would hire in that class. The objective is to find a stable matching from which no group of participants has reason to deviate. Moreover, the matching should respect the upper/lower bounds of the classes. In the first part of the paper, we study classified stable matching problems whose classifications belong to a fixed set of \"order types.\" We show that if the set consists entirely of downward forests, there is a polynomial-time algorithm; otherwise, it is NP-complete to decide the existence of a stable matching.\n In the second part, we investigate the problem using a polyhedral approach. Suppose that all classifications are laminar families and there is no lower bound. We propose a set of linear inequalities to describe stable matching polytope and prove that it is integral. This integrality result allows us to find optimal stable matchings in polynomial time using Ellipsoid algorithm; furthermore, it gives a description of the stable matching polytope for the many-to-many (unclassified) stable matching problem, thereby answering an open question posed by Sethuraman, Teo and Qian.", "title": "" }, { "docid": "1ee6393c3507477af9108806690dc3c8", "text": "Image understanding achieves unprecedented performance in content recognition and emotion rating recently. However, previous research mainly focused on visual features. 
In this paper, inspired by human cognitive activities, we discuss how to measure portrait distance with respect to a high-level semantic, personality, by employing features from both visual content and behavioral content. First, a new image distance metric, named Social and Visual Portrait Distance, is designed by jointly considering visual features and human social media behavior features. In portrait images, visual features are defined globally and locally using theoretical and empirical concepts from psychology. In social media, behavior features are designed with consideration of demographic factors, identity claims and behavioral residue, and the new distance is estimated with reference to feature reliability. Second, we modify the proposed distance calculation formula so that it can be applied to visual features only while preserving both visual and social relations, via a new Social Embedding Portrait Distance Learning method. In this manner, we can measure the social embedding visual distance of common portrait images in the absence of social media information, such as web portraits or daily photos. Comprehensive experiments are employed to investigate the effectiveness of the new portrait distance and the metric learning method in representing personality distance compared with several baselines. Moreover, the learned distance matrix reveals a reasonable explanation of socially preferred visual features and their contribution partitions.", "title": "" } ]
scidocsrr
1e1ec049f52a8cb1b65a45d13ba44e93
Swing-Pay: A Digital Card Module using NFC and Biometric Authentication for Peer-to-Peer Payment
[ { "docid": "7a6ae2e12dbd9f4a0a3355caec648ca7", "text": "Near Field Communication (NFC) is an emerging wireless short-range communication technology that is based on existing standards of the Radio Frequency Identification (RFID) infrastructure. In combination with NFC-capable smartphones it enables intuitive application scenarios for contactless transactions, in particular services for mobile payment and over-the-air ticketing. The intention of this paper is to describe basic characteristics and benefits of the underlying technology, to classify modes of operation and to present various use cases. Both existing NFC applications and possible future scenarios will be analyzed in this context. Furthermore, security concerns, challenges and present conflicts will also be discussed.", "title": "" } ]
[ { "docid": "58e6b3b63b2210da621aabd891dbc627", "text": "The precise role of orbitofrontal cortex (OFC) in affective processing is still debated. One view suggests OFC represents stimulus reward value and supports learning and relearning of stimulus-reward associations. An alternate view implicates OFC in behavioral control after rewarding or punishing feedback. To discriminate between these possibilities, we used event-related functional magnetic resonance imaging in subjects performing a reversal task in which, on each trial, selection of the correct stimulus led to a 70% probability of receiving a monetary reward and a 30% probability of obtaining a monetary punishment. The incorrect stimulus had the reverse contingency. In one condition (choice), subjects had to choose which stimulus to select and switch their response to the other stimulus once contingencies had changed. In another condition (imperative), subjects had simply to track the currently rewarded stimulus. In some regions of OFC and medial prefrontal cortex, activity was related to valence of outcome, whereas in adjacent areas activity was associated with behavioral choice, signaling maintenance of the current response strategy on a subsequent trial. Caudolateral OFC-anterior insula was activated by punishing feedback preceding a switch in stimulus in both the choice and imperative conditions, indicating a possible role for this region in signaling a change in reward contingencies. These results suggest functional heterogeneity within the OFC, with a role for this region in representing stimulus-reward values, signaling changes in reinforcement contingencies and in behavioral control.", "title": "" }, { "docid": "d6f52736d78a5b860bdb364f64e4523c", "text": "Deep convolutional neural networks (CNN) have recently been shown to generate promising results for aesthetics assessment. However, the performance of these deep CNN methods is often compromised by the constraint that the neural network only takes the fixed-size input. To accommodate this requirement, input images need to be transformed via cropping, warping, or padding, which often alter image composition, reduce image resolution, or cause image distortion. Thus the aesthetics of the original images is impaired because of potential loss of fine grained details and holistic image layout. However, such fine grained details and holistic image layout is critical for evaluating an images aesthetics. In this paper, we present an Adaptive Layout-Aware Multi-Patch Convolutional Neural Network (A-Lamp CNN) architecture for photo aesthetic assessment. This novel scheme is able to accept arbitrary sized images, and learn from both fined grained details and holistic image layout simultaneously. To enable training on these hybrid inputs, we extend the method by developing a dedicated double-subnet neural network structure, i.e. a Multi-Patch subnet and a Layout-Aware subnet. We further construct an aggregation layer to effectively combine the hybrid features from these two subnets. Extensive experiments on the large-scale aesthetics assessment benchmark (AVA) demonstrate significant performance improvement over the state-of-the-art in photo aesthetic assessment.", "title": "" }, { "docid": "dd6b922a2cced45284cd1c67ad3be247", "text": "Today’s interconnected socio-economic and environmental challenges require the combination and reuse of existing integrated modelling solutions. 
This paper contributes to this overall research area, by reviewing a wide range of currently available frameworks, systems and emerging technologies for integrated modelling in the environmental sciences. Based on a systematic review of the literature, we group related studies and papers into viewpoints and elaborate on shared and diverging characteristics. Our analysis shows that component-based modelling frameworks and scientific workflow systems have been traditionally used for solving technical integration challenges, but ultimately, the appropriate framework or system strongly depends on the particular environmental phenomenon under investigation. The study also shows that in general individual integrated modelling solutions do not benefit from components and models that are provided by others. It is this island (or silo) situation, which results in low levels of model reuse for multi-disciplinary settings. This seems mainly due to the fact that the field as such is highly complex and diverse. A unique integrated modelling solution, which is capable of dealing with any environmental scenario, seems to be unaffordable because of the great variety of data formats, models, environmental phenomena, stakeholder networks, user perspectives and social aspects. Nevertheless, we conclude that the combination of modelling tools, which address complementary viewpoints such as service-based combined with scientific workflow systems, or resource-modelling on top of virtual research environments could lead to sustainable information systems, which would advance model sharing, reuse and integration. Next steps for improving this form of multi-disciplinary interoperability are sketched.", "title": "" }, { "docid": "fe2ba23200bc7e6fdca420ddf0a22ed9", "text": "Sexual precocity is considered to be present when indications of genital maturation become apparent in boys before the age of 10 years and in girls before the age of 8 years (Seckel, 1946). It is customary to divide these cases into two groups. In those with true precocious puberty maturation with spermatogenesis or ovulation has occurred in a normal manner, but at an abnormally early age; in those with pseudoprecocious puberty, premature development of the secondary sex organs, but without spermatogenesis or ovulation, has occurred as a result of an ovarian or adreno-cortical tumour, unusual sensitivity of end-organs to normal hormonal stimulation, or exogenous application of sex hormones or other compounds (Talbot, Sobel, McArthur and Crawford, 1952). In a small proportion of cases true precocious puberty is associated with tumours or cysts in the region of the hypothalamus or with post-meningitic or postencephalitic lesions, but in the majority diligent and repeated search fails to reveal any abnormality in the nervous system or endocrine glands (Wilkins, 1957). Such cases are generally referred to as 'idiopathic' or 'constitutional' (Novak, 1944), and it is suggested that the genetic factor or factors that determine the time of hypothalamic sex maturation must be at fault (Seckel, 1946). In a small percentage of cases there is a heredo-familial tendency (Rush, Bilderback, Slocum and Rogers, 1937; Jacobsen and Macklin, 1952). Precocious puberty of idiopathic origin represents simply the early appearance of normal phenomena, and many of the physical and laboratory examinations reveal findings normal for older children (Lloyd, Lobotsky and Morley, 1950). 
Except for the hazard of precocious pregnancy and the possibility of subnormal stature, the prognosis of girls with idiopathic precocious puberty is good, and it does not appear that the menopause is accelerated or that premature senility occurs (Talbot et al., 1952; Jolly, 1955). The following case of precocious puberty occurring in a female child appears to present features of sufficient interest to warrant publication. Case History", "title": "" }, { "docid": "06a241bc0483a910a3fecef8e7e7883a", "text": "Linear programming duality yields efficient algorithms for solving inverse linear programs. We show that special classes of conic programs admit a similar duality and, as a consequence, establish that the corresponding inverse programs are efficiently solvable. We discuss applications of inverse conic programming in portfolio optimization and utility function identification. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8c61854c397f8c56c4258c53d6d58894", "text": "Given the rapid development of plant genomic technologies, a lack of access to plant phenotyping capabilities limits our ability to dissect the genetics of quantitative traits. Effective, high-throughput phenotyping platforms have recently been developed to solve this problem. In high-throughput phenotyping platforms, a variety of imaging methodologies are being used to collect data for quantitative studies of complex traits related to the growth, yield and adaptation to biotic or abiotic stress (disease, insects, drought and salinity). These imaging techniques include visible imaging (machine vision), imaging spectroscopy (multispectral and hyperspectral remote sensing), thermal infrared imaging, fluorescence imaging, 3D imaging and tomographic imaging (MRT, PET and CT). This paper presents a brief review on these imaging techniques and their applications in plant phenotyping. The features used to apply these imaging techniques to plant phenotyping are described and discussed in this review.", "title": "" }, { "docid": "bc7209b09edae3ca916be1560fb1d396", "text": "The prediction and diagnosis of Tuberculosis survivability has been a challenging research problem for many researchers. Since the early days of the related research, much advancement has been recorded in several related fields. For instance, thanks to innovative biomedical technologies, better explanatory prognostic factors are being measured and recorded; thanks to low-cost computer hardware and software technologies, high-volume, better-quality data is being collected and stored automatically; and finally, thanks to better analytical methods, this voluminous data is being processed effectively and efficiently. Tuberculosis is one of the leading diseases for all people in developed countries including India. It is the most common cause of death in human beings. The high incidence of Tuberculosis in all people has increased significantly in recent years. In this paper we have discussed various data mining approaches that have been utilized for Tuberculosis diagnosis and prognosis. This paper summarizes various review and technical articles on Tuberculosis diagnosis and prognosis, and also focuses on current research being carried out using data mining techniques to enhance Tuberculosis diagnosis and prognosis. 
Here, we took advantage of those available technological advancements to develop the best prediction model for Tuberculosis survivability.", "title": "" }, { "docid": "0ca476ed89607680399604b39d76185b", "text": "Honeybee swarms and complex brains show many parallels in how they make decisions. In both, separate populations of units (bees or neurons) integrate noisy evidence for alternatives, and, when one population exceeds a threshold, the alternative it represents is chosen. We show that a key feature of a brain--cross inhibition between the evidence-accumulating populations--also exists in a swarm as it chooses its nesting site. Nest-site scouts send inhibitory stop signals to other scouts producing waggle dances, causing them to cease dancing, and each scout targets scouts' reporting sites other than her own. An analytic model shows that cross inhibition between populations of scout bees increases the reliability of swarm decision-making by solving the problem of deadlock over equal sites.", "title": "" }, { "docid": "58d8e3bd39fa470d1dfa321aeba53106", "text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although still in their infancy, there is no doubt indoor positioning technologies will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). Each part of the system is tested separately as well as the final solution quality.", "title": "" }, { "docid": "8767787aaa4590acda7812411135c168", "text": "Automatic annotation of images is one of the fundamental problems in computer vision applications. With the increasing amount of freely available images, it is quite possible that the training data used to learn a classifier has different distribution from the data which is used for testing. This results in degradation of the classifier performance and highlights the problem known as domain adaptation. Framework for domain adaptation typically requires a classification model which can utilize several classifiers by combining their results to get the desired accuracy. This work proposes depth-based and iterative depth-based fusion methods which are basically rank-based fusion methods and utilize rank of the predicted labels from different classifiers. Two frameworks are also proposed for domain adaptation. 
The first framework uses traditional machine learning algorithms, while the other works with metric learning as well as transfer learning algorithms. Motivated by ImageCLEF’s 2014 domain adaptation task, these frameworks with the proposed fusion methods are validated and verified by conducting experiments on the images from five domains having varied distributions. Bing, Caltech, ImageNet, and PASCAL are used as source domains and the target domain is SUN. Twelve object categories are chosen from these domains. The experimental results show the performance improvement not only over the baseline system, but also over the winner of ImageCLEF’s 2014 domain adaptation challenge.", "title": "" }, { "docid": "c068f6eee91a2316e57b7a3f62ff05ba", "text": "Recently, Minimum Cost Multicut Formulations have been proposed and proven to be successful in both motion trajectory segmentation and multi-target tracking scenarios. Both tasks benefit from decomposing a graphical model into an optimal number of connected components based on attractive and repulsive pairwise terms. The two tasks are formulated on different levels of granularity and, accordingly, leverage mostly local information for motion segmentation and mostly high-level information for multi-target tracking. In this paper we argue that point trajectories and their local relationships can contribute to the high-level task of multi-target tracking and also argue that high-level cues from object detection and tracking are helpful to solve motion segmentation. We propose a joint graphical model for point trajectories and object detections whose Multicuts are solutions to motion segmentation and multi-target tracking problems at once. Results on the FBMS59 motion segmentation benchmark as well as on pedestrian tracking sequences from the 2D MOT 2015 benchmark demonstrate the promise of this joint approach.", "title": "" }, { "docid": "3d81e3ed2c0614544887183ac7c049ce", "text": "Today, science is passing through an era of transformation, where the inundation of data, dubbed the data deluge, is influencing the decision making process. The science is driven by the data and is being termed data science. In this internet age, the volume of the data has grown up to petabytes, and this large, complex, structured or unstructured, and heterogeneous data in the form of “Big Data” has gained significant attention. The rapid pace of data growth through various disparate sources, especially social media such as Facebook, has seriously challenged the data analytic capabilities of traditional relational databases. The velocity of the expansion of the amount of data gives rise to a complete paradigm shift in how new age data is processed. Confidence in the data engineering of the existing data processing systems is gradually fading whereas the capabilities of the new techniques for capturing, storing, visualizing, and analyzing data are evolving. In this review paper, we discuss some of the modern Big Data models that are leading contributors in the NoSQL era and claim to address Big Data challenges in reliable and efficient ways. Also, we take the potential of Big Data into consideration and try to reshape the original operational-oriented definition of “Big Science” (Furner, 2003) into a new data-driven definition and rephrase it as “The science that deals with Big Data is Big Science.”
", "title": "" }, { "docid": "58e27ab73a264718f78effb4460c471d", "text": "Cross-chain communication is one of the major design considerations in current blockchain systems [4-7] such as Ethereum[8]. Currently, blockchains operate like isolated information islands; they cannot obtain external data or execute transactions on their own.\n Motivated by recent studies [1-3] on blockchain's multiChain framework, we investigate cross-chain communication. We introduce the blockchain router, which empowers blockchains to connect and communicate across chains. By establishing an economic model, the blockchain router enables different blockchains in the network to communicate with each other, much like the Internet. In the network of blockchain router, some blockchains play the role of a router which, according to the communication protocol, analyzes and transmits communication requests, dynamically maintaining a topology structure of the blockchain network.", "title": "" }, { "docid": "fdd4295dc3be3ec06c1785f3bdadd00e", "text": "The paper presents a method for automatically detecting pallets and estimating their position and orientation. For detection we use a sliding window approach with efficient candidate generation, fast integral features and a boosted classifier. Specific information regarding the detection task such as region of interest, pallet dimensions and pallet structure can be used to speed up and validate the detection process. Stereo reconstruction is employed for depth estimation by applying Semi-Global Matching aggregation with Census descriptors. Offline test results show that successful detection is possible in under 0.5 seconds.", "title": "" }, { "docid": "bcccf0eb4088fea50ff27d44f288a6e3", "text": "One of the big challenges in machine learning applications is that training data can be different from the real-world data faced by the algorithm. In language modeling, users’ language (e.g. in private messaging) could change in a year and be completely different from what we observe in publicly available data. At the same time, public data can be used for obtaining general knowledge (i.e. general model of English). We study approaches to distributed fine-tuning of a general model on user private data with the additional requirements of maintaining quality on the general data and minimizing communication costs. We propose a novel technique that significantly improves prediction quality on users’ language compared to a general model and outperforms gradient compression methods in terms of communication efficiency. The proposed procedure is fast and leads to an almost 70% perplexity reduction and 8.7 percentage point improvement in keystroke saving rate on informal English texts. 
Finally, we propose an experimental framework for evaluating differential privacy of distributed training of language models and show that our approach has good privacy guarantees.", "title": "" }, { "docid": "dbd7b707910d2b7ba0a3c4574a01bdaa", "text": "Visual recognition for object grasping is a well-known challenge for robot automation in industrial applications. A typical example is pallet recognition in industrial environment for pick-and-place automated process. The aim of vision and reasoning algorithms is to help robots in choosing the best pallets holes location. This work proposes an application-based approach, which ful l all requirements, dealing with every kind of occlusions and light situations possible. Even some ”meaning noise” (or ”meaning misunderstanding”) is considered. A pallet model, with limited degrees of freedom, is described and, starting from it, a complete approach to pallet recognition is outlined. In the model we de ne both virtual and real corners, that are geometrical object proprieties computed by different image analysis operators. Real corners are perceived by processing brightness information directly from the image, while virtual corners are inferred at a higher level of abstraction. A nal reasoning stage selects the best solution tting the model. Experimental results and performance are reported in order to demonstrate the suitability of the proposed approach.", "title": "" }, { "docid": "6fb06fff9f16024cf9ccf9a782bffecd", "text": "In this chapter, we discuss 3D compression techniques for reducing the delays in transmitting triangle meshes over the Internet. We first explain how vertex coordinates, which represent surface samples may be compressed through quantization, prediction, and entropy coding. We then describe how the connectivity, which specifies how the surface interpolates these samples, may be compressed by compactly encoding the parameters of a connectivity-graph construction process and by transmitting the vertices in the order in which they are encountered by this process. The storage of triangle meshes compressed with these techniques is usually reduced to about a byte per triangle. When the exact geometry and connectivity of the mesh are not essential, the triangulated surface may be simplified or retiled. Although simplification techniques and the progressive transmission of refinements may be used as a compression tool, we focus on recently proposed retiling techniques designed specifically to improve 3D compression. They are often able to reduce the total storage, which combines coordinates and connectivity, to half-a-bit per triangle without exceeding a mean square error of 1/10,000 of the diagonal of a box that contains the solid.", "title": "" }, { "docid": "72a5db33e2ba44880b3801987b399c3d", "text": "Over the last decade, the ever increasing world-wide demand for early detection of breast cancer at many screening sites and hospitals has resulted in the need of new research avenues. According to the World Health Organization (WHO), an early detection of cancer greatly increases the chances of taking the right decision on a successful treatment plan. The Computer-Aided Diagnosis (CAD) systems are applied widely in the detection and differential diagnosis of many different kinds of abnormalities. Therefore, improving the accuracy of a CAD system has become one of the major research areas. 
In this paper, a CAD scheme for detection of breast cancer has been developed using a deep belief network unsupervised path followed by a back-propagation supervised path. The construction is a back-propagation neural network with a Levenberg-Marquardt learning function, while weights are initialized from the deep belief network path (DBN-NN). Our technique was tested on the Wisconsin Breast Cancer Dataset (WBCD). The classifier complex gives an accuracy of 99.68%, indicating promising results over previously published studies. The proposed system provides an effective classification model for breast cancer. In addition, we examined the architecture at several train-test partitions. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9903cd100831977d129417b8663416a9", "text": "With the development of E-commerce, especially the rapid growth of the C2C market, transaction trust plays an increasingly important role. At present, there are some shortcomings in the on-line reputation mechanism. Based on the analysis of the factors affecting the transactions in C2C E-commerce, this paper first builds a hierarchical trust evaluation index system, then establishes a trust evaluation model for C2C E-commerce based on fuzzy sets theory. Finally, this evaluation model was tested on an example, and it was shown that the model could be used to evaluate the trust status of a C2C trading platform objectively. The model proposed in this paper can not only be used in C2C E-commerce but also be applied to other autonomous trust management.", "title": "" }, { "docid": "581ed4779ddde2d6f00da0975e71a73b", "text": "Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows the intention to be inferred from observed movements using Bayes’ theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.", "title": "" } ]
scidocsrr
50214005d861a43b77933e4389507d64
Handbook – No. 33 Text mining for central banks
[ { "docid": "4282e931ced3f8776f6c4cffb5027f61", "text": "OBJECTIVES\nTo provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design.\n\n\nTARGET AUDIENCE\nThis tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art.\n\n\nSCOPE\nWe describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.", "title": "" } ]
[ { "docid": "391412e1fe5593caaa2306496a341909", "text": "The present study examined friendship rules on the online social networking site Facebook. Study one used focus group data to inductively create a list of 36 Facebook friendship rules. Study two utilized survey data to examine college students’ endorsement of the various rules in close, casual, and acquaintance friendships. Results indicated five categories of Facebook friendship rules, which included rules regarding: communication channels, deception and control, relational maintenance, negative consequences for the self, and negatives consequences for a friend. Additionally, close friends, casual friends, and acquaintances significantly differed in their endorsement of four of the five rules categories. Results suggested that interaction rules provide a useful framework for the study of online social networking sites.", "title": "" }, { "docid": "921cb9021dc606af3b63116c45e093b2", "text": "Since its introduction, the orbitrap has proven to be a robust mass analyzer that can routinely deliver high resolving power and mass accuracy. Unlike conventional ion traps such as the Paul and Penning traps, the orbitrap uses only electrostatic fields to confine and to analyze injected ion populations. In addition, its relatively low cost, simple design and high space-charge capacity make it suitable for tackling complex scientific problems in which high performance is required. This review begins with a brief account of the set of inventions that led to the orbitrap, followed by a qualitative description of ion capture, ion motion in the trap and modes of detection. Various orbitrap instruments, including the commercially available linear ion trap-orbitrap hybrid mass spectrometers, are also discussed with emphasis on the different methods used to inject ions into the trap. Figures of merit such as resolving power, mass accuracy, dynamic range and sensitivity of each type of instrument are compared. In addition, experimental techniques that allow mass-selective manipulation of the motion of confined ions and their potential application in tandem mass spectrometry in the orbitrap are described. Finally, some specific applications are reviewed to illustrate the performance and versatility of the orbitrap mass spectrometers.", "title": "" }, { "docid": "90d9f68ebda9faae40206f081aa87fbb", "text": "This paper surveys existing and past research on brain-computer interfaces (BCI) for implicit human-computer interaction. A novel way of using BCI has indeed emerged, which proposes to use BCI in a less explicit way : the so-called “passive” BCI. Implicit BCI or passive BCI refers to BCI in which the user does not try to control his brain activity. Thus the brain activity is assimilated to an input and can be used to adapt the application to the user’s mental state. In this paper, we first study “implicit interaction” in general and recall its main applications. Then, we make a survey of existing and past research on brain-computer interfaces for implicit human-computer interaction. It seems indeed that BCI can be used in many applications in an implicit way, such as for adaptive automation, affective computing, or for video games. 
In such applications, BCI based on implicit interaction was often reported to improve performance of either the system or the user, or to introduce novel capacities based on mental states.", "title": "" }, { "docid": "d2e434f472b60e17ab92290c78706945", "text": "In recent years, a variety of review-based recommender systems have been developed, with the goal of incorporating the valuable information in user-generated textual reviews into the user modeling and recommending process. Advanced text analysis and opinion mining techniques enable the extraction of various types of review elements, such as the discussed topics, the multi-faceted nature of opinions, contextual information, comparative opinions, and reviewers’ emotions. In this article, we provide a comprehensive overview of how the review elements have been exploited to improve standard content-based recommending, collaborative filtering, and preference-based product ranking techniques. The review-based recommender system’s ability to alleviate the well-known rating sparsity and cold-start problems is emphasized. This survey classifies state-of-the-art studies into two principal branches: review-based user profile building and review-based product profile building. In the user profile sub-branch, the reviews are not only used to create term-based profiles, but also to infer or enhance ratings. Multi-faceted opinions can further be exploited to derive the weight/value preferences that users place on particular features. In another sub-branch, the product profile can be enriched with feature opinions or comparative opinions to better reflect its assessment quality. The merit of each branch of work is discussed in terms of both algorithm development and the way in which the proposed algorithms are evaluated. In addition, we discuss several future trends based on the survey, which may inspire investigators to pursue additional studies in this area.", "title": "" }, { "docid": "67b5e5ff3edadc31aefec2928ce43b26", "text": "We address the problem of computing semantic differences between a program and a patched version of the program. Our goal is to obtain a precise characterization of the difference between program versions, or establish their equivalence. We focus on infinite-state numerical programs, and use abstract interpretation to compute an over-approximation of program differences.\n Computing differences and establishing equivalence under abstraction requires abstracting relationships between variables in the two programs. Towards that end, we use a correlating abstract domain to compute a sound approximation of these relationships which captures semantic difference. This approximation can be computed over any interleaving of the two programs. However, the choice of interleaving can significantly affect precision. We present a speculative search algorithm that aims to find an interleaving of the two programs with minimal abstract semantic difference. This method is unique as it allows the analysis to dynamically alternate between several interleavings.\n We have implemented our approach and applied it to real-world examples including patches from Git, GNU Coreutils, as well as a few handpicked patches from the Linux kernel and the Mozilla Firefox web browser. 
Our evaluation shows that we compute precise approximations of semantic differences, and report few false differences.", "title": "" }, { "docid": "de48b60276b27861d58aaaf501606d69", "text": "Many environmental variables that are important for the development of chironomid larvae (such as water temperature, oxygen availability, and food quantity) are related to water depth, and a statistically strong relationship between chironomid distribution and water depth is therefore expected. This study focuses on the distribution of fossil chironomids in seven shallow lakes and one deep lake from the Plymouth Aquifer (Massachusetts, USA) and aims to assess the influence of water depth on chironomid assemblages within a lake. Multiple samples were taken per lake in order to study the distribution of fossil chironomid head capsules within a lake. Within each lake, the chironomid assemblages are diverse and the changes that are seen in the assemblages are strongly related to changes in water depth. Several thresholds (i.e., where species turnover abruptly changes) are identified in the assemblages, and most lakes show abrupt changes at about 1–2 and 5–7 m water depth. In the deep lake, changes also occur at 9.6 and 15 m depth. The distribution of many individual taxa is significantly correlated to water depth, and we show that the identification of different taxa within the genus Tanytarsus is important because different morphotypes show different responses to water depth. We conclude that the chironomid fauna is sensitive to changes in lake level, indicating that fossil chironomid assemblages can be used as a tool for quantitative reconstruction of lake level changes.", "title": "" }, { "docid": "6115cdfda5f7eff0f13d0d841176a3f3", "text": "A quadrotor with a cable-suspended load with eight degrees of freedom and four degrees underactuation is considered and the system is established to be a differentially-flat hybrid system. Using the flatness property, a trajectory generation method is presented that enables finding nominal trajectories with various constraints that not only result in minimal load swing if required, but can also cause a large swing in the load for dynamically agile motions. A control design is presented for the system specialized to the planar case, that enables tracking of either the quadrotor attitude, the load attitude or the position of the load. Stability proofs for the controller design and experimental validation of the proposed controller are presented.", "title": "" }, { "docid": "c14c575eed397c522a3bc0d2b766a836", "text": "Being highly unsaturated, carotenoids are susceptible to isomerization and oxidation during processing and storage of foods. Isomerization of trans-carotenoids to cis-carotenoids, promoted by contact with acids, heat treatment and exposure to light, diminishes the color and the vitamin A activity of carotenoids. The major cause of carotenoid loss, however, is enzymatic and non-enzymatic oxidation, which depends on the availability of oxygen and the carotenoid structure. It is stimulated by light, heat, some metals, enzymes and peroxides and is inhibited by antioxidants. Data on percentage losses of carotenoids during food processing and storage are somewhat conflicting, but carotenoid degradation is known to increase with the destruction of the food cellular structure, increase of surface area or porosity, length and severity of the processing conditions, storage time and temperature, transmission of light and permeability to O2 of the packaging. 
Contrary to lipid oxidation, for which the mechanism is well established, the oxidation of carotenoids is not well understood. It involves initially epoxidation, formation of apocarotenoids and hydroxylation. Subsequent fragmentations presumably result in a series of compounds of low molecular masses. Completely losing its color and biological activities, the carotenoids give rise to volatile compounds which contribute to the aroma/flavor, desirable in tea and wine and undesirable in dehydrated carrot. Processing can also influence the bioavailability of carotenoids, a topic that is currently of great interest.", "title": "" }, { "docid": "852391aa93e00f9aebdbc65c2e030abf", "text": "The iSTAR Micro Air Vehicle (MAV) is a unique 9-inch diameter ducted air vehicle weighing approximately 4 lb. The configuration consists of a ducted fan with control vanes at the duct exit plane. This VTOL aircraft not only hovers, but it can also fly at high forward speed by pitching over to a near horizontal attitude. The duct both increases propulsion efficiency and produces lift in horizontal flight, similar to a conventional planar wing. The vehicle is controlled using a rate based control system with piezo-electric gyroscopes. The Flight Control Computer (FCC) processes the pilot’s commands and the rate data from the gyroscopes to stabilize and control the vehicle. First flight of the iSTAR MAV was successfully accomplished in October 2000. Flight at high pitch angles and high speed took place in November 2000. This paper describes the vehicle, control system, and ground and flight-test results . Presented at the American Helicopter Society 57 Annual forum, Washington, DC, May 9-11, 2001. Copyright  2001 by the American Helicopter Society International, Inc. All rights reserved. Introduction The Micro Craft Inc. iSTAR is a Vertical Take-Off and Landing air vehicle (Figure 1) utilizing ducted fan technology to hover and fly at high forward speed. The duct both increases the propulsion efficiency and provides direct lift in forward flight similar to a conventional planar wing. However, there are many other benefits inherent in the iSTAR design. In terms of safety, the duct protects personnel from exposure to the propeller. The vehicle also has a very small footprint, essentially a circle equal to the diameter of the duct. This is beneficial for stowing, transporting, and in operations where space is critical, such as on board ships. The simplicity of the design is another major benefit. The absence of complex mechanical systems inherent in other VTOL designs (e.g., gearboxes, articulating blades, and counter-rotating propellers) benefits both reliability and cost. Figure 1: iSTAR Micro Air Vehicle The Micro Craft iSTAR VTOL aircraft is able to both hover and fly at high speed by pitching over towards a horizontal attitude (Figure 2). Although many aircraft in history have utilized ducted fans, most of these did not attempt to transition to high-speed forward flight. One of the few aircraft that did successfully transition was the Bell X-22 (Reference 1), first flown in 1965. The X-22, consisted of a fuselage and four ducted fans that rotated relative to the fuselage to transition the vehicle forward. The X-22 differed from the iSTAR in that its fuselage remained nearly level in forward flight, and the ducts rotated relative to the fuselage. Also planar tandem wings, not the ducts themselves, generated a large portion of the lift in forward flight. 1 Micro Craft Inc. 
is a division of Allied Aerospace Industry Incorporated (AAII). One of the first aircraft using an annular wing for direct lift was the French Coleoptère (Reference 1) built in the late 1950s. This vehicle successfully completed transition from hovering flight using an annular wing; however, a ducted propeller was not used. Instead, a single jet engine was mounted inside the center-body for propulsion. Control was achieved by deflecting vanes inside the jet exhaust, with small external fins attached to the duct, and also with deployable strakes on the nose. (Figure 2: Hover & flight at forward speed.) Less well-known are the General Dynamics ducted-fan Unmanned Air Vehicles, which were developed and flown starting in 1960 with the PEEK (Reference 1) aircraft. These vehicles, a precursor to the Micro Craft iSTAR, demonstrated stable hover and low speed flight in free-flight tests, and transition to forward flight in tethered ground tests. In 1999, Micro Craft acquired the patent, improved and miniaturized the design, and manufactured two 9-inch diameter flight test vehicles under DARPA funding (Reference 1). Working in conjunction with BAE Systems (formerly Lockheed Sanders) and the Army/NASA Rotorcraft Division, these vehicles have recently completed a proof-of-concept flight test program and have been demonstrated to DARPA and the US Army. Military applications of the iSTAR include intelligence, surveillance, target acquisition, and reconnaissance. Commercial applications include border patrol, bridge inspection, and police surveillance. Vehicle Description The iSTAR is composed of four major assemblies as shown in Figure 3: (1) the upper center-body, (2) the lower center-body, (3) the duct, and (4) the landing ring. The majority of the vehicle’s structure is composed of Kevlar composite material, resulting in a very strong and lightweight structure. Kevlar also lacks the brittleness common to other composite materials. Components that are not composite include the engine bulkhead (aluminum) and the landing ring (steel wire). The four major assemblies are described below. The upper center-body (UCB) is cylindrical in shape and contains the engine, engine controls, propeller, and payload. Three sets of hollow struts support the UCB and pass fuel and wiring to the duct. The propulsion system is a commercial-off-the-shelf (COTS) OS-32 SX single-cylinder engine. This engine develops 1.2 hp and weighs approximately 250 grams (~0.5 lb.). Fuel consists of a mixture of alcohol, nitro-methane, and oil. The fixed-pitch propeller is attached directly to the engine shaft (without a gearbox). Starting the engine is accomplished by inserting a cylindrical shaft with an attached gear into the upper center-body and meshing it with a gear fit onto the propeller shaft (see Figure 4). The shaft is rotated using an off-board electric starter (Micro Craft is also investigating on-board starting systems). (Figure 3: iSTAR configuration.) A micro video camera is mounted inside the nose cone, which is easily removable to accommodate modular payloads. The entire UCB can be removed in less than five minutes by removing eight screws securing the struts, and then disconnecting one fuel line and one electrical connector. (Figure 4: Engine starting.) The lower center-body (LCB) is cylindrical in shape and is supported by eight stators. The sensor board is housed in the LCB, and contains three piezo-electric gyroscopes, three accelerometers, a voltage regulator, and amplifiers.
The sensor signals are routed to the processor board in the duct via wires integrated into the stators. The duct is nine inches in diameter and contains a significant amount of volume for packaging. The fuel tank, Flight Control Computer (FCC), voltage regulator, batteries, servos, and receiver are all housed inside the duct. Fuel is contained in the leading edge of the duct. This tank is non-structural, and easily removable. It is attached to the duct with tape. Internal to the duct are eight fixed stators. The angle of the stators is set so that they produce an aerodynamic rolling moment countering the torque of the engine. Control vanes are attached to the trailing edge of the stators, providing roll, yaw, and pitch control. Four servos mounted inside the duct actuate the control vanes. Many different landing systems have been studied in the past. These trade studies have identified the landing ring as superior overall to other systems. The landing ring stabilizes the vehicle in close proximity to the ground by providing a restoring moment in dynamic situations. For example, if the vehicle were translating slowly and contacted the ground, the ring would pitch the vehicle upright. The ring also reduces blockage of the duct during landing and take-off by raising the vehicle above the ground. Blocking the duct can lead to reduced thrust and control power. Landing feet have also been considered because of their reduced weight. However, landing ‘feet’ lack the self-stabilizing characteristics of the ring in dynamic situations and tend to ‘catch’ on uneven surfaces. Electronics and Control System The Flight Control Computer (FCC) is housed in the duct (Figure 5: Flight Control Computer). The computer processes the sensor output and pilot commands and generates pulse width modulated (PWM) signals to drive the servos. Pilot commands are generated using two conventional joysticks. The left joystick controls throttle position and heading. The right joystick controls pitch and yaw rate. The aircraft axis system is defined such that the longitudinal axis is coaxial with the engine shaft. Therefore, in hover the pitch attitude is 90 degrees and rolling the aircraft produces a heading change. Dedicated servos are used for pitch and yaw control. However, all control vanes are used for roll control (four quadrant roll control). The FCC provides the appropriate mixing for each servo. In each axis, the control system architecture consists of a conventional Proportional-Integral-Derivative (PID) controller with single-input and single-output. Initially, an attitude-based control system was desired; however, due to the lack of acceleration information and the high gyroscope drift rates, accurate attitudes could not be calculated. For this reason, a rate system was ultimately implemented. Three Murata micro piezo-electric gyroscopes provide rates about all three axes. These gyroscopes are approximately 0.6”x0.3”x0.15” in size and weigh 1 gram each (Figure 6). Four COTS servos are located in the duct to actuate the control surfaces. Each servo weighs 28 grams and is 1.3”x1.3”x0.6” in size. Relative to typical UAV servos, they can generate high rates, but have low bandwidth. Bandwidth is defined by how high a frequency the servo can accurately follow an input signal.
For all servos, the output lags behind the input and the signal degrades in magnitude as the frequency increases. At low frequency, the iSTAR MAV servo output signal lags by approximately 30°,", "title": "" }, { "docid": "4aed26d5f35f6059f4afe8cc7225f6a8", "text": "The rapid and quick growth of smart mobile devices has caused users to demand pervasive mobile broadband services comparable to the fixed broadband Internet. In this direction, the research initiatives on 5G networks have gained accelerating momentum globally. 5G Networks will act as a nervous system of the digital society, economy, and everyday peoples life and will enable new future Internet of Services paradigms such as Anything as a Service, where devices, terminals, machines, also smart things and robots will become innovative tools that will produce and will use applications, services and data. However, future Internet will exacerbate the need for improved QoS/QoE, supported by services that are orchestrated on-demand and that are capable of adapt at runtime, depending on the contextual conditions, to allow reduced latency, high mobility, high scalability, and real time execution. A new paradigm called Fog Computing, or briefly Fog has emerged to meet these requirements. Fog Computing extends Cloud Computing to the edge of the network, reduces service latency, and improves QoS/QoE, resulting in superior user-experience. This paper provides a survey of 5G and Fog Computing technologies and their research directions, that will lead to Beyond-5G Network in the Fog.", "title": "" }, { "docid": "4d426f63c73075485479dacbd3ad26c3", "text": "Augmented reality is a visual technology which combines virtual objects into the real environment in real time. E-tourism in Bali needs to be optimized, so that information technology can help tourists and provide new experiences when traveling. Generally, tourists wish for gaining information in an attractive way about visiting tourism objects. Nowadays, mobile-based application programs that provide information about tourism objects in Bali are rarely found. Therefore, it is important to develop an application which provides information system about tourism objects, especially about the Tanah Lot temple. By implementing augmented reality technology, which grows rapidly all over the world, the application of DewataAR can show 3 dimensional objects, video, and audio information of the temples. The application works by scanning brochure of tourism object by using an Android smartphone or tablet, then it can display 3 dimensional objects, video, and audio information about those tourism objects. Hence, augmented reality can be alternative media for promoting tourism object attractively for tourists and also be able to develop tourism in Bali.", "title": "" }, { "docid": "46764b13332b2f51f4dc4cec69d7b170", "text": "P. Abrams , K.E. Andersson, L. Birder, L. Brubaker, L. Cardozo, C. Chapple, A. Cottenden, W. Davila, D. de Ridder, R. Dmochowski, M. Drake, C. DuBeau, C. Fry, P. Hanno, J. Hay Smith, S. Herschorn, G. Hosker, C. Kelleher, H. Koelbl, S. Khoury,* R. Madoff, I. Milsom, K. Moore, D. Newman, V. Nitti, C. Norton, I. Nygaard, C. Payne, A. Smith, D. Staskin, S. Tekgul, J. Thuroff, A. Tubaro, D. Vodusek, A. Wein, and J.J. Wyndaele and the Members of the Committees", "title": "" }, { "docid": "4fb27373155b20702a02ad814a4e9b61", "text": "Sanskrit since many thousands of years has been the oriental language of India. It is the base for most of the Indian Languages. 
Ambiguity is inherent in the Natural Language sentences. Here, one word can be used in multiple senses. Morphology process takes word in isolation and fails to disambiguate correct sense of a word. Part-Of-Speech Tagging (POST) takes word sequences in to consideration to resolve the correct sense of a word present in the given sentence. Efficient POST have been developed for processing of English, Japanese, and Chinese languages but it is lacking for Indian languages. In this paper our work present simple rule-based POST for Sanskrit language. It uses rule based approach to tag each word of the sentence. These rules are stored in the database. It parses the given Sanskrit sentence and assigns suitable tag to each word automatically. We have tested this approach for 15 tags and 100 words of the language this rule based tagger gives correct tags for all the inflected words in the given sentence.", "title": "" }, { "docid": "2fed3f693a52ca9852c9238d3c9abf36", "text": "A thin artificial magnetic conductor (AMC) structure is designed and breadboarded for radar cross-section (RCS) Reduction applications. The design presented in this paper shows the advantage of geometrical simplicity while simultaneously reducing the overall thickness (for the current design ). The design is very pragmatic and is based on a combination of AMC and perfect electric conductor (PEC) cells in a chessboard like configuration. An array of Sievenpiper's mushrooms constitutes the AMC part, while the PEC part is formed by full metallic patches. Around the operational frequency of the AMC-elements, the reflection of the AMC and PEC have opposite phase, so for any normal incident plane wave the reflections cancel out, thus reducing the RCS. The same applies to specular reflections for off-normal incidence angles. A simple basic model has been implemented in order to verify the behavior of this structure, while Ansoft-HFSS software has been used to provide a more thorough analysis. Both bistatic and monostatic measurements have been performed to validate the approach.", "title": "" }, { "docid": "9d0cec5fda655863bc844374ec17f34f", "text": "Natural products from medicinal plants, either as pure compounds or as standardized extracts, provide unlimited opportunities for new drug leads because of the unmatched availability of chemical diversity. Due to an increasing demand for chemical diversity in screening programs, seeking therapeutic drugs from natural products, interest particularly in edible plants has grown throughout the world. Botanicals and herbal preparations for medicinal usage contain various types of bioactive compounds. The focus of this paper is on the analytical methodologies, which include the extraction, isolation and characterization of active ingredients in botanicals and herbal preparations. The common problems and key challenges in the extraction, isolation and characterization of active ingredients in botanicals and herbal preparations are discussed. As extraction is the most important step in the analysis of constituents present in botanicals and herbal preparations, the strengths and weaknesses of different extraction techniques are discussed. 
The analysis of bioactive compounds present in the plant extracts involving the applications of common phytochemical screening assays, chromatographic techniques such as HPLC and, TLC as well as non-chromatographic techniques such as immunoassay and Fourier Transform Infra Red (FTIR) are discussed.", "title": "" }, { "docid": "7e6de21317f08e934ecba93a5a8735d7", "text": "Robot technology is emerging for applications in disaster prevention with devices such as fire-fighting robots, rescue robots, and surveillance robots. In this paper, we suggest an portable fire evacuation guide robot system that can be thrown into a fire site to gather environmental information, search displaced people, and evacuate them from the fire site. This spool-like small and light mobile robot can be easily carried and remotely controlled by means of a laptop-sized tele-operator. It contains the following functional units: a camera to capture the fire site; sensors to gather temperature data, CO gas, and O2 concentrations; and a microphone with speaker for emergency voice communications between firefighter and victims. The robot's design gives its high-temperature protection, excellent waterproofing, and high impact resistance. Laboratory tests were performed for evaluating the performance of the proposed evacuation guide robot system.", "title": "" }, { "docid": "eec33c75a0ec9b055a857054d05bcf54", "text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.", "title": "" }, { "docid": "93adb6d22531c0ec6335a7bec65f4039", "text": "The term stroke-based rendering collectively describes techniques where images are generated from elements that are usually larger than a pixel. These techniques lend themselves well for rendering artistic styles such as stippling and hatching. This paper presents a novel approach for stroke-based rendering that exploits multi agent systems. RenderBots are individual agents each of which in general represents one stroke. 
They form a multi agent system and undergo a simulation to distribute themselves in the environment. The environment consists of a source image and possibly additional G-buffers. The final image is created when the simulation is finished by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as their way of painting so that different styles can be created in a very flexible way.", "title": "" }, { "docid": "a6d8fadb1e0e05929dbca89ee7188088", "text": "The polymorphic nature of the cytochrome P450 (CYP) genes affects individual drug response and adverse reactions to a great extent. This variation includes copy number variants (CNV), missense mutations, insertions and deletions, and mutations affecting gene expression and activity of mainly CYP2A6,CYP2B6, CYP2C9, CYP2C19 andCYP2D6,which have been extensively studied andwell characterized. CYP1A2 andCYP3A4 expression varies significantly, and the cause has been suggested to bemainly of genetic origin but the exact molecular basis remains unknown.We present a review of the major polymorphic CYP alleles and conclude that this variability is of greatest importance for treatment with several antidepressants, antipsychotics, antiulcer drugs, anti-HIV drugs, anticoagulants, antidiabetics and the anticancer drug tamoxifen. We also present tables illustrating the relative importance of specific common CYP alleles for the extent of enzyme functionality. The field of pharmacoepigenetics has just opened, and we present recent examples wherein gene methylation influences the expression of CYP. In addition microRNA (miRNA) regulation of P450 has been described. Furthermore, this review updates the fieldwith respect to regulatory initiatives and experience of predictive pharmacogenetic investigations in the clinics. It is concluded that the pharmacogenetic knowledge regarding CYP polymorphism now developed to a stage where it can be implemented in drug development and in clinical routine for specific drug treatments, thereby improving the drug response and reducing costs for drug treatment. © 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "9b1f40687d0c9b78efdf6d1e19769294", "text": "The ideal cell type to be used for cartilage therapy should possess a proven chondrogenic capacity, not cause donor-site morbidity, and should be readily expandable in culture without losing their phenotype. There are several cell sources being investigated to promote cartilage regeneration: mature articular chondrocytes, chondrocyte progenitors, and various stem cells. Most recently, stem cells isolated from joint tissue, such as chondrogenic stem/progenitors from cartilage itself, synovial fluid, synovial membrane, and infrapatellar fat pad (IFP) have gained great attention due to their increased chondrogenic capacity over the bone marrow and subcutaneous adipose-derived stem cells. In this review, we first describe the IFP anatomy and compare and contrast it with other adipose tissues, with a particular focus on the embryological and developmental aspects of the tissue. We then discuss the recent advances in IFP stem cells for regenerative medicine. We compare their properties with other stem cell types and discuss an ontogeny relationship with other joint cells and their role on in vivo cartilage repair. We conclude with a perspective for future clinical trials using IFP stem cells.", "title": "" } ]
scidocsrr
7167227378a67f9210ff90c0acfbe48b
RULE MINING AND CLASSIFICATION OF ROAD TRAFFIC ACCIDENTS USING ADAPTIVE REGRESSION TREES
[ { "docid": "f7c7e00e3a2b07cd5845b26d6522d16e", "text": "This work employed Artificial Neural Networks and Decision Trees data analysis techniques to discover new knowledge from historical data about accidents on one of Nigeria’s busiest roads in order to reduce carnage on our highways. Data of accident records on the first 40 kilometres from Ibadan to Lagos were collected from the Nigeria Road Safety Corps. The data were organized into continuous and categorical data. The continuous data were analysed using the Artificial Neural Networks technique and the categorical data were analysed using the Decision Trees technique. Sensitivity analysis was performed and irrelevant inputs were eliminated. The performance measures used to determine the performance of the techniques include Mean Absolute Error (MAE), Confusion Matrix, Accuracy Rate, True Positive, False Positive and Percentage of correctly classified instances. Experimental results reveal that, between the machine learning paradigms considered, the Decision Tree approach outperformed the Artificial Neural Network with a lower error rate and higher accuracy rate. Our research analysis also shows that the three most important causes of accidents are tyre burst, loss of control and over speeding.", "title": "" } ]
[ { "docid": "49e1d016e1aae07d5e3ae1ad0e96e662", "text": "Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with \"full-fledged\" security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and ommunication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources.\n We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about $N^4$ queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way.\n Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.", "title": "" }, { "docid": "ce8fbbd79223622760ad07d6aab9111c", "text": "Selenium (Se) is a dietary essential trace element with important biological roles. Accumulating evidence indicates that Se compounds possess anticancer properties. Se is specifically incorporated into proteins in the form of selenocysteine and non-specifically incorporated as selenomethionine in place of methionine. The effects of Se compounds on cells are strictly compositional and concentration-dependent. At supranutritional dietary levels, Se can prevent the development of many types of cancer. At higher concentrations, Se compounds can be either cytotoxic or possibly carcinogenic. The cytotoxicity of Se is suggested to be associated with oxidative stress. Accordingly, sodium selenite, an inorganic Se compound, was reported to induce DNA damage, particularly DNA strand breaks and base damage. In this review we summarize the various activities of Se compounds and focus on their relation to DNA damage and repair. 
We discuss the use of Saccharomyces cerevisiae for identification of the genes involved in Se toxicity and resistance.", "title": "" }, { "docid": "0e144e826ab88464c9e8166b84b483b8", "text": "Video-on-demand streaming services have gained popularity over the past few years. An increase in the speed of the access networks has also led to a larger number of users watching videos online. Online video streaming traffic is estimated to further increase from the current value of 57% to 69% by 2017, Cisco, 2014. In order to retain the existing users and attract new users, service providers attempt to satisfy the user's expectations and provide a satisfactory viewing experience. The first step toward providing a satisfactory service is to be able to quantify the users' perception of the current service level. Quality of experience (QoE) is a quality metric that provides a holistic measure of the users' perception of the quality. In this survey, we first present a tutorial overview of the popular video streaming techniques deployed for stored videos, followed by identifying various metrics that could be used to quantify the QoE for video streaming services; finally, we present a comprehensive survey of the literature on various tools and measurement methodologies that have been proposed to measure or predict the QoE of online video streaming services.", "title": "" }, { "docid": "e790824ac08ceb82000c3cda024dc329", "text": "Cellulolytic bacteria were isolated from manure wastes (cow dung) and degrading soil (municipal solid waste). Nine bacterial strains were screened the cellulolytic activities. Six strains showed clear zone formation on Berg’s medium. CMC (carboxyl methyl cellulose) and cellulose were used as substrates for cellulase activities. Among six strains, cd3 and mw7 were observed in quantitative measurement determined by dinitrosalicylic acid (DNS) method. Maximum enzyme producing activity showed 1.702mg/ml and 1.677mg/ml from cd3 and mw7 for 1% CMC substrate. On the other hand, it was expressed 0.563mg/ml and 0.415mg/ml for 1% cellulose substrate respectively. It was also studied for cellulase enzyme producing activity optimizing with kinetic growth parameters such as different carbon source including various concentration of cellulose, incubation time, temperature, and pH. Starch substrate showed 0.909mg/ml and 0.851mg/ml in enzyme producing activity. The optimum substrate concentration of cellulose was 0.25% for cd3 but 1% for mw7 showing the amount of reducing sugar formation 0.628mg/ml and 0.669mg/ml. The optimum incubation parameters for cd3 were 84 hours, 40C and pH 6. Mw7 also had optimum parameters 60 hours, 40 C and pH6.", "title": "" }, { "docid": "80bfff01fbb1f6453b37d39b3b8b63f8", "text": "We consider regularized empirical risk minimization problems. In particular, we minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve the minimization problems in a randomized block coordinate descent (RBCD) manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each iteration. Thus they need all data to be accessible so that the partial gradient of the block gradient can be exactly obtained. However, such a \"batch\" setting may be computationally expensive in practice. 
In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial gradient of the selected block based on a mini-batch of randomly sampled data in each iteration. We further accelerate the MRBCD method by exploiting the semi-stochastic optimization scheme, which effectively reduces the variance of the partial gradient estimators. Theoretically, we show that for strongly convex functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further trim the MRBCD method to solve the regularized sparse learning problems. Our numerical experiments shows that the MRBCD method naturally exploits the sparsity structure and achieves better computational performance than existing methods.", "title": "" }, { "docid": "3e9f98a1aa56e626e47a93b7973f999a", "text": "This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. OntoSOC modeling approach is based on Engeström‟s Human Activity Theory (HAT). That Theory allowed us to identify fundamental concepts and relationships between them. The top-down precess has been used to define differents sub-concepts. The modeled vocabulary permits us to organise data, to facilitate information retrieval by introducing a semantic layer in social web platform architecture, we project to implement. This platform can be considered as a « collective memory » and Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share an co-construct knowledge on permanent organized activities.", "title": "" }, { "docid": "1f13e466fe482f07e8446345ef811685", "text": "Predicting users' actions based on anonymous sessions is a challenging problem in web-based behavioral modeling research, mainly due to the uncertainty of user behavior and the limited information. Recent advances in recurrent neural networks have led to promising approaches to solving this problem, with long short-term memory model proving effective in capturing users' general interests from previous clicks. However, none of the existing approaches explicitly take the effects of users' current actions on their next moves into account. In this study, we argue that a long-term memory model may be insufficient for modeling long sessions that usually contain user interests drift caused by unintended clicks. A novel short-term attention/memory priority model is proposed as a remedy, which is capable of capturing users' general interests from the long-term memory of a session context, whilst taking into account users' current interests from the short-term memory of the last-clicks. The validity and efficacy of the proposed attention mechanism is extensively evaluated on three benchmark data sets from the RecSys Challenge 2015 and CIKM Cup 2016. The numerical results show that our model achieves state-of-the-art performance in all the tests.", "title": "" }, { "docid": "841b7e21447c848fd999f9237818e52d", "text": "High-frequency B-mode images of 19 fresh human liver samples were obtained to evaluate their usefulness in determining the steatosis grade. The images were acquired by a mechanically controlled singlecrystal probe at 25 MHz. Image features derived from gray-level concurrence and nonseparable wavelet transform were extracted to classify steatosis grade using a classifier known as the support vector machine. A subsequent histologic examination of each liver sample graded the steatosis from 0 to 3. 
The four grades were then combined into two, three and four classes. The classification results were correlated with histology. The best classification accuracies of the two, three and four classes were 90.5%, 85.8% and 82.6%, respectively, which were markedly better than those at 7 MHz. These results indicate that liver steatosis can be more accurately characterized using high-frequency B-mode ultrasound. Limitations and their potential solutions of applying high-frequency ultrasound to liver imaging are also discussed. (E-mail: paichi@cc.ee.ntu.edu.tw) © 2005 World Federation for Ultrasound in Medicine & Biology.", "title": "" }, { "docid": "3bc800074b32fdf03812638d6a57f23d", "text": "Various low-latency anonymous communication systems such as Tor and Anoymizer have been designed to provide anonymity service for users. In order to hide the communication of users, many anonymity systems pack the application data into equal-sized cells (e.g., 512 bytes for Tor, a known real-world, circuit-based low-latency anonymous communication network). In this paper, we investigate a new cell counter based attack against Tor, which allows the attacker to confirm anonymous communication relationship among users very quickly. In this attack, by marginally varying the counter of cells in the target traffic at the malicious exit onion router, the attacker can embed a secret signal into the variation of cell counter of the target traffic. The embedded signal will be carried along with the target traffic and arrive at the malicious entry onion router. Then an accomplice of the attacker at the malicious entry onion router will detect the embedded signal based on the received cells and confirm the communication relationship among users. We have implemented this attack against Tor and our experimental data validate its feasibility and effectiveness. There are several unique features of this attack. First, this attack is highly efficient and can confirm very short communication sessions with only tens of cells. Second, this attack is effective and its detection rate approaches 100% with a very low false positive rate. Third, it is possible to implement the attack in a way that appears to be very difficult for honest participants to detect (e.g. using our hopping-based signal embedding).", "title": "" }, { "docid": "18ae35dc6bf27ec2182b75ac63348845", "text": "Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. 
Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.", "title": "" }, { "docid": "970a76190e980afe51928dcaa6d594c8", "text": "Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online.", "title": "" }, { "docid": "cf0d47466adec1adebeb14f89f0009cb", "text": "We developed a novel learning-based human detection system, which can detect people having different sizes and orientations, under a wide variety of backgrounds or even with crowds. To overcome the affects of geometric and rotational variations, the system automatically assigns the dominant orientations of each block-based feature encoding by using the rectangularand circulartype histograms of orientated gradients (HOG), which are insensitive to various lightings and noises at the outdoor environment. Moreover, this work demonstrated that Gaussian weight and tri-linear interpolation for HOG feature construction can increase detection performance. Particularly, a powerful feature selection algorithm, AdaBoost, is performed to automatically select a small set of discriminative HOG features with orientation information in order to achieve robust detection results. The overall computational time is further reduced significantly without any performance loss by using the cascade-ofrejecter structure, whose hyperplanes and weights of each stage are estimated by using the AdaBoost approach.", "title": "" }, { "docid": "2dd3ca2e8e9bc9b6d9ab6d4e8c9c3974", "text": "With the advancement of data acquisition techniques, tensor (multidimensional data) objects are increasingly accumulated and generated, for example, multichannel electroencephalographies, multiview images, and videos. In these applications, the tensor objects are usually nonnegative, since the physical signals are recorded. As the dimensionality of tensor objects is often very high, a dimension reduction technique becomes an important research topic of tensor data. From the perspective of geometry, high-dimensional objects often reside in a low-dimensional submanifold of the ambient space. In this paper, we propose a new approach to perform the dimension reduction for nonnegative tensor objects. Our idea is to use nonnegative Tucker decomposition (NTD) to obtain a set of core tensors of smaller sizes by finding a common set of projection matrices for tensor objects. To preserve geometric information in tensor data, we employ a manifold regularization term for the core tensors constructed in the Tucker decomposition. An algorithm called manifold regularization NTD (MR-NTD) is developed to solve the common projection matrices and core tensors in an alternating least squares manner. 
The convergence of the proposed algorithm is shown, and the computational complexity of the proposed method scales linearly with respect to the number of tensor objects and the size of the tensor objects, respectively. These theoretical results show that the proposed algorithm can be efficient. Extensive experimental results have been provided to further demonstrate the effectiveness and efficiency of the proposed MR-NTD algorithm.", "title": "" }, { "docid": "fc59a335d52d2f895eb6b7e49a836f67", "text": "Workflow management promises a new solution to an age-old problem: controlling, monitoring, optimizing and supporting business processes. What is new about workflow management is the explicit representation of the business process logic which allows for computerized support. This paper discusses the use of Petri nets in the context of workflow management. Petri nets are an established tool for modeling and analyzing processes. On the one hand, Petri nets can be used as a design language for the specification of complex workflows. On the other hand, Petri net theory provides for powerful analysis techniques which can be used to verify the correctness of workflow procedures. This paper introduces workflow management as an application domain for Petri nets, presents state-of-the-art results with respect to the verification of workflows, and highlights some Petri-net-based workflow tools.", "title": "" }, { "docid": "57c090eaab37e615b564ef8451412962", "text": "Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (opvi), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling—allowing inference to scale to massive data—as well as objectives that admit variational programs—a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of opvi on a mixture model and a generative model of images.", "title": "" }, { "docid": "10c7b7a19197c8562ebee4ae66c1f5e8", "text": "Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. 
Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models∗.", "title": "" }, { "docid": "be49c21abb971f31690fce9dc553e54b", "text": "In last decade, various agile methods have been introduced and used by software industry. It has been observed that many practitioners are using hybrid of agile methods and traditional methods. The knowledge of agile software development process about the theoretical grounds, applicability in large development settings and connections to establish software engineering disciplines remain mostly in dark. It has been reported that it is difficult for average manager to implement agile method in the organization. Further, every agile method has its own development cycle that brings technological, managerial and environmental changes in organization. A proper roadmap of agile software development in the form of agile software development life cycle can be developed to address the aforesaid issues of agile software development process. Thus, there is strong need of agile software development life cycle that clearly defines the phases included in any agile method and also describes the artifacts of each phase. This generalization of agile software development life cycle provides the guideline for average developers about usability, suitability, applicability of agile methods. Keywords-Agile software Development; extreme Programming; Adaptive software developmen; Scrum; Agile Method;story.", "title": "" }, { "docid": "939f8f3894a55ec1c4ccd8dc7f4d17b4", "text": "In the present paper the problems of existence and uniqueness of almost periodic solutions for impulsive cellular neural networks with delay are considered. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "b348a2835a16ac271f2140f9057dcaa1", "text": "The variational method has been introduced by Kass et al. (1987) in the field of object contour modeling, as an alternative to the more traditional edge detection-edge thinning-edge sorting sequence. since the method is based on a pre-processing of the image to yield an edge map, it shares the limitations of the edge detectors it uses. in this paper, we propose a modified variational scheme for contour modeling, which uses no edge detection step, but local computations instead—only around contour neighborhoods—as well as an “anticipating” strategy that enhances the modeling activity of deformable contour curves. many of the concepts used were originally introduced to study the local structure of discontinuity, in a theoretical and formal statement by leclerc & zucker (1987), but never in a practical situation such as this one. the first part of the paper introduces a region-based energy criterion for active contours, and gives an examination of its implications, as compared to the gradient edge map energy of snakes. then, a simplified optimization scheme is presented, accounting for internal and external energy in separate steps. 
This leads to a complete treatment, which is described in the last sections of the paper (4 and 5). The optimization technique used here is mostly heuristic, and is thus presented without a formal proof, but is believed to fill a gap between snakes and other useful image representations, such as split-and-merge regions or mixed line-labels image fields.", "title": "" } ]
scidocsrr
1274330dbef10dd77b554230b7c6538a
Understanding Graph-Based Trust Evaluation in Online Social Networks: Methodologies and Challenges
[ { "docid": "d09144b7f20f75501e2e0806f6c8258c", "text": "Social Network Marketing techniques employ pre-existing social networks to increase brands or products awareness through word-of-mouth promotion. Full understanding of social network marketing and the potential candidates that can thus be marketed to certainly offer lucrative opportunities for prospective sellers. Due to the complexity of social networks, few models exist to interpret social network marketing realistically. We propose to model social network marketing using Heat Diffusion Processes. This paper presents three diffusion models, along with three algorithms for selecting the best individuals to receive marketing samples. These approaches have the following advantages to best illustrate the properties of real-world social networks: (1) We can plan a marketing strategy sequentially in time since we include a time factor in the simulation of product adoptions; (2) The algorithm of selecting marketing candidates best represents and utilizes the clustering property of real-world social networks; and (3) The model we construct can diffuse both positive and negative comments on products or brands in order to simulate the complicated communications within social networks. Our work represents a novel approach to the analysis of social network marketing, and is the first work to propose how to defend against negative comments within social networks. Complexity analysis shows our model is also scalable to very large social networks.", "title": "" } ]
[ { "docid": "fbce98fcc5f4095754743ed4bdcc3f0b", "text": "Social interactions play a key role in the healthy development of social animals and are most pronounced in species with complex social networks. When developing offspring do not receive proper social interaction, they show developmental impairments. This effect is well documented in mammalian species but controversial in social insects. It has been hypothesized that the enlargement of the mushroom bodies, responsible for learning and memory, observed in social insects is needed for maintaining the large social networks and/or task allocation. This study examines the impact of social isolation on the development of mushroom bodies of the ant Camponotus floridanus. Ants raised in isolation were shown to exhibit impairment in the growth of the mushroom bodies as well as behavioral differences when compared to ants raised in social groups. These results indicate that social interaction is necessary for the proper development of C. floridanus mushroom bodies.", "title": "" }, { "docid": "44368062de68f6faed57d43b8e691e35", "text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.", "title": "" }, { "docid": "bfd946e8b668377295a1672a7bb915a3", "text": "Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users. The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data.", "title": "" }, { "docid": "8ee0a87116d700c8ad982f08d8215c1d", "text": "Game generation systems perform automated, intelligent design of games (i.e. videogames, boardgames), reasoning about both the abstract rule system of the game and the visual realization of these rules. 
Although, as an instance of the problem of creative design, game generation shares some common research themes with other creative AI systems such as story and art generators, game generation extends such work by having to reason about dynamic, playable artifacts. Like AI work on creativity in other domains, work on game generation sheds light on the human game design process, offering opportunities to make explicit the tacit knowledge involved in game design and test game design theories. Finally, game generation enables new game genres which are radically customized to specific players or situations; notable examples are cell phone games customized for particular users and newsgames providing commentary on current events. We describe an approach to formalizing game mechanics and generating games using those mechanics, using WordNet and ConceptNet to assist in performing common-sense reasoning about game verbs and nouns. Finally, we demonstrate and describe in detail a prototype that designs micro-games in the style of Nintendo’s", "title": "" }, { "docid": "e64ca3fbdb3acd1ffe0fff9557ce8541", "text": "With the explosive growth of video data, content-based video analysis and management technologies such as indexing, browsing and retrieval have drawn much attention. Video shot boundary detection (SBD) is usually the first and important step for those technologies. Great efforts have been made to improve the accuracy of SBD algorithms. However, most works are based on signal rather than interpretable features of frames. In this paper, we propose a novel video shot boundary detection framework based on interpretable TAGs learned by Convolutional Neural Networks (CNNs). Firstly, we adopt a candidate segment selection to predict the positions of shot boundaries and discard most non-boundary frames. This preprocessing method can help to improve both accuracy and speed of the SBD algorithm. Then, cut transition and gradual transition detections which are based on the interpretable TAGs are conducted to identify the shot boundaries in the candidate segments. Afterwards, we synthesize the features of frames in a shot and get semantic labels for the shot. Experiments on TRECVID 2001 test data show that the proposed scheme can achieve a better performance compared with the state-of-the-art schemes. Besides, the semantic labels obtained by the framework can be used to depict the content of a shot.", "title": "" }, { "docid": "2e40682bca56659428d2919191e1cbf3", "text": "Single-cell RNA-Seq (scRNA-Seq) has attracted much attention recently because it allows unprecedented resolution into cellular activity; the technology, therefore, has been widely applied in studying cell heterogeneity such as the heterogeneity among embryonic cells at varied developmental stages or cells of different cancer types or subtypes. A pertinent question in such analyses is to identify cell subpopulations as well as their associated genetic drivers. Consequently, a multitude of approaches have been developed for clustering or biclustering analysis of scRNA-Seq data. In this article, we present a fast and simple iterative biclustering approach called \"BiSNN-Walk\" based on the existing SNN-Cliq algorithm. One of BiSNN-Walk's differentiating features is that it returns a ranked list of clusters, which may serve as an indicator of a cluster's reliability. 
Another important feature is that BiSNN-Walk ranks genes in a gene cluster according to their level of affiliation to the associated cell cluster, making the result more biologically interpretable. We also introduce an entropy-based measure for choosing a highly clusterable similarity matrix as our starting point among a wide selection to facilitate the efficient operation of our algorithm. We applied BiSNN-Walk to three large scRNA-Seq studies, where we demonstrated that BiSNN-Walk was able to retain and sometimes improve the cell clustering ability of SNN-Cliq. We were able to obtain biologically sensible gene clusters in terms of GO term enrichment. In addition, we saw that there was significant overlap in top characteristic genes for clusters corresponding to similar cell states, further demonstrating the fidelity of our gene clusters.", "title": "" }, { "docid": "c84032da31c20d7561ee3f89a5074a5b", "text": "We develop a new type of statistical texture image feature, called a Local Radius Index (LRI), which can be used to quantify texture similarity based on human perception. Image similarity metrics based on LRI can be applied to image compression, identical texture retrieval and other related applications. LRI extracts texture features by using simple pixel value comparisons in space domain. Better performance can be achieved when LRI is combined with complementary texture features, e.g., Local Binary Patterns (LBP) and the proposed Subband Contrast Distribution. Compared with Structural Texture Similarity Metrics (STSIM), the LRI-based metrics achieve better retrieval performance with much less computation. Applied to the recently developed structurally lossless image coder, Matched Texture Coding, LRI enables similar performance while significantly accelerating the encoding.", "title": "" }, { "docid": "0470105ef882212930267e85d17b7c57", "text": "Using configuration synthesis and design map, the CPW-fed circular fractal slot antennas are proposed for dual-band applications. In practice, the experimental results with broadband and dual-band responses (47.4% and 13.5% bandwidth) and available radiation gains (peak gain 3.58 and 7.28 dBi) at 0.98 and 1.84 GHz respectively for half-wavelength design are achieved firstly. Then, the other broadband and dual-band responses (75.9% and 16.1% bandwidth) and available radiation gains (peak gain 3.16 and 6.62 dBi) at 2.38 and 5.35 GHz for quarter-wavelength design are described herein. Contour distribution patterns are applied to figure out the omni-directional patterns. The demonstration among the design map and the EM characteristics of the antenna is presented by current distributions.", "title": "" }, { "docid": "cc5815edf96596a1540fa1fca53da0d3", "text": "INTRODUCTION\nSevere motion sickness is easily identifiable with sufferers showing obvious behavioral signs, including emesis (vomiting). Mild motion sickness and sopite syndrome lack such clear and objective behavioral markers. We postulate that yawning may have the potential to be used in operational settings as such a marker. This study assesses the utility of yawning as a behavioral marker for the identification of soporific effects by investigating the association between yawning and mild motion sickness/sopite syndrome in a controlled environment.\n\n\nMETHODS\nUsing a randomized motion-counterbalanced design, we collected yawning and motion sickness data from 39 healthy individuals (34 men and 5 women, ages 27-59 yr) in static and motion conditions. 
Each individual participated in two 1-h sessions. Each session consisted of six 10-min blocks. Subjects performed a multitasking battery on a head mounted display while seated on the moving platform. The occurrence and severity of symptoms were assessed with the Motion Sickness Assessment Questionnaire (MSAQ).\n\n\nRESULTS\nYawning occurred predominantly in the motion condition. All yawners in motion (N = 5) were symptomatic. Compared to nonyawners (MSAQ indices: Total = 14.0, Sopite = 15.0), subjects who yawned in motion demonstrated increased severity of motion sickness and soporific symptoms (MSAQ indices: Total = 17.2, Sopite = 22.4), and reduced multitasking cognitive performance (Composite score: nonyawners = 1348; yawners = 1145).\n\n\nDISCUSSION\nThese results provide evidence that yawning may be a viable behavioral marker to recognize the onset of soporific effects and their concomitant reduction in cognitive performance.", "title": "" }, { "docid": "674d347526e5ea2677eec2f2b816935b", "text": "PATIENT\nMale, 70 • Male, 84.\n\n\nFINAL DIAGNOSIS\nAppendiceal mucocele and pseudomyxoma peritonei.\n\n\nSYMPTOMS\n-.\n\n\nMEDICATION\n-.\n\n\nCLINICAL PROCEDURE\n-.\n\n\nSPECIALTY\nSurgery.\n\n\nOBJECTIVE\nRare disease.\n\n\nBACKGROUND\nMucocele of the appendix is an uncommon cystic lesion characterized by distension of the appendiceal lumen with mucus. Most commonly, it is the result of epithelial proliferation, but it can also be caused by inflammation or obstruction of the appendix. When an underlying mucinous cystadenocarcinoma exists, spontaneous or iatrogenic rupture of the mucocele can lead to mucinous intraperitoneal ascites, a syndrome known as pseudomyxoma peritonei.\n\n\nCASE REPORT\nWe report 2 cases that represent the clinical extremities of this heterogeneous disease; an asymptomatic mucocele of the appendix in a 70-year-old female and a case of pseudomyxoma peritonei in an 84-year-old male. Subsequently, we review the current literature focusing to the optimal management of both conditions.\n\n\nCONCLUSIONS\nMucocele of the appendix is a rare disease, usually diagnosed on histopathologic examination of appendectomized specimens. Due to the existing potential for malignant transformation and pseudomyxoma peritonei caused by rupture of the mucocele, extensive preoperative evaluation and thorough intraoperative gastrointestinal and peritoneal examination is required.", "title": "" }, { "docid": "3473417f1701c82a4a06c00545437a3c", "text": "The eXtensible Markup Language (XML) and related technologies offer promise for (among other things) applying data management technology to documents, and also for providing a neutral syntax for interoperability among disparate systems. But like many new technologies, it has raised unrealistic expectations. We give an overview of XML and related standards, and offer opinions to help separate vaporware (with a chance of solidifying) from hype. In some areas, XML technologies may offer revolutionary improvements, such as in processing databases' outputs and extending data management to semi-structured data. For some goals, either a new class of DBMSs is required, or new standards must be built. For such tasks, progress will occur, but may be measured in ordinary years rather than Web time. For hierarchical formatted messages that do not need maximum compression (e.g., many military messages), XML may have considerable benefit. 
For interoperability among enterprise systems, XML's impact may be moderate as an improved basis for software, but great in generating enthusiasm for standardizing concepts and schemas.", "title": "" }, { "docid": "6f46e0d6ea3fb99c6e6a1d5907995e87", "text": "The study of financial markets has been addressed in many works during the last years. Different methods have been used in order to capture the non-linear behavior which is characteristic of these complex systems. The development of profitable strategies has been associated with the predictive character of the market movement, and special attention has been devoted to forecast the trends of financial markets. This work performs a predictive study of the principal index of the Brazilian stock market through artificial neural networks and the adaptive exponential smoothing method, respectively. The objective is to compare the forecasting performance of both methods on this market index, and in particular, to evaluate the accuracy of both methods to predict the sign of the market returns. Also the influence on the results of some parameters associated to both methods is studied. Our results show that both methods produce similar results regarding the prediction of the index returns. On the contrary, the neural networks outperform the adaptive exponential smoothing method in the forecasting of the market movement, with relative hit rates similar to the ones found in other developed markets. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7b6231f2e0fe08e2f72bf45176b5481f", "text": "PCA is a classical statistical technique whose simplicity and maturity has seen it find widespread use for anomaly detection. However, it is limited in this regard by being sensitive to gross perturbations of the input, and by seeking a linear subspace that captures normal behaviour. The first issue has been dealt with by robust PCA, a variant of PCA that explicitly allows for some data points to be arbitrarily corrupted; however, this does not resolve the second issue, and indeed introduces the new issue that one can no longer inductively find anomalies on a test set. This paper addresses both issues in a single model, the robust autoencoder. This method learns a nonlinear subspace that captures the majority of data points, while allowing for some data to have arbitrary corruption. The model is simple to train and leverages recent advances in the optimisation of deep neural networks. Experiments on a range of real-world datasets highlight the model’s effectiveness.", "title": "" }, { "docid": "61038d16483587c5025ef7bcaf7e6bd1", "text": "BACKGROUND\nMany prior studies have evaluated shoulder motion, yet no three-dimensional analysis comparing the combined clavicular, scapular, and humeral motion during arm elevation has been done. We aimed to describe and compare dynamic three-dimensional motion of the shoulder complex during raising and lowering the arm across three distinct elevation planes (flexion, scapular plane abduction, and coronal plane abduction).\n\n\nMETHODS\nTwelve subjects without a shoulder abnormality were enrolled. Transcortical pin placement into the clavicle, scapula, and humerus allowed electromagnetic motion sensors to be rigidly fixed. The subjects completed two repetitions of raising and lowering the arm in flexion, scapular, and abduction planes. Three-dimensional angles were calculated for sternoclavicular, acromioclavicular, scapulothoracic, and glenohumeral joint motions. 
Joint angles between humeral elevation planes and between raising and lowering of the arm were compared.\n\n\nRESULTS\nGeneral patterns of shoulder motion observed during humeral elevation were clavicular elevation, retraction, and posterior axial rotation; scapular internal rotation, upward rotation, and posterior tilting relative to the clavicle; and glenohumeral elevation and external rotation. Clavicular posterior rotation predominated at the sternoclavicular joint (average, 31 degrees). Scapular posterior tilting predominated at the acromioclavicular joint (average, 19 degrees). Differences between flexion and abduction planes of humerothoracic elevation were largest for the glenohumeral joint plane of elevation (average, 46 degrees).\n\n\nCONCLUSIONS\nOverall shoulder motion consists of substantial angular rotations at each of the four shoulder joints, enabling the multiple-joint interaction required to elevate the arm overhead.", "title": "" }, { "docid": "886c284d72a01db9bc4eb9467e14bbbb", "text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.", "title": "" }, { "docid": "2ab848215bd066373c9da1c6c01432a8", "text": "PURPOSE\nPersonal mobility vehicles (PMVs) are under active development. Most PMVs are wheel-driven, a mode of transport notable for its efficiency. However, wheeled PMVs tend to have poor mobility against negotiating obstacles. The four-wheeled vehicle RT-Mover PType 3 has been developed featuring wheeled legs capable of leg motion. This allows the PMV to overcome uneven terrains, including a step approached at an angle, which ordinary wheelchairs cannot negotiate.\n\n\nMETHOD\nThis article discusses a gait algorithm in which a leg executes the necessary leg motion when optionally presented with obstacles on a road. In order to lift a wheel off the ground and perform a leg motion, the support wheels must be moved to support points to ensure that the vehicle remains stable on three wheels. When moving towards the target support point, a wheel may encounter another obstacle, and a response method for this case is also described.\n\n\nRESULTS\nTo assess the gait algorithm, several configurations of obstacles were used for performance tests with a passenger. 
The capabilities of the PMV were demonstrated through experiments.\n\n\nCONCLUSION\nWe proposed a novel gait algorithm for our PMV and realised the proposed motion pattern for PMV-based negotiating obstacles.\n\n\nIMPLICATIONS FOR REHABILITATION\nOur single-seat personal mobility vehicle, RT-Mover PType 3 features wheels attached on legs capable of performing leg motion, which allows the vehicle to traverse rough terrains in urban areas. We proposed a gait algorithm for RT-Mover PType 3 consisting of a series of leg motions in response to rough terrain. With this algorithm, the vehicle can traverse not only randomly placed obstacles, but also a step approached at an oblique angle, which conventional powered wheelchairs cannot navigate. Experiments with a passenger demonstrated the effectiveness of the proposed gait algorithm, suggesting that RT-Mover PType 3 can expand the mobility and range of activities of wheelchair users.", "title": "" }, { "docid": "1d82d994635a0bd0137febd74b8c3835", "text": "research A. Agrawal J. Basak V. Jain R. Kothari M. Kumar P. A. Mittal N. Modani K. Ravikumar Y. Sabharwal R. Sureka Marketing decisions are typically made on the basis of research conducted using direct mailings, mall intercepts, telephone interviews, focused group discussion, and the like. These methods of marketing research can be time-consuming and expensive, and can require a large amount of effort to ensure accurate results. This paper presents a novel approach for conducting online marketing research based on several concepts such as active learning, matched control and experimental groups, and implicit and explicit experiments. These concepts, along with the opportunity provided by the increasing numbers of online shoppers, enable rapid, systematic, and cost-effective marketing research.", "title": "" }, { "docid": "205ed1eba187918ac6b4a98da863a6f2", "text": "Since the first papers on asymptotic waveform evaluation (AWE), Pade-based reduced order models have become standard for improving coupled circuit-interconnect simulation efficiency. Such models can be accurately computed using bi-orthogonalization algorithms like Pade via Lanczos (PVL), but the resulting Pade approximates can still be unstable even when generated from stable RLC circuits. For certain classes of RC circuits it has been shown that congruence transforms, like the Arnoldi algorithm, can generate guaranteed stable and passive reduced-order models. In this paper we present a computationally efficient model-order reduction technique, the coordinate-transformed Arnoldi algorithm, and show that this method generates arbitrarily accurate and guaranteed stable reduced-order models for RLC circuits. Examples are presented which demonstrates the enhanced stability and efficiency of the new method.", "title": "" }, { "docid": "e94d22b3f0435440b47421c4472d1278", "text": "In this short paper, a correction is made to the recently proposed solution of Li and Talaga to a 1D biased diffusion model for linear DNA translocation, and a new analysis will be given to their data. It was pointed out by us recently that this 1D linear translocation model is equivalent to the one that was considered by Schrödinger for the Ehrenhaft–Millikan measurements on electron charge. Here, we apply Schrödinger’s first-passage-time distribution formula to the data set in Li and Talaga. It is found that Schrödinger’s formula can be used to describe the time distribution of DNA translocation in solid-state nanopores. 
These fittings yield two useful parameters: the drift velocity of DNA translocation and the diffusion constant of DNA inside the nanopore. The results suggest two regimes of DNA translocation: (I) at low voltages, there are clear deviations from Smoluchowski’s linear law of electrophoresis, which we attribute to the entropic barrier effects; (II) at high voltages, the translocation velocity is a linear function of the applied electric field. In regime II, the apparent diffusion constant exhibits a quadratic dependence on the applied electric field, suggesting a mechanism of Taylor-dispersion effect likely due the electro-osmotic flow field in the nanopore channel. This analysis yields a dispersion-free diffusion constant value of 11.2 nm2 µs-1 for the segment of DNA inside the nanopore, which is in quantitative agreement with the Stokes–Einstein theory. The implication of Schrödinger’s formula for DNA sequencing is discussed.", "title": "" }, { "docid": "33ed6ab1eef74e6ba6649ff5a85ded6b", "text": "With the rapid increasing of smart phones and their embedded sensing technologies, mobile crowd sensing (MCS) becomes an emerging sensing paradigm for performing large-scale sensing tasks. One of the key challenges of large-scale mobile crowd sensing systems is how to effectively select the minimum set of participants from the huge user pool to perform the tasks and achieve certain level of coverage. In this paper, we introduce a new MCS architecture which leverages the cached sensing data to fulfill partial sensing tasks in order to reduce the size of selected participant set. We present a newly designed participant selection algorithm with caching and evaluate it via extensive simulations with a real-world mobile dataset.", "title": "" } ]
scidocsrr
128440de53e0fd6a9ba61507d6324518
Recommender System for Literature Review and Writing
[ { "docid": "28531c596a9df30b91d9d1e44d5a7081", "text": "The academic community has published millions of research papers to date, and the number of new papers has been increasing with time. To discover new research, researchers typically rely on manual methods such as keyword-based search, reading proceedings of conferences, browsing publication lists of known experts, or checking the references of the papers they are interested. Existing tools for the literature search are suitable for a first-level bibliographic search. However, they do not allow complex second-level searches. In this paper, we present a web service called TheAdvisor (http://theadvisor.osu.edu) which helps the users to build a strong bibliography by extending the document set obtained after a first-level search. The service makes use of the citation graph for recommendation. It also features diversification, relevance feedback, graphical visualization, venue and reviewer recommendation. In this work, we explain the design criteria and rationale we employed to make the TheAdvisor a useful and scalable web service along with a thorough experimental evaluation.", "title": "" }, { "docid": "f9905f4b66bc411499e77dc0504108ff", "text": "Automatic recommendation of citations for a manuscript is highly valuable for scholarly activities since it can substantially improve the efficiency and quality of literature search. The prior techniques placed a considerable burden on users, who were required to provide a representative bibliography or to mark passages where citations are needed. In this paper we present a system that considerably reduces this burden: a user simply inputs a query manuscript (without a bibliography) and our system automatically finds locations where citations are needed. We show that naïve approaches do not work well due to massive noise in the document corpus. We produce a successful approach by carefully examining the relevance between segments in a query manuscript and the representative segments extracted from a document corpus. An extensive empirical evaluation using the CiteSeerX data set shows that our approach is effective.", "title": "" } ]
[ { "docid": "b055b213e4f4b9ddf6822f0fc925d03d", "text": "We study a vehicle routing problem with soft time windows and stochastic travel times. In this problem, we consider stochastic travel times to obtain routes which are both efficient and reliable. In our problem setting, soft time windows allow early and late servicing at customers by incurring some penalty costs. The objective is to minimize the sum of transportation costs and service costs. Transportation costs result from three elements which are the total distance traveled, the number of vehicles used and the total expected overtime of the drivers. Service costs are incurred for early and late arrivals; these correspond to time-window violations at the customers. We apply a column generation procedure to solve this problem. The master problem can be modeled as a classical set partitioning problem. The pricing subproblem, for each vehicle, corresponds to an elementary shortest path problem with resource constraints. To generate an integer solution, we embed our column generation procedure within a branch-and-price method. Computational results obtained by experimenting with well-known problem instances are reported.", "title": "" }, { "docid": "df63f02bb95db6eb0bf0b7165fa750f0", "text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org. The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. Copyright © 2018, INFORMS", "title": "" }, { "docid": "fb655a622c2e299b8d7f8b85769575b4", "text": "With the substantial development of digital technologies in multimedia, network communication and user interfaces, we are seeing an increasing number of applications of these technologies, in particular in the entertainment domain. They include computer gaming, elearning, high-definition and interactive TVs, and virtual environments. The development of these applications typically involves the integration of existing technologies as well as the development of new technologies. This Introduction summarizes latest interactive entertainment technologies and applications, and briefly highlights some potential research directions. It also introduces the seven papers that are accepted to the special issue. Hopefully, this will provide the readers some insights into future research topics in interactive entertainment technologies and applications.", "title": "" }, { "docid": "cb5b60f3b1f577d51f567085170c6cac", "text": "Fundamental frequency (F0) is one of the essential features in many acoustic related applications. Although numerous F0 detection algorithms have been developed, the detection accuracy in noisy environments still needs improvement. We present a hybrid noise resilient F0 detection algorithm named BaNa that combines the approaches of harmonic ratios and Cepstrum analysis. A Viterbi algorithm with a cost function is used to identify the F0 value among several F0 candidates. 
Speech and music databases with eight different types of additive noise are used to evaluate the performance of the BaNa algorithm and several classic and state-of-the-art F0 detection algorithms. Results show that for almost all types of noise and signal-to-noise ratio (SNR) values investigated, BaNa achieves the lowest Gross Pitch Error (GPE) rate among all the algorithms. Moreover, for the 0 dB SNR scenarios, the BaNa algorithm is shown to achieve 20% to 35% GPE rate for speech and 12% to 39% GPE rate for music. We also describe implementation issues that must be addressed to run the BaNa algorithm as a real-time application on a smartphone platform.", "title": "" }, { "docid": "610068a7b1737375034960f0bf4d208d", "text": "Polymorphic malware detection is challenging due to the continual mutations miscreants introduce to successive instances of a particular virus. Such changes are akin to mutations in biological sequences. Recently, high-throughput methods for gene sequence classification have been developed by the bioinformatics and computational biology communities. In this paper, we argue that these methods can be usefully applied to malware detection. Unfortunately, gene classification tools are usually optimized for and restricted to an alphabet of four letters (nucleic acids). Consequently, we have selected the Strand gene sequence classifier, which offers a robust classification strategy that can easily accommodate unstructured data with any alphabet including source code or compiled machine code. To demonstrate Stand's suitability for classifying malware, we execute it on approximately 500GB of malware data provided by the Kaggle Microsoft Malware Classification Challenge (BIG 2015) used for predicting 9 classes of polymorphic malware. Experiments show that, with minimal adaptation, the method achieves accuracy levels well above 95% requiring only a fraction of the training times used by the winning team's method.", "title": "" }, { "docid": "068df85fd09061ebcdd599974c865675", "text": "The use of RFID (radio-frequency identification) in the retail supply chain and at the point of sale (POS) holds much promise to revolutionize the process by which products pass from manufacturer to retailer to consumer. The basic idea of RFID is a tiny computer chip placed on pallets, cases, or items. The data on the chip can be read using a radio beam. RFID is a newer technology than bar codes, which are read using a laser beam. RFID is also more effective than bar codes at tracking moving objects in environments where bar code labels would be suboptimal or could not be used as no direct line of sight is available, or where information needs to be automatically updated. RFID is based on wireless (radio) systems, which allows for noncontact reading of data about products, places, times, or transactions, thereby giving retailers and manufacturers alike timely and accurate data about the flow of products through their factories, warehouses, and stores. Background", "title": "" }, { "docid": "3e850a45249f45e95d1a7413e7b142f1", "text": "In our increasingly “data-abundant” society, remote sensing big data perform massive, high dimension and heterogeneity features, which could result in “dimension disaster” to various extent. It is worth mentioning that the past two decades have witnessed a number of dimensional reductions to weak the spatiotemporal redundancy and simplify the calculation in remote sensing information extraction, such as the linear learning methods or the manifold learning methods. 
However, the “crowding” and mixing when reducing dimensions of remote sensing categories could degrade the performance of existing techniques. Then in this paper, by analyzing probability distribution of pairwise distances among remote sensing datapoints, we use the 2-mixed Gaussian model(GMM) to improve the effectiveness of the theory of t-Distributed Stochastic Neighbor Embedding (t-SNE). A basic reducing dimensional model is given to test our proposed methods. The experiments show that the new probability distribution capable retains the local structure and significantly reveals differences between categories in a global structure.", "title": "" }, { "docid": "261da9a52120b9f845cc7e20809e02f0", "text": "Touch-based fingerprint technology causes distortions to the fingerprint features due to contact between finger and sensor device. Touch-less fingerprint technique is introduced in an effort to solve this problem by avoiding contact between the finger and the surface of the sensor. However, single contact-less images of the finger leads to less captured features and less overlap between the different views of the fingerprint. In this paper, a new touchless approach for fingerprints based on multiple views images is proposed. Three fingerprint images are captured from the left, center and right side of finger using mobile camera. These three images are combined together using the mosaic method in order to construct a large usable area and increase the overlap area. The proposed method has been compared with other proposed touchless methods. Our touchless mosaic method has offered better performance and achieves more fingerprint features compare to single view touchless method. The proposed method has been evaluated using our touchless database that consists of 480 fingerprint images.", "title": "" }, { "docid": "058a4f93fb5c24c0c9967fca277ee178", "text": "We report on the SUM project which applies automatic summarisation techniques to the legal domain. We describe our methodology whereby sentences from the text are classified according to their rhetorical role in order that particular types of sentence can be extracted to form a summary. We describe some experiments with judgments of the House of Lords: we have performed automatic linguistic annotation of a small sample set and then hand-annotated the sentences in the set in order to explore the relationship between linguistic features and argumentative roles. We use state-of-the-art NLP techniques to perform the linguistic annotation using XML-based tools and a combination of rule-based and statistical methods. We focus here on the predictive capacity of tense and aspect features for a classifier.", "title": "" }, { "docid": "45e625e1d04ed249074f40e1254dbd91", "text": "speech compression is the digital signal which is compressed by using various techniques for transmission. This paper explains a transform methodology for compression of the speech signal. In this paper speech is compressed by discrete wavelet transform technique afterward compressed signal is again compressed by discrete cosine transform afterward compressed signal is decompressed by discrete wavelet transform. The performance of speech signal is measure on the basis of peak signal to noise ratio (PSNR) and mean square error (MSE) by using different filters of wavelet family. 
Keywords— DCT, DWT, SPEECH COMPRESSION AND DECOMPRESSION", "title": "" }, { "docid": "a4e9d39a3ab7339e40958ad6df97adac", "text": "Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real life applications has even increased within the last years. Nowadays, it is possible to gather massive amounts of data at any time with comparatively little costs. While this availability of data could be used to develop complex models, its implementation is often narrowed because of limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or use parallelization techniques like the MapReduce framework. Anyhow, these options might be too cost intensive, not suitable, or even too time expensive to learn and realize. Following the premise that developers usually are not SQL experts we would like to discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for a (inter-operator and intra-operator) parallel execution on parallel DBMS as a second step of our research, not being part of this paper. TYPE OF PAPER AND", "title": "" }, { "docid": "0a2e59ab99b9666d8cf3fb31be9fa40c", "text": "Behavioral targeting (BT) is a widely used technique for online advertising. It leverages information collected on an individual's web-browsing behavior, such as page views, search queries and ad clicks, to select the ads most relevant to user to display. With the proliferation of social networks, it is possible to relate the behavior of individuals and their social connections. Although the similarity among connected individuals are well established (i.e., homophily), it is still not clear whether and how we can leverage the activities of one's friends for behavioral targeting; whether forecasts derived from such social information are more accurate than standard behavioral targeting models. In this paper, we strive to answer these questions by evaluating the predictive power of social data across 60 consumer domains on a large online network of over 180 million users in a period of two and a half months. To our best knowledge, this is the most comprehensive study of social data in the context of behavioral targeting on such an unprecedented scale. Our analysis offers interesting insights into the value of social data for developing the next generation of targeting services.", "title": "" }, { "docid": "fec4f80f907d65d4b73480b9c224d98a", "text": "This paper presents a novel finite position set-phase locked loop (FPS-PLL) for sensorless control of surface-mounted permanent-magnet synchronous generators (PMSGs) in variable-speed wind turbines. The proposed FPS-PLL is based on the finite control set-model predictive control concept, where a finite number of rotor positions are used to estimate the back electromotive force of the PMSG. Then, the estimated rotor position, which minimizes a certain cost function, is selected to be the optimal rotor position. This eliminates the need of a fixed-gain proportional-integral controller, which is commonly utilized in the conventional PLL. 
The performance of the proposed FPS-PLL has been experimentally investigated and compared with that of the conventional one using a 14.5 kW PMSG with a field-oriented control scheme utilized as the generator control strategy. Furthermore, the robustness of the proposed FPS-PLL is investigated against PMSG parameters variations.", "title": "" }, { "docid": "ac4d208a022717f6389d8b754abba80b", "text": "This paper presents a new approach to detect tabular structures present in document images and in low resolution video images. The algorithm for table detection is based on identifying the unique table start pattern and table trailer pattern. We have formulated perceptual attributes to characterize the patterns. The performance of our table detection system is tested on a set of document images picked from UW-III (University of Washington) dataset, UNLV dataset, video images of NPTEL videos, and our own dataset. Our approach demonstrates improved detection for different types of table layouts, with or without ruling lines. We have obtained correct table localization on pages with multiple tables aligned side-by-side.", "title": "" }, { "docid": "fac9465df30dd5d9ba5bc415b2be8172", "text": "In the Railway System, Railway Signalling System is the vital control equipment responsible for the safe operation of trains. In Railways, the system of communication from railway stations and running trains is by the means of signals through wired medium. Once the train leaves station, there is no communication between the running train and the station or controller. Hence, in case of failures or in emergencies in between stations, immediate information cannot be given and a particular problem will escalate with valuable time lost. Because of this problem only a single train can run in between two nearest stations. Now a days, Railway all over the world is using Optical Fiber cable for communication between stations and to send signals to trains. The usage of optical fibre cables does not lend itself for providing trackside communication as in the case of copper cable. Hence, another transmission medium is necessary for communication outside the station limits with drivers, guards, maintenance gangs, gateman etc. Obviously the medium of choice for such communication is wireless. With increasing speed and train density, adoption of train control methods such as Automatic warning system, (AWS) or, Automatic train stop (ATS), or Positive train separation (PTS) is a must. Even though, these methods traditionally pick up their signals from track based beacons, Wireless Sensor Network based systems will suit the Railways much more. In this paper, we described a new and innovative medium for railways that is Wireless Sensor Network (WSN) based Railway Signalling System and conclude that Introduction of WSN in Railways will not only achieve economy but will also improve the level of safety and efficiency of train operations.", "title": "" }, { "docid": "febf797870da28d6492885095b92ef1f", "text": "Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based "active learning" approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled.
Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to 10 x reduction in the number of training examples needed) over baseline techniques.", "title": "" }, { "docid": "6d2efd95c2b3486bec5b4c2ab2db18ad", "text": "The goal of this work is to replace objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene using the approach from Gupta et al. [13]. We use a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel normals in images containing rendered synthetic objects. When tested on real data, it outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place the model that fits the best into the scene. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [33], while being an order of magnitude faster at the same time.", "title": "" }, { "docid": "baf6d03a37e28182719d46b798b0f2de", "text": "We have acquired a set of audio-visual recordings of induced emotions. A collage of comedy clips and clips of disgusting content were shown to a number of participants, who displayed mostly expressions of disgust, happiness, and surprise in response. While displays of induced emotions may differ from those shown in everyday life in aspects such as the frequency with which they occur, they are regarded as highly naturalistic and spontaneous. We recorded 25 participants for approximately 5 minutes each. This collection of recordings has been added to the MMI Facial Expression Database, an online accessible, easily searchable resource that is freely available to the scientific community.", "title": "" }, { "docid": "dba5777004cf43d08a58ef3084c25bd3", "text": "This paper investigates the problem of automatic humour recognition, and provides and in-depth analysis of two of the most frequently observ ed features of humorous text: human-centeredness and negative polarity. T hrough experiments performed on two collections of humorous texts, we show that th ese properties of verbal humour are consistent across different data s ets.", "title": "" }, { "docid": "340a506b8968efa5f775c26fd5841599", "text": "One of the teaching methods available to teachers in the ‘andragogic’ model of teaching is the method of ‘Socratic Seminars’. 
This is a teacher-directed form of instruction in which questions are used as the sole method of teaching, placing students in the position of having to recognise the limits of their knowledge, and hopefully, motivating them to learn. This paper aims at initiating the discussion on the strengths and drawbacks of this method. Based on empirical research, the paper suggests that the Socratic method seems to be a very effective method for teaching adult learners, but should be used with caution depending on the personality of the learners.", "title": "" } ]
scidocsrr
115d8c87e1624510606e01c8605d8aab
High Sensitivity and Wide Dynamic Range Analog Front-End Circuits for Pulsed TOF 4-D Imaging LADAR Receiver
[ { "docid": "c5b9053b1b22d56dd827009ef529004d", "text": "An integrated receiver with high sensitivity and low walk error for a military purpose pulsed time-of-flight (TOF) LADAR system is proposed. The proposed receiver adopts a dual-gain capacitive-feedback TIA (C-TIA) instead of widely used resistive-feedback TIA (R-TIA) to increase the sensitivity. In addition, a new walk-error improvement circuit based on a constant-delay detection method is proposed. Implemented in 0.35 μm CMOS technology, the receiver achieves an input-referred noise current of 1.36 pA/√Hz with bandwidth of 140 MHz and minimum detectable signal (MDS) of 10 nW with a 5 ns pulse at SNR=3.3, maximum walk-error of 2.8 ns, and a dynamic range of 1:12,000 over the operating temperature range of -40 °C to +85 °C.", "title": "" } ]
[ { "docid": "acd93c6b041a975dcf52c7bafaf05b16", "text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.", "title": "" }, { "docid": "545d566dff3d4c4ace8dcd26040db3a2", "text": "In this paper, we first define our research problem as to detect collusive spammers in online review communities. Next we present our current progress on this topic, in which we have spotted anomalies by evaluating 15 behavioral features proposed in the state-of-the-art approaches. Then we propose a novel hybrid classification/clustering method to detect colluders in our dataset based on selected informative features. Experimental results show that our method promisingly improve the performance of traditional classifiers by incorporating clustering for the smoothing. Finally, possible extensions of our current work and challenges in achieving them are discussed as our future directions.", "title": "" }, { "docid": "d1513bdee495f972bc3ec97542809e25", "text": "Assessing software security involves steps such as code review, risk analysis, penetration testing and fuzzing. During the fuzzing phase, the tester's goal is to find flaws in software by sending unexpected input to the target application and monitoring its behavior. In this paper we introduce the AutoFuzz [1]-extendable, open source framework used for testing network protocol implementations. AutoFuzz is a "smart", man-in-the-middle, semi-deterministic network protocol fuzzing framework. AutoFuzz learns a protocol implementation by constructing a Finite State Automaton (FSA) which captures the observed communications between a client and a server [5]. In addition, AutoFuzz learns individual message syntax, including fields and probable types, by applying the bioinformatics techniques of [2]. Finally, AutoFuzz can fuzz client or server protocol implementations by intelligently modifying the communication sessions between them using the FSA as a guide. AutoFuzz was applied to a variety of File Transfer Protocol (FTP) server implementations, confirming old and discovering new vulnerabilities.", "title": "" }, { "docid": "0d706058ff906f643d35295075fa4199", "text": "[Purpose] The present study examined the effects of treatment using PNF extension techniques on the pain, pressure pain, and neck and shoulder functions of the upper trapezius muscles of myofascial pain syndrome (MPS) patients. [Subjects] Thirty-two patients with MPS in the upper trapezius muscle were divided into two groups: a PNF group (n=16), and a control group (n=16) [Methods] The PNF group received upper trapezius muscle relaxation therapy and shoulder joint stabilizing exercises. Subjects in the control group received only the general physical therapies for the upper trapezius muscles. Subjects were measured for pain on a visual analog scale (VAS), pressure pain threshold (PPT), the neck disability index (NDI), and the Constant-Murley scale (CMS).
[Results] None of the VAS, PPT, and NDI results showed significant differences between the groups, while performing postures, internal rotation, and external rotation among the CMS items showed significant differences between the groups. [Conclusion] Exercise programs that apply PNF techniques can be said to be effective at improving the function of MPS patients.", "title": "" }, { "docid": "99d76fafe2a238a061e67e4c5e5bea52", "text": "F/OSS software has been described by many as a puzzle. In the past five years, it has stimulated the curiosity of scholars in a variety of fields, including economics, law, psychology, anthropology and computer science, so that the number of contributions on the subject has increased exponentially. The purpose of this paper is to provide a sufficiently comprehensive account of these contributions in order to draw some general conclusions on the state of our understanding of the phenomenon and identify directions for future research. The exercise suggests that what is puzzling about F/OSS is not so much the fact that people freely contribute to a good they make available to all, but rather the complexity of its institutional structure and its ability to organizationally evolve over time. JEL Classification: K11, L22, L23, L86, O31, O34.", "title": "" }, { "docid": "4b28f538a21348f0cb7741d03c76081f", "text": "Differential privacy is a strong notion for protecting individual privacy in privacy preserving data analysis or publishing. In this paper, we study the problem of differentially private histogram release based on an interactive differential privacy interface. We propose two multidimensional partitioning strategies including a baseline cell-based partitioning and an innovative kd-tree based partitioning. In addition to providing formal proofs for differential privacy and usefulness guarantees for linear distributive queries , we also present a set of experimental results and demonstrate the feasibility and performance of our method.", "title": "" }, { "docid": "34ab0d523054aaf0ad0731c11e137cd0", "text": "Although a large number of WiFi fingerprinting based indoor localization systems have been proposed, our field experience with Google Maps Indoor (GMI), the only system available for public testing, shows that it is far from mature for indoor navigation. In this paper, we first report our field studies with GMI, as well as experiment results aiming to explain our unsatisfactory GMI experience. Then motivated by the obtained insights, we propose GROPING as a self-contained indoor navigation system independent of any infrastructural support. GROPING relies on geomagnetic fingerprints that are far more stable than WiFi fingerprints, and it exploits crowdsensing to construct floor maps rather than expecting individual venues to supply digitized maps. Based on our experiments with 20 participants in various floors of a big shopping mall, GROPING is able to deliver a sufficient accuracy for localization and thus provides smooth navigation experience.", "title": "" }, { "docid": "409f3b2768a8adf488eaa6486d1025a2", "text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. 
Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.", "title": "" }, { "docid": "7a8f431b6635d9f2957957af7cd9de09", "text": "In this paper a genetic algorithm for solving timetable scheduling problem is described. The algorithm was tested on small and large instances of the problem. Algorithm performance was significantly enhanced with modification of basic genetic operators, which restrain the creation of new conflicts in the", "title": "" }, { "docid": "e2a9bb49fd88071631986874ea197bc1", "text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "title": "" }, { "docid": "f9b3813d806e93cc0a88143c89cd1379", "text": "Deep Neural Networks (DNNs) are very popular these days, and are the subject of a very intense investigation. A DNN is made up of layers of internal units (or neurons), each of which computes an affine combination of the output of the units in the previous layer, applies a nonlinear operator, and outputs the corresponding value (also known as activation). A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero. In this (and other similar cases like max pooling, where the max operation involves more than one input value), for fixed parameters one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where the continuous variables correspond to the output values of each unit, and a binary variable is associated with each ReLU to model its yes/no nature. In this paper we discuss the peculiarity of this kind of 0-1 MILP models, and describe an effective bound-tightening technique intended to ease its solution. We also present possible applications of the 0-1 MILP model arising in feature visualization and in the construction of adversarial examples. Computational results are reported, aimed at investigating (on small DNNs) the computational performance of a state-of-the-art MILP solver when applied to a known test case, namely, hand-written digit recognition.", "title": "" }, { "docid": "21b6598a08238659635d1c449057c1ab", "text": "In information field we have huge amount of data available that need to be turned into useful information. So we used Data reduction and its techniques. 
A process in which amount of data is minimized and that minimized data are stored in a data storage environment is known as data reduction. By this process of reducing data various advantages have been achieved in computer networks such as increasing storage efficiency and reduced computational costs. In this paper we have applied data reduction algorithms on NSL-KDD dataset. The output of each data reduction algorithm is given as an input to two classification algorithms i.e. J48 and Naïve Bayes. Our main is to find out which data reduction technique proves to be useful in enhancing the performance of the classification algorithm. Results are compared on the bases of accuracy, specificity and sensitivity.", "title": "" }, { "docid": "1cb8ef23f39a64f089299c4e4d9e4590", "text": "We compared four automated methods for hippocampal segmentation using different machine learning algorithms: 1) hierarchical AdaBoost, 2) support vector machines (SVM) with manual feature selection, 3) hierarchical SVM with automated feature selection (Ada-SVM), and 4) a publicly available brain segmentation package (FreeSurfer). We trained our approaches using T1-weighted brain MRIs from 30 subjects [10 normal elderly, 10 mild cognitive impairment (MCI), and 10 Alzheimer's disease (AD)], and tested on an independent set of 40 subjects (20 normal, 20 AD). Manually segmented gold standard hippocampal tracings were available for all subjects (training and testing). We assessed each approach's accuracy relative to manual segmentations, and its power to map AD effects. We then converted the segmentations into parametric surfaces to map disease effects on anatomy. After surface reconstruction, we computed significance maps, and overall corrected p-values, for the 3-D profile of shape differences between AD and normal subjects. Our AdaBoost and Ada-SVM segmentations compared favorably with the manual segmentations and detected disease effects as well as FreeSurfer on the data tested. Cumulative p-value plots, in conjunction with the false discovery rate method, were used to examine the power of each method to detect correlations with diagnosis and cognitive scores. We also evaluated how segmentation accuracy depended on the size of the training set, providing practical information for future users of this technique.", "title": "" }, { "docid": "84bc7106c6bcf9b0490906154a87b34f", "text": "The problem of optimal grasping of an object by a multifingered robot hand is discussed. Using screw theory and elementary differential geometry, the concept of a grasp is axiomated and its stability characterized. Three quality measures for evaluating a grasp are then proposed. The last quality measure is task oriented and needs the development of a procedure for modeling tasks as ellipsoids in the wrench space of the object. Numerical computations of these quality measures and the selection of an optimal grasp are addressed in detail. Several examples are given using these quality measures to show that they are consistent with measurements yielded by our experiments on grasping.", "title": "" }, { "docid": "5635f52c3e02fd9e9ea54c9ea1ff0329", "text": "As a digital version of word-of-mouth, online review has become a major information source for consumers and has very important implications for a wide range of management activities. 
While some researchers focus their studies on the impact of online product review on sales, an important assumption remains unexamined, that is, can online product review reveal the true quality of the product? To test the validity of this key assumption, this paper first empirically tests the underlying distribution of online reviews with data from Amazon. The results show that 53% of the products have a bimodal and non-normal distribution. For these products, the average score does not necessarily reveal the product's true quality and may provide misleading recommendations. Then this paper derives an analytical model to explain when the mean can serve as a valid representation of a product's true quality, and discusses its implication on marketing practices.", "title": "" }, { "docid": "15f37b546fdd93874cbc1d5a36ab3e4e", "text": "Although the positive association between religiosity and life satisfaction is well documented, much theoretical and empirical controversy surrounds the question of how religion actually shapes life satisfaction. Using a new panel dataset, this study offers strong evidence for social and participatory mechanisms shaping religion's impact on life satisfaction. Our findings suggest that religious people are more satisfied with their lives because they regularly attend religious services and build social networks in their congregations. The effect of within-congregation friendship is contingent, however, on the presence of a strong religious identity. We find little evidence that other private or subjective aspects of religiosity affect life satisfaction independent of attendance and congregational friendship.", "title": "" }, { "docid": "23b0756f3ad63157cff70d4973c9e6bd", "text": "A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator - a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings - the Room-to-Room (R2R) dataset.", "title": "" }, { "docid": "776726fe88c24dff0b726a71f0f94d67", "text": "The application of remote sensing technology and precision agriculture in the oil palm industry is in development. This study investigated the potential of high resolution QuickBird satellite imagery, which has a synoptic overview, for detecting oil palms infected by basal stem rot disease and for mapping the disease. Basal stem rot disease poses a major threat to the oil palm industry, especially in Indonesia. It is caused by Ganoderma boninense and the symptoms can be seen on the leaf and basal stem.
At present there is no effective control for this disease and early detection of the infection is essential. A detailed, accurate and rapid method of monitoring the disease is needed urgently. This study used QuickBird imagery to detect the disease and its spatial pattern. Initially, oil palm and non oil palm object segmentation based on the red band was used to map the spatial pattern of the disease. Secondly, six vegetation indices derived from visible and near infrared bands (NIR) were used for to identify palms infected by the disease. Finally, ground truth from field sampling in four fields with different ages of plant and degrees of infection was used to assess the accuracy of the remote sensing approach. The results show that image segmentation effectively delineated areas infected by the disease with a mapping accuracy of 84%. The resulting maps showed two patterns of the disease; a sporadic pattern in fields with older palms and a dendritic pattern in younger palms with medium to low infection. Ground truth data showed that oil palms infected by basal stem rot had a higher reflectance in the visible bands and a lower reflectance in the near infrared band. Different vegetation indices performed differently in each field. The atmospheric resistant vegetation index and green blue normalized difference vegetation index identified the disease with an accuracy of 67% in a field with 21 year old palms and high infection rates. In the field of 10 year old palms with medium rates of infection, the simple ratio (NIR/red) was effective with an accuracy of 62% for identifying the disease. The green blue normalized difference vegetation index was effective in the field of 10 years old palms with low infection rates with an accuracy of 59%. In the field of 15 and 18 years old palms with low infection rates, all the indices showed low levels of accuracy for identifying the disease. This study suggests that high resolution QuickBird imagery offers a quick, detailed and accurate way of estimating the location and extent of basal stem rot disease infections in oil palm plantations.", "title": "" }, { "docid": "d45b23d061e4387f45a0dad03f237f5a", "text": "Cultural appropriation is often mentioned but undertheorized in critical rhetorical and media studies. Defined as the use of a culture’s symbols, artifacts, genres, rituals, or technologies by members of another culture, cultural appropriation can be placed into 4 categories: exchange, dominance, exploitation, and transculturation. Although each of these types can be understood as relevant to particular contexts or eras, transculturation questions the bounded and proprietary view of culture embedded in other types of appropriation. Transculturation posits culture as a relational phenomenon constituted by acts of appropriation, not an entity that merely participates in appropriation. Tensions exist between the need to challenge essentialism and the use of essentialist notions such as ownership and degradation to criticize the exploitation of colonized cultures.", "title": "" }, { "docid": "dcdaeb7c1da911d0b1a2932be92e0fb4", "text": "As computational agents are increasingly used beyond research labs, their success will depend on their ability to learn new skills and adapt to their dynamic, complex environments. If human users—without programming skills— can transfer their task knowledge to agents, learning can accelerate dramatically, reducing costly trials. 
The tamer framework guides the design of agents whose behavior can be shaped through signals of approval and disapproval, a natural form of human feedback. More recently, tamer+rl was introduced to enable human feedback to augment a traditional reinforcement learning (RL) agent that learns from a Markov decision process’s (MDP) reward signal. We address limitations of prior work on tamer and tamer+rl, contributing in two critical directions. First, the four successful techniques for combining human reward with RL from prior tamer+rl work are tested on a second task, and these techniques’ sensitivities to parameter changes are analyzed. Together, these examinations yield more general and prescriptive conclusions to guide others who wish to incorporate human knowledge into an RL algorithm. Second, tamer+rl has thus far been limited to a sequential setting, in which training occurs before learning from MDP reward. In this paper, we introduce a novel algorithm that shares the same spirit as tamer+rl but learns simultaneously from both reward sources, enabling the human feedback to come at any time during the reinforcement learning process. We call this algorithm simultaneous tamer+rl. To enable simultaneous learning, we introduce a new technique that appropriately determines the magnitude of the human model’s influence on the RL algorithm throughout time and state-action space.", "title": "" } ]
scidocsrr
7549675ac5d792fa60728680b032e7fa
The Effects of Organizational Learning Culture and Job Satisfaction on Motivation to Transfer Learning and Turnover Intention
[ { "docid": "ec788f48207b0a001810e1eabf6b2312", "text": "Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution.", "title": "" } ]
[ { "docid": "689c2bac45b0933994337bd28ce0515d", "text": "Jealousy is a powerful emotional force in couples' relationships. In just seconds it can turn love into rage and tenderness into acts of control, intimidation, and even suicide or murder. Yet it has been surprisingly neglected in the couples therapy field. In this paper we define jealousy broadly as a hub of contradictory feelings, thoughts, beliefs, actions, and reactions, and consider how it can range from a normative predicament to extreme obsessive manifestations. We ground jealousy in couples' basic relational tasks and utilize the construct of the vulnerability cycle to describe processes of derailment. We offer guidelines on how to contain the couple's escalation, disarm their ineffective strategies and power struggles, identify underlying vulnerabilities and yearnings, and distinguish meanings that belong to the present from those that belong to the past, or to other contexts. The goal is to facilitate relational and personal changes that can yield a better fit between the partners' expectations.", "title": "" }, { "docid": "4b32c5355dffc5ff900e5b8b18a4b7d8", "text": "Grid-connected photovoltaic energy conversion systems are among the fastest growing energy systems of the last five years. Multilevel converters, and particularly the Cascaded H-Bridge (CHB), have attracted much attention for this application due to medium-voltage operation, improved power quality and higher efficiency. The CHB enables the connection of individual PV strings to the dc side of each power cell with independent maximum power point tracking (MPPT). This advantage is in turn a great challenge from a control point of view, since each cell will provide different instantaneous active power, for which modified modulation and control schemes are necessary to avoid voltage drift of the dc-link capacitors. This paper explores a model predictive control method, in which predictions for each possible switching state are computed and evaluated in a cost function, in order to select the appropriate control action. The proposed control scheme is capable of controlling the dc-link voltages to the desired MPPT reference voltage, while injecting sinusoidal current to the grid. Preliminary validation through simulation results are included for a five level CHB interfaced PV system.", "title": "" }, { "docid": "032fb65ac300c477d82ccbe6918115f4", "text": "Three concepts (a) network programmability by clear separation of data and control planes and (b) sharing of network infrastructure to provide multitenancy, including traffic and address isolation, in large data center networks and (c) replacing the functions that traditionally run on a specialized hardware, with the software-realizations that run on commodity servers have gained lot of attention by both Industry and research-community over past few years. These three concepts are broadly referred as software defined networking (SDN), network virtualization (NV) and network functions virtualization (NFV). This paper presents a thorough study of these three concepts, including how SDN technology can complement the network virtualization and network functions virtualization. SDN, is about applying modularity to network control, which gives network designer the freedom to re-factor the control plane. This modularity has found its application in various areas including network virtualization. This work begins with the survey of software defined networking, considering various perspectives. 
The survey of SDN is followed by discussing how SDN plays a significant role in NV and NFV. Finally, this work also attempts to explore future directions in SDN based on current trends. Keywords—Software defined networking, Network Virtualization, Network Functions Virtualization, OpenFlow, Data Center, Overlay, Underlay, Network Planes, Programmable networks.", "title": "" }, { "docid": "e0c6b8310defd7dd9fb760b71ca01bcb", "text": "A neural network recognition and tracking system is proposed for classification of radar pulses in autonomous Electronic Support Measure systems. Radar type information is considered with position-specific information from active emitters in a scene. Type-specific parameters of the input pulse stream are fed to a neural network classifier trained on samples of data collected in the field. Meanwhile, a clustering algorithm is used to separate pulses from different emitters according to position-specific parameters of the input pulse stream. Classifier responses corresponding to different emitters are separated into tracks, or trajectories, one per active emitter, allowing for more accurate identification of radar types based on multiple views of emitter data along each emitter trajectory. Such a What-and-Where fusion strategy is motivated by a similar subdivision of labor in the brain. The fuzzy ARTMAP neural network is used to classify streams of pulses according to radar type using their functional parameters. Simulation results obtained with a radar pulse data set indicate that fuzzy ARTMAP compares favorably to several other approaches when performance is measured in terms of accuracy and computational complexity. Incorporation into fuzzy ARTMAP of negative match tracking (from ARTMAP-IC) facilitated convergence during training with this data set. Other modifications improved classification of data that include missing input pattern components and missing training classes. Fuzzy ARTMAP was combined with a bank of Kalman filters to group pulses transmitted from different emitters based on their position-specific parameters, and with a module to accumulate evidence from fuzzy ARTMAP responses corresponding to the track defined for each emitter. Simulation results demonstrate that the system provides a high level of performance on complex, incomplete and overlapping radar data.", "title": "" }, { "docid": "12d5480f42ef606a049047ee5f4d2d26", "text": "The authors investigated the development of a disposition toward empathy and its genetic and environmental origins. Young twins' (N = 409 pairs) cognitive (hypothesis testing) and affective (empathic concern) empathy and prosocial behavior in response to simulated pain by mothers and examiners were observed at multiple time points. Children's mean level of empathy and prosociality increased from 14 to 36 months. Positive concurrent and longitudinal correlations indicated that empathy was a relatively stable disposition, generalizing across ages, across its affective and cognitive components, and across mother and examiner. Multivariate genetic analyses showed that genetic effects increased, and that shared environmental effects decreased, with age. Genetic effects contributed to both change and continuity in children's empathy, whereas shared environmental effects contributed to stability and nonshared environmental effects contributed to change. 
Empathy was associated with prosocial behavior, and this relationship was mainly due to environmental effects.", "title": "" }, { "docid": "2a057079c544b97dded598b6f0d750ed", "text": "Introduction Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions:", "title": "" }, { "docid": "aa234355d0b0493e1d8c7a04e7020781", "text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.", "title": "" }, { "docid": "3e54834b8e64bbdf25dd0795e770d63c", "text": "Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms basically based on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. This proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structure in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect the cavity-like structure with different radii. In the second step, support vector machine learning will be utilized to remove the false positive (FP) regions from the result obtained in the previous step. 
Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used for evaluating our proposed method. The average extraction rate was about 79.1 % with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and a machine learning technique was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for a lung and bronchoscope guidance system.", "title": "" }, { "docid": "ff4e70808991281aeddc2352c6af50e3", "text": "Mobile Cloud Computing (MCC) is an emerging technology which attempts to combine the storage and processing resources of the cloud environment with the dynamicity and accessibility of mobile devices. Security, particularly authentication, is fast evolving as a focal area in mobile cloud computing research. This paper comprehensively surveys the various authentication mechanisms proposed so far for mobile cloud computing. We propose a novel classification system for existing authentication methods in MCC. Further, the pros and cons of the various methods are discussed. We present a comparative analysis and recommend future research in improving the surveyed implicit authentication by establishing cryptographic security of stored usage context and actions.", "title": "" }, { "docid": "e4606c387322b07a10f03ad6db31c62a", "text": "Data fusion is an important issue for object tracking in autonomous systems such as robotics and surveillance. In this paper, we present a multiple-object tracking system whose design is based on multiple Kalman filters dealing with observations from two different kinds of physical sensors. Hardware integration which combines a cheap radar module and a CCD camera has been developed, and a data fusion method has been proposed to process measurements from those modules for multi-object tracking. Due to the limited resolution of bearing angle measurements of the cheap radar module, CCD measurements are used to compensate for the low angle resolution. Conversely, the radar module provides radial distance information which cannot be measured easily by the CCD camera. The proposed data fusion enables the tracker to efficiently utilize the radial measurements of objects from the cheap radar module and 2D location measurements of objects in image space of the CCD camera. To achieve the multi-object tracking we combine the proposed data fusion method with the integrated probability data association (IPDA) technique underlying the multiple-Kalman filter framework. The proposed complementary system based on the radar and CCD camera is experimentally evaluated through a multi-person tracking scenario. The experimental results demonstrate that the implemented system with fused observations considerably enhances tracking performance over a single sensor system.", "title": "" }, { "docid": "86d58f4196ceb48e29cb143e6a157c22", "text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammaticality of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions.
The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.", "title": "" }, { "docid": "8a24f9d284507765e0026ae8a70fc482", "text": "The diagnosis of pulmonary tuberculosis in patients with Human Immunodeficiency Virus (HIV) is complicated by the increased presence of sputum smear negative tuberculosis. Diagnosis of smear negative pulmonary tuberculosis is made by an algorithm recommended by the National Tuberculosis and Leprosy Programme that uses symptoms, signs and laboratory results. The objective of this study is to determine the sensitivity and specificity of the tuberculosis treatment algorithm used for the diagnosis of sputum smear negative pulmonary tuberculosis. A cross-section study with prospective enrollment of patients was conducted in Dar-es-Salaam Tanzania. For patients with sputum smear negative, sputum was sent for culture. All consenting recruited patients were counseled and tested for HIV. Patients were evaluated using the National Tuberculosis and Leprosy Programme guidelines and those fulfilling the criteria of having active pulmonary tuberculosis were started on anti tuberculosis therapy. Remaining patients were provided appropriate therapy. A chest X-ray, mantoux test, and Full Blood Picture were done for each patient. The sensitivity and specificity of the recommended algorithm was calculated. Predictors of sputum culture positive were determined using multivariate analysis. During the study, 467 subjects were enrolled. Of those, 318 (68.1%) were HIV positive, 127 (27.2%) had sputum culture positive for Mycobacteria Tuberculosis, of whom 66 (51.9%) were correctly treated with anti-Tuberculosis drugs and 61 (48.1%) were missed and did not get anti-Tuberculosis drugs. Of the 286 subjects with sputum culture negative, 107 (37.4%) were incorrectly treated with anti-Tuberculosis drugs. The diagnostic algorithm for smear negative pulmonary tuberculosis had a sensitivity and specificity of 38.1% and 74.5% respectively. The presence of a dry cough, a high respiratory rate, a low eosinophil count, a mixed type of anaemia and presence of a cavity were found to be predictive of smear negative but culture positive pulmonary tuberculosis. The current practices of establishing pulmonary tuberculosis diagnosis are not sensitive and specific enough to establish the diagnosis of Acid Fast Bacilli smear negative pulmonary tuberculosis and over treat people with no pulmonary tuberculosis.", "title": "" }, { "docid": "e063764cff8c01b218e26fa06c3ba507", "text": "The aim of this paper is to show how multimodal learning analytics (MMLA) can help understand how elementary students explore the concept of feedback loops while controlling an embodied simulation of a predator-prey ecosystem using hand movements as an interface with the computer simulation. We represent student motion patterns from fine-grained logs of hands and gaze data, and then map these observed motion patterns against levels of student performance to make inferences about how embodiment plays a role in the learning process. Results show five distinct motion sequences in students' embodied interactions, and these motion patterns are statistically associated with initial and post-tutorial levels of students' understanding of feedback loops. Analysis of student gaze also shows distinctive patterns as to how low- and high-performing students attended to information presented in the simulation. 
Using MMLA, we show how students' explanations of feedback loops look differently according to cluster membership, which provides evidence that embodiment interacts with conceptual understanding.", "title": "" }, { "docid": "9709064022fd1ab5ef145ce58d2841c1", "text": "Enforcing security in Internet of Things environments has been identified as one of the top barriers for realizing the vision of smart, energy-efficient homes and buildings. In this context, understanding the risks related to the use and potential misuse of information about homes, partners, and end-users, as well as, forming methods for integrating security-enhancing measures in the design is not straightforward and thus requires substantial investigation. A risk analysis applied on a smart home automation system developed in a research project involving leading industrial actors has been conducted. Out of 32 examined risks, 9 were classified as low and 4 as high, i.e., most of the identified risks were deemed as moderate. The risks classified as high were either related to the human factor or to the software components of the system. The results indicate that with the implementation of standard security features, new, as well as, current risks can be minimized to acceptable levels albeit that the most serious risks, i.e., those derived from the human factor, need more careful consideration, as they are inherently complex to handle. A discussion of the implications of the risk analysis results points to the need for a more general model of security and privacy included in the design phase of smart homes. With such a model of security and privacy in design in place, it will contribute to enforcing system security and enhancing user privacy in smart homes, and thus helping to further realize the potential in such IoT environments.", "title": "" }, { "docid": "c75c8461134f3ad5855ef30a49f377fb", "text": "Suspicious human activity recognition from surveillance video is an active research area of image processing and computer vision. Through the visual surveillance, human activities can be monitored in sensitive and public areas such as bus stations, railway stations, airports, banks, shopping malls, school and colleges, parking lots, roads, etc. to prevent terrorism, theft, accidents and illegal parking, vandalism, fighting, chain snatching, crime and other suspicious activities. It is very difficult to watch public places continuously, therefore an intelligent video surveillance is required that can monitor the human activities in real-time and categorize them as usual and unusual activities; and can generate an alert. Recent decade witnessed a good number of publications in the field of visual surveillance to recognize the abnormal activities. Furthermore, a few surveys can be seen in the literature for the different abnormal activities recognition; but none of them have addressed different abnormal activities in a review. In this paper, we present the state-of-the-art which demonstrates the overall progress of suspicious activity recognition from the surveillance videos in the last decade. We include a brief introduction of the suspicious human activity recognition with its issues and challenges. This paper consists of six abnormal activities such as abandoned object detection, theft detection, fall detection, accidents and illegal parking detection on road, violence activity detection, and fire detection. 
In general, we have discussed all the steps that have been followed to recognize the human activity from the surveillance videos in the literature, such as foreground object extraction, object detection based on tracking or non-tracking methods, feature extraction, classification, activity analysis and recognition. The objective of this paper is to provide a literature review of six different suspicious activity recognition systems, along with a general framework, to the researchers of this field.", "title": "" }, { "docid": "788beb721cb4197a036f4ce207fcf36b", "text": "This paper presents the requirements, design criteria and methodology used to develop the design of a new self-contained prosthetic hand to be used by transradial amputees. The design is based on users' needs, on the authors' background and knowledge of the state of the art, and on feasible fabrication technology, with the aim of replicating as much as possible the functionality of the human hand. The paper focuses on the design approach and methodology, which is divided into three steps: (i) the mechanical actuation units, design and actuation distribution; (ii) the mechatronic development and finally (iii) the controller architecture design. The design is presented here and compared with significant commercial devices and research prototypes.", "title": "" }, { "docid": "87bded10bc1a29a3c0dead2958defc2e", "text": "Context: Web applications are trusted by billions of users for performing day-to-day activities. Accessibility, availability and omnipresence of web applications have made them a prime target for attackers. A simple implementation flaw in the application could allow an attacker to steal sensitive information and perform adversary actions, and hence it is important to secure web applications from attacks. Defensive mechanisms for securing web applications from the flaws have received attention from both academia and industry. Objective: The objective of this literature review is to summarize the current state of the art for securing web applications from major flaws such as injection and logic flaws. Though different kinds of injection flaws exist, the scope is restricted to SQL Injection (SQLI) and Cross-site scripting (XSS), since they are rated as the top most threats by different security consortiums. Method: The relevant articles recently published are identified from well-known digital libraries, and a total of 86 primary studies are considered. A total of 17 articles related to SQLI, 35 related to XSS and 34 related to logic flaws are discussed. Results: The articles are categorized based on the phase of the software development life cycle where the defense mechanism is put into place. Most of the articles focus on detecting the flaws and preventing attacks against web applications. Conclusion: Even though various approaches are available for securing web applications from SQLI and XSS, they are still prevalent due to their impact and severity. Logic flaws are gaining attention of the researchers since they violate the business specifications of applications. There is no single solution to mitigate all the flaws. More research is needed in the area of fixing flaws in the source code of applications.", "title": "" }, { "docid": "bab502d14640004ffcaee96bec149d3e", "text": "Social media has changed how software developers collaborate, how they coordinate their work, and where they find information.
Social media sites, such as the Question and Answer (Q&A) portal Stack Overflow, fill archives with millions of entries that contribute to what we know about software development, covering a wide range of topics. For today’s software developers, reusable code snippets, introductory usage examples, and pertinent libraries are often just a web search away. In this position paper, we discuss the opportunities and challenges for software developers that rely on web content curated by the crowd, and we envision the future of an industry where individual developers benefit from and contribute to a body of knowledge maintained by the crowd using social media.", "title": "" }, { "docid": "bd18a2a92781344dc9821f98559a9c69", "text": "The increasing complexity of Database Management Systems (DBMSs) and the dearth of their experienced administrators make an urgent call for an Autonomic DBMS that is capable of managing and maintaining itself. In this paper, we examine the characteristics that a DBMS should have in order to be considered autonomic and assess the position of today’s commercial DBMSs such as DB2, SQL Server, and Oracle.", "title": "" }, { "docid": "a64b0763172d2141337bbccb9407fe8a", "text": "UNLABELLED\nType B malleolar fractures (AO/ASIF classification) are usually stable ankle joint fractures. Nonetheless, some show a residual instability after internal fixation requiring further stabilization. How often does such a situation occur and can these unstable fractures be recognized beforehand?From 1995 to 1997, 111 malleolar fractures (three type A, 90 type B, 18 type C) were operated on. Seventeen out of 90 patients (19%) with a type B fracture showed residual instability after internal fixation (one unilateral, four bimalleolar and 12 trimalleolar fractures). Five of these patients showed a dislocation in the sagittal plane (anteroposterior) clinically or on the radiographs, five a dislocation in the coronal plane with dislocation of the tibia on the medial aspect of the ankle joint, and four an incongruency on the medial aspect of the joint. In three cases, no preoperative abnormality indicating instability was found. The fractures were all fixed using an additional positioning screw. In 11 patients, the positioning screw was removed after 8-12 weeks, in six patients removal was performed after 1 year along with removal of the plate. All 17 patients were reviewed 1 year after internal fixation, 16/17 showed a good or excellent result with identical or only minor impairment of range of motion of the ankle joint.\n\n\nCONCLUSION\nUnstable ankle joints after internal fixation of type B malleolar fractures exist. Residual instability most often occurs after trimalleolar fractures with initial joint dislocation. Treatment with an additional positioning screw generally produced a satisfactory result.", "title": "" } ]
scidocsrr
d7f023365c9efac74d0a4d3399f6a887
Learning Regularized LDA by Clustering
[ { "docid": "0fa7896efb6dcacbd2823c8d323f89b0", "text": "Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called localitypreserving projection (LPP) can work well with multimodal data due to its locality preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of the LFDA method in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick.", "title": "" }, { "docid": "ab01dc16d6f31a423b68fca2aeb8e109", "text": "Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.", "title": "" } ]
[ { "docid": "687ac21bd828ae6d559ef9f55064dec0", "text": "We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments—active user modelling with preferences, and hierarchical reinforcement learning— and a discussion of the pros and cons of Bayesian optimization based on our experiences.", "title": "" }, { "docid": "1e5925569492956c4330d6c260c453e2", "text": "A simple low loss H-shape hybrid coupler based on the substrate integrated waveguide technology is presented for millimeter-wave applications. The coupler operation is based on the excitation of two different modes, TE10 and TE20. The coupler S-matrix is calculated by using a full-wave solver that uses the even/odd mode (symmetry) analysis to minimize the computational time and provides more physical insight. The simulated return and insertion losses are better than -20 dB and -3.90 dB, respectively over the operating frequency bandwidth of 39-40.50 GHz.", "title": "" }, { "docid": "af69cdae1b331c012dab38c47e2c786c", "text": "A 44 μW self-powered power line monitoring sensor node is implemented in 65 nm CMOS. A 450 kHz 30 kbps BPSK-modulated transceiver allows for 1.5-meter node-to-node powerline communication at 10E-6 BER. The node has a 3.354 ENOB 50 kSps SAR ADC for current measurement and a 440 Sps time-to-digital converter capable of measuring temperature from 0-100 °C in 1.12 °C steps. All components operate at a nominal supply voltage of 0.5 V, and are powered by dedicated regulators enabling fine-grained power management.", "title": "" }, { "docid": "e8459c80dc392cac844b127bc5994a5d", "text": "Database security has become a vital issue in modern Web applications. Critical business data in databases is an evident target for attack. Therefore, ensuring the confidentiality, privacy and integrity of data is a major issue for the security of database systems. Recent high profile data thefts have shown that perimeter defenses are insufficient to secure sensitive data. This paper studies security of the databases shared between many parties from a cryptographic perspective. We propose Mixed Cryptography Database (MCDB), a novel framework to encrypt databases over untrusted networks in a mixed form using many keys owned by different parties. The encryption process is based on a new data classification according to the data owner. The proposed framework is very useful in strengthening the protection of sensitive data even if the database server is attacked at multiple points from the inside or outside.", "title": "" }, { "docid": "350c899dbd0d9ded745b70b6f5e97d19", "text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. 
Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.", "title": "" }, { "docid": "0f9e15b890aa9c1e7cf7276fb54f83f3", "text": "While image inpainting has recently become widely available in image manipulation tools, existing approaches to video inpainting typically do not even achieve interactive frame rates yet as they are highly computationally expensive. Further, they either apply severe restrictions on the movement of the camera or do not provide a high-quality coherent video stream. In this paper we will present our approach to high-quality real-time capable image and video inpainting. Our PixMix approach even allows for the manipulation of live video streams, providing the basis for real Diminished Reality (DR) applications. We will show how our approach generates coherent video streams dealing with quite heterogeneous background environments and non-trivial camera movements, even applying constraints in real-time.", "title": "" }, { "docid": "413d407b4e2727d18419c9537f2e556f", "text": "This paper describes the design of an automated triage and emergency management information system. The prototype system is capable of monitoring and assessing physiological parameters of individuals, transmitting pertinent medical data to and from multiple echelons of medical service, and providing filtered data for command and control applications. The system employs wireless networking, portable computing devices, and reliable messaging technology as a framework for information analysis, information movement, and decision support capabilities. The embedded medical model and physiological status assessment are based on input from humans and a pulse oximetry device. The physiological status determination methodology follows NATO defined guidelines for remote triage and is implemented using an approach based on fuzzy logic. The approach described can be used in both military and civilian", "title": "" }, { "docid": "d76e649c6daeb71baf377c2b36623e29", "text": "The somatic marker hypothesis proposes that decision-making is a process that depends on emotion. Studies have shown that damage of the ventromedial prefrontal (VMF) cortex precludes the ability to use somatic (emotional) signals that are necessary for guiding decisions in the advantageous direction. However, given the role of the amygdala in emotional processing, we asked whether amygdala damage also would interfere with decision-making. Furthermore, we asked whether there might be a difference between the roles that the amygdala and VMF cortex play in decision-making. To address these two questions, we studied a group of patients with bilateral amygdala, but not VMF, damage and a group of patients with bilateral VMF, but not amygdala, damage. We used the \"gambling task\" to measure decision-making performance and electrodermal activity (skin conductance responses, SCR) as an index of somatic state activation. All patients, those with amygdala damage as well as those with VMF damage, were (1) impaired on the gambling task and (2) unable to develop anticipatory SCRs while they pondered risky choices. 
However, VMF patients were able to generate SCRs when they received a reward or a punishment (play money), whereas amygdala patients failed to do so. In a Pavlovian conditioning experiment the VMF patients acquired a conditioned SCR to visual stimuli paired with an aversive loud sound, whereas amygdala patients failed to do so. The results suggest that amygdala damage is associated with impairment in decision-making and that the roles played by the amygdala and VMF in decision-making are different.", "title": "" }, { "docid": "068321516540ed9f5f05638bdfb7235a", "text": "Cloud of Things (CoT) is a computing model that combines the widely popular cloud computing with Internet of Things (IoT). One of the major problems with CoT is the latency of accessing distant cloud resources from the devices, where the data is captured. To address this problem, paradigms such as fog computing and Cloudlets have been proposed to interpose another layer of computing between the clouds and devices. Such a three-layered cloud-fog-device computing architecture is touted as the most suitable approach for deploying many next generation ubiquitous computing applications. Programming applications to run on such a platform is quite challenging because disconnections between the different layers are bound to happen in a large-scale CoT system, where the devices can be mobile. This paper presents a programming language and system for a three-layered CoT system. We illustrate how our language and system addresses some of the key challenges in the three-layered CoT. A proof-of-concept prototype compiler and runtime have been implemented and several example applications are developed using it.", "title": "" }, { "docid": "54ceed51f750eadda3038b42eb9977a5", "text": "Starting from the revolutionary Retinex by Land and McCann, several further perceptually inspired color correction models have been developed with different aims, e.g. reproduction of color sensation, robust features recognition, enhancement of color images. Such models have a differential, spatially-variant and non-linear nature and they can coarsely be distinguished between white-patch (WP) and gray-world (GW) algorithms. In this paper we show that the combination of a pure WP algorithm (RSR: random spray Retinex) and an essentially GW one (ACE) leads to a more robust and better performing model (RACE). The choice of RSR and ACE follows from the recent identification of a unified spatially-variant approach for both algorithms. Mathematically, the originally distinct non-linear and differential mechanisms of RSR and ACE have been fused using the spray technique and local average operations. The investigation of RACE allowed us to put in evidence a common drawback of differential models: corruption of uniform image areas. To overcome this intrinsic defect, we devised a local and global contrast-based and image-driven regulation mechanism that has a general applicability to perceptually inspired color correction algorithms. Tests, comparisons and discussions are presented.", "title": "" }, { "docid": "aefa4559fa6f8e0c046cd7e02d3e1b92", "text": "The concept of smart city is considered as the new engine for economic and social growths since it is supported by the rapid development of information and communication technologies. However, each technology not only brings its advantages, but also the challenges that cities have to face in order to implement it. 
So, this paper addresses two research questions : « What are the most important technologies that drive the development of smart cities ?» and « what are the challenges that cities will face when adopting these technologies ? » Relying on a literature review of studies published between 1990 and 2017, the ensuing results show that Artificial Intelligence and Internet of Things represent the most used technologies for smart cities. So, the focus of this paper will be on these two technologies by showing their advantages and their challenges.", "title": "" }, { "docid": "76262c43c175646d7a00e02a7a49ab81", "text": "Self-compassion has been linked to higher levels of psychological well-being. The current study evaluated whether this effect also extends to a more adaptive food intake process. More specifically, this study investigated the relationship between self-compassion and intuitive eating among 322 college women. In order to further clarify the nature of this relationship this research additionally examined the indirect effects of self-compassion on intuitive eating through the pathways of distress tolerance and body image acceptance and action using both parametric and non-parametric bootstrap resampling analytic procedures. Results based on responses to the self-report measures of the constructs of interest indicated that individual differences in body image acceptance and action (β = .31, p < .001) but not distress tolerance (β = .00, p = .94) helped explain the relationship between self-compassion and intuitive eating. This effect was retained in a subsequent model adjusted for body mass index (BMI) and self-esteem (β = .19, p < .05). Results provide preliminary support for a complementary perspective on the role of acceptance in the context of intuitive eating to that of existing theory and research. The present findings also suggest the need for additional research as it relates to the development and fostering of self-compassion as well as the potential clinical implications of using acceptance-based interventions for college-aged women currently engaging in or who are at risk for disordered eating patterns.", "title": "" }, { "docid": "a406c230c994bfdfdfa951229b5aa248", "text": "It is commonly suggested that a female preponderance in depression is universal and substantial. This review considers that proposition and explanatory factors. The view that depression rates are universally higher in women is challenged with exceptions to the proposition helping clarify candidate explanations. 'Real' and artefactual explanations for any such phenomenon are considered, and the contribution of sex role changes, social factors and biological determinants are overviewed. While artefactual factors make some contribution, it is concluded that there is a higher order biological factor (variably determined neuroticism, 'stress responsiveness' or 'limbic system hyperactivity') that principally contributes to the gender differentiation in some expressions of both depression and anxiety, and reflects the impact of gonadal steroid changes at puberty. Rather than conclude that 'anatomy is destiny' we favour a diathesis stress model, so accounting for differential epidemiological findings. 
Finally, the impact of gender on response to differing antidepressant therapies is considered briefly.", "title": "" }, { "docid": "0bc61c7a334d5888aee825f2933d7219", "text": "This paper introduces a novel unsupervised outlier detection method, namely Coupled Biased Random Walks (CBRW), for identifying outliers in categorical data with diversified frequency distributions and many noisy features. Existing pattern-based outlier detection methods are ineffective in handling such complex scenarios, as they misfit such data. CBRW estimates outlier scores of feature values by modelling feature value level couplings, which carry intrinsic data characteristics, via biased random walks to handle this complex data. The outlier scores of feature values can either measure the outlierness of an object or facilitate the existing methods as a feature weighting and selection indicator. Substantial experiments show that CBRW can not only detect outliers in complex data significantly better than the state-of-the-art methods, but also greatly improve the performance of existing methods on data sets with many noisy features.", "title": "" }, { "docid": "0a416d52025ed5e59a6f247d474838ba", "text": "The main contribution of this paper is to introduce and describe a new recommender-systems dataset (RARD II). It is based on data from a recommender-system in the digital library and reference management software domain. As such, it complements datasets from other domains such as books, movies, and music. The RARD II dataset encompasses 89m recommendations, covering an item-space of 24m unique items. RARD II provides a range of rich recommendation data, beyond conventional ratings. For example, in addition to the usual ratings matrices, RARD II includes the original recommendation logs, which provide a unique insight into many aspects of the algorithms that generated the recommendations. In this paper, we summarise the key features of this dataset release, describing how it was generated and discussing some of its unique features.", "title": "" }, { "docid": "84dee4781f7bc13711317d0594e97294", "text": "We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an /2-orthogonal basis of Krylov subspaces. It can be considered as a generalization of Paige and Saunders' MINRES algorithm and is theoretically equivalent to the Generalized Conjugate Residual (GCR) method and to ORTHODIR. The new algorithm presents several advantages over GCR and ORTHODIR.", "title": "" }, { "docid": "f7ac17169072f3db03db36709bdd76fd", "text": "The Unit Commitment problem in energy management aims at finding the optimal productions schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex difficult problem, especially in view of the fact that operational requirements imply that it has to be solved in an unreasonably small time for its size. Recently, the ever increasing capacity for renewable generation has strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex, uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. 
We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focusing on those based on mathematical programming techniques that are more relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, also providing entry points to the relevant literature on optimization under uncertainty.", "title": "" }, { "docid": "693b07ee12e83aae9f7f9f0c5a637403", "text": "Many consumer-centric industries provide products and services to millions of consumers. These industries include healthcare and wellness, retail, hospitality and travel, sports and entertainment, legal services, financial services, residential real estate and many more. IT professionals and business executives are used to thinking about enterprise-centric ERP systems as the IT center of gravity, but increasingly the focus of IT activity is shifting from the enterprise Center to the Edge of the enterprise as consumers are digitally connected and activated. Enabling this shift requires managing both IT deployment and organizational transformation at the Center of the enterprise, as well as accommodating consumers’ digital interactions at the Edge and understanding how to realize new strategic value through the shift. This article examines the phenomenon of Center-Edge digital transformation in consumercentric industries through a case study in the healthcare industry. It provides guidelines for IT and business executives in any consumer-centric industry who would like to understand how to", "title": "" }, { "docid": "b34bc241b9bc6260bff92d66715d5651", "text": "Recently, cross-modal search has attracted considerable attention but remains a very challenging task because of the integration complexity and heterogeneity of the multi-modal data. To address both challenges, in this paper, we propose a novel method termed hetero-manifold regularisation (HMR) to supervise the learning of hash functions for efficient cross-modal search. A hetero-manifold integrates multiple sub-manifolds defined by homogeneous data with the help of cross-modal supervision information. Taking advantages of the hetero-manifold, the similarity between each pair of heterogeneous data could be naturally measured by three order random walks on this hetero-manifold. Furthermore, a novel cumulative distance inequality defined on the hetero-manifold is introduced to avoid the computational difficulty induced by the discreteness of hash codes. By using the inequality, cross-modal hashing is transformed into a problem of hetero-manifold regularised support vector learning. Therefore, the performance of cross-modal search can be significantly improved by seamlessly combining the integrated information of the hetero-manifold and the strong generalisation of the support vector machine. Comprehensive experiments show that the proposed HMR achieve advantageous results over the state-of-the-art methods in several challenging cross-modal tasks.", "title": "" }, { "docid": "2ba975af095effcbbc4e98d7dc2172ec", "text": "People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. 
We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by the uncertainty they have for physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.", "title": "" } ]
scidocsrr
e0c26fbe72a4d9b4af5b69a853ccdc7a
Key Lengths Contribution to The Handbook of Information Security
[ { "docid": "32ca9711622abd30c7c94f41b91fa3f6", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" } ]
[ { "docid": "aa904ddbc0419b3ab424159eedf6044e", "text": "Online dating is unique in the pursuit of romance. The bond created between potential partners takes a different path than normal dating relationships. Online dating usually begins with a flurry of e-mail messages, each more intimate than the last. Traditional dating relationships that might take months to develop in the real world, take weeks or even days online. Much has been written about cyber-dating, but little research has been done. This series of four studies examines the online dating process, similarities and differences between online and traditional dating, and the impact of emotionality and self-disclosure on first (e-mail) impressions of a potential partner. Results indicate that the amount of emotionality and self-disclosure affected a person’s perception of a potential partner. An e-mail with strong emotional words (e.g., excited, wonderful) led to more positive impressions than an e-mail with fewer strong emotional words (e.g., happy, fine) and resulted in nearly three out of four subjects selecting the e-mailer with strong emotional words for the fictitious dater of the opposite sex. Results for self-disclosure e-mails were complex, but indicate that levels of self-disclosure led to different impressions. Low levels of self-disclosure were generally preferred in choosing for the fictitious dater, although these preferences differed by gender, education, and ethnic background. Results were discussed in terms of theories of computer-mediated communication. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "31fc886990140919aabce17aa7774910", "text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.", "title": "" }, { "docid": "432e7ae2e76d76dbb42d92cd9103e3d2", "text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.", "title": "" }, { "docid": "41efa33421f8f4dc0852d968a02ca015", "text": "Internet-based mobile ad hoc network (IMANET) is an emerging technique that combines a mobile ad hoc network (MANET) and the Internet to provide universal information accessibility. Although caching frequently accessed data items in mobile terminals (MTs) improves the communication performance in an IMANET, it brings a critical design issue when data updates. In this paper, we analyze several push and pull-based cache invalidation strategies for IMANETS. 
A global positioning system (GPS) based connectivity estimation (GPSCE) scheme is first proposed to assess the connectivity of an MT for supporting cache invalidation mechanisms. Then, we propose a pull-based approach, called aggregate cache based on demand (ACOD) scheme that uses an efficient search algorithm for finding the queried data items. In addition, we modify two push-based cache invalidation strategies, proposed for cellular networks, to work in IMANETS. They are called modified timestamp (MTS) scheme and MTS with updated invalidation report (MTS + UIR) scheme, respectively. We compare the performance of all these schemes as a function of query interval, cache update interval, and cache size through extensive simulation. Simulation results indicate that the ACOD scheme provides high throughput, low query latency, and low communication overhead, and thus, is a viable approach for implementation in IMANETS. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6655a8886137b73e6ed81296871c34f9", "text": "This research significantly achieved the construction of a teaching evaluation sentiment lexicon and an automated sentiment orientation polarity definition in teaching evaluation. The Teaching Senti-lexicon will compute the weights of terms and phrases obtained from student opinions, which are stored in teaching evaluation suggestions in the form of open-ended questions. This Teaching Senti-lexicon consists of three main attributes, including: teaching corpus, category and sentiment weight score. The sentiment orientation polarity was computed with its mean function being sentiment class definitions. A number of 175 instances were randomised using teaching feedback responses which were posted by students studying at Loei Raja hat University. The contributions of this paper propose an effective teaching sentiment analysis method, especially for teaching evaluation. In this paper, the experimented model employed SVM, ID3 and Naïve Bayes algorithms, which were implemented in order to analyse sentiment classifications with a 97% highest accuracy of SVM. This model is also applied to improve upon their teaching as well.", "title": "" }, { "docid": "edf52710738647f7ebd4c017ddf56c2c", "text": "Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed in order to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, designed to perform urban reconnaissance missions, that won the MAGIC 2010 competition. This paper describes a variety of autonomous systems which require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. In this paper, we will describe technical contributions throughout our system that played a significant role in the performance of our system. 
We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.", "title": "" }, { "docid": "d7ec6d060760e1c80459277f3f663743", "text": "a r t i c l e i n f o Keywords: Supply chain management Analytical capabilities Information systems Business process management Performance SCOR The paper investigates the relationship between analytical capabilities in the plan, source, make and deliver area of the supply chain and its performance using information system support and business process orientation as moderators. Structural equation modeling employs a sample of 310 companies from different industries from the USA, Europe, Canada, Brazil and China. The findings suggest the existence of a statistically significant relationship between analytical capabilities and performance. The moderation effect of information systems support is considerably stronger than the effect of business process orientation. The results provide a better understanding of the areas where the impact of business analytics may be the strongest. In the modern world competition is no longer between organizations , but among supply chains ('SCs'). Effective supply chain management ('SCM') has therefore become a potentially valuable way of securing a competitive advantage and improving organizational performance [47,79]. However, the understanding of the why and how SCM affects firm performance, which areas are especially important and which are the important moderator effects is still incomplete. This paper thus analyses the impact of business analytics ('BA') in a SC on the improvement of SC performance. The topic is important since enhancing the effectiveness and efficiency of SC analytics is a critical component of a chain's ability to achieve its competitive advantage [68]. BA have been identified as an important \" tool \" for SCM [44] and optimization techniques have become an integral part of organizational business processes [80]. A correct relevant business decision based on bundles of very large volumes of both internal and external data is only possible with BA [68]. It is therefore not surprising that research interest in BA use has been increasing [43]. However, despite certain anecdotic evidence (see for instance the examples given in [19]) or optimistic reports of return-on-investment exceeding 100% (see e.g. [25]) a systematic and structured analysis of the impact of BA use on SC performance has not yet been conducted. Accordingly, the main contribution of our paper is its analysis of the impact of the use of BA in different areas of the SC (based on the Supply Chain Operations Reference ('SCOR') model) on the performance of the chain. Further, the mediating effects of two important constructs, namely information systems ('IS') support and business …", "title": "" }, { "docid": "b47b06f8548716e0ef01a0e113d48e5d", "text": "This paper proposes a framework to automatically construct taxonomies from a corpus of text documents. This framework first extracts terms from documents using a part-of-speech parser. These terms are then filtered using domain pertinence, domain consensus, lexical cohesion, and structural relevance. The remaining terms represent concepts in the taxonomy. 
These concepts are arranged in a hierarchy with either the extended subsumption method that accounts for concept ancestors in determining the parent of a concept or a hierarchical clustering algorithm that uses various text-based window and document scopes for concept co-occurrences. Our evaluation in the field of management and economics indicates that a trade-off between taxonomy quality and depth must be made when choosing one of these methods. The subsumption method is preferable for shallow taxonomies, whereas the hierarchical clustering algorithm is recommended for deep taxonomies.", "title": "" }, { "docid": "36f835fbe41520f42c2eed57bcaf496f", "text": "Recent studies focus primarily on low energy consumption or execution time for task scheduling with precedence constraints in heterogeneous computing systems. In most cases, system reliability is more important than other performance metrics. In addition, energy consumption and system reliability are two conflicting objectives. A novel bi-objective genetic algorithm (BOGA) to pursue low energy consumption and high system reliability for workflow scheduling is presented in this paper. The proposed BOGA offers users more flexibility when jobs are submitted to a data center. On the basis of real-world and randomly generated application graphs, numerous experiments are conducted to evaluate the performance of the proposed algorithm. In comparison with excellent algorithms such as multi-objective heterogeneous earliest finish time (MOHEFT) and multi-objective differential evolution (MODE), BOGA performs significantly better in terms of finding the spread of compromise solutions. © 2016 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "e5b125bdb5a17cbe926c03c3bac6935c", "text": "We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state of the art performance on each of these datasets.", "title": "" }, { "docid": "2ddcca1b70ac5fa4a7135d8537dc1187", "text": "Extracting edge points from an image and fitting ellipses to them is a fundamental technique for computer vision applications. However, since the extracted edge points sometimes contain non-elliptic arcs such as line segments, it is a very difficult to extract only elliptic arcs from them. In this paper, we propose a new method for extracting elliptic arcs from a spatially connected point sequence. We first fit an ellipse to an input point sequence and segment the sequence into partial arcs at the intersection points of the fitted ellipse. 
Next, we compute residuals of the fitted ellipse for all input points and select elliptic arcs among the segmented arcs by checking the curvatures of the residual graph. Then, we fit an ellipse to the selected arcs and repeat the above process until the selected arcs do not change. By using simulated data and real images, we compare the performance of our method with existing methods and show the efficiency of our proposed method.", "title": "" }, { "docid": "3dcfd937b9c1ae8ccc04c6a8a99c71f5", "text": "Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM ) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with experienced users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level α = 1% (Section 4.3). We develop very effective detection tools and reach average F-score of 97% in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible.", "title": "" }, { "docid": "db0e61e6988106203f6780023ba6902b", "text": "In first stage of each microwave receiver there is Low Noise Amplifier (LNA) circuit, and this stage has important rule in quality factor of the receiver. The design of a LNA in Radio Frequency (RF) circuit requires the trade-off many importance characteristics such as gain, Noise Figure (NF), stability, power consumption and complexity. This situation Forces desingners to make choices in the desing of RF circuits. In this paper the aim is to design and simulate a single stage LNA circuit with high gain and low noise using MESFET for frequency range of 5 GHz to 6 GHz. The desing simulation process is down using Advance Design System (ADS). A single stage LNA has successfully designed with 15.83 dB forward gain and 1.26 dB noise figure in frequency of 5.3 GHz. Also the designed LNA should be working stably In a frequency range of 5 GHz to 6 GHz. Keywords—Advance Design System, Low Noise Amplifier, Radio Frequency, Noise Figure.", "title": "" }, { "docid": "38ce333e333927777e14d4766ed78c43", "text": "Attributed networks are pervasive in different domains, ranging from social networks, gene regulatory networks to financial transaction networks. This kind of rich network representation presents challenges for anomaly detection due to the heterogeneity of two data representations. A vast majority of existing algorithms assume certain properties of anomalies are given a prior. Since various types of anomalies in real-world attributed networks coexist, the assumption that priori knowledge regarding anomalies is available does not hold. 
In this paper, we investigate the problem of anomaly detection in attributed networks generally from a residual analysis perspective, which has been shown to be effective in traditional anomaly detection problems. However, it is a non-trivial task in attributed networks as interactions among instances complicate the residual modeling process. Methodologically, we propose a learning framework to characterize the residuals of attribute information and its coherence with network information for anomaly detection. By learning and analyzing the residuals, we detect anomalies whose behaviors are singularly different from the majority. Experiments on real datasets show the effectiveness and generality of the proposed framework.", "title": "" }, { "docid": "57716923e4c2f5e40647dd3b30a8640e", "text": "An objective of a warm-up prior to an athletic event is to optimize performance. Warm-ups are typically composed of a submaximal aerobic activity, stretching and a sport-specific activity. The stretching portion traditionally incorporated static stretching. However, there are a myriad of studies demonstrating static stretch-induced performance impairments. More recently, there are a substantial number of articles with no detrimental effects associated with prior static stretching. The lack of impairment may be related to a number of factors. These include static stretching that is of short duration (<90 s total) with a stretch intensity less than the point of discomfort. Other factors include the type of performance test measured and implemented on an elite athletic or trained middle aged population. Static stretching may actually provide benefits in some cases such as slower velocity eccentric contractions, and contractions of a more prolonged duration or stretch-shortening cycle. Dynamic stretching has been shown to either have no effect or may augment subsequent performance, especially if the duration of the dynamic stretching is prolonged. Static stretching used in a separate training session can provide health related range of motion benefits. Generally, a warm-up to minimize impairments and enhance performance should be composed of a submaximal intensity aerobic activity followed by large amplitude dynamic stretching and then completed with sport-specific dynamic activities. Sports that necessitate a high degree of static flexibility should use short duration static stretches with lower intensity stretches in a trained population to minimize the possibilities of impairments.", "title": "" }, { "docid": "2098191fad9a065bcb117f6cd7299dd7", "text": "The growth of both IT technology and the Internet Communication has involved the development of lot of encrypted information. Among others techniques of message hiding, stenography is one them but more suspicious as no one cannot see the secret message. As we always use the MS Office, there are many ways to hide secret messages by using PowerPoint as normal file. In this paper, we propose a new technique to find a hidden message by analysing the in PowerPoint file using EnCase Transcript. The result analysis shows that Steganography technique had hidden a certain number of message which are invisible to naked eye.", "title": "" }, { "docid": "fcfe9a40b99110e8de40939b62743f91", "text": "In the context of a parallel manipulator, inverse and direct Jacobian matrices are known to contain information which helps us identify some of the singular configurations. 
In this article, we employ kinematic analysis for the Delta robot to derive the velocity of the end-effector in terms of the angular joint velocities, thus yielding the Jacobian matrices. By setting their determinants to zero, several undesirable postures of the manipulator have been extracted. The analysis of the inverse Jacobian matrix reveals that singularities are encountered when the limbs belonging to the same kinematic chain lie in a plane. Two of the possible configurations which correspond to this condition are when the robot is completely extended or contracted, indicating the boundaries of the workspace. Singularities associated with the direct Jacobian matrix, which correspond to relatively more complicated configurations of the manipulator, have also been derived and commented on. Moreover, the idea of intermediate Jacobian matrices has been introduced; these are simpler to evaluate but still contain the information of the singularities mentioned earlier, in addition to architectural singularities not contemplated in conventional Jacobians.", "title": "" }, { "docid": "310b8159894bc88b74a907c924277de6", "text": "We present a set of clustering algorithms that identify cluster boundaries by searching for a hyperplanar gap in unlabeled data sets. It turns out that the Normalized Cuts algorithm of Shi and Malik [1], originally presented as a graph-theoretic algorithm, can be interpreted as such an algorithm. Viewing Normalized Cuts under this light reveals that it pays more attention to points away from the center of the data set than those near the center of the data set. As a result, it can sometimes split long clusters and display sensitivity to outliers. We derive a variant of Normalized Cuts that assigns uniform weight to all points, eliminating the sensitivity to outliers.", "title": "" } ]
scidocsrr
ef3c20dc9ab787e25e77ba60675f2ca6
A Memetic Fingerprint Matching Algorithm
[ { "docid": "0e2d6ebfade09beb448e9c538dadd015", "text": "Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002’s DB1 and DB2 databases. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "b21c6ab3b97fd23f8fe1f8645608b29f", "text": "Daily activity recognition can help people to maintain a healthy lifestyle and robot to better interact with users. Robots could therefore use the information coming from the activities performed by users to give them some custom hints to improve lifestyle and daily routine. The pervasiveness of smart things together with advances in cloud robotics can help the robot to perceive and collect more information about the users and the environment. In particular thanks to the miniaturization and low cost of Inertial Measurement Units, in the last years, body-worn activity recognition has gained popularity. In this work, we investigated the performances with an unsupervised approach to recognize eight different gestures performed in daily living wearing a system composed of two inertial sensors placed on the hand and on the wrist. In this context our aim is to evaluate whether the system is able to recognize the gestures in more realistic applications, where is not possible to have a training set. The classification problem was analyzed using two unsupervised approaches (K-Mean and Gaussian Mixture Model), with an intra-subject and an inter-subject analysis, and two supervised approaches (Support Vector Machine and Random Forest), with a 10-fold cross validation analysis and with a Leave-One-Subject-Out analysis to compare the results. The outcomes show that even in an unsupervised context the system is able to recognize the gestures with an averaged accuracy of 0.917 in the K-Mean inter-subject approach and 0.796 in the Gaussian Mixture Model inter-subject one.", "title": "" }, { "docid": "7021db9b0e77b2df2576f0cc5eda8d7d", "text": "Provides an abstract of the tutorial presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.", "title": "" }, { "docid": "2d30ed139066b025dcb834737d874c99", "text": "Considerable advances have occurred in recent years in the scientific knowledge of the benefits of breastfeeding, the mechanisms underlying these benefits, and in the clinical management of breastfeeding. This policy statement on breastfeeding replaces the 1997 policy statement of the American Academy of Pediatrics and reflects this newer knowledge and the supporting publications. The benefits of breastfeeding for the infant, the mother, and the community are summarized, and recommendations to guide the pediatrician and other health care professionals in assisting mothers in the initiation and maintenance of breastfeeding for healthy term infants and high-risk infants are presented. The policy statement delineates various ways in which pediatricians can promote, protect, and support breastfeeding not only in their individual practices but also in the hospital, medical school, community, and nation.", "title": "" }, { "docid": "92fdbab17be68e94b2033ef79b41cf0c", "text": "Areas of convergence and divergence between the Narcissistic Personality Inventory (NPI; Raskin & Terry, 1988) and the Pathological Narcissism Inventory (PNI; Pincus et al., 2009) were evaluated in a sample of 586 college students. Summary scores for the NPI and PNI were not strongly correlated (r = .22) but correlations between certain subscales of these two inventories were larger (e.g., r = .71 for scales measuring Exploitativeness). 
Both measures had a similar level of correlation with the Narcissistic Personality Disorder scale from the Personality Diagnostic Questionnaire-4 (Hyler, 1994) (r = .40 and .35, respectively). The NPI and PNI diverged, however, with respect to their associations with Explicit Self-Esteem. Self-esteem was negatively associated with the PNI but positively associated with the NPI (r = .34 versus r = .26). Collectively, the results highlight the need for precision when discussing the personality characteristics associated with narcissism. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ce29ddfd7b3d3a28ddcecb7a5bb3ac8e", "text": "Steganography consists of concealing secret information in a cover object to be sent over a public communication channel. It allows two parties to share hidden information in a way that no intruder can detect the presence of hidden information. This paper presents a novel steganography approach based on pixel location matching of the same cover image. Here the information is not directly embedded within the cover image, but a sequence of 4 bits of secret data is compared to the 4 most significant bits (4MSB) of the cover image pixels. The locations of the matching pixels are taken to substitute the 2 least significant bits (2LSB) of the cover image pixels. Since the data are not directly hidden in the cover image, the proposed approach is more secure and difficult to break. Intruders cannot intercept it by using common LSB techniques.", "title": "" }, { "docid": "818c075d79a51fcab4c38031f14a98ef", "text": "This paper presents a statistical approach to collaborative filtering and investigates the use of latent class models for predicting individual choices and preferences based on observed preference behavior. Two models are discussed and compared: the aspect model, a probabilistic latent space model which models individual preferences as a convex combination of preference factors, and the two-sided clustering model, which simultaneously partitions persons and objects into clusters. We present EM algorithms for different variants of the aspect model and derive an approximate EM algorithm based on a variational principle for the two-sided clustering model. The benefits of the different models are experimentally investigated on a large movie data set.", "title": "" }, { "docid": "83e50a2c76217f60057d8bf680a12b92", "text": "[1] Luo, Z. X., Zhou, X. C., David XianFeng, G. U. (2014). From a projective invariant to some new properties of algebraic hypersurfaces. Science China Mathematics, 57(11), 2273-2284. [2] Fan, B., Wu, F., Hu, Z. (2010). Line matching leveraged by point correspondences. IEEE Conference on Computer Vision & Pattern Recognition (Vol.238, pp.390-397). [3] Fan, B., Wu, F., & Hu, Z. (2012). Robust line matching through line–point invariants. Pattern Recognition, 45(2), 794-805. [4] López, J., Santos, R., Fdez-Vidal, X. R., & Pardo, X. M. (2015). Two-view line matching algorithm based on context and appearance in low-textured images. Pattern Recognition, 48(7), 2164-2184. Dalian University of Technology Qi Jia, Xinkai Gao, Xin Fan*, Zhongxuan Luo, Haojie Li, and Ziyao Chen Novel Coplanar Line-points Invariants for Robust Line Matching Across Views", "title": "" }, { "docid": "61dcc07734c98bf0ad01a98fe0c55bf4", "text": "The system includes a terminal fingerprint acquisition module and an attendance module. 
It can automatically realize such functions as fingerprint information acquisition, processing, wireless transmission, fingerprint matching, and making an attendance report. After taking the attendance, this system sends the attendance of every student to their parents' mobile through GSM, and also stores the attendance of the respective student to calculate the percentage of attendance and alerts the class in charge. The attendance system facilitates access to the attendance of a particular student in a particular class. This system eliminates the need for stationery materials and personnel for the keeping of records and reduces the effort of the class in charge.", "title": "" }, { "docid": "5a91b2d8611b14e33c01390181eb1891", "text": "The rapidly expanding volume of publications in the biomedical domain makes a timely evaluation of the latest literature increasingly difficult. That, along with a push for automated evaluation of clinical reports, presents opportunities for effective natural language processing methods. In this study we target the problem of named entity recognition, where texts are processed to annotate terms that are relevant for biomedical studies. Terms of interest in the domain include gene and protein names, and cell lines and types. Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP). We trained context-aware token embeddings on a dataset of biomedical papers using ELMo, and incorporated these embeddings in the LSTM-CRF model used by AllenNLP for named entity recognition. We show these representations improve named entity recognition for different types of biomedical named entities. We also achieve a new state of the art in gene mention detection on the BioCreative II gene mention shared task.", "title": "" }, { "docid": "e93517eb28df17dddfc63eb7141368f9", "text": "Domain transfer learning generalizes a learning model across training data and testing data with different distributions. A general principle to tackle this problem is reducing the distribution difference between training data and testing data such that the generalization error can be bounded. Current methods typically model the sample distributions in input feature space, which depends on nonlinear feature mapping to embody the distribution discrepancy. However, this nonlinear feature space may not be optimal for the kernel-based learning machines. To this end, we propose a transfer kernel learning (TKL) approach to learn a domain-invariant kernel by directly matching source and target distributions in the reproducing kernel Hilbert space (RKHS). Specifically, we design a family of spectral kernels by extrapolating target eigensystem on source samples with Mercer's theorem. The spectral kernel minimizing the approximation error to the ground truth kernel is selected to construct domain-invariant kernel machines. Comprehensive experimental evidence on a large number of text categorization, image classification, and video event recognition datasets verifies the effectiveness and efficiency of the proposed TKL approach over several state-of-the-art methods.", "title": "" }, { "docid": "77cea98467305b9b3b11de8d3cec6ec2", "text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. 
Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.", "title": "" }, { "docid": "8405b35a36235ba26444655a3619812d", "text": "Studying the reason why single-layer molybdenum disulfide (MoS2) appears to fall short of its promising potential in flexible nanoelectronics, we find that the nature of contacts plays a more important role than the semiconductor itself. In order to understand the nature of MoS2/metal contacts, we perform ab initio density functional theory calculations for the geometry, bonding, and electronic structure of the contact region. We find that the most common contact metal (Au) is rather inefficient for electron injection into single-layer MoS2 and propose Ti as a representative example of suitable alternative electrode materials.", "title": "" }, { "docid": "64e2b73e8a2d12a1f0bbd7d07fccba72", "text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an all-around evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.", "title": "" }, { "docid": "a5cb288b5a2f29c22a9338be416a27f7", "text": "ENCOURAGING CHILDREN'S INTRINSIC MOTIVATION CAN HELP THEM TO ACHIEVE ACADEMIC SUCCESS (ADELMAN, 1978; ADELMAN & TAYLOR, 1986; GOTTFRIED, 1983, 1985). TO HELP STUDENTS WITH AND WITHOUT LEARNING DISABILITIES TO DEVELOP ACADEMIC INTRINSIC MOTIVATION, IT IS IMPORTANT TO DEFINE THE FACTORS THAT AFFECT MOTIVATION (ADELMAN & CHANEY, 1982; ADELMAN & TAYLOR, 1983). THIS ARTICLE OFFERS EDUCATORS AN INSIGHT INTO THE EFFECTS OF DIFFERENT MOTIVATIONAL ORIENTATIONS ON THE SCHOOL LEARNING OF STUDENTS WITH LEARNING DISABILITIES, AS WELL AS INTO THE VARIABLES AFFECTING INTRINSIC AND EXTRINSIC MOTIVATION. ALSO INCLUDED ARE RECOMMENDATIONS, BASED ON EMPIRICAL EVIDENCE, FOR ENHANCING ACADEMIC INTRINSIC MOTIVATION IN LEARNERS OF VARYING ABILITIES AT ALL GRADE LEVELS. Interest in the various aspects of intrinsic and extrinsic motivation has accelerated in recent years. Motivational orientation is considered to be an important factor in determining the academic success of children with and without disabilities (Adelman & Taylor, 1986; Calder & Staw, 1975; Deci, 1975; Deci & Chandler, 1986; Schunk, 1991). 
Academic intrinsic motivation has been found to be significantly correlated with academic achievement in students with learning disabilities (Gottfried, 1985) and without learning disabilities (Adelman, 1978; Adelman & Taylor, 1983). However, children with learning disabilities (LD) are less likely than their nondisabled peers to be intrinsically motivated (Adelman & Chaney, 1982; Adelman & Taylor, 1986; Mastropieri & Scruggs, 1994; Smith, 1994). Students with LD have been found to have more positive attitudes toward school than toward school learning (Wilson & David, 1994). Wilson and David asked 89 students with LD to respond to items on the School Attitude Measures (SAM; Wick, 1990) and on the Children's Academic Intrinsic Motivation Inventory (CAIMI; Gottfried, 1986). The students with L D were found to have a more positive attitude toward the school environment than toward academic tasks. Research has also shown that students with LD may derive their self-perceptions from areas other than school, and do not see themselves as less competent in areas of school learning (Grolnick & Ryan, 1990). Although there is only a limited amount of research available on intrinsic motivation in the population with special needs (Adelman, 1978; Adelman & Taylor, 1986; Grolnick & Ryan, 1990), there is an abundance of research on the general school-age population. This article is an at tempt to use existing research to identify variables pertinent to the academic intrinsic motivation of children with learning disabilities. The first part of the article deals with the definitions of intrinsic and extrinsic motivation. The next part identifies some of the factors affecting the motivational orientation and subsequent academic achievement of school-age children. This is followed by empirical evidence of the effects of rewards on intrinsic motivation, and suggestions on enhancing intrinsic motivation in the learner. At the end, several strategies are presented that could be used by the teacher to develop and encourage intrinsic motivation in children with and without LD. l O R E M E D I A L A N D S P E C I A L E D U C A T I O N Volume 18. Number 1, January/February 1997, Pages 12-19 D E F I N I N G M O T I V A T I O N A L A T T R I B U T E S Intrinsic Motivation Intrinsic motivation has been defined as (a) participation in an activity purely out of curiosity, that is, from a need to know more about something (Deci, 1975; Gottfried, 1983; Woolfolk, 1990); (b) the desire to engage in an activity purely for the sake of participating in and completing a task (Bates, 1979; Deci, Vallerand, Pelletier, & Ryan, 1991); and (c) the desire to contribute (Mills, 1991). Academic intrinsic motivation has been measured by (a) the ability of the learner to persist with the task assigned (Brophy, 1983; Gottfried, 1983); (b) the amount of time spent by the student on tackling the task (Brophy, 1983; Gottfried, 1983); (c) the innate curiosity to learn (Gottfried, 1983); (d) the feeling of efficacy related to an activity (Gottfried, 1983; Schunk, 1991; Smith, 1994); (e) the desire to select an activity (Brophy, 1983); and (f) a combination of all these variables (Deci, 1975; Deci & Ryan, 1985). A student who is intrinsically motivated will persist with the assigned task, even though it may be difficult (Gottfried, 1983; Schunk, 1990), and will not need any type of reward or incentive to initiate or complete a task (Beck, 1978; Deci, 1975; Woolfolk, 1990). 
This type of student is more likely to complete the chosen task and be excited by the challenging nature of an activity. The intrinsically motivated student is also more likely to retain the concepts learned and to feel confident about tackling unfamiliar learning situations, like new vocabulary words. However, the amount of interest generated by the task also plays a role in the motivational orientation of the learner. An assigned task with zero interest value is less likely to motivate the student than is a task that arouses interest and curiosity. Intrinsic motivation is based in the innate, organismic needs for competence and self-determination (Deci & Ryan, 1985; Woolfolk, 1990), as well as the desire to seek and conquer challenges (Adelman & Taylor, 1990). People are likely to be motivated to complete a task on the basis of their level of interest and the nature of the challenge. Research has suggested that children with higher academic intrinsic motivation function more effectively in school (Adelman & Taylor, 1990; Boggiano & Barrett, 1992; Gottfried, 1990; Soto, 1988). Besides innate factors, there are several other variables that can affect intrinsic motivation. Extrinsic Motivation Adults often give the learner an incentive to participate in or to complete an activity. The incentive might be in the form of a tangible reward, such as money or candy. Or, it might be the likelihood of a reward in the future, such as a good grade. Or, it might be a nontangible reward, for example, verbal praise or a pat on the back. The incentive might also be exemption from a less liked activity or avoidance of punishment. These incentives are extrinsic motivators. A person is said to be extrinsically motivated when she or he undertakes a task purely for the sake of attaining a reward or for avoiding some punishment (Adelman & Taylor, 1990; Ball, 1984; Beck, 1978; Deci, 1975; Wiersma, 1992; Woolfolk, 1990). Extrinsic motivation can, especially in learning and other forms of creative work, interfere with intrinsic motivation (Benninga et al., 1991; Butler, 1989; Deci, 1975; McCullers, Fabes, & Moran, 1987). In such cases, it might be better not to offer rewards for participating in or for completing an activity, be it textbook learning or an organized play activity. Not only teachers but also parents have been found to negatively influence the motivational orientation of the child by providing extrinsic consequences contingent upon their school performance (Gottfried, Fleming, & Gottfried, 1994). The relationship between rewards (and other extrinsic factors) and the intrinsic motivation of the learner is outlined in the following sections. MOTIVATION AND THE LEARNER In a classroom, the student is expected to tackle certain types of tasks, usually with very limited choices. Most of the research done on motivation has been done in settings where the learner had a wide choice of activities, or in a free-play setting. In reality, the student has to complete tasks that are compulsory as well as evaluated (Brophy, 1983). Children are expected to complete a certain number of assignments that meet specified criteria. For example, a child may be asked to complete five multiplication problems and is expected to get correct answers to at least three. Teachers need to consider how instructional practices are designed from the motivational perspective (Schunk, 1990). Development of skills required for academic achievement can be influenced by instructional design. 
If the design undermines student ability and skill level, it can reduce motivation (Brophy, 1983; Schunk, 1990). This is especially applicable to students with disabilities. Students with LD have shown a significant increase in academic learning after engaging in interesting tasks like computer games designed to enhance learning (Adelman, Lauber, Nelson, & Smith, 1989). A common aim of educators is to help all students enhance their learning, regardless of the student's ability level. To achieve this outcome, the teacher has to develop a curriculum geared to the individual needs and ability levels of the students, especially the students with special needs. If the assigned task is within the child's ability level as well as inherently interesting, the child is very likely to be intrinsically motivated to tackle the task. The task should also be challenging enough to stimulate the child's desire to attain mastery. The probability of success or failure is often attributed to factors such as ability, effort, difficulty level of the task, R E M E D I A L A N D S P E C I A L E D U C A T I O N 1 O Volume 18, Number 1, January/February 1997 and luck (Schunk, 1990). One or more of these attributes might, in turn, affect the motivational orientation of a student. The student who is sure of some level of success is more likely to be motivated to tackle the task than one who is unsure of the outcome (Adelman & Taylor, 1990). A student who is motivated to learn will find school-related tasks meaningful (Brophy, 1983, 1987). Teachers can help students to maximize their achievement by adjusting the instructional design to their individual characteristics and motivational orientation. The personality traits and motivational tendency of learners with mild handicaps can either help them to compensate for their inadequate learning abilities and enhance performanc", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. 
Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "82c9c8a7a9dccfa59b09df595de6235c", "text": "Honeypots are closely monitored decoys that are employed in a network to study the trail of hackers and to alert network administrators of a possible intrusion. Using honeypots provides a cost-effective solution to increase the security posture of an organization. Even though it is not a panacea for security breaches, it is useful as a tool for network forensics and intrusion detection. Nowadays, they are also being extensively used by the research community to study issues in network security, such as Internet worms, spam control, DoS attacks, etc. In this paper, we advocate the use of honeypots as an effective educational tool to study issues in network security. We support this claim by demonstrating a set of projects that we have carried out in a network, which we have deployed specifically for running distributed computer security projects. The design of our projects tackles the challenges in installing a honeypot in academic institution, by not intruding on the campus network while providing secure access to the Internet. In addition to a classification of honeypots, we present a framework for designing assignments/projects for network security courses. The three sample honeypot projects discussed in this paper are presented as examples of the framework.", "title": "" }, { "docid": "da4b2452893ca0734890dd83f5b63db4", "text": "Diabetic retinopathy is when damage occurs to the retina due to diabetes, which affects up to 80 percent of all patients who have had diabetes for 10 years or more. The expertise and equipment required are often lacking in areas where diabetic retinopathy detection is most needed. Most of the work in the field of diabetic retinopathy has been based on disease detection or manual extraction of features, but this paper aims at automatic diagnosis of the disease into its different stages using deep learning. This paper presents the design and implementation of GPU accelerated deep convolutional neural networks to automatically diagnose and thereby classify high-resolution retinal images into 5 stages of the disease based on severity. The single model accuracy of the convolutional neural networks presented in this paper is 0.386 on a quadratic weighted kappa metric and ensembling of three such similar models resulted in a score of 0.3996.", "title": "" }, { "docid": "6fdb3ae03e6443765c72197eb032f4a0", "text": "This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desk-top microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of environmental variability are introduced by the use of desk-top microphones and different training and testing conditions: additive noise and spectral tilt introduced by linear filtering. 
An important attribute of the novel compensation algorithms described in this thesis is that they provide joint rather than independent compensation for these two types of degradation. Acoustical compensation is applied in our algorithms as an additive correction in the cepstral domain. This allows a higher degree of integration within SPHINX, the Carnegie Mellon speech recognition system, that uses the cepstrum as its feature vector. Therefore, these algorithms can be implemented very efficiently. Processing in many of these algorithms is based on instantaneous signal-to-noise ratio (SNR), as the appropriate compensation represents a form of noise suppression at low SNRs and spectral equalization at high SNRs. The compensation vectors for additive noise and spectral transformations are estimated by minimizing the differences between speech feature vectors obtained from a \"standard\" training corpus of speech and feature vectors that represent the current acoustical environment. In our work this is accomplished by a minimizing the distortion of vector-quantized cepstra that are produced by the feature extraction module in SPHINX. In this dissertation we describe several algorithms including the SNR-Dependent Cepstral Normalization, (SDCN) and the Codeword-Dependent Cepstral Normalization (CDCN). With CDCN, the accuracy of SPHINX when trained on speech recorded with a close-talking microphone and tested on speech recorded with a desk-top microphone is essentially the same obtained when the system is trained and tested on speech from the desk-top microphone. An algorithm for frequency normalization has also been proposed in which the parameter of the bilinear transformation that is used by the signal-processing stage to produce frequency warping is adjusted for each new speaker and acoustical environment. The optimum value of this parameter is again chosen to minimize the vector-quantization distortion between the standard environment and the current one. In preliminary studies, use of this frequency normalization produced a moderate additional decrease in the observed error rate.", "title": "" }, { "docid": "cc5d183cae6251b73e5302b81e4589db", "text": "Digital images in the real world are created by a variety of means and have diverse properties. A photographical natural scene image (NSI) may exhibit substantially different characteristics from a computer graphic image (CGI) or a screen content image (SCI). This casts major challenges to objective image quality assessment, for which existing approaches lack effective mechanisms to capture such content type variations, and thus are difficult to generalize from one type to another. To tackle this problem, we first construct a cross-content-type (CCT) database, which contains 1,320 distorted NSIs, CGIs, and SCIs, compressed using the high efficiency video coding (HEVC) intra coding method and the screen content compression (SCC) extension of HEVC. We then carry out a subjective experiment on the database in a well-controlled laboratory environment. Moreover, we propose a unified content-type adaptive (UCA) blind image quality assessment model that is applicable across content types. A key step in UCA is to incorporate the variations of human perceptual characteristics in viewing different content types through a multi-scale weighting framework. This leads to superior performance on the constructed CCT database. UCA is training-free, implying strong generalizability. 
To verify this, we test UCA on other databases containing JPEG, MPEG-2, H.264, and HEVC compressed images/videos, and observe that it consistently achieves competitive performance.", "title": "" }, { "docid": "d9bd23208ab6eb8688afea408a4c9eba", "text": "A novel ultra-wideband (UWB) bandpass filter with a 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with a multiple-mode resonator (MMR) to provide a wide transmission band and enhance out-of-band performance. To suppress signals ranging from 5 to 6 GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband, from 2.8 GHz to 5 GHz, has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband, from 6 GHz to 10.6 GHz, has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.", "title": "" } ]
scidocsrr
cf0fe5c9c997d68774acdd4659d308ac
Accurate and Novel Recommendations: An Algorithm Based on Popularity Forecasting
[ { "docid": "45f8c4e3409f8b27221e45e6c3485641", "text": "In recent years, time information is more and more important in collaborative filtering (CF) based recommender system because many systems have collected rating data for a long time, and time effects in user preference is stronger. In this paper, we focus on modeling time effects in CF and analyze how temporal features influence CF. There are four main types of time effects in CF: (1) time bias, the interest of whole society changes with time; (2) user bias shifting, a user may change his/her rating habit over time; (3) item bias shifting, the popularity of items changes with time; (4) user preference shifting, a user may change his/her attitude to some types of items. In this work, these four time effects are used by factorized model, which is called TimeSVD. Moreover, many other time effects are used by simple methods. Our time-dependent models are tested on Netflix data from Nov. 1999 to Dec. 2005. Experimental results show that prediction accuracy in CF can be improved significantly by using time information.", "title": "" }, { "docid": "af7584c0067de64024d364e321af133b", "text": "Recommendation systems have wide-spread applications in both academia and industry. Traditionally, performance of recommendation systems has been measured by their precision. By introducing novelty and diversity as key qualities in recommender systems, recently increasing attention has been focused on this topic. Precision and novelty of recommendation are not in the same direction, and practical systems should make a trade-off between these two quantities. Thus, it is an important feature of a recommender system to make it possible to adjust diversity and accuracy of the recommendations by tuning the model. In this paper, we introduce a probabilistic structure to resolve the diversity–accuracy dilemma in recommender systems. We propose a hybrid model with adjustable level of diversity and precision such that one can perform this by tuning a single parameter. The proposed recommendation model consists of two models: one for maximization of the accuracy and the other one for specification of the recommendation list to tastes of users. Our experiments on two real datasets show the functionality of the model in resolving accuracy–diversity dilemma and outperformance of the model over other classic models. The proposed method could be extensively applied to real commercial systems due to its low computational complexity and significant performance.", "title": "" } ]
[ { "docid": "6e07a006d4e34f35330c74116762a611", "text": "Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: The most ambiguous representations-those eliciting the greatest category uncertainty-were neither the eeriest nor the coldest.", "title": "" }, { "docid": "541075ddb29dd0acdf1f0cf3784c220a", "text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves the stateof-the-arts performance. 1", "title": "" }, { "docid": "a4f2a82daf86314363ceeac34cba7ed9", "text": "As a vital task in natural language processing, relation classification aims to identify relation types between entities from texts. In this paper, we propose a novel Att-RCNN model to extract text features and classify relations by combining recurrent neural network (RNN) and convolutional neural network (CNN). This network structure utilizes RNN to extract higher level contextual representations of words and CNN to obtain sentence features for the relation classification task. In addition to this network structure, both word-level and sentence-level attention mechanisms are employed in Att-RCNN to strengthen critical words and features to promote the model performance. Moreover, we conduct experiments on four distinct datasets: SemEval-2010 task 8, SemEval-2018 task 7 (two subtask datasets), and KBP37 dataset. 
Compared with the previous public models, Att-RCNN has the overall best performance and achieves the highest $F_{1}$ score, especially on the KBP37 dataset.", "title": "" }, { "docid": "ed0b19511e0c8fa14a9a089a72bb5145", "text": "We leverage crowd wisdom for multiple-choice question answering, and employ lightweight machine learning techniques to improve the aggregation accuracy of crowdsourced answers to these questions. In order to develop more effective aggregation methods and evaluate them empirically, we developed and deployed a crowdsourced system for playing the “Who wants to be a millionaire?” quiz show. Analyzing our data (which consist of more than 200,000 answers), we find that by just going with the most selected answer in the aggregation, we can answer over 90% of the questions correctly, but the success rate of this technique plunges to 60% for the later/harder questions in the quiz show. To improve the success rates of these later/harder questions, we investigate novel weighted aggregation schemes for aggregating the answers obtained from the crowd. By using weights optimized for reliability of participants (derived from the participants’ confidence), we show that we can pull up the accuracy rate for the harder questions by 15%, and to overall 95% average accuracy. Our results provide a good case for the benefits of applying machine learning techniques for building more accurate crowdsourced question answering systems.", "title": "" }, { "docid": "9b8a9c94e626e3932dd4a19cb6a5cf4c", "text": "Most existing computer and network systems authenticate a user only at the initial login session. This could be a critical security weakness, especially for high-security systems because it enables an impostor to access the system resources until the initial user logs out. This situation is encountered when the logged in user takes a short break without logging out or an impostor coerces the valid user to allow access to the system. To address this security flaw, we propose a continuous authentication scheme that continuously monitors and authenticates the logged in user. Previous methods for continuous authentication primarily used hard biometric traits, specifically fingerprint and face to continuously authenticate the initial logged in user. However, the use of these biometric traits is not only inconvenient to the user, but is also not always feasible due to the user's posture in front of the sensor. To mitigate this problem, we propose a new framework for continuous user authentication that primarily uses soft biometric traits (e.g., color of user's clothing and facial skin). The proposed framework automatically registers (enrolls) soft biometric traits every time the user logs in and fuses soft biometric matching with the conventional authentication schemes, namely password and face biometric. The proposed scheme has high tolerance to the user's posture in front of the computer system. Experimental results show the effectiveness of the proposed method for continuous user authentication.", "title": "" }, { "docid": "a9f8c6d1d10bedc23b100751c607f7db", "text": "Successful efforts in hand gesture recognition research within the last two decades paved the path for natural human–computer interaction systems. Unresolved challenges such as reliable identification of gesturing phase, sensitivity to size, shape, and speed variations, and issues due to occlusion keep hand gesture recognition research still very active. 
We provide a review of vision-based hand gesture recognition algorithms reported in the last 16 years. The methods using RGB and RGB-D cameras are reviewed with quantitative and qualitative comparisons of algorithms. Quantitative comparison of algorithms is done using a set of 13 measures chosen from different attributes of the algorithm and the experimental methodology adopted in algorithm evaluation. We point out the need for considering these measures together with the recognition accuracy of the algorithm to predict its success in real-world applications. The paper also reviews 26 publicly available hand gesture databases and provides the web-links for their download. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "7d7412baa5f23d4e710e6be26eee2b20", "text": "Result diversification has recently attracted much attention as a means of increasing user satisfaction in recommender systems and web search. Many different approaches have been proposed in the related literature for the diversification problem. In this paper, we survey, classify and comparatively study the various definitions, algorithms and metrics for result diversification.", "title": "" }, { "docid": "e6c0aa517c857ed217fc96aad58d7158", "text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.", "title": "" }, { "docid": "9bc681a751d8fe9e2c93204ea06786b8", "text": "In this paper, a complimentary split ring resonator (CSRR) enhanced wideband log-periodic antenna with coupled microstrip line feeding is presented. Here in this work, coupled line feeding to the patches is proposed to avoid individual microstrip feed matching complexities. Three CSRR elements were etched in the ground plane. Individual patches were designed according to the conventional log-periodic design rules. FR4 dielectric substrate is used to design a five-element log-periodic patch with CSRR printed on the ground plane. The result shows a wide operating band ranging from 4.5 GHz to 9 GHz. Surface current distribution of the antenna shows a strong resonance of CSRR's placed in the ground plane. The design approach of the antenna is reported and performance of the proposed antenna has been evaluated through three dimensional electromagnetic simulation validating performance enhancement of the antenna due to presence of CSRRs. 
Antennas designed in this work may be used in satellite and indoor wireless communication.", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "b69aae02d366b75914862f5bc726c514", "text": "Nitrification in commercial aquaculture systems has been accomplished using many different technologies (e.g. trickling filters, fluidized beds and rotating biological contactors) but commercial aquaculture systems have been slow to adopt denitrification. Denitrification (conversion of nitrate, NO3 − to nitrogen gas, N2) is essential to the development of commercial, closed, recirculating aquaculture systems (B1 water turnover 100 day). The problems associated with manually operated denitrification systems have been incomplete denitrification (oxidation–reduction potential, ORP\\−200 mV) with the production of nitrite (NO2 ), nitric oxide (NO) and nitrous oxide (N2O) or over-reduction (ORPB−400 mV), resulting in the production of hydrogen sulfide (H2S). The need for an anoxic or anaerobic environment for the denitrifying bacteria can also result in lowered dissolved oxygen (DO) concentrations in the rearing tanks. These problems have now been overcome by the development of a computer automated denitrifying bioreactor specifically designed for aquaculture. The prototype bioreactor (process control version) has been in operation for 4 years and commercial versions of the bioreactor are now in continuous use; these bioreactors can be operated in either batch or continuous on-line modes, maintaining NO3 − concentrations below 5 ppm. The bioreactor monitors DO, ORP, pH and water flow rate and controls water pump rate and carbon feed rate. A fuzzy logic-based expert system replaced the classical process control system for operation of the bioreactor, continuing to optimize denitrification rates and eliminate discharge of toxic by-products (i.e. NO2 , NO, N2O or www.elsevier.nl/locate/aqua-online * Corresponding author. Tel.: +1-409-7722133; fax: +1-409-7726993. E-mail address: pglee@utmb.edu (P.G. Lee) 0144-8609/00/$ see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0144 -8609 (00 )00046 -7 38 P.G. Lee et al. / Aquacultural Engineering 23 (2000) 37–59 H2S). The fuzzy logic rule base was composed of \\40 fuzzy rules; it took into account the slow response time of the system. The fuzzy logic-based expert system maintained nitrate-nitrogen concentration B5 ppm while avoiding any increase in NO2 or H2S concentrations. © 2000 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "946c81bc2361e826904c8370fc00167f", "text": "This paper describes the CMCRC systems entered in the TAC 2010 entity linking challenge. The best performing system we describe implements the document-level entity linking system from Cucerzan (2007), with several additions that exploit global information. Our implementation of Cucerzan’s method achieved a score of 74.9% in development experiments. Additional global information improves performance to 78.4%. On the TAC 2010 test data, our best system achieves a score of 84.4%, which is second in the overall rankings of submitted systems.", "title": "" }, { "docid": "56a96e6052e04121cfc7fb9008775d15", "text": "We consider the level of information security provided by random linear network coding in network scenarios in which all nodes comply with the communication protocols yet are assumed to be potential eavesdroppers (i.e. \"nice but curious\"). For this setup, which differs from wiretapping scenarios considered previously, we develop a natural algebraic security criterion, and prove several of its key properties. A preliminary analysis of the impact of network topology on the overall network coding security, in particular for complete directed acyclic graphs, is also included.", "title": "" }, { "docid": "9458b13e5a87594140d7ee759e06c76c", "text": "Digital ecosystem, as a neoteric terminology, has emerged along with the appearance of Business Ecosystem which is a form of naturally existing business network of small and medium enterprises. However, few researches have been found in the field of defining digital ecosystem. In this paper, by means of ontology technology as our research methodology, we propose to develop a conceptual model for digital ecosystem. By introducing an innovative ontological notation system, we create the hierarchical framework of digital ecosystem form up to down, based on the related theories form Digital ecosystem and business intelligence institute.", "title": "" }, { "docid": "e2d0a4d2c2c38722d9e9493cf506fc1c", "text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation. The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.", "title": "" }, { "docid": "8e878e5083d922d97f8d573c54cbb707", "text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. 
In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.", "title": "" }, { "docid": "e5ecbd3728e93badd4cfbf5eef6957f9", "text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. 
We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.", "title": "" }, { "docid": "49d3548babbc17cf265c60745dbea1a0", "text": "OBJECTIVE\nTo evaluate the role of transabdominal three-dimensional (3D) ultrasound in the assessment of the fetal brain and its potential for routine neurosonographic studies.\n\n\nMETHODS\nWe studied prospectively 202 consecutive fetuses between 16 and 24 weeks' gestation. A 3D ultrasound volume of the fetal head was acquired transabdominally. The entire brain anatomy was later analyzed using the multiplanar images by a sonologist who was expert in neonatal cranial sonography. The quality of the conventional planes obtained (coronal, sagittal and axial, at different levels) and the ability of the 3D multiplanar neuroscan to visualize properly the major anatomical structures of the brain were evaluated.\n\n\nRESULTS\nAcceptable cerebral multiplanar images were obtained in 92% of the cases. The corpus callosum could be seen in 84% of the patients, the fourth ventricle in 78%, the lateral sulcus (Sylvian fissure) in 86%, the cingulate sulcus in 75%, the cerebellar hemispheres in 98%, the cerebellar vermis in 92%, the medulla oblongata in 97% and the cavum vergae in 9% of them. The thalami and the cerebellopontine cistern (cisterna magna) were identified in all cases. At or beyond 20 weeks, superior visualization (in > 90% of cases) was achieved of the cerebral fissures, the corpus callosum (97%), the supracerebellar cisterns (92%) and the third ventricle (93%). Some cerebral fissures were seen initially at 16-17 weeks.\n\n\nCONCLUSION\nMultiplanar images obtained by transabdominal 3D ultrasound provide a simple and effective approach for detailed evaluation of the fetal brain anatomy. This technique has the potential to be used in the routine fetal anomaly scan.", "title": "" } ]
scidocsrr
b0ac6e8e8245cb27b08261ffaaf0bd19
3D Convolutional Neural Network Based on Face Anti-spoofing
[ { "docid": "5ce4f8227c5eebfb8b7b1dffc5557712", "text": "In this paper, we propose a novel approach for face spoofing detection using the high-order Local Derivative Pattern from Three Orthogonal Planes (LDP-TOP). The proposed method is not only simple to derive and implement, but also highly efficient, since it takes into account both spatial and temporal information in different directions of subtle face movements. According to experimental results, the proposed approach outperforms state-of-the-art methods on three reference datasets, namely Idiap REPLAY-ATTACK, CASIA-FASD, and MSU MFSD. Moreover, it requires only 25 video frames from each video, i.e., only one second, and thus potentially can be performed in real time even on low-cost devices.", "title": "" }, { "docid": "109efc8fbed0f2a5bab120a2d7a25c81", "text": "Spoofing face recognition systems with photos or videos of someone else is not difficult. Sometimes, all one needs is to display a picture on a laptop monitor or a printed photograph to the biometric system. In order to detect this kind of spoofs, in this paper we present a solution that works either with printed or LCD displayed photographs, even under bad illumination conditions without extra-devices or user involvement. Tests conducted on large databases show good improvements of classification accuracy as well as true positive and false positive rates compared to the state-of-the-art.", "title": "" } ]
[ { "docid": "870159d500da7a415bac4ce6184c9556", "text": "We propose a versatile framework in which one can employ different machine learning algorithms to successfully distinguish between malware files and clean files, while aiming to minimise the number of false positives. In this paper we present the ideas behind our framework by working firstly with cascade one-sided perceptrons and secondly with cascade kernelized one-sided perceptrons. After having been successfully tested on medium-size datasets of malware and clean files, the ideas behind this framework were submitted to a scaling-up process that enable us to work with very large datasets of malware and clean files.", "title": "" }, { "docid": "9a4afd76319987b37edec26ca79038b2", "text": "Overlapped fingerprints are commonly encountered in latent fingerprints lifted from crime scenes. Such overlapped fingerprints can hardly be processed by state-of-the-art fingerprint matchers. Several methods have been proposed to separate the overlapped fingerprints. However, these methods neither provide robust separation results, nor could be generalized for most overlapped fingerprints. In this paper, we propose a novel latent overlapped fingerprints separation algorithm based on adaptive orientation model fitting. Different from existing methods, our algorithm estimates the initial orientation fields in a more accurate way and then separates the orientation fields for component fingerprints through an iterative correction process. Global orientation field models are used to predict and correct the orientations in overlapped regions. Experimental results on the latent overlapped fingerprints database show that the proposed algorithm outperforms the state-of-the-art algorithm in terms of accuracy.", "title": "" }, { "docid": "7bb079fd51771a9dc45a73bc53a797ee", "text": "This paper analyzes a recently published algorithm for page replacement in hierarchical paged memory systems [O'Neil et al. 1993]. The algorithm is called the LRU-<italic>K</italic> method, and reduces to the well-known LRU (Least Recently Used) method for <italic>K</italic> = 1. Previous work [O'Neil et al. 1993; Weikum et al. 1994; Johnson and Shasha 1994] has shown the effectiveness for <italic>K</italic> > 1 by simulation, especially in the most common case of <italic>K</italic> = 2. The basic idea in LRU-<italic>K</italic> is to keep track of the times of the last <italic>K</italic> references to memory pages, and to use this statistical information to rank-order the pages as to their expected future behavior. Based on this the page replacement policy decision is made: which memory-resident page to replace when a newly accessed page must be read into memory. In the current paper, we prove, under the assumptions of the independent reference model, that LRU-<italic>K</italic> is optimal. Specifically we show: given the times of the (up to) <italic>K</italic> most recent references to each disk page, no other algorithm <italic>A</italic> making decisions to keep pages in a memory buffer holding <italic>n</italic> - 1 pages based on this infomation can improve on the expected number of I/Os to access pages over the LRU-<italic>K</italic> algorithm using a memory buffer holding <italic>n</italic> pages. 
The proof uses the Bayesian formula to relate the space of actual page probabilities of the model to the space of observable page numbers on which the replacement decision is acutally made.", "title": "" }, { "docid": "14d5fe4a4af7c6d2e530eae57d359a9f", "text": "The new formulation of the stochastic vortex particle method has been presented. Main elements of the algorithms: the construction of the particles, governing equations, stretching modeling and boundary condition enforcement are described. The test case is the unsteady flow past a spherical body. Sample results concerning patterns in velocity and vorticity fields, streamlines, pressure and aerodynamic forces are presented.", "title": "" }, { "docid": "10298bbeb9e361b9a841175590c8be7f", "text": "BACKGROUND\nPregnant women with an elevated viral load of hepatitis B virus (HBV) have a risk of transmitting infection to their infants, despite the infants' receiving hepatitis B immune globulin.\n\n\nMETHODS\nIn this multicenter, double-blind clinical trial performed in Thailand, we randomly assigned hepatitis B e antigen (HBeAg)-positive pregnant women with an alanine aminotransferase level of 60 IU or less per liter to receive tenofovir disoproxil fumarate (TDF) or placebo from 28 weeks of gestation to 2 months post partum. Infants received hepatitis B immune globulin at birth and hepatitis B vaccine at birth and at 1, 2, 4, and 6 months. The primary end point was a hepatitis B surface antigen (HBsAg)-positive status in the infant, confirmed by the HBV DNA level at 6 months of age. We calculated that a sample of 328 women would provide the trial with 90% power to detect a difference of at least 9 percentage points in the transmission rate (expected rate, 3% in the TDF group vs. 12% in the placebo group).\n\n\nRESULTS\nFrom January 2013 to August 2015, we enrolled 331 women; 168 women were randomly assigned to the TDF group and 163 to the placebo group. At enrollment, the median gestational age was 28.3 weeks, and the median HBV DNA level was 8.0 log10 IU per milliliter. Among 322 deliveries (97% of the participants), there were 319 singleton births, two twin pairs, and one stillborn infant. The median time from birth to administration of hepatitis B immune globulin was 1.3 hours, and the median time from birth to administration of hepatitis B vaccine was 1.2 hours. In the primary analysis, none of the 147 infants (0%; 95% confidence interval [CI], 0 to 2) in the TDF group were infected, as compared with 3 of 147 (2%; 95% CI, 0 to 6) in the placebo group (P=0.12). The rate of adverse events did not differ significantly between groups. The incidence of a maternal alanine aminotransferase level of more than 300 IU per liter after discontinuation of the trial regimen was 6% in the TDF group and 3% in the placebo group (P=0.29).\n\n\nCONCLUSIONS\nIn a setting in which the rate of mother-to-child HBV transmission was low with the administration of hepatitis B immune globulin and hepatitis B vaccine in infants born to HBeAg-positive mothers, the additional maternal use of TDF did not result in a significantly lower rate of transmission. (Funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development; ClinicalTrials.gov number, NCT01745822 .).", "title": "" }, { "docid": "57d34fd75067e98ac10b26a7f5c92f66", "text": "OBJECTIVE\nMeeting the complex needs of patients with chronic common mental health disorders (CMHDs) may be the greatest challenge facing organized medical practice. 
On the basis of a well-established and proven theoretical foundation for controlled respiration as a behavioral intervention for CMHDs, as well as preliminary evidence that gamification can improve health outcomes through increasing patient engagement, this randomized controlled pilot study evaluated the feasibility and clinical efficacy of a mobile health game called \"Flowy\" ( www.flowygame.com ) that digitally delivered breathing retraining exercises for anxiety, panic, and hyperventilation symptom management.\n\n\nMATERIALS AND METHODS\nWe designed an unblinded, Web-based, parallel-group randomized controlled trial focusing on feasibility, clinical efficacy, and design proof of concept. In the intervention condition (n = 31), participants received free access to \"Flowy\" for 4 weeks. In the control condition (n = 32), participants were placed on a waitlist for 4 weeks before being offered free access to \"Flowy.\" Online measurements using psychological self-report questionnaires were made at 2 and 4 weeks post-baseline.\n\n\nRESULTS\nAt trial conclusion, participants found \"Flowy\" acceptable as an anxiety management intervention. \"Flowy\" engaged participants sufficiently to endorse proactive gameplay. Intent-to-treat analysis revealed a reduction in anxiety, panic, and self-report hyperventilation scores in both trial arms, with the intervention arm experiencing greater quality of life. Participants perceived \"Flowy\" as a fun and useful intervention, proactively used \"Flowy\" as part of their care, and would recommend \"Flowy\" to family and friends.\n\n\nCONCLUSIONS\nOur results suggest that a digital delivery of breathing retraining exercises through a mobile health game can manage anxiety, panic, and hyperventilation symptoms associated with CMHDs.", "title": "" }, { "docid": "2f1ad82127aa6fb65b712d395c31f690", "text": "This paper presents a 100-300-GHz quasi-optical network analyzer using compact transmitter and receiver modules. The transmitter includes a wideband double bow-tie slot antenna and employs a Schottky diode as a frequency harmonic multiplier. The receiver includes a similar antenna, a Schottky diode used as a subharmonic mixer, and an LO/IF diplexer. The 100-300-GHz RF signals are the 5th-11th harmonics generated by the frequency multiplier when an 18-27-GHz LO signal is applied. The measured transmitter conversion gain with Pin = 18$ dBm is from -35 to -59 dB for the 5th-11th harmonic, respectively, and results in a transmitter EIRP from +3 to -20 dBm up to 300 GHz. The measured mixer conversion gain is from -30 to -47 dB at the 5th-11th harmonic, respectively. The system has a dynamic range > 60 dB at 200 GHz in a 100-Hz bandwidth for a transmit and receive system based on 12-mm lenses and spaced 60 cm from each other. Frequency-selective surfaces at 150 and 200 GHz are tested by the proposed design and their measured results agree with simulations. Application areas are low-cost scalar network analyzers for wideband quasi-optical 100 GHz-1 THz measurements.", "title": "" }, { "docid": "7e6eab1db77c8404720563d0eed1b325", "text": "With the success of Open Data a huge amount of tabular data sources became available that could potentially be mapped and linked into the Web of (Linked) Data. Most existing approaches to “semantically label” such tabular data rely on mappings of textual information to classes, properties, or instances in RDF knowledge bases in order to link – and eventually transform – tabular data into RDF. 
However, as we will illustrate, Open Data tables typically contain a large portion of numerical columns and/or non-textual headers; therefore solutions that solely focus on textual “cues” are only partially applicable for mapping such data sources. We propose an approach to find and rank candidates of semantic labels and context descriptions for a given bag of numerical values. To this end, we apply a hierarchical clustering over information taken from DBpedia to build a background knowledge graph of possible “semantic contexts” for bags of numerical values, over which we perform a nearest neighbour search to rank the most likely candidates. Our evaluation shows that our approach can assign fine-grained semantic labels, when there is enough supporting evidence in the background knowledge graph. In other cases, our approach can nevertheless assign high level contexts to the data, which could potentially be used in combination with other approaches to narrow down the search space of possible labels.", "title": "" }, { "docid": "8685e00d94d2362a5d6cfab51b61ed99", "text": "In the late 1980s and early 1990s, object-oriented programming revolutionized software development, popularizing the approach of building of applications as collections of modular components. Today we are seeing a similar revolution in distributed system development, with the increasing popularity of microservice architectures built from containerized software components. Containers [15] [22] [1] [2] are particularly well-suited as the fundamental “object” in distributed systems by virtue of the walls they erect at the container boundary. As this architectural style matures, we are seeing the emergence of design patterns, much as we did for objectoriented programs, and for the same reason – thinking in terms of objects (or containers) abstracts away the lowlevel details of code, eventually revealing higher-level patterns that are common to a variety of applications and algorithms. This paper describes three types of design patterns that we have observed emerging in container-based distributed systems: single-container patterns for container management, single-node patterns of closely cooperating containers, and multi-node patterns for distributed algorithms. Like object-oriented patterns before them, these patterns for distributed computation encode best practices, simplify development, and make the systems where they are used more reliable.", "title": "" }, { "docid": "4348c83744962fcc238e7f73abecfa5e", "text": "We introduce MeSys, a meaning-based approach, for solving English math word problems (MWPs) via understanding and reasoning in this paper. It first analyzes the text, transforms both body and question parts into their corresponding logic forms, and then performs inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating an extracted math quantity with its associated context information (i.e., the physical meaning of this quantity). Statistical models are proposed to select the operator and operands. A noisy dataset is designed to assess if a solver solves MWPs mainly via understanding or mechanical pattern matching. 
Experimental results show that our approach outperforms existing systems on both benchmark datasets and the noisy dataset, which demonstrates that the proposed approach understands the meaning of each quantity in the text more.", "title": "" }, { "docid": "2e6fcd8781e2f4cd7944ce0732e38d7c", "text": "Hashing has been widely used for approximate nearest neighbor (ANN) search in big data applications because of its low storage cost and fast retrieval speed. The goal of hashing is to map the data points from the original space into a binary-code space where the similarity (neighborhood structure) in the original space is preserved. By directly exploiting the similarity to guide the hashing code learning procedure, graph hashing has attracted much attention. However, most existing graph hashing methods cannot achieve satisfactory performance in real applications due to the high complexity for graph modeling. In this paper, we propose a novel method, called scalable graph hashing with feature transformation (SGH), for large-scale graph hashing. Through feature transformation, we can effectively approximate the whole graph without explicitly computing the similarity graph matrix, based on which a sequential learning method is proposed to learn the hash functions in a bit-wise manner. Experiments on two datasets with one million data points show that our SGH method can outperform the state-of-the-art methods in terms of both accuracy and scalability.", "title": "" }, { "docid": "3cbc035529138be1d6f8f66a637584dd", "text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.", "title": "" }, { "docid": "f5ac489e8e387321abd9d3839d7d8ba2", "text": "Online social networks like Slashdot bring valuable information to millions of users - but their accuracy is based on the integrity of their user base. Unfortunately, there are many “trolls” on Slashdot who post misinformation and compromise system integrity. In this paper, we develop a general algorithm called TIA (short for Troll Identification Algorithm) to classify users of an online “signed” social network as malicious (e.g. trolls on Slashdot) or benign (i.e. normal honest users). Though applicable to many signed social networks, TIA has been tested on troll detection on Slashdot Zoo under a wide variety of parameter settings. 
Its running time is faster than many past algorithms and it is significantly more accurate than existing methods.", "title": "" }, { "docid": "4d1f7ca631304e03b720c501d7e9a227", "text": "Due to the open and distributed characteristics of web service, its access control becomes a challenging problem which has not been addressed properly. In this paper, we show how semantic web technologies can be used to build a flexible access control system for web service. We follow the Role-based Access Control model and extend it with credential attributes. The access control model is represented by a semantic ontology, and specific semantic rules are constructed to implement such as dynamic roles assignment, separation of duty constraints and roles hierarchy reasoning, etc. These semantic rules can be verified and executed automatically by the reasoning engine, which can simplify the definition and enhance the interoperability of the access control policies. The basic access control architecture based on the semantic proposal for web service is presented. Finally, a prototype of the system is implemented to validate the proposal.", "title": "" }, { "docid": "a6e52c6ab38bad4124cf9205720625a2", "text": "We describe the first direct brain-to-brain interface in humans and present results from experiments involving six different subjects. Our non-invasive interface, demonstrated originally in August 2013, combines electroencephalography (EEG) for recording brain signals with transcranial magnetic stimulation (TMS) for delivering information to the brain. We illustrate our method using a visuomotor task in which two humans must cooperate through direct brain-to-brain communication to achieve a desired goal in a computer game. The brain-to-brain interface detects motor imagery in EEG signals recorded from one subject (the \"sender\") and transmits this information over the internet to the motor cortex region of a second subject (the \"receiver\"). This allows the sender to cause a desired motor response in the receiver (a press on a touchpad) via TMS. We quantify the performance of the brain-to-brain interface in terms of the amount of information transmitted as well as the accuracies attained in (1) decoding the sender's signals, (2) generating a motor response from the receiver upon stimulation, and (3) achieving the overall goal in the cooperative visuomotor task. Our results provide evidence for a rudimentary form of direct information transmission from one human brain to another using non-invasive means.", "title": "" }, { "docid": "3e3aee0dc9b21c19335a0d01ed43116d", "text": "Blockchain is a distributed system with efficient transaction recording and has been widely adopted in sharing economy. Although many existing privacy-preserving methods on the blockchain have been proposed, finding a trade-off between keeping speed and preserving privacy of transactions remain challenging. To address this limitation, we propose a novel Fast and Privacy-preserving method based on the Permissioned Blockchain (FPPB) for fair transactions in sharing economy. Without breaking the verifying protocol and bringing additional off-blockchain interactive communication, FPPB protects the privacy and fairness of transactions. Additionally, experiments are implemented in EthereumJ (a Java implementation of the Ethereum protocol) to measure the performance of FPPB. 
Compared with normal transactions without cryptographic primitives, FPPB only slows down transactions slightly.", "title": "" }, { "docid": "fdf95905dd8d3d8dcb4388ac921b3eaa", "text": "Relation classification is associated with many potential applications in the artificial intelligence area. Recent approaches usually leverage neural networks based on structure features such as syntactic or dependency features to solve this problem. However, high-cost structure features make such approaches inconvenient to be directly used. In addition, structure features are probably domaindependent. Therefore, this paper proposes a bidirectional long-short-term-memory recurrent-neuralnetwork (Bi-LSTM-RNN) model based on low-cost sequence features to address relation classification. This model divides a sentence or text segment into five parts, namely two target entities and their three contexts. It learns the representations of entities and their contexts, and uses them to classify relations. We evaluate our model on two standard benchmark datasets in different domains, namely SemEval-2010 Task 8 and BioNLP-ST 2016 Task BB3. In the former dataset, our model achieves comparable performance compared with other models using sequence features. In the latter dataset, our model obtains the third best results compared with other models in the official evaluation. Moreover, we find that the context between two target entities plays the most important role in relation classification. Furthermore, statistic experiments show that the context between two target entities can be used as an approximate replacement of the shortest dependency path when dependency parsing is not used.", "title": "" }, { "docid": "4a1b12e72cbdeffd2052c14fa571ab94", "text": "Brain-computer interface (BCI) systems can allow their users to communicate with the external world by recognizing intention directly from their brain activity without the assistance of the peripheral motor nervous system. The P300-speller is one of the most widely used visual BCI applications. In previous studies, a flip stimulus (rotating the background area of the character) that was based on apparent motion, suffered from less refractory effects. However, its performance was not improved significantly. In addition, a presentation paradigm that used a \"zooming\" action (changing the size of the symbol) has been shown to evoke relatively higher P300 amplitudes and obtain a better BCI performance. To extend this method of stimuli presentation within a BCI and, consequently, to improve BCI performance, we present a new paradigm combining both the flip stimulus with a zooming action. This new presentation modality allowed BCI users to focus their attention more easily. We investigated whether such an action could combine the advantages of both types of stimuli presentation to bring a significant improvement in performance compared to the conventional flip stimulus. The experimental results showed that the proposed paradigm could obtain significantly higher classification accuracies and bit rates than the conventional flip paradigm (p<0.01).", "title": "" }, { "docid": "5d8c4cda10b47030e2a892a38abc7a2d", "text": "Visual emotion recognition aims to associate images with appropriate emotions. There are different visual stimuli that can affect human emotion from low-level to high-level, such as color, texture, part, object, etc. However, most existing methods treat different levels of features as independent entity without having effective method for feature fusion. 
In this paper, we propose a unified CNN-RNN model to predict the emotion based on the fused features from different levels by exploiting the dependency among them. Our proposed architecture leverages convolutional neural network (CNN) with multiple layers to extract different levels of features within a multi-task learning framework, in which two related loss functions are introduced to learn the feature representation. Considering the dependencies within the low-level and high-level features, a bidirectional recurrent neural network (RNN) is proposed to integrate the learned features from different layers in the CNN model. Extensive experiments on both Internet images and art photo datasets demonstrate that our method outperforms the state-of-the-art methods with at least 7% performance improvement.", "title": "" }, { "docid": "07a1d62b56bd1e2acf4282f69e85fb93", "text": "Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad-hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can be estimated in units of bit using advanced statistical models of natural images. Our extensive studies based upon six publicly-available subject-rated image databases concluded with three useful findings. First, information content weighting leads to consistent improvement in the performance of IQA algorithms. Second, surprisingly, with information content weighting, even the widely criticized peak signal-to-noise-ratio can be converted to a competitive perceptual quality measure when compared with state-of-the-art algorithms. Third, the best overall performance is achieved by combining information content weighting with multiscale structural similarity measures.", "title": "" } ]
scidocsrr
3de1c46ea69556580bc3e111cfb7e5ff
Parallel Clustering Algorithms: Survey
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index Terms—Pattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" } ]
[ { "docid": "0614f84f0a5d62f707d545943b936667", "text": "A new input-output coupled inductor (IOCI) is proposed for reducing current ripples and magnetic components. Moreover, a current-source-type circuit using active-clamp mechanism and a current doubler with synchronous rectifier are presented to achieve high efficiency in low input-output voltage applications. The configuration of the IOCI is realized by three windings on a common core, and has the properties of an input inductor at the input-side and two output inductors at the output- side. An active clamped ripple-free dc-dc converter using the proposed IOCI is analyzed in detail and optimized for high power efficiency. Experimental results for 80 W (5 V/16 A) at a constant switching frequency of 100 kHz are obtained to show the performance of the proposed converter.", "title": "" }, { "docid": "1ceab925041160f17163940360354c55", "text": "A complete reconstruction of D.H. Lehmer’s ENIAC set-up for computing the exponents of p modulo 2 is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties of early programmers to find a way between a man operated and a machine operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).", "title": "" }, { "docid": "34bd41f7384d6ee4d882a39aec167b3e", "text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.", "title": "" }, { "docid": "fe70c7614c0414347ff3c8bce7da47e7", "text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguisticallymotivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.", "title": "" }, { "docid": "146746c73471d0a4222267e819c79e85", "text": "Distributed Generation has become a consolidated phenomenon in distribution grids in the last few years. Even though the matter is very articulated and complex, islanding operation of distribution grid is being considered as a possible measure to improve service continuity. In this paper a novel static converter control strategy to obtain frequency and voltage regulation in islanded distribution grid is proposed. Two situations are investigated: in the former one electronic converter and one synchronous generator are present, while in the latter only static generation is available. In both cases, converters are supposed to be powered by DC micro-grids comprising of generation and storage devices. 
In the first case converter control will realize virtual inertia and efficient frequency regulation by mean of PID regulator; this approach allows to emulate a very high equivalent inertia and to obtain fast frequency regulation, which could not be possible with traditional regulators. In the second situation a Master-Slave approach will be adopted to maximize frequency and voltage stability. Simulation results confirm that the proposed control allows islanded operation with high frequency and voltage stability under heavy load variations.", "title": "" }, { "docid": "8cd4b3fe9ab6f1efdfdcf8500aa10fe6", "text": "Compact explicit feature maps provide a practical framework to scale kernel methods to large-scale learning, but deriving such maps for many types of kernels remains a challenging open problem. Among the commonly used kernels for nonlinear classification are polynomial kernels, for which low approximation error has thus far necessitated explicit feature maps of large dimensionality, especially for higher-order polynomials. Meanwhile, because polynomial kernels are unbounded, they are frequently applied to data that has been normalized to unit `2 norm. The question we address in this work is: if we know a priori that data is normalized, can we devise a more compact map? We show that a putative affirmative answer to this question based on Random Fourier Features is impossible in this setting, and introduce a new approximation paradigm, Spherical Random Fourier (SRF) features, which circumvents these issues and delivers a compact approximation to polynomial kernels for data on the unit sphere. Compared to prior work, SRF features are less rank-deficient, more compact, and achieve better kernel approximation, especially for higher-order polynomials. The resulting predictions have lower variance and typically yield better classification accuracy.", "title": "" }, { "docid": "0772992b4c5a57b1c8e03fdabfa60218", "text": "Investigation of the cryptanalytic strength of RSA cryptography requires computing many GCDs of two long integers (e.g., of length 1024 bits). This paper presents a high throughput parallel algorithm to perform many GCD computations concurrently on a GPU based on the CUDA architecture. The experiments with an NVIDIA GeForce GTX285 GPU and a single core of 3.0 GHz Intel Core2 Duo E6850 CPU show that the proposed GPU algorithm runs 11.3 times faster than the corresponding CPU algorithm.", "title": "" }, { "docid": "2ba69997f51aa61ffeccce33b2e69054", "text": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. 
The video of our experiments can be found at https: //sites.google.com/view/simopt.", "title": "" }, { "docid": "e576b8677816ec54c7dcf52e633e6c9f", "text": "OBJECTIVE\nThe objective of this study was to determine the level of knowledge, comfort, and training related to the medical management of child abuse among pediatrics, emergency medicine, and family medicine residents.\n\n\nMETHODS\nSurveys were administered to program directors and third-year residents at 67 residency programs. The resident survey included a 24-item quiz to assess knowledge regarding the medical management of physical and sexual child abuse. Sites were solicited from members of a network of child abuse physicians practicing at institutions with residency programs.\n\n\nRESULTS\nAnalyzable surveys were received from 53 program directors and 462 residents. Compared with emergency medicine and family medicine programs, pediatric programs were significantly larger and more likely to have a medical provider specializing in child abuse pediatrics, have faculty primarily responsible for child abuse training, use a written curriculum for child abuse training, and offer an elective rotation in child abuse. Exposure to child abuse training and abused patients was highest for pediatric residents and lowest for family medicine residents. Comfort with managing child abuse cases was lowest among family medicine residents. On the knowledge quiz, pediatric residents significantly outperformed emergency medicine and family medicine residents. Residents with high knowledge scores were significantly more likely to come from larger programs and programs that had a center, provider, or interdisciplinary team that specialized in child abuse pediatrics; had a physician on faculty responsible for child abuse training; used a written curriculum for child abuse training; and had a required rotation in child abuse pediatrics.\n\n\nCONCLUSIONS\nBy analyzing the relationship between program characteristics and residents' child abuse knowledge, we found that pediatric programs provide far more training and resources for child abuse education than emergency medicine and family medicine programs. As leaders, pediatricians must establish the importance of this topic in the pediatric education of residents of all specialties.", "title": "" }, { "docid": "dc84e401709509638a1a9e24d7db53e1", "text": "AIM AND OBJECTIVES\nExocrine pancreatic insufficiency caused by inflammation or pancreatic tumors results in nutrient malfunction by a lack of digestive enzymes and neutralization compounds. Despite satisfactory clinical results with current enzyme therapies, a normalization of fat absorption in patients is rare. An individualized therapy is required that includes high dosage of enzymatic units, usage of enteric coating, and addition of gastric proton pump inhibitors. The key goal to improve this therapy is to identify digestive enzymes with high activity and stability in the gastrointestinal tract.\n\n\nMETHODS\nWe cloned and analyzed three novel ciliate lipases derived from Tetrahymena thermophila. Using highly precise pH-STAT-titration and colorimetric methods, we determined stability and lipolytic activity under physiological conditions in comparison with commercially available porcine and fungal digestive enzyme preparations. 
We measured from pH 2.0 to 9.0, with different bile salts concentrations, and substrates such as olive oil and fat derived from pig diet.\n\n\nRESULTS\nCiliate lipases CL-120, CL-130, and CL-230 showed activities up to 220-fold higher than Creon, pancreatin standard, and rizolipase Nortase within a pH range from pH 2.0 to 9.0. They are highly active in the presence of bile salts and complex pig diet substrate, and more stable after incubation in human gastric juice compared with porcine pancreatic lipase and rizolipase.\n\n\nCONCLUSIONS\nThe newly cloned and characterized lipases fulfilled all requirements for high activity under physiological conditions. These novel enzymes are therefore promising candidates for an improved enzyme replacement therapy for exocrine pancreatic insufficiency.", "title": "" }, { "docid": "1c915d0ffe515aa2a7c52300d86e90ba", "text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.", "title": "" }, { "docid": "77f7644a5e2ec50b541fe862a437806f", "text": "This paper describes SRM (Scalable Reliable Multicast), a reliable multicast framework for application level framing and light-weight sessions. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The framework has been prototyped in wb, a distributed whiteboard application, and has been extensively tested on a global scale with sessions ranging from a few to more than 1000 participants. The paper describes the principles that have guided our design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies.", "title": "" }, { "docid": "7b215780b323aa3672d34ca243b1cf46", "text": "In this paper, we study the problem of semantic annotation on 3D models that are represented as shape graphs. A functional view is taken to represent localized information on graphs, so that annotations such as part segment or keypoint are nothing but 0-1 indicator vertex functions. Compared with images that are 2D grids, shape graphs are irregular and non-isomorphic data structures. 
To enable the prediction of vertex functions on them by convolutional neural networks, we resort to spectral CNN method that enables weight sharing by parametrizing kernels in the spectral domain spanned by graph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN, strives to overcome two key challenges: how to share coefficients and conduct multi-scale analysis in different parts of the graph for a single shape, and how to share information across related but different shapes that may be represented by very different graphs. Towards these goals, we introduce a spectral parametrization of dilated convolutional kernels and a spectral transformer network. Experimentally we tested SyncSpecCNN on various tasks, including 3D shape part segmentation and keypoint prediction. State-of-the-art performance has been achieved on all benchmark datasets.", "title": "" }, { "docid": "0f529d7db34417f248a3174ef9feb507", "text": "The purpose of this research is to conduct a comprehensive and systematic review of the literature in the field of `Supply Chain Risk Management' and identify important research gaps for potential research. Furthermore, a conceptual risk management framework is also proposed that encompasses holistic view of the field. `Systematic Literature Review' method is used to examine quality articles published over a time period of almost 15 years (2000 - June, 2014). The findings of the study are validated through text mining software. Systematic literature review has identified the progress of research based on various descriptive and thematic typologies. The review and text mining analysis have also provided an insight into major research gaps. Based on the identified gaps, a framework is developed that can help researchers model interdependencies between risk factors.", "title": "" }, { "docid": "2e09cce98d095904dd486a99b955cea0", "text": "We construct a large scale of causal knowledge in term of Fabula elements by extracting causal links from existing common sense ontology ConceptNet5. We design a Constrained Monte Carlo Tree Search (cMCTS) algorithm that allows users to specify positive and negative concepts to appear in the generated stories. cMCTS can find a believable causal story plot. We show the merits by experiments and discuss the remedy strategies in cMCTS that may generate incoherent causal plots. keywords: Fabula elements, causal story plots, constrained Monte Carlo Tree Search, user preference, believable story generation", "title": "" }, { "docid": "b44f24b54e45974421f799527391a9db", "text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. 
Thus, this animal model could be used to study mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.", "title": "" }, { "docid": "a81e4b95dfaa7887f66066343506d35f", "text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.", "title": "" }, { "docid": "3d390bed1ca485abd79073add7e781ba", "text": "Predicting the future to anticipate the outcome of events and actions is a critical attribute of autonomous agents; particularly for agents which must rely heavily on real time visual data for decision making. Working towards this capability, we address the task of predicting future frame segmentation from a stream of monocular video by leveraging the 3D structure of the scene. Our framework is based on learnable sub-modules capable of predicting pixel-wise scene semantic labels, depth, and camera ego-motion of adjacent frames. We further propose a recurrent neural network based model capable of predicting future ego-motion trajectory as a function of a series of past ego-motion steps. Ultimately, we observe that leveraging 3D structure in the model facilitates successful prediction, achieving state of the art accuracy in future semantic segmentation.", "title": "" }, { "docid": "737dfbd7637337c294ee70c05c62acb1", "text": "T he Pirogoff amputation, removal of the forefoot and talus followed by calcaneotibial arthrodesis, produces a lower extremity with a minimum loss of length that is capable of bearing full weight. Although the technique itself is not new, patients who have already undergone amputation of the contralateral leg may benefit particularly from this littleused amputation. Painless weight-bearing is essential for the patient who needs to retain the ability to make indoor transfers independently of helpers or a prosthesis. As the number of patients with peripheral vascular disease continues to increase, this amputation should be in the armamentarium of the treating orthopaedic surgeon. Our primary indication for a Pirogoff amputation is a forefoot lesion that is too extensive for reconstruction or nonoperative treatment because of gangrene or infection, as occurs in patients with diabetes or arteriosclerosis. Other causes, such as trauma, malignancy, osteomyelitis, congenital abnormalities, and rare cases of frostbite, are also considered. To enhance the success rate, we only perform surgery if four criteria are met: (1) the blood supply to the soft tissues and the calcaneal region should support healing, (2) there should be no osteomyelitis of the distal part of the tibia or the calcaneus, (3) the heel pad should be clinically viable and painless, and (4) the patient should be able to walk with two prostheses after rehabilitation. 
Warren mentioned uncontrolled diabetes mellitus, severe Charcot arthropathy of the foot, and smoking as relative contraindications. There are other amputation options. In developed countries, the most common indication for transtibial amputation is arteriosclerosis (>90%). Although the results of revascularization operations and interventional radiology are promising, amputation remains the only option for 40% of all patients with severe ischemia. Various types of amputation of the lower extremity have been described. The advantages and disadvantages have to be considered and discussed with the patient. For the Syme ankle disarticulation, amputation is performed at the level of the talocrural joint and the plantar fat pad is dissected from the calcaneus and is preserved. Woundhealing and proprioception are good, but patients have an inconvenient leg-length discrepancy and in some cases the heel is not pain-free on weight-bearing. Prosthetic fitting can be difficult because of a bulbous distal end or shift of the plantar fat pad. However, the latter complication can be prevented in most cases by anchoring the heel pad to the distal aspect of", "title": "" }, { "docid": "3e8de1702f4fd5da19175c29ad2b27ad", "text": "In this work we formulate the problem of image captioning as a multimodal translation task. Analogous to machine translation, we present a sequence-to-sequence recurrent neural networks (RNN) model for image caption generation. Different from most existing work where the whole image is represented by convolutional neural network (CNN) feature, we propose to represent the input image as a sequence of detected objects which feeds as the source sequence of the RNN model. In this way, the sequential representation of an image can be naturally translated to a sequence of words, as the target sequence of the RNN model. To represent the image in a sequential way, we extract the objects features in the image and arrange them in a order using convolutional neural networks. To further leverage the visual information from the encoded objects, a sequential attention layer is introduced to selectively attend to the objects that are related to generate corresponding words in the sentences. Extensive experiments are conducted to validate the proposed approach on popular benchmark dataset, i.e., MS COCO, and the proposed model surpasses the state-of-the-art methods in all metrics following the dataset splits of previous work. The proposed approach is also evaluated by the evaluation server of MS COCO captioning challenge, and achieves very competitive results, e.g., a CIDEr of 1.029 (c5) and 1.064 (c40).", "title": "" } ]
scidocsrr
32b63f6811f973662d2f6e568c5781dd
A Multi-dimensional Comparison of Toolkits for Machine Learning with Big Data
[ { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" } ]
[ { "docid": "af47d1cc068467eaee7b6129682c9ee3", "text": "Diffusion kurtosis imaging (DKI) is gaining rapid adoption in the medical imaging community due to its ability to measure the non-Gaussian property of water diffusion in biological tissues. Compared to traditional diffusion tensor imaging (DTI), DKI can provide additional details about the underlying microstructural characteristics of the neural tissues. It has shown promising results in studies on changes in gray matter and mild traumatic brain injury where DTI is often found to be inadequate. The DKI dataset, which has high-fidelity spatio-angular fields, is difficult to visualize. Glyph-based visualization techniques are commonly used for visualization of DTI datasets; however, due to the rapid changes in orientation, lighting, and occlusion, visually analyzing the much more higher fidelity DKI data is a challenge. In this paper, we provide a systematic way to manage, analyze, and visualize high-fidelity spatio-angular fields from DKI datasets, by using spherical harmonics lighting functions to facilitate insights into the brain microstructure.", "title": "" }, { "docid": "d0e2f8c9c7243f5a67e73faeb78038d1", "text": "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "title": "" }, { "docid": "07cb8967d6d347cbc8dd0645e5c1f4b0", "text": "Obtaining reliable data describing local poverty metrics at a granularity that is informative to policy-makers requires expensive and logistically difficult surveys, particularly in the developing world. Not surprisingly, the poverty stricken regions are also the ones which have a high probability of being a war zone, have poor infrastructure and sometimes have governments that do not cooperate with internationally funded development efforts. We train a CNN on free and publicly available daytime satellite images of the African continent from Landsat 7 to build a model for predicting local economic livelihoods. 
Only 5% of the satellite images can be associated with labels (which are obtained from DHS Surveys) and thus a semi-supervised approach using a GAN [33], albeit with a more stable-totrain flavor of GANs called the Wasserstein GAN regularized with gradient penalty [15] is used. The method of multitask learning is employed to regularize the network and also create an end-to-end model for the prediction of multiple poverty metrics.", "title": "" }, { "docid": "b7390d19beb199e21dac200f2f7021f3", "text": "In this paper, we propose a workflow and a machine learning model for recognizing handwritten characters on form document. The learning model is based on Convolutional Neural Network (CNN) as a powerful feature extraction and Support Vector Machines (SVM) as a high-end classifier. The proposed method is more efficient than modifying the CNN with complex architecture. We evaluated some SVM and found that the linear SVM using L1 loss function and L2 regularization giving the best performance both of the accuracy rate and the computation time. Based on the experiment results using data from NIST SD 192nd edition both for training and testing, the proposed method which combines CNN and linear SVM using L1 loss function and L2 regularization achieved a recognition rate better than only CNN. The recognition rate achieved by the proposed method are 98.85% on numeral characters, 93.05% on uppercase characters, 86.21% on lowercase characters, and 91.37% on the merger of numeral and uppercase characters. While the original CNN achieves an accuracy rate of 98.30% on numeral characters, 92.33% on uppercase characters, 83.54% on lowercase characters, and 88.32% on the merger of numeral and uppercase characters. The proposed method was also validated by using ten folds cross-validation, and it shows that the proposed method still can improve the accuracy rate. The learning model was used to construct a handwriting recognition system to recognize a more challenging data on form document automatically. The pre-processing, segmentation and character recognition are integrated into one system. The output of the system is converted into an editable text. The system gives an accuracy rate of 83.37% on ten different test form document.", "title": "" }, { "docid": "425270bbfd1290a0692afeea95fa090f", "text": "This paper introduces a bounding gait control algorithm that allows a successful implementation of duty cycle modulation in the MIT Cheetah 2. Instead of controlling leg stiffness to emulate a `springy leg' inspired from the Spring-Loaded-Inverted-Pendulum (SLIP) model, the algorithm prescribes vertical impulse by generating scaled ground reaction forces at each step to achieve the desired stance and total stride duration. Therefore, we can control the duty cycle: the percentage of the stance phase over the entire cycle. By prescribing the required vertical impulse of the ground reaction force at each step, the algorithm can adapt to variable duty cycles attributed to variations in running speed. Following linear momentum conservation law, in order to achieve a limit-cycle gait, the sum of all vertical ground reaction forces must match vertical momentum created by gravity during a cycle. In addition, we added a virtual compliance control in the vertical direction to enhance stability. 
The stiffness of the virtual compliance is selected based on the eigenvalue analysis of the linearized Poincaré map and the chosen stiffness is 700 N/m, which corresponds to around 12% of the stiffness used in the previous trotting experiments of the MIT Cheetah, where the ground reaction forces are purely caused by the impedance controller with equilibrium point trajectories. This indicates that the virtual compliance control does not significantly contributes to generating ground reaction forces, but to stability. The experimental results show that the algorithm successfully prescribes the duty cycle for stable bounding gaits. This new approach can shed a light on variable speed running control algorithm.", "title": "" }, { "docid": "bc4d41ba58f703da48ff202a9006f4bd", "text": "Today, Smart Home monitoring services have attracted much attention from both academia and industry. However, in the conventional monitoring mechanism the remote camera can not be accessed for remote monitoring anywhere and anytime. Besides, traditional approaches might have the limitation in local storage due to lack of device elasticity. In this paper, we proposed a Cloud-based monitoring framework to implement the remote monitoring services of Smart Home. The main technical issues considered include Data-Cloud storage, Local-Cache mechanism, Media device control, NAT traversal, etc. The implementation shows three use scenarios: (a) operating and controlling video cameras for remote monitoring through mobile devices or sound sensors; (b) streaming live video from cameras and sending captured image to mobile devices; (c) recording videos and images on a cloud computing platform for future playback. This system framework could be extended to other applications of Smart Home.", "title": "" }, { "docid": "e74573560a8da7be758c619ba85202df", "text": "This paper proposes two hybrid connectionist structural acoustical models for robust context independent phone like and word like units for speaker-independent recognition system. Such structure combines strength of Hidden Markov Models (HMM) in modeling stochastic sequences and the non-linear classification capability of Artificial Neural Networks (ANN). Two kinds of Neural Networks (NN) are investigated: Multilayer Perceptron (MLP) and Elman Recurrent Neural Networks (RNN). The hybrid connectionist-HMM systems use discriminatively trained NN to estimate the a posteriori probability distribution among subword units given the acoustic observations. We efficiently tested the performance of the conceived systems using the TIMIT database in clean and noisy environments with two perceptually motivated features: MFCC and PLP. Finally, the robustness of the systems is evaluated by using a new preprocessing stage for denoising based on wavelet transform. A significant improvement in performance is obtained with the proposed method.", "title": "" }, { "docid": "46200c35a82b11d989c111e8398bd554", "text": "A physics-based compact gallium nitride power semiconductor device model is presented in this work, which is the first of its kind. The model derivation is based on the classical drift-diffusion model of carrier transport, which expresses the channel current as a function of device threshold voltage and externally applied electric fields. The model is implemented in the Saber® circuit simulator using the MAST hardware description language. The model allows the user to extract the parameters from the dc I-V and C-V characteristics that are also available in the device datasheets. 
A commercial 80 V EPC GaN HEMT is used to demonstrate the dynamic validation of the model against the transient device characteristics in a double-pulse test and a boost converter circuit configuration. The simulated versus measured device characteristics show good agreement and validate the model for power electronics design and applications using the next generation of GaN HEMT devices.", "title": "" }, { "docid": "3bb4666a27f6bc961aa820d3f9301560", "text": "The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.", "title": "" }, { "docid": "00b98536f0ecd554442a67fb31f77f4c", "text": "We use a large, nationally-representative sample of working-age adults to demonstrate that personality (as measured by the Big Five) is stable over a four-year period. Average personality changes are small and do not vary substantially across age groups. Intra-individual personality change is generally unrelated to experiencing adverse life events and is unlikely to be economically meaningful. Like other non-cognitive traits, personality can be modeled as a stable input into many economic decisions. JEL classi cation: J3, C18.", "title": "" }, { "docid": "e0a8035f9e61c78a482f2e237f7422c6", "text": "Aims: This paper introduces how substantial decision-making and leadership styles relates with each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: Qualitative research approach was adopted in this study. A semi structure interview was use to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education international Islamic University", "title": "" }, { "docid": "d5cc92aad3e7f1024a514ff4e6379c86", "text": "This chapter describes the convergence of two of the most influential technologies in the last decade, namely business intelligence (BI) and the Semantic Web (SW). Business intelligence is used by almost any enterprise to derive important business-critical knowledge from both internal and (increasingly) external data. When using external data, most often found on the Web, the most important issue is knowing the precise semantics of the data. Without this, the results cannot be trusted. Here, Semantic Web technologies come to the rescue, as they allow semantics ranging from very simple to very complex to be specified for any web-available resource. SW technologies do not only support capturing the “passive” semantics, but also support active inference and reasoning on the data. The chapter first presents a motivating running example, followed by an introduction to the relevant SW foundation concepts. The chapter then goes on to survey the use of SW technologies for data integration, including semantic DOI: 10.4018/978-1-61350-038-5.ch014", "title": "" }, { "docid": "b82b5ebf186220f8bdb41b7631fd475d", "text": "Fraudulent activity on the Internet, in particular the practice known as ‘Phishing’, is on the increase. 
Although a number of technology focussed counter measures have been explored user behaviour remains fundamental to increased online security. Encouraging users to engage in secure online behaviour is difficult with a number of different barriers to change. Guided by a model adapted from health psychology this paper reports on a study designed to encourage secure behaviour online. The study aimed to investigate the effects of education via a training program and the effects of risk level manipulation on subsequent self-reported behaviour online. The training program ‘Anti-Phishing Phil’ informed users of the common types of phishing threats and how to identify them whilst the risk level manipulation randomly allocated participants to either high risk or low risk of becoming a victim of online fraud. Sixty-four participants took part in the study, which comprised of 9 males and 55 females with an age range of 18– 43 years. Participants were randomly allocated to one of four experimental groups. High threat information and/or the provision of phishing education were expected to increase self-reports of secure behaviour. Secure behaviour was measured at three stages, a baseline measure stage, an intention measure stage, and a 7-day follow-up measure stage. The results showed that offering a seemingly tailored risk message increased users’ intentions to act in a secure manner online regardless of whether the risk message indicated they were at high or low risk of fraud. There was no effect of the training programme on secure behaviour in general. The findings are discussed in relation to the model of behaviour change, information provision and the transferability of training. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0f3ce3e7467f9c61e40fca28ccd7f86b", "text": "This paper provides insight into a failure mechanism that impacts a broad range of industrial equipment. Voltage surges have often been blamed for unexplained equipment failure in the field. Extensive voltage monitoring data suggests that voltage sags occur much more frequently than voltage surges, and that current surges that accompany voltage sag recovery may be the actual culprit causing equipment damage. A serious limitation in equipment specification is highlighted, pointing to what is possibly the root-cause for a large percentage of unexplained equipment field failures. This paper also outlines the need for a standard governing the behavior of equipment under voltage sags, and suggests solutions to protect existing equipment in the field.", "title": "" }, { "docid": "ca2258408035374cd4e7d1519e24e187", "text": "In this paper we propose a novel application of Hidden Markov Models to automatic generation of informative headlines for English texts. We propose four decoding parameters to make the headlines appear more like Headlinese, the language of informative newspaper headlines. We also allow for morphological variation in words between headline and story English. Informal and formal evaluations indicate that our approach produces informative headlines, mimicking a Headlinese style generated by humans.", "title": "" }, { "docid": "c7e3fc9562a02818bba80d250241511d", "text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. 
In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.", "title": "" }, { "docid": "dbf419dabb53f9739a35db14877d2d90", "text": "Investigations in the development of lead-free piezoelectric ceramics have recently claimed properties comparable to that of PZT-based materials. In this work, the dielectric and piezoelectric properties of the various systems were contrasted in relation to their respective Curie temperatures. Though comparable with respect to TC, enhanced properties reported in the K,NaNbO3 family are the result of increased polarizability associated with the Torthor-tetragonal polymorphic phase transition being compositionally shifted downward and not from a morphotropic phase boundary (MPB) as widely reported. As expected, the properties are strongly temperature dependent unlike that observed for MPB systems. Analogous to PZT, enhanced properties are noted for MPB compositions in the Na,BiTiO3-BaTiO3 and the ternary system with K,BiTiO3, but offer properties significantly lower than that of PZTs. The consequence of a ferroelectric to antiferroelectric transition well below TC further limits their usefulness.", "title": "" }, { "docid": "c3af6eae1bd5f2901914d830280eca48", "text": "This paper proposes a novel approach for the classification of 3D shapes exploiting surface and volumetric clues inside a deep learning framework. The proposed algorithm uses three different data representations. The first is a set of depth maps obtained by rendering the 3D object. The second is a novel volumetric representation obtained by counting the number of filled voxels along each direction. Finally NURBS surfaces are fitted over the 3D object and surface curvature parameters are selected as the third representation. All the three data representations are fed to a multi-branch Convolutional Neural Network. Each branch processes a different data source and produces a feature vector by using convolutional layers of progressively reduced resolution. The extracted feature vectors are fed to a linear classifier that combines the outputs in order to get the final predictions. Experimental results on the ModelNet dataset show that the proposed approach is able to obtain a state-of-the-art performance.", "title": "" }, { "docid": "5abc2b1536d989ff77e23ee9db22f625", "text": "Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies the Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. 
The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.", "title": "" }, { "docid": "e6298cd08f89d3cb8a6f8a78c2f4ae49", "text": "We present a fast pattern matching algorithm with a large set of templates. The algorithm is based on the typical template matching speeded up by the dual decomposition; the Fourier transform and the Karhunen-Loeve transform. The proposed algorithm is appropriate for the search of an object with unknown distortion within a short period. Patterns with different distortion differ slightly from each other and are highly correlated. The image vector subspace required for effective representation can be defined by a small number of eigenvectors derived by the Karhunen-Loeve transform. A vector subspace spanned by the eigenvectors is generated, and any image vector in the subspace is considered as a pattern to be recognized. The pattern matching of objects with unknown distortion is formulated as the process to extract the portion of the input image, find the pattern most similar to the extracted portion in the subspace, compute normalized correlation between them at each location in the input image, and find the location with the best score. Searching for objects with unknown distortion requires vast computation. The formulation above makes it possible to decompose highly correlated reference images into eigenvectors, as well as to decompose images in frequency domain, and to speed up the process significantly. Index Terms —Template matching, pattern matching, Karhunen-Loeve transform, Fourier transform, eigenvector.", "title": "" } ]
scidocsrr
9a723925bceb1cb9556a568b31c76ed0
Infusing Creativity and Technology in 21st Century Education: A Systemic View for Change
[ { "docid": "ecddd4f80f417dcec49021065394c89a", "text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.", "title": "" }, { "docid": "1413a01f3c50ff5dfcfbeababfcd267c", "text": "Early studies indicated that teachersâ€TM enacted beliefs, particularly in terms of classroom technology practices, often did not align with their espoused beliefs. Researchers concluded this was due, at least in part, to a variety of external barriers that prevented teachers from using technology in ways that aligned more closely with their beliefs. However, many of these barriers (access, support, etc.) have since been eliminated in the majority of schools. This multiple case-study research was designed to revisit the question, “How do the pedagogical beliefs and classroom technology practices of teachers, recognized for their technology uses, align?â€​ Twelve K-12 classroom teachers were purposefully selected based on their awardwinning technology practices, supported by evidence from personal and/or classroom websites. Follow-up interviews were conducted to examine the correspondence between teachersâ€TM classroom practices and their pedagogical beliefs. Results a c Purchase Export", "title": "" } ]
[ { "docid": "51d8085e6709cf212c0ee3f792548eea", "text": "Sophisticated electronics are within reach of average users. Cooperation between wireless sensor networks and existing consumer electronic infrastructures can assist in the areas of health care and patient monitoring. This will improve the quality of life of patients, provide early detection for certain ailments, and improve doctor-patient efficiency. The goal of our work is to focus on health-related applications of wireless sensor networks. In this paper we detail our experiences building several prototypes and discuss the driving force behind home health monitoring and how current (and future) technologies will enable automated home health monitoring.", "title": "" }, { "docid": "327bbbee0087e15db04780291ded9fe6", "text": "Semantic Reliability is a novel correctness criterion for multicast protocols based on the concept of message obsolescence: A message becomes obsolete when its content or purpose is superseded by a subsequent message. By exploiting obsolescence, a reliable multicast protocol may drop irrelevant messages to find additional buffer space for new messages. This makes the multicast protocol more resilient to transient performance perturbations of group members, thus improving throughput stability. This paper describes our experience in developing a suite of semantically reliable protocols. It summarizes the motivation, definition, and algorithmic issues and presents performance figures obtained with a running implementation. The data obtained experimentally is compared with analytic and simulation models. This comparison allows us to confirm the validity of these models and the usefulness of the approach. Finally, the paper reports the application of our prototype to distributed multiplayer games.", "title": "" }, { "docid": "cb396e80b143c76a5be5aa4cff169ac2", "text": "This article describes a quantitative model, which suggests what the underlying mechanisms of cognitive control in a particular task-switching paradigm are, with relevance to task-switching performance in general. It is suggested that participants dynamically control response accuracy by selective attention, in the particular paradigm being used, by controlling stimulus representation. They are less efficient in dynamically controlling response representation. The model fits reasonably well the pattern of reaction time results concerning task switching, congruency, cue-target interval and response-repetition in a mixed task condition, as well as the differences between mixed task and pure task conditions.", "title": "" }, { "docid": "e2f300ad1450ac93c75ad1fd4b4cc02e", "text": "Understanding how appliances in a house consume power is important when making intelligent and informed decisions about conserving energy. Appliances can turn ON and OFF either by the actions of occupants or by automatic sensing and actuation (e.g., thermostat). It is also difficult to understand how much a load consumes at any given operational state. Occupants could buy sensors that would help, but this comes at a high financial cost. Power utility companies around the world are now replacing old electro-mechanical meters with digital meters (smart meters) that have enhanced communication capabilities. These smart meters are essentially free sensors that offer an opportunity to use computation to infer what loads are running and how much each load is consuming (i.e., load disaggregation). 
We present a new load disaggregation algorithm that uses a super-state hidden Markov model and a new Viterbi algorithm variant which preserves dependencies between loads and can disaggregate multi-state loads, all while performing computationally efficient exact inference. Our sparse Viterbi algorithm can efficiently compute sparse matrices with a large number of super-states. Additionally, our disaggregator can run in real-time on an inexpensive embedded processor using low sampling rates.", "title": "" }, { "docid": "14b9aaa9ff0be3ed0a8d420fb63f54dd", "text": "Stream reasoning studies the application of inference techniques to data characterised by being highly dynamic. It can find application in several settings, from Smart Cities to Industry 4.0, from Internet of Things to Social Media analytics. This year stream reasoning turns ten, and in this article we analyse its growth. In the first part, we trace the main results obtained so far, by presenting the most prominent studies. We start by an overview of the most relevant studies developed in the context of semantic web, and then we extend the analysis to include contributions from adjacent areas, such as database and artificial intelligence. Looking at the past is useful to prepare for the future: in the second part, we present a set of open challenges and issues that stream reasoning will face in the next future.", "title": "" }, { "docid": "dde4300fb4f29b5ee15bb5e2ef8fe44f", "text": "In this paper, we propose a static scheduling algorithm for allocating task graphs to fullyconnected multiprocessors. We discuss six recently reported scheduling algorithms and show that they possess one drawback or the other which can lead to poor performance. The proposed algorithm, which is called the Dynamic Critical-Path (DCP) scheduling algorithm, is different from the previously proposed algorithms in a number of ways. First, it determines the critical path of the task graph and selects the next node to be scheduled in a dynamic fashion. Second, it rearranges the schedule on each processor dynamically in the sense that the positions of the nodes in the partial schedules are not fixed until all nodes have been considered. Third, it selects a suitable processor for a node by looking ahead the potential start times of the remaining nodes on that processor, and schedules relatively less important nodes to the processors already in use. A global as well as a pair-wise comparison is carried out for all seven algorithms under various scheduling conditions. The DCP algorithm outperforms the previous algorithms by a considerable margin. Despite having a number of new features, the DCP algorithm has admissible time complexity, is economical in terms of the number of processors used and is suitable for a wide range of graph structures.", "title": "" }, { "docid": "a4e8edda99a01f79372a43f2eebcca1f", "text": "Autophagy occurs prior to apoptosis and plays an important role in cell death regulation during spinal cord injury (SCI). This study aimed to determine the effects and potential mechanism of the glucagon-like peptide-1 (GLP-1) agonist extendin-4 (Ex-4) in SCI. Seventy-two male Sprague Dawley rats were randomly assigned to sham, SCI, 2.5 μg Ex-4, and 10 μg Ex-4 groups. To induce SCI, a 10-g iron rod was dropped from a 20-mm height to the spinal cord surface. Ex-4 was administered via intraperitoneal injection immediately after surgery. 
Motor function evaluation with the Basso Beattie Bresnahan (BBB) locomotor rating scale indicated significantly increased scores (p < 0.01) in the Ex-4-treated groups, especially 10 μg, which demonstrated the neuroprotective effect of Ex-4 after SCI. The light chain 3-II (LC3-II) and Beclin 1 protein expression determined via western blot and the number of autophagy-positive neurons via immunofluorescence double labeling were increased by Ex-4, which supports promotion of autophagy (p < 0.01). The caspase-3 protein level and neuronal apoptosis via transferase UTP nick end labeling (TUNEL)/NeuN/DAPI double labeling were significantly reduced in the Ex-4-treated groups, which indicates anti-apoptotic effects (p < 0.01). Finally, histological assessment via Nissl staining demonstrated the Ex-4 groups exhibited a significantly greater number of surviving neurons and less cavity (p < 0.01). To our knowledge, this is the first study to indicate that Ex-4 significantly enhances motor function in rats after SCI, and these effects are associated with the promotion of autophagy and inhibition of apoptosis.", "title": "" }, { "docid": "6a8ac2a2786371dcb043d92fa522b726", "text": "We propose a modular reinforcement learning algorithm which decomposes a Markov decision process into independent modules. Each module is trained using Sarsa(λ). We introduce three algorithms for forming global policy from modules policies, and demonstrate our results using a 2D grid world.", "title": "" }, { "docid": "de703c909703b2dcabf7d99a4b5e1493", "text": "The ultimate goal of this paper is to print radio frequency (RF) and microwave structures using a 3-D platform and to pattern metal films on nonplanar structures. To overcome substrate losses, air core substrates that can readily be printed are utilized. To meet the challenge of patterning conductive layers on complex or nonplanar printed structures, two novel self-aligning patterning processes are demonstrated. One is a simple damascene-like process, and the other is a lift-off process using a 3-D printed lift-off mask layer. A range of microwave and RF circuits are designed and demonstrated between 1 and 8 GHz utilizing these processes. Designs are created and simulated using Keysight Advanced Design System and ANSYS High Frequency Structure Simulator. Circuit designs include a simple microstrip transmission line (T-line), coupled-line bandpass filter, circular ring resonator, T-line resonator, resonant cavity structure, and patch antenna. A commercially available 3-D printer and metal sputtering system are used to realize the designs. Both simulated and measured results of these structures are presented.", "title": "" }, { "docid": "52fd6836f24598da6aff5a82dafa6cc0", "text": "The transgender and gender non-conforming (TGNC) community continues to represent a notably marginalized population exposed to pervasive discrimination, microaggressions, and victimization. Congruent with the minority stress model, TGNC individuals persistently experience barriers to wellbeing in contemporary society; however, research uncovering resilience-based pathways to health among this population is sparse. This study aimed to explore the impact and interaction between internalized transphobic stigma and a potential buffer against minority stress-social connectedness-on the self-esteem of TGNC identified adults. Data were collected from 65 TGNC identified adults during a national transgender conference. 
Multiple regression analysis reveals that self-esteem is negatively impacted by internalized transphobia and positively impacted by social connectedness. Social connectedness did not significantly moderate the relationship between internalized transphobia and self-esteem. Micro and macro interventions aimed at increasing social connectedness and decreasing internalized transphobic stigma may be paramount for enhancing resiliency and wellbeing in the TGNC community.", "title": "" }, { "docid": "fa313356d7267e963f75cd2ba4452814", "text": "INTRODUCTION\nStroke is a major cause of death and disability. Accurately predicting stroke outcome from a set of predictive variables may identify high-risk patients and guide treatment approaches, leading to decreased morbidity. Logistic regression models allow for the identification and validation of predictive variables. However, advanced machine learning algorithms offer an alternative, in particular, for large-scale multi-institutional data, with the advantage of easily incorporating newly available data to improve prediction performance. Our aim was to design and compare different machine learning methods, capable of predicting the outcome of endovascular intervention in acute anterior circulation ischaemic stroke.\n\n\nMETHOD\nWe conducted a retrospective study of a prospectively collected database of acute ischaemic stroke treated by endovascular intervention. Using SPSS®, MATLAB®, and Rapidminer®, classical statistics as well as artificial neural network and support vector algorithms were applied to design a supervised machine capable of classifying these predictors into potential good and poor outcomes. These algorithms were trained, validated and tested using randomly divided data.\n\n\nRESULTS\nWe included 107 consecutive acute anterior circulation ischaemic stroke patients treated by endovascular technique. Sixty-six were male and the mean age of 65.3. All the available demographic, procedural and clinical factors were included into the models. The final confusion matrix of the neural network, demonstrated an overall congruency of ∼ 80% between the target and output classes, with favourable receiving operative characteristics. However, after optimisation, the support vector machine had a relatively better performance, with a root mean squared error of 2.064 (SD: ± 0.408).\n\n\nDISCUSSION\nWe showed promising accuracy of outcome prediction, using supervised machine learning algorithms, with potential for incorporation of larger multicenter datasets, likely further improving prediction. Finally, we propose that a robust machine learning system can potentially optimise the selection process for endovascular versus medical treatment in the management of acute stroke.", "title": "" }, { "docid": "5c8ab947856945b32d4d3e0edc89a9e0", "text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). 
The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.", "title": "" }, { "docid": "042431e96028ed9729e6b174a78d642d", "text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.", "title": "" }, { "docid": "5455e7d53e6de4cbe97cbcdf6eea9806", "text": "OBJECTIVE\nTo evaluate the clinical and radiological results in the surgical treatment of moderate and severe hallux valgus by performing percutaneous double osteotomy.\n\n\nMATERIAL AND METHOD\nA retrospective study was conducted on 45 feet of 42 patients diagnosed with moderate-severe hallux valgus, operated on in a single centre and by the same surgeon from May 2009 to March 2013. Two patients were lost to follow-up. Clinical and radiological results were recorded.\n\n\nRESULTS\nAn improvement from 48.14 ± 4.79 points to 91.28 ± 8.73 points was registered using the American Orthopedic Foot and Ankle Society (AOFAS) scale. A radiological decrease from 16.88 ± 2.01 to 8.18 ± 3.23 was observed in the intermetatarsal angle, and from 40.02 ± 6.50 to 10.51 ± 6.55 in hallux valgus angle. There was one case of hallux varus, one case of non-union, a regional pain syndrome type I, an infection that resolved with antibiotics, and a case of loosening of the osteosynthesis that required an open surgical refixation.\n\n\nDISCUSSION\nPercutaneous distal osteotomy of the first metatarsal when performed as an isolated procedure, show limitations when dealing with cases of moderate and severe hallux valgus. The described technique adds the advantages of minimally invasive surgery by expanding applications to severe deformities.\n\n\nCONCLUSIONS\nPercutaneous double osteotomy is a reproducible technique for correcting severe deformities, with good clinical and radiological results with a complication rate similar to other techniques with the advantages of shorter surgical times and less soft tissue damage.", "title": "" }, { "docid": "a3b680c8c9eb00b6cc66ec24aeadaa66", "text": "With the application of Internet of Things and services to manufacturing, the fourth stage of industrialization, referred to as Industrie 4.0, is believed to be approaching. 
For Industrie 4.0 to come true, it is essential to implement the horizontal integration of inter-corporation value network, the end-to-end integration of engineering value chain, and the vertical integration of factory inside. In this paper, we focus on the vertical integration to implement flexible and reconfigurable smart factory. We first propose a brief framework that incorporates industrial wireless networks, cloud, and fixed or mobile terminals with smart artifacts such as machines, products, and conveyors. Then, we elaborate the operational mechanism from the perspective of control engineering, that is, the smart artifacts form a self-organized system which is assisted with the feedback and coordination blocks that are implemented on the cloud and based on the big data analytics. In addition, we outline the main technical features and beneficial outcomes and present a detailed design scheme. We conclude that the smart factory of Industrie 4.0 is achievable by extensively applying the existing enabling technologies while actively coping with the technical challenges.", "title": "" }, { "docid": "a171aec0d1989afc2e1f09f08b493596", "text": "The internet era has generated a requirement for low cost, anonymous and rapidly verifiable transactions to be used for online barter, and fast settling money have emerged as a consequence. For the most part, e-money has fulfilled this role, but the last few years have seen two new types of money emerge. Centralised virtual currencies, usually for the purpose of transacting in social and gaming economies, and crypto-currencies, which aim to eliminate the need for financial intermediaries by offering direct peer-to-peer online payments. We describe the historical context which led to the development of these currencies and some modern and recent trends in their uptake, in terms of both usage in the real economy and as investment products. As these currencies are purely digital constructs, with no government or local authority backing, we then discuss them in the context of monetary theory, in order to determine how they may be have value under each. Finally, we provide an overview of the state of regulatory readiness in terms of dealing with transactions in these currencies in various regions of the world.", "title": "" }, { "docid": "a697f85ad09699ddb38994bd69b11103", "text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.", "title": "" }, { "docid": "2c1bd88f0fd23c6b63315aea067670b0", "text": "This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant [29] of temporal ensembling [14], a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness.
Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.", "title": "" }, { "docid": "dcd116e601c9155d60364c19a1f0dfb7", "text": "The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure was developed to aid clinicians with a dimensional assessment of psychopathology; however, this measure resembles a screening tool for several symptomatic domains. The objective of the current study was to examine the basic parameters of sensitivity, specificity, positive and negative predictive power of the measure as a screening tool. One hundred and fifty patients in a correctional community center filled out the measure prior to a psychiatric evaluation, including the Mini International Neuropsychiatric Interview screen. The above parameters were calculated for the domains of depression, mania, anxiety, and psychosis. The results showed that the sensitivity and positive predictive power of the studied domains was poor because of a high rate of false positive answers on the measure. However, when the lowest threshold on the Cross-Cutting Symptom Measure was used, the sensitivity of the anxiety and psychosis domains and the negative predictive values for mania, anxiety and psychosis were good. In conclusion, while it is foreseeable that some clinicians may use the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a screening tool, it should not be relied on to identify positive findings. It functioned well in the negative prediction of mania, anxiety and psychosis symptoms.", "title": "" }, { "docid": "a3b4e8b4a54921da210b42e43fc2e7ff", "text": "CONTEXT\nRecent reports show that obesity and diabetes have increased in the United States in the past decade.\n\n\nOBJECTIVE\nTo estimate the prevalence of obesity, diabetes, and use of weight control strategies among US adults in 2000.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThe Behavioral Risk Factor Surveillance System, a random-digit telephone survey conducted in all states in 2000, with 184 450 adults aged 18 years or older.\n\n\nMAIN OUTCOME MEASURES\nBody mass index (BMI), calculated from self-reported weight and height; self-reported diabetes; prevalence of weight loss or maintenance attempts; and weight control strategies used.\n\n\nRESULTS\nIn 2000, the prevalence of obesity (BMI >/=30 kg/m(2)) was 19.8%, the prevalence of diabetes was 7.3%, and the prevalence of both combined was 2.9%. Mississippi had the highest rates of obesity (24.3%) and of diabetes (8.8%); Colorado had the lowest rate of obesity (13.8%); and Alaska had the lowest rate of diabetes (4.4%). Twenty-seven percent of US adults did not engage in any physical activity, and another 28.2% were not regularly active. Only 24.4% of US adults consumed fruits and vegetables 5 or more times daily. Among obese participants who had had a routine checkup during the past year, 42.8% had been advised by a health care professional to lose weight. Among participants trying to lose or maintain weight, 17.5% were following recommendations to eat fewer calories and increase physical activity to more than 150 min/wk.\n\n\nCONCLUSIONS\nThe prevalence of obesity and diabetes continues to increase among US adults. Interventions are needed to improve physical activity and diet in communities nationwide.", "title": "" } ]
scidocsrr
d7210cbc7c3d8abbabf9b53bd608a2f3
QARLA: A Framework for the Evaluation of Text Summarization Systems
[ { "docid": "bd4dde3f5b7ec9dcd711a538b973ef1e", "text": "Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and their average, the F-measure. The unigrambased F-measure has significantly higher correlation with human judgments than recently proposed alternatives. More importantly, this standard measure has an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. The relevant software is publicly available from http://nlp.cs.nyu.edu/GTM/.", "title": "" } ]
[ { "docid": "0f7420282b9e16ef6fd26b87fe40eae2", "text": "This paper presents a robot localization system for indoor environments using WiFi signal strength measure. We analyse the main causes of the WiFi signal strength variation and we experimentally demonstrate that a localization technique based on a propagation model doesn’t work properly in our test-bed. We have carried out a localization system based on a priori radio-map obtained automatically from a robot navigation in the environment in a semi-autonomous way. We analyse the effect of reducing calibration effort in order to diminish practical barriers to wider adoption of this type of location measurement technique. Experimental results using a real robot moving are shown. Finally, the conclusions and future works are", "title": "" }, { "docid": "a4affb4b3a83573571e1af3009b187f6", "text": " Existing path following algorithms for graph matching can be viewed as special cases of the numerical continuation method (NCM), and correspond to particular implementation named generic predictor corrector (GPC).  The GPC approach succeeds at regular points, but may fail at singular points. Illustration of GPC and the proposed method is shown in Fig. 1.  This paper presents a branching path following (BPF) method to exploring potentially better paths at singular points to improve matching performance. Tao Wang , Haibin Ling 1,3, Congyan Lang , Jun Wu 1Meitu HiScene Lab, HiScene Information Technologies, Shanghai, China 2 School of Computer & Information Technology, Beijing Jiaotong University, Beijing 100044, China 3 Computer & Information Sciences Department, Temple University, Philadelphia 19122, USA Email: twang@bjtu.edu.cn, hbling@temple.edu, cylang@bjtu.edu.cn, wuj@bjtu.edu.cn Branching Path Following for Graph Matching", "title": "" }, { "docid": "7aef0d2adab8400fce3caf6350d2ebdb", "text": "We report a customized gene panel assay based on multiplex long-PCR followed by third generation sequencing on nanopore technology (MinION), designed to analyze five frequently mutated genes in chronic lymphocytic leukemia (CLL): TP53, NOTCH1, BIRC3, SF3B1 and MYD88. For this purpose, 12 patients were selected according to specific cytogenetic and molecular features significantly associated with their mutational status. In addition, simultaneous analysis of the targets genes was performed by molecular assays or Sanger Sequencing. Data analysis included mapping to the GRCh37 human reference genome, variant calling and annotation, and average sequencing depth/error rate analysis. The sequencing depth resulted on average higher for smaller amplicons, and the final breadth of coverage of the panel was 94.1%. The error rate was about 6% and 2% for insertions/deletions and single nucleotide variants, respectively. Our gene panel allows analysis of the prognostically relevant genes in CLL, with two PCRs per patient. This strategy offers an easy and affordable workflow, although further advances are required to improve the accuracy of the technology and its use in the clinical field. Nevertheless, the rapid and constant development of nanopore technology, in terms of chemistry advances, more accurate basecallers and analysis software, offers promise for a wide use of MinION in the future.", "title": "" }, { "docid": "cdd3dd7a367027ebfe4b3f59eca99267", "text": "3 Computation of the shearlet transform 13 3.1 Finite discrete shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.2 A discrete shearlet frame . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . 16 3.3 Inversion of the shearlet transform . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.4 Smooth shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.5 Implementation details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.1 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.2 Computation of spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.6 Short documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3.7 Download & Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 3.8 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.9 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32", "title": "" }, { "docid": "6951f051c3fe9ab24259dcc6f812fc68", "text": "User Generated Content has become very popular since the birth of web services such as YouTube allowing the distribution of such user-produced media content in an easy manner. YouTube-like services are different from existing traditional VoD services because the service provider has only limited control over the creation of new content. We analyze how the content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. The analysis of the traffic shows that: (1) No strong correlation is observed between global and local popularity; (2) neither time scale nor user population has an impact on the local popularity distribution; (3) video clips of local interest have a high local popularity. Using our measurement data to drive trace-driven simulations, we also demonstrate the implications of alternative distribution infrastructures on the performance of a YouTube-like VoD service. The results of these simulations show that client-based local caching, P2P-based distribution, and proxy caching can reduce network traffic significantly and allow faster access to video clips.", "title": "" }, { "docid": "df6a350d33f097dc3fdd1f966ad3a7c1", "text": "This paper presents the design and implementation of SpotFi, an accurate indoor localization system that can be deployed on commodity WiFi infrastructure. SpotFi only uses information that is already exposed by WiFi chips and does not require any hardware or firmware changes, yet achieves the same accuracy as state-of-the-art localization systems. SpotFi makes two key technical contributions. First, SpotFi incorporates super-resolution algorithms that can accurately compute the angle of arrival (AoA) of multipath components even when the access point (AP) has only three antennas. Second, it incorporates novel filtering and estimation techniques to identify AoA of direct path between the localization target and AP by assigning values for each path depending on how likely the particular path is the direct path. Our experiments in a multipath rich indoor environment show that SpotFi achieves a median accuracy of 40 cm and is robust to indoor hindrances such as obstacles and multipath.", "title": "" }, { "docid": "b7c262601367c847a7dce282c7397242", "text": "We present nrgrep (\\nondeterministic reverse grep\"), a new pattern matching tool designed for eecient search of complex patterns. 
Unlike previous tools of the grep family, such as agrep and Gnu grep, nrgrep is based on a single and uniform concept: the bit-parallel simulation of a nondeterministic suffix automaton. As a result, nrgrep can find from simple patterns to regular expressions, exactly or allowing errors in the matches, with an efficiency that degrades smoothly as the complexity of the searched pattern increases. Another concept fully integrated into nrgrep and that contributes to this smoothness is the selection of adequate subpatterns for fast scanning, which is also absent in many current tools. We show that the efficiency of nrgrep is similar to that of the fastest existing string matching tools for the simplest patterns, and by far unpaired for more complex patterns.", "title": "" }, { "docid": "cfc884f446a878df78b32203d7dfde18", "text": "We consider the problems of motion-compensated frame interpolation (MCFI) and bidirectional prediction in a video coding environment. These applications generally require good motion estimates at the decoder. In this paper, we use a multiscale optical-flow-based motion estimator that provides smooth, natural motion fields under bit-rate constraints. These motion estimates scale well with change in temporal resolution and provide considerable flexibility in the design and operation of coders and decoders. In the MCFI application, this estimator provides excellent interpolated frames that are superior to conventional motion estimators, both visually and in terms of PSNR. We also consider the effect of occlusions in the bidirectional prediction application, and introduce a dense label field that complements our motion estimator. This label field enables us to adaptively weight the forward and backward predictions, and gives us substantial visual and PSNR improvements in the covered/uncovered regions of the sequence.", "title": "" }, { "docid": "712636d3a1dfe2650c0568c8f7cf124c", "text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead.
The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.", "title": "" }, { "docid": "dcedb6bee075c3b0b24bd1475cf5c536", "text": "We study how to learn a semantic parser of state-of-the-art accuracy with less supervised training data. We conduct our study on WikiSQL, the largest hand-annotated semantic parsing dataset to date. First, we demonstrate that question generation is an effective method that empowers us to learn a state-ofthe-art neural network based semantic parser with thirty percent of the supervised training data. Second, we show that applying question generation to the full supervised training data further improves the state-of-the-art model. In addition, we observe that there is a logarithmic relationship between the accuracy of a semantic parser and the amount of training data.", "title": "" }, { "docid": "72ad9915e3f4afb9be4528ac04a9e5aa", "text": "A sensor isolation system was developed to reduce vibrational and noise effects on MEMS IMU sensors. A single degree of freedom model of an isolator was developed and simulated. Then a prototype was constructed for use with a Microstrain 3DM-GX3-25 IMU sensor and experimentally tested on a six DOF motion platform. An order of magnitude noise reduction was observed on the z accelerometer up to seven Hz. The isolator was then deployed on a naval ship along with a DMS TSS-25 IMU used as a truth measurement and a rigid mounted 3DM sensor was used for comparison. Signal quality improvements of the IMU were characterized and engine noise at 20 Hz was reduced by tenfold on x, y, and z accelerometers. A heave estimation algorithm was implemented and several types of filters were evaluated. Lab testing with a six DOF motion platform with pure sinusoidal motion, a fixed frequency four pole bandpass filter provided the least heave error at 12.5% of full scale or 0.008m error. When the experimental sea data was analyzed a fixed three pole highpass filter yielded the most accurate results of the filters tested. A heave period estimator was developed to adjust the filter cutoff frequencies for varying sea conditions. Since the ship motions were small, the errors w.r.t. full scale were rather large at 78% RMS as a worst case and 44% for a best case. In absolute terms when the variable filters and isolator were implemented, the best case peak and RMS errors were 0.015m and 0.050m respectively. The isolator improves the heave accuracy by 200% to 570% when compared with a rigidly mounted 3DM sensor.", "title": "" }, { "docid": "21c4a6bb8fee4e403c6cd384e1e423be", "text": "Fault detection prediction of FAB (wafer fabrication) process in semiconductor manufacturing process is possible that improve product quality and reliability in accordance with the classification performance. However, FAB process is sometimes due to a fault occurs. And mostly it occurs “pass”. Hence, data imbalance occurs in the pass/fail class. If the data imbalance occurs, prediction models are difficult to predict “fail” class because increases the bias of majority class (pass class). In this paper, we propose the SMOTE (Synthetic Minority Oversampling Technique) based over sampling method for solving problem of data imbalance. 
The proposed method solve the imbalance of the between pass and fail by oversampling the minority class of fail. In addition, by applying the fault detection prediction model to measure the performance.", "title": "" }, { "docid": "ac3223b0590216936cc2f48f6a61dc40", "text": "It is greatly demanded that to develop a kind of stably stair climbing mobile vehicle to assist the physically handicapped in moving outdoors. In this paper, we first propose a novel leg-wheel hybrid stair-climbing vehicle, \"Zero Carrier\", which consists of eight unified prismatic-joint legs, four of which attached active wheels and other four attached passive casters. Zero Carrier can be designed lightweight, compact, powerful, together with its significant stability on stair climbing motion, since its mechanism is mostly concentrated in its eight simplified legs. We discuss the leg mechanism and control method of the first trial model, Zero Carrier I, and verify its performance based on the experiments of stair climbing and moving over obstacles performed by Zero Carrier I", "title": "" }, { "docid": "b74ee9d63787d93411a4b37e4ed6882d", "text": "We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.", "title": "" }, { "docid": "f0d8d6d1adaa765153f2ec93266889a3", "text": "We present a new approach to localize extensive facial landmarks with a coarse-to-fine convolutional network cascade. Deep convolutional neural networks (DCNN) have been successfully utilized in facial landmark localization for two-fold advantages: 1) geometric constraints among facial points are implicitly utilized, 2) huge amount of training data can be leveraged. However, in the task of extensive facial landmark localization, a large number of facial landmarks (more than 50 points) are required to be located in a unified system, which poses great difficulty in the structure design and training process of traditional convolutional networks. In this paper, we design a four-level convolutional network cascade, which tackles the problem in a coarse-to-fine manner. In our system, each network level is trained to locally refine a subset of facial landmarks generated by previous network levels. In addition, each level predicts explicit geometric constraints (the position and rotation angles of a specific facial component) to rectify the inputs of the current network level. 
The combination of coarse-to-fine cascade and geometric refinement enables our system to locate extensive facial landmarks (68 points) accurately in the 300-W facial landmark localization challenge.", "title": "" }, { "docid": "476bb80edf6c54f0b6415d19f027ee19", "text": "Spin-transfer torque (STT) switching demonstrated in submicron sized magnetic tunnel junctions (MTJs) has stimulated considerable interest for developments of STT switched magnetic random access memory (STT-MRAM). Remarkable progress in STT switching with MgO MTJs and increasing interest in STTMRAM in semiconductor industry have been witnessed in recent years. This paper will present a review on the progress in the intrinsic switching current density reduction and STT-MRAM prototype chip demonstration. Challenges to overcome in order for STT-MRAM to be a mainstream memory technology in future technology nodes will be discussed. Finally, potential applications of STT-MRAM in embedded and standalone memory markets will be outlined.", "title": "" }, { "docid": "4872da79e7d01e8bb2a70ab17c523118", "text": "In recent years, social media has become a customer touch-point for the business functions of marketing, sales and customer service. We aim to show that intention analysis might be useful to these business functions and that it can be performed effectively on short texts (at the granularity level of a single sentence). We demonstrate a scheme of categorization of intentions that is amenable to automation using simple machine learning techniques that are language-independent. We discuss the grounding that this scheme of categorization has in speech act theory. In the demonstration we go over a number of usage scenarios in an attempt to show that the use of automatic intention detection tools would benefit the business functions of sales, marketing and service. We also show that social media can be used not just to convey pleasure or displeasure (that is, to express sentiment) but also to discuss personal needs and to report problems (to express intentions). We evaluate methods for automatically discovering intentions in text, and establish that it is possible to perform intention analysis on social media with an accuracy of 66.97%± 0.10%.", "title": "" }, { "docid": "7f16ed65f6fd2bcff084d22f76740ff3", "text": "The past few years have witnessed a growth in size and computational requirements for training and inference with neural networks. Currently, a common approach to address these requirements is to use a heterogeneous distributed environment with a mixture of hardware devices such as CPUs and GPUs. Importantly, the decision of placing parts of the neural models on devices is often made by human experts based on simple heuristics and intuitions. In this paper, we propose a method which learns to optimize device placement for TensorFlow computational graphs. Key to our method is the use of a sequence-tosequence model to predict which subsets of operations in a TensorFlow graph should run on which of the available devices. The execution time of the predicted placements is then used as the reward signal to optimize the parameters of the sequence-to-sequence model. 
Our main result is that on Inception-V3 for ImageNet classification, and on RNN LSTM, for language modeling and neural machine translation, our model finds non-trivial device placements that outperform hand-crafted heuristics and traditional algorithmic methods.", "title": "" }, { "docid": "2a717b823caaaa0187d25b04305f13ee", "text": "BACKGROUND\nDo peripersonal space for acting on objects and interpersonal space for interacting with con-specifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance.\n\n\nMETHODOLOGY\nParticipants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active).\n\n\nPRINCIPAL FINDINGS\nComfort-distance was larger than other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants.\n\n\nCONCLUSIONS\nThese findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, at different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space.", "title": "" }, { "docid": "49568236b0e221053c32b73b896d3dde", "text": "The continuous growth in the size and use of the Internet is creating difficulties in the search for information. A sophisticated method to organize the layout of the information and assist user navigation is therefore particularly important. In this paper, we evaluate the feasibility of using a self-organizing map (SOM) to mine web log data and provide a visual tool to assist user navigation. We have developed LOGSOM, a system that utilizes Kohonen’s self-organizing map to organize web pages into a two-dimensional map. The organization of the web pages is based solely on the users’ navigation behavior, rather than the content of the web pages. The resulting map not only provides a meaningful navigation tool (for web users) that is easily incorporated with web browsers, but also serves as a visual analysis tool for webmasters to better understand the characteristics and navigation behaviors of web users visiting their pages. D 2002 Elsevier Science B.V. All rights reserved.", "title": "" } ]
scidocsrr
fc3b076b1c43548122d8913121f567e3
Multi-Agent Systems for the Simulation of Land-Use and Land-Cover Change: A Review
[ { "docid": "6d1f374686b98106ab4221066607721b", "text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the variousìnstitu-tional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brain storming session. Physics has always been the role model for other aspiring`hard' sciences, and physicists seem to have succeeded in institutional-izing a `permanent revolution' in their own methodology , i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media , a difficult line to tread if one does not want to appear …", "title": "" } ]
[ { "docid": "55304b1a38d49cd65658964c3aea5df5", "text": "In this paper, we take the view that any formalization of commitments has to come together with a formalization of time, events/actions and change. We enrich a suitable formalism for reasoning about time, event/action and change in order to represent and reason about commitments. We employ a three-valued based temporal first-order non-monotonic logic (TFONL) that allows an explicit representation of time and events/action. TFONL subsumes the action languages presented in the literature and takes into consideration the frame, qualification and ramification problems, and incorporates to a domain description the set of rules governing change. It can handle protocols for the different types of dialogues such as information seeking, inquiry and negotiation. We incorporate commitments into TFONL to obtain Com-TFONL. Com-TFONL allows an agent to reason about its commitments and about other agents’ behaviour during a dialogue. Thus, agents can employ social commitments to act on, argue with and reason about during interactions with other agents. Agents may use their reasoning and argumentative capabilities in order to determine the appropriate communicative acts during conversations. Furthermore, Com-TFONL allows for an integration of commitments and arguments which helps in capturing the public aspects of a conversation and the reasoning aspects required in coherent conversations.", "title": "" }, { "docid": "a1465606b2ef01037023bb0660b0bcc1", "text": "Heart rate is one of the most important vital signals for personal health tracking. A number of smartphone-based heart rate estimation systems have been proposed over the years. However, they either depend on special hardware sensors or suffer from the high noise due to the weakness of the heart signals, affecting their accuracy in many practical scenarios.\n Inspired by medical studies about the heart motion mechanics, we propose the HeartSense heart rate estimation system. Specifically, we show that the gyroscope sensor is the most sensitive sensor for measuring the heart rate. To further counter noise and handle different practical scenarios, we introduce a novel quality metric that allows us to fuse the different gyroscope axes in a probabilistic framework to achieve a robust and accurate estimate.\n We have implemented and evaluated our system on different Android phones. Results using 836 experiments on different subjects in practical scenarios with a side-by-side comparison with other systems show that HeartSense can achieve 1.03 bpm median absolute error for heart rate estimation. This is better than the state-of-the-art by more than 147% in median error, highlighting HeartSense promise as a ubiquitous system for medical and personal well-being applications.", "title": "" }, { "docid": "bf085248cf23eb064b10424d08a99d5e", "text": "Standard methods of counting binary ones on a computer with a 704 type instruction code require an inner loop which is carried out once for each bit in the machine word. Program 1 (written in SAP language for purposes of illustration) is an example of such a standard program.", "title": "" }, { "docid": "e2302f7cd00b4c832a6a708dc6775739", "text": "This article provides theoretically and practically grounded assistance to companies that are today engaged primarily in non‐digital industries in the development and implementation of business models that use the Internet of Things. 
To that end, we investigate the role of the Internet in business models in general in the first section. We conclude that the significance of the Internet in business model innovation has increased steadily since the 1990s, that each new Internet wave has given rise to new digital business model patterns, and that the biggest breakthroughs to date have been made in digital industries. In the second section, we show that digital business model patterns have now become relevant in physical industries as well. The separation between physical and digital industries is now consigned to the past. The key to this transformation is the Internet of Things which makes possible hybrid solutions that merge physical products and digital services. From this, we derive very general business model logic for the Internet of Things and some specific components and patterns for business models. Finally we sketch out the central challenges faced in implementing such hybrid business models and point to possible solutions. The Influence of the Internet on Business Models to Date", "title": "" }, { "docid": "0a432546553ffbb06690495d5c858e19", "text": "Since the first reported death in 1977, scores of seemingly healthy Hmong refugees have died mysteriously and without warning from what has come to be known as Sudden Unexpected Nocturnal Death Syndrome (SUNDS). To date medical research has provided no adequate explanation for these sudden deaths. This study is an investigation into the changing impact of traditional beliefs as they manifest during the stress of traumatic relocation. In Stockton, California, 118 Hmong men and women were interviewed regarding their awareness of and personal experience with a traditional nocturnal spirit encounter. An analysis of this data reveals that the supranormal attack acts as a trigger for Hmong SUNDS.", "title": "" }, { "docid": "6b252d02e013519d1bd12dfcb3641013", "text": "BACKGROUND\nDuplex ultrasound investigation has become the reference standard in assessing the morphology and haemodynamics of the lower limb veins. The project described in this paper was an initiative of the Union Internationale de Phlébologie (UIP). The aim was to obtain a consensus of international experts on the methodology to be used for assessment of anatomy of superficial and perforating veins in the lower limb by ultrasound imaging.\n\n\nMETHODS\nThe authors performed a systematic review of the published literature on duplex anatomy of the superficial and perforating veins of the lower limbs; afterwards they invited a group of experts from a wide range of countries to participate in this project. Electronic submissions from the authors and the experts (text and images) were made available to all participants via the UIP website. The authors prepared a draft document for discussion at the UIP Chapter meeting held in San Diego, USA in August 2003. Following this meeting a revised manuscript was circulated to all participants and further comments were received by the authors and included in subsequent versions of the manuscript. Eventually, all participants agreed the final version of the paper.\n\n\nRESULTS\nThe experts have made detailed recommendations concerning the methods to be used for duplex ultrasound examination as well as the interpretation of images and measurements obtained. 
This document provides a detailed methodology for complete ultrasound assessment of the anatomy of the superficial and perforating veins in the lower limbs.\n\n\nCONCLUSIONS\nThe authors and a large group of experts have agreed a methodology for the investigation of the lower limbs venous system by duplex ultrasonography, with specific reference to the anatomy of the main superficial veins and perforators of the lower limbs in healthy and varicose subjects.", "title": "" }, { "docid": "28f9a2b2f6f4e90de20c6af78727b131", "text": "The detection and potential removal of duplicates is desirable for a number of reasons, such as to reduce the need for unnecessary storage and computation, and to provide users with uncluttered search results. This paper describes an investigation into the application of scalable simhash and shingle state of the art duplicate detection algorithms for detecting near duplicate documents in the CiteSeerX digital library. We empirically explored the duplicate detection methods and evaluated their performance and application to academic documents and identified good parameters for the algorithms. We also analyzed the types of near duplicates identified by each algorithm. The highest F-scores achieved were 0.91 and 0.99 for the simhash and shingle-based methods respectively. The shingle-based method also identified a larger variety of duplicate types than the simhash-based method.", "title": "" }, { "docid": "25cbc3f8f9ecbeb89c2c49c044e61c2a", "text": "This study investigated lying behavior and the behavior of people who are deceived by using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consist of students and non-students (citizens) to investigate whether lying behavior is depended on socioeconomic backgrounds. To explore how liars feel about lying, we give senders a chance to confess their behaviors to their counter partner for the guilty aversion of lying. The following results are obtained: i) a frequency of lying behavior for students is significantly higher than that for non-students at a payoff in the anonymity treatment, but that is not significantly difference between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) a frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) the receivers who are deceived are more likely to believe a sender’s message to be true in the anonymity treatment. This study implies that the existence of the partner prompts liars to confess their behavior because they may feel remorse or guilt.", "title": "" }, { "docid": "70313633b2694adbaea3e82b30b1ca51", "text": "The Global Assessment Scale (GAS) is a rating scale for evaluating the overall functioning of a subject during a specified time period on a continuum from psychological or psychiatric sickness to health. In five studies encompassing the range of population to which measures of overall severity of illness are likely to be applied, the GAS was found to have good reliability. GAS ratings were found to have a greater sensitivity to change over time than did other ratings of overall severity or specific symptom dimensions. Former inpatients in the community with a GAS rating below 40 had a higher probability of readmission to the hospital than did patients with higher GAS scores. 
The relative simplicity, reliability, and validity of the GAS suggests that it would be useful in a wide variety of clinical and research settings.", "title": "" }, { "docid": "3ddac782fd9797771505a4a46b849b45", "text": "A number of studies have found that mortality rates are positively correlated with income inequality across the cities and states of the US. We argue that this correlation is confounded by the effects of racial composition. Across states and Metropolitan Statistical Areas (MSAs), the fraction of the population that is black is positively correlated with average white incomes, and negatively correlated with average black incomes. Between-group income inequality is therefore higher where the fraction black is higher, as is income inequality in general. Conditional on the fraction black, neither city nor state mortality rates are correlated with income inequality. Mortality rates are higher where the fraction black is higher, not only because of the mechanical effect of higher black mortality rates and lower black incomes, but because white mortality rates are higher in places where the fraction black is higher. This result is present within census regions, and for all age groups and both sexes (except for boys aged 1-9). It is robust to conditioning on income, education, and (in the MSA results) on state fixed effects. Although it remains unclear why white mortality is related to racial composition, the mechanism working through trust that is often proposed to explain the effects of inequality on health is also consistent with the evidence on racial composition and mortality.", "title": "" }, { "docid": "5d04dd7d174cc1b1517035d26785c70f", "text": "Folksonomies have become a powerful tool to describe, discover, search, and navigate online resources (e.g., pictures, videos, blogs) on the Social Web. Unlike taxonomies and ontologies, which impose a hierarchical categorisation on content, folksonomies directly allow end users to freely create and choose the categories (in this case, tags) that best describe a piece of information. However, the freedom afforded to users comes at a cost: as tags are defined informally, the retrieval of information becomes more challenging. Different solutions have been proposed to help users discover content in this highly dynamic setting. However, they have proved to be effective only for users who have already heavily used the system (active users) and who are interested in popular items (i.e., items tagged by many other users). In this thesis we explore principles to help both active users and more importantly new or inactive users (cold starters) to find content they are interested in even when this content falls into the long tail of medium-to-low popularity items (cold start items). We investigate the tagging behaviour of users on content and show how the similarities between users and tags can be used to produce better recommendations. We then analyse how users create new content on social tagging websites and show how preferences of only a small portion of active users (leaders), responsible for the vast majority of the tagged content, can be used to improve the recommender system’s scalability. We also investigate the growth of the number of users, items and tags in the system over time. We then show how this information can be used to decide whether the benefits of an update of the data structures modelling the system outweigh the corresponding cost. 
In this work we formalize the ideas introduced above and we describe their implementation. To demonstrate the improvements of our proposal in recommendation efficacy and efficiency, we report the results of an extensive evaluation conducted on three different social tagging websites: CiteULike, Bibsonomy and MovieLens. Our results demonstrate that our approach achieves higher accuracy than state-of-the-art systems for cold start users and for users searching for cold start items. Moreover, while accuracy of our technique is comparable to other techniques for active users, the computational cost that it requires is much smaller. In other words our approach is more scalable and thus more suitable for large and quickly growing settings.", "title": "" }, { "docid": "e6912f1b9e6060b452f2313766288e97", "text": "The air-core inductance of power transformers is measured using a nonideal low-power rectifier. Its dc output serves to drive the transformer into deep saturation, and its ripple provides low-amplitude variable excitation. The principal advantage of the proposed method is its simplicity. For validation, the experimental results are compared with 3-D finite-element simulations.", "title": "" }, { "docid": "b576ffcda7637e3c2e45194ab16f8c26", "text": "This paper presents an asynchronous pipelined all-digital 10-b time-to-digital converter (TDC) with fine resolution, good linearity, and high throughput. Using a 1.5-b/stage pipeline architecture, an on-chip digital background calibration is implemented to correct residue subtraction error in the seven MSB stages. An asynchronous clocking scheme realizes pipeline operation for higher throughput. The TDC was implemented in standard 0.13-μm CMOS technology and has a maximum throughput of 300 MS/s and a resolution of 1.76 ps with a total conversion range of 1.8 ns. The measured DNL and INL were 0.6 LSB and 1.9 LSB, respectively.", "title": "" }, { "docid": "62e979cf9787ef2fcd8f317413f3fa94", "text": "Starting from conflictive predictions of hitherto disconnected debates in the natural and social sciences, this article examines the spatial structure of transnational human activity (THA) worldwide (a) across eight types of mobility and communication and (b) in its development over time. It is shown that the spatial structure of THA is similar to that of animal displacements and local-scale human motion in that it can be approximated by Lévy flights with heavy tails that obey power laws. Scaling exponent and power-law fit differ by type of THA, being highest in refuge-seeking and tourism and lowest in student exchange. Variance in the availability of resources and opportunities for satisfying associated needs appears to explain these differences. Over time (1960-2010), the Lévy-flight pattern remains intact and remarkably stable, contradicting the popular notion that socio-technological trends lead to a \"death of distance.\" Humans have not become more \"global\" over time, they rather became more mobile in general, i.e. they move and communicate more at all distances. Hence, it would be more adequate to speak of \"mobilization\" than of \"globalization.\" Longitudinal change occurs only in some types of THA and predominantly at short distances, indicating regional rather than global shifts.", "title": "" }, { "docid": "8e80d8be3b8ccbc4b8b6b6a0dde4136f", "text": "When an event occurs, it attracts attention of information sources to publish related documents along its lifespan. 
The task of event detection is to automatically identify events and their related documents from a document stream, which is a set of chronologically ordered documents collected from various information sources. Generally, each event has a distinct activeness development so that its status changes continuously during its lifespan. When an event is active, there are a lot of related documents from various information sources. In contrast when it is inactive, there are very few documents, but they are focused. Previous works on event detection did not consider the characteristics of the event's activeness, and used rigid thresholds for event detection. We propose a concept called life profile, modeled by a hidden Markov model, to model the activeness trends of events. In addition, a general event detection framework, LIPED, which utilizes the learned life profiles and the burst-and-diverse characteristic to adjust the event detection thresholds adaptively, can be incorporated into existing event detection methods. Based on the official TDT corpus and contest rules, the evaluation results show that existing detection methods that incorporate LIPED achieve better performance in the cost and F1 metrics, than without.", "title": "" }, { "docid": "0b22d7f6326210f02da44b0fa686f25a", "text": "Current methods learn monolithic attribute predictors, with the assumption that a single model is sufficient to reflect human understanding of a visual attribute. However, in reality, humans vary in how they perceive the association between a named property and image content. For example, two people may have slightly different internal models for what makes a shoe look \"formal\", or they may disagree on which of two scenes looks \"more cluttered\". Rather than discount these differences as noise, we propose to learn user-specific attribute models. We adapt a generic model trained with annotations from multiple users, tailoring it to satisfy user-specific labels. Furthermore, we propose novel techniques to infer user-specific labels based on transitivity and contradictions in the user's search history. We demonstrate that adapted attributes improve accuracy over both existing monolithic models as well as models that learn from scratch with user-specific data alone. In addition, we show how adapted attributes are useful to personalize image search, whether with binary or relative attributes.", "title": "" }, { "docid": "1f4c0407c8da7b5fe685ad9763be937b", "text": "As the dominant mobile computing platform, Android has become a prime target for cyber-security attacks. Many of these attacks are manifested at the application level, and through the exploitation of vulnerabilities in apps downloaded from the popular app stores. Increasingly, sophisticated attacks exploit the vulnerabilities in multiple installed apps, making it extremely difficult to foresee such attacks, as neither the app developers nor the store operators know a priori which apps will be installed together. This paper presents an approach that allows the end-users to safeguard a given bundle of apps installed on their device from such attacks. The approach, realized in a tool, called DROIDGUARD, combines static code analysis with lightweight formal methods to automatically infer security-relevant properties from a bundle of apps. It then uses a constraint solver to synthesize possible security exploits, from which fine-grained security policies are derived and automatically enforced to protect a given device. 
In our experiments with over 4,000 Android apps, DROIDGUARD has proven to be highly effective at detecting previously unknown vulnerabilities as well as preventing their exploitation.", "title": "" }, { "docid": "94d2c88b11c79e2f4bf9fdc3ed8e1861", "text": "The advent of pulsed power technology in the 1960s has enabled the development of very high peak power sources of electromagnetic radiation in the microwave and millimeter wave bands of the electromagnetic spectrum. Such sources have applications in plasma physics, particle acceleration techniques, fusion energy research, high-power radars, and communications, to name just a few. This article describes recent ongoing activity in this field in both Russia and the United States. The overview of research in Russia focuses on high-power microwave (HPM) sources that are powered using SINUS accelerators, which were developed at the Institute of High Current Electronics. The overview of research in the United States focuses more broadly on recent accomplishments of a multidisciplinary university research initiative on HPM sources, which also involved close interactions with Department of Defense laboratories and industry. HPM sources described in this article have generated peak powers exceeding several gigawatts in pulse durations typically on the order of 100 ns in frequencies ranging from about 1 GHz to many tens of gigahertz.", "title": "" }, { "docid": "dea8b4ee114ca9d1f1a7f481310c502a", "text": "Within this study, chemically modified polymer surfaces were to be developed, which should enhance the subsequent immobilization of various bioactive substances. To improve the hemocompatibility and endothelialization of poly(ε-caprolactone) (PCL) intended as scaffold material for bioartificial vessel prostheses, terminal amino groups via ammonia (NH₃) plasma, oxygen (O₂) plasma/aminopropyltriethoxysilane (APTES), and 4,4'-methylenebis(phenyl isocyanate) (MDI)/water were provided. Then, immobilization of the anti-inflammatory and antithrombogenic model drug acetylsalicylic acid (ASA) and vascular endothelial growth factor (VEGF) were performed by employing N,N-disuccinimidyl carbonate (DSC) as crosslinker. Contact angle and fluorescence measurements, X-ray photoelectron spectroscopy and infrared spectroscopy confirmed the surface modification. Here the highest functionalization was observed for the O₂ plasma/APTES modification. Furthermore, biocompatibility studies demonstrated that the surface reactions have no negative influence, neither on the viability of L929 mouse fibroblasts, nor on primary or secondary hemostasis. Release studies showed that the immobilization of ASA and VEGF on the modified PCL surface via DSC is greatly improved compared to the adsorption-only reference. The advantage of DSC is that it immobilizes both bioactive substances via non-hydrolyzable and/or hydrolyzable covalent bonding. The highest ASA loading and cumulative release was detected using NH₃ plasma-activated PCL samples. For VEGF, the O₂ plasma/APTES-modified PCL samples were most efficient with regard to loading and cumulative release. In conclusion, both modifications are promising methods to optimize PCL as scaffold material for bioartificial vessel prostheses.", "title": "" }, { "docid": "dca1188be7b589fdb8a42c51c49204f5", "text": "BACKGROUND\nEpidermal nevus syndrome is a multi-system disease with a wide spectrum of clinical presentation. 
Numerous specialists may be required to address its extra cutaneous manifestations.\n\n\nMAIN OBSERVATIONS\nWe report a severe case of epidermal nevus syndrome involving the oral cavity, pharynx, and central nervous system in addition to disfiguring skin lesions.\n\n\nCONCLUSIONS\nDermatologists are in a unique position to first render the diagnosis of epidermal nevus syndrome for young patients and ensure appropriate follow-up.", "title": "" } ]
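The LIPED event-detection passage earlier in this list models each event's "activeness" with a hidden Markov model (the life profile) and adapts detection thresholds to it. The following is only a rough, self-contained sketch of that idea: the two states, the discretized observation levels, and every probability below are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (not the LIPED implementation): a 2-state HMM over
# discretized daily document counts, used to estimate how "active" an
# event currently is. All probabilities here are illustrative guesses.

STATES = ("inactive", "active")
OBS = ("low", "medium", "high")          # discretized per-day document volume

START = {"inactive": 0.8, "active": 0.2}
TRANS = {
    "inactive": {"inactive": 0.9, "active": 0.1},
    "active":   {"inactive": 0.3, "active": 0.7},
}
EMIT = {
    "inactive": {"low": 0.7, "medium": 0.25, "high": 0.05},
    "active":   {"low": 0.1, "medium": 0.4,  "high": 0.5},
}

def filtered_activeness(observations):
    """Forward algorithm: return P(active | obs_1..t) for each day t."""
    posteriors = []
    alpha = {s: START[s] * EMIT[s][observations[0]] for s in STATES}
    for t, obs in enumerate(observations):
        if t > 0:
            alpha = {
                s: EMIT[s][obs] * sum(alpha[p] * TRANS[p][s] for p in STATES)
                for s in STATES
            }
        total = sum(alpha.values())
        alpha = {s: a / total for s, a in alpha.items()}  # normalize to avoid underflow
        posteriors.append(alpha["active"])
    return posteriors

if __name__ == "__main__":
    days = ["low", "low", "high", "high", "medium", "low"]
    for day, p in zip(days, filtered_activeness(days)):
        print(f"{day:6s}  P(active) = {p:.2f}")
```

A detector built in this spirit could relax its clustering threshold on days where the filtered P(active) is high and tighten it when the event looks dormant, which is the kind of adaptive thresholding the passage describes.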
scidocsrr
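The DROIDGUARD passage in the preceding record combines statically extracted facts about a bundle of installed apps with a constraint solver that searches for possible multi-app exploits. The toy below shows only the general shape of that "facts in, satisfiable exploit out" pattern using the z3 solver; the app names, permissions, and collusion rule are invented and are not DROIDGUARD's actual encoding.

```python
# Toy illustration of the "static facts + constraint solver" pattern from the
# DROIDGUARD passage above -- not its actual model. All facts are invented.
from z3 import Bool, And, Implies, Solver, sat

# Facts a static analysis might extract from two installed apps.
weather_reads_location = Bool("weather_reads_location")    # app A holds ACCESS_FINE_LOCATION
weather_exports_service = Bool("weather_exports_service")  # app A exposes an unprotected service
game_has_internet = Bool("game_has_internet")              # app B holds INTERNET
game_queries_service = Bool("game_queries_service")        # app B binds to A's service

leak_to_network = Bool("leak_to_network")                   # the exploit we ask the solver about

s = Solver()
s.add(weather_reads_location, weather_exports_service,
      game_has_internet, game_queries_service)
# Rule: location data reachable through an exported service, combined with a
# second app that can reach the network, yields a possible cross-app leak.
s.add(Implies(And(weather_reads_location, weather_exports_service,
                  game_queries_service, game_has_internet),
              leak_to_network))
s.add(leak_to_network)  # ask: is this exploit consistent with the extracted facts?

if s.check() == sat:
    print("possible cross-app data leak; a policy should guard the exported service")
else:
    print("no exploit of this form is derivable from the extracted facts")
```

When the solver reports a satisfying assignment, the corresponding facts point at the policy that would need to be enforced, which mirrors how the passage derives fine-grained policies from synthesized exploits.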
5bc573a250fceaa9d862eab5bd3fc697
Monet: A User-Oriented Behavior-Based Malware Variants Detection System for Android
[ { "docid": "55a6353fa46146d89c7acd65bee237b5", "text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.", "title": "" } ]
[ { "docid": "fbc97be77f713a49e5fc6b43cd0204b8", "text": "We describe the architecture of the ILEX system, • which supports opportunistic text generation. In • web-based text generation, the SYstem cannot plan the entire multi-page discourse because the user's browsing path is unpredictable. For this reason, • the system must be ready opportunistically to take • advantage of whatever path the user chooses. We describe both the nature of opportunism in ILEX's museum domain, and then show how ILEX has been designed to function in this environment. The architecture presented addresses opportunism in both content determination and sentenceplanning. 1 E x p l o i t i n g o p p o r t u n i t i e s in t e x t g e n e r a t i o n • Many models of text generation make use of standard patterns (whether expressed as schemas (e.g. [McKeown 85]) or plan operators (e.g. [Moore and Paris 93])) to break down communicative goals in such a way as to produce extended texts. Such models are making two basic assumptions: 1. Text generation is goal directed, in the sense that spans and subspans of text are designed to achieve unitary communicative goals [Grosz and Sidner 86]. 2. Although the details Of the structUre of a text may have to be tuned to particulars of the communicative situation, generally the structure is determined by the goals and their decomposition. That is, a generator •needs strategies for decomposing the achievement of complex • goals into sequences of utterances, rather than ways of combining sequences of utterances into more complex structures. Generation is \"top-down\", rather than\"bottom-up\" [Marcu 97]. Our belief is that there is an important class of NLG problems for which these basic assumptions• are not helpful. These problems all involve situations where semi-fixed explanation strategies are less useful than the ability to exploit opportunities. WordNet gives the following definition of 0pportunity': O p p o r t u n i t y : \"A possibility due to a favorable combination of circumstances\" Because • opportunities involve •combinations of circumstances, they are often unexpected and hard to predict. It may be too expensive or impossible to have complete knowledge about them. Topdown generation strategies may not be able •to exploit opportunities (except at the cost of looking for all opportunities at all• points) because it is difficult to associate classes of opportunities with fixed stages in the explanation •process. We are investigating opportunistic text generation in the Intelligent Labelling Explorer (ILEX) project, which seeks automatically to generate a sequence of commentaries for items in an electronic 180 South Bridge, Edinburgh EH1 1HN, Email: {chrism,miCko}@dai.ecl.ac.uk. 2 Buccleuch Place, Edinburgh EH8 9LW, Email: {alik, jon}@cogsci.ed, ac.uk", "title": "" }, { "docid": "d8b8fa014fc0db066f8bb9b624f31d25", "text": "XCSF is a rule-based on-line learning system that makes use of local learning concepts in conjunction with gradient-based approximation techniques. It is mainly used to learn functions, or rather regression problems, by means of dividing the problem space into smaller subspaces and approximate the function values linearly therein. In this paper, we show how local interpolation can be incorporated to improve the approximation speed and thus to decrease the system error. 
We describe how a novel interpolation component integrates into the algorithmic structure of XCSF and thereby augments the well-established separation into the performance, discovery and reinforcement component. To underpin the validity of our approach, we present and discuss results from experiments on three test functions of different complexity, i.e. we show that by means of the proposed strategies for integrating the locally interpolated values, the overall performance of XCSF can be improved.", "title": "" }, { "docid": "a10752bb80ad47e18ef7dbcd83d49ff7", "text": "Approximate computing has gained significant attention due to the popularity of multimedia applications. In this paper, we propose a novel inaccurate 4:2 counter that can effectively reduce the partial product stages of the Wallace Multiplier. Compared to the normal Wallace multiplier, our proposed multiplier can reduce 10.74% of power consumption and 9.8% of delay on average, with an error rate from 0.2% to 13.76% The accuracy of amplitude is higher than 99% In addition, we further enhance the design with error-correction units to provide accurate results. The experimental results show that the extra power consumption of correct units is lower than 6% on average. Compared to the normal Wallace multiplier, the average latency of our proposed multiplier with EDC is 6% faster when the bit-width is 32, and the power consumption is still 10% lower than that of the Wallace multiplier.", "title": "" }, { "docid": "8518dc45e3b0accfc551111489842359", "text": "PURPOSE\nRobot-assisted surgery has been rapidly adopted in the U.S. for prostate cancer. Its adoption has been driven by market forces and patient preference, and debate continues regarding whether it offers improved outcomes to justify the higher cost relative to open surgery. We examined the comparative effectiveness of robot-assisted vs open radical prostatectomy in cancer control and survival in a nationally representative population.\n\n\nMATERIALS AND METHODS\nThis population based observational cohort study of patients with prostate cancer undergoing robot-assisted radical prostatectomy and open radical prostatectomy during 2003 to 2012 used data captured in the SEER (Surveillance, Epidemiology, and End Results)-Medicare linked database. Propensity score matching and time to event analysis were used to compare all cause mortality, prostate cancer specific mortality and use of additional treatment after surgery.\n\n\nRESULTS\nA total of 6,430 robot-assisted radical prostatectomies and 9,161 open radical prostatectomies performed during 2003 to 2012 were identified. The use of robot-assisted radical prostatectomy increased from 13.6% in 2003 to 2004 to 72.6% in 2011 to 2012. After a median followup of 6.5 years (IQR 5.2-7.9) robot-assisted radical prostatectomy was associated with an equivalent risk of all cause mortality (HR 0.85, 0.72-1.01) and similar cancer specific mortality (HR 0.85, 0.50-1.43) vs open radical prostatectomy. Robot-assisted radical prostatectomy was also associated with less use of additional treatment (HR 0.78, 0.70-0.86).\n\n\nCONCLUSIONS\nRobot-assisted radical prostatectomy has comparable intermediate cancer control as evidenced by less use of additional postoperative cancer therapies and equivalent cancer specific and overall survival. Longer term followup is needed to assess for differences in prostate cancer specific survival, which was similar during intermediate followup. 
Our findings have significant quality and cost implications, and provide reassurance regarding the adoption of more expensive technology in the absence of randomized controlled trials.", "title": "" }, { "docid": "41b92e3e2941175cf6d80bf809d7bd32", "text": "Automated citation analysis (ACA) can be important for many applications including author ranking and literature based information retrieval, extraction, summarization and question answering. In this study, we developed a new compositional attention network (CAN) model to integrate local and global attention representations with a hierarchical attention mechanism. Training on a new benchmark corpus we built, our evaluation shows that the CAN model performs consistently well on both citation classification and sentiment analysis tasks.", "title": "" }, { "docid": "453191a57a9282248b0d5b8a85fa4ce0", "text": "The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.", "title": "" }, { "docid": "0d0f6e946bd9125f87a78d8cf137ba97", "text": "Acute renal failure increases risk of death after cardiac surgery. However, it is not known whether more subtle changes in renal function might have an impact on outcome. Thus, the association between small serum creatinine changes after surgery and mortality, independent of other established perioperative risk indicators, was analyzed. In a prospective cohort study in 4118 patients who underwent cardiac and thoracic aortic surgery, the effect of changes in serum creatinine within 48 h postoperatively on 30-d mortality was analyzed. Cox regression was used to correct for various established demographic preoperative risk indicators, intraoperative parameters, and postoperative complications. In the 2441 patients in whom serum creatinine decreased, early mortality was 2.6% in contrast to 8.9% in patients with increased postoperative serum creatinine values. Patients with large decreases (DeltaCrea <-0.3 mg/dl) showed a progressively increasing 30-d mortality (16 of 199 [8%]). 
Mortality was lowest (47 of 2195 [2.1%]) in patients in whom serum creatinine decreased to a maximum of -0.3 mg/dl; mortality increased to 6% in patients in whom serum creatinine remained unchanged or increased up to 0.5 mg/dl. Mortality (65 of 200 [32.5%]) was highest in patients in whom creatinine increased > or =0.5 mg/dl. For all groups, increases in mortality remained significant in multivariate analyses, including postoperative renal replacement therapy. After cardiac and thoracic aortic surgery, 30-d mortality was lowest in patients with a slight postoperative decrease in serum creatinine. Any even minimal increase or profound decrease of serum creatinine was associated with a substantial decrease in survival.", "title": "" }, { "docid": "6bdeee1b2dd8a9502558c12dcd270ff6", "text": "In this work, we describe our experiences in developing cloud forensics tools and use them to support three main points: First, we make the argument that cloud forensics is a qualitatively different problem. In the context of SaaS, it is incompatible with long-established acquisition and analysis techniques, and requires a new approach and forensic toolset. We show that client-side techniques, which are an extension of methods used over the last three decades, have inherent limitations that can only be overcome by working directly with the interfaces provided by cloud service providers. Second, we present our results in building forensic tools in the form of three case studies: kumoddea tool for cloud drive acquisition, kumodocsea tool for Google Docs acquisition and analysis, and kumofsea tool for remote preview and screening of cloud drive data. We show that these tools, which work with the public and private APIs of the respective services, provide new capabilities that cannot be achieved by examining client-side", "title": "" }, { "docid": "878617f145544f66e79f7d2d3404cbdf", "text": "In this paper we address the problem of classifying cited work into important and non-important to the developments presented in a research publication. This task is vital for the algorithmic techniques that detect and follow emerging research topics and to qualitatively measure the impact of publications in increasingly growing scholarly big data. We consider cited work as important to a publication if that work is used or extended in some way. If a reference is cited as background work or for the purpose of comparing results, the cited work is considered to be non-important. By employing five classification techniques (Support Vector Machine, Naïve Bayes, Decision Tree, K-Nearest Neighbors and Random Forest) on an annotated dataset of 465 citations, we explore the effectiveness of eight previously published features and six novel features (including context based, cue words based and textual based). Within this set, our new features are among the best performing. Using the Random Forest classifier we achieve an overall classification accuracy of 0.91 AUC.", "title": "" }, { "docid": "368c769f4427c213c68d1b1d7a0e4ca9", "text": "The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. 
We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.", "title": "" }, { "docid": "2e35483beb568ab514601ba21d70c2d3", "text": "Determining the intended sense of words in text – word sense disambiguation (WSD) – is a long-standing problem in natural language processing. In this paper, we present WSD algorithms which use neural network language models to achieve state-of-the-art precision. Each of these methods learns to disambiguate word senses using only a set of word senses, a few example sentences for each sense taken from a licensed lexicon, and a large unlabeled text corpus. We classify based on cosine similarity of vectors derived from the contexts in unlabeled query and labeled example sentences. We demonstrate state-of-the-art results when using the WordNet sense inventory, and significantly better than baseline performance using the New Oxford American Dictionary inventory. The best performance was achieved by combining an LSTM language model with graph label propagation.", "title": "" }, { "docid": "566b4dbea724fc852264b70ce6cae0df", "text": "On the basis of self-regulation theories, the authors develop an affective shift model of work engagement according to which work engagement emerges from the dynamic interplay of positive and negative affect. The affective shift model posits that negative affect is positively related to work engagement if negative affect is followed by positive affect. The authors applied experience sampling methodology to test the model. Data on affective events, mood, and work engagement was collected twice a day over 9 working days among 55 software developers. In support of the affective shift model, negative mood and negative events experienced in the morning of a working day were positively related to work engagement in the afternoon if positive mood in the time interval between morning and afternoon was high. Individual differences in positive affectivity moderated within-person relationships. The authors discuss how work engagement can be fostered through affect regulation.", "title": "" }, { "docid": "b9bc1b10d144e6680de682273dbced00", "text": "We propose a new and, arguably, a very simple reduction of instance segmentation to semantic segmentation. This reduction allows to train feed-forward non-recurrent deep instance segmentation systems in an end-to-end fashion using architectures that have been proposed for semantic segmentation. Our approach proceeds by introducing a fixed number of labels (colors) and then dynamically assigning object instances to those labels during training (coloring). A standard semantic segmentation objective is then used to train a network that can color previously unseen images. At test time, individual object instances can be recovered from the output of the trained convolutional network using simple connected component analysis. 
In the experimental validation, the coloring approach is shown to be capable of solving diverse instance segmentation tasks arising in autonomous driving (the Cityscapes benchmark), plant phenotyping (the CVPPP leaf segmentation challenge), and high-throughput microscopy image analysis. The source code is publicly available: https://github.com/kulikovv/DeepColoring.", "title": "" }, { "docid": "825bbc624a8e7a8a405b4c453b9f681d", "text": "For enterprise systems running on public clouds in which the servers are outside the control domain of the enterprise, access control that was traditionally executed by reference monitors deployed on the system servers can no longer be trusted. Hence, a self-contained security scheme is regarded as an effective way for protecting outsourced data. However, building such a scheme that can implement the access control policy of the enterprise has become an important challenge. In this paper, we propose a self-contained data protection mechanism called RBAC-CPABE by integrating role-based access control (RBAC), which is widely employed in enterprise systems, with the ciphertext-policy attribute-based encryption (CP-ABE). First, we present a data-centric RBAC (DC-RBAC) model that supports the specification of fine-grained access policy for each data object to enhance RBAC’s access control capabilities. Then, we fuse DC-RBAC and CP-ABE by expressing DC-RBAC policies with the CP-ABE access tree and encrypt data using CP-ABE. Because CP-ABE enforces both access control and decryption, access authorization can be achieved by the data itself. A security analysis and experimental results indicate that RBAC-CPABE maintains the security and efficiency properties of the CP-ABE scheme on which it is based, but substantially improves the access control capability. Finally, we present an implemented framework for RBAC-CPABE to protect privacy and enforce access control for data stored in the cloud.", "title": "" }, { "docid": "8ead9a0e083a65ef5cb5b3f7e9aea5be", "text": "In this paper, a new resonant gate-drive circuit is proposed to recover a portion of the power-MOSFET-gate energy that is typically dissipated in high-frequency converters. The proposed circuit consists of four control switches and a small resonant inductance. The current through the resonant inductance is discontinuous in order to minimize circulating-current conduction loss that is present in other methods. The proposed circuit also achieves quick turn-on and turn-off transition times to reduce switching and conduction losses in power MOSFETs. An analysis, a design procedure, and experimental results are presented for the proposed circuit. Experimental results demonstrate that the proposed driver can recover 51% of the gate energy at 5-V gate-drive voltage.", "title": "" }, { "docid": "d5eb643385b573706c48cbb2cb3262df", "text": "This article identifies problems and conditions that contribute to nipple pain during lactation and that may lead to early cessation or noninitiation of breastfeeding. Signs and symptoms of poor latch-on and positioning, oral anomalies, and suckling disorders are reviewed. Diagnosis and treatment of infectious agents that may cause nipple pain are presented. Comfort measures for sore nipples and current treatment recommendations for nipple wound healing are discussed. 
Suggestions are made for incorporating in-depth breastfeeding content into midwifery education programs.", "title": "" }, { "docid": "7256d6c5bebac110734275d2f985ab31", "text": "The location-based social networks (LBSN) enable users to check in their current location and share it with other users. The accumulated check-in data can be employed for the benefit of users by providing personalized recommendations. In this paper, we propose a context-aware location recommendation system for LBSNs using a random walk approach. Our proposed approach considers the current context (i.e., current social relations, personal preferences and current location) of the user to provide personalized recommendations. We build a graph model of LBSNs for performing a random walk approach with restart. Random walk is performed to calculate the recommendation probabilities of the nodes. A list of locations are recommended to users after ordering the nodes according to the estimated probabilities. We compare our algorithm, CLoRW, with popularity-based, friend-based and expert-based baselines, user-based collaborative filtering approach and a similar work in the literature. According to experimental results, our algorithm outperforms these approaches in all of the test cases.", "title": "" }, { "docid": "80f31015c604b95e6682908717e90d44", "text": "ed from specific role-abstraction levels would enable the role-assignment algorithm to incorporate relevant state attributes as rules in the assignment of roles to nodes. It would also allow roles to control or tune to the desired behavior in response to undesirable local node/network events. This is known as role load balancing and it is pursued as role reassignment to repair role failures. We will discuss role failures and role load balancing later in this section. 4.4.1 URAF architecture overview Figure 4.11 shows the high level design architecture of the unified role-abstraction framework (URAF) in conjunction with a middleware (RBMW) that maps application specified services and expected QoS onto an ad hoc wireless sensor network with heterogeneous node capabilities. The design of the framework is modular such that each module provides higher levels of network abstractions to the modules directly interfaced with it. For example, at the lowest level, we have API’s that interface directly with the physical hardware. The resource usage and accounting module maintains up-to-date information on node and neighbor resource specifications and their availability. As discussed earlier, complex roles are composed of elementary roles and these are executed as tasks on the node. The state of the role execution at any point in time is cached by the task status table for that complex role. At the next higher abstraction, we calculate and maintain the overall role execution time and the energy dissipated by the node in that time. The available energy is thus calculated and cross checked against remaining battery capacity. There is another table that measures and maintains the failure/success of a role for every service schedule or period. This is used to calculate the load imposed by the service at different time intervals.", "title": "" }, { "docid": "23f91ffdd3c15fdeeb3ef33ca463c238", "text": "The Shield project relied on application protocol analyzers to detect potential exploits of application vulnerabilities. We present the design of a second-generation generic application-level protocol analyzer (GAPA) that encompasses a domain-specific language and the associated run-time. 
We designed GAPA to satisfy three important goals: safety, real-time analysis and response, and rapid development of analyzers. We have found that these goals are relevant for many network monitors that implement protocol analysis. Therefore, we built GAPA to be readily integrated into tools such as Ethereal as well as Shield. GAPA preserves safety through the use of a memorysafe language for both message parsing and analysis, and through various techniques to reduce the amount of state maintained in order to avoid denial-of-service attacks. To support online analysis, the GAPA runtime uses a streamprocessing model with incremental parsing. In order to speed protocol development, GAPA uses a syntax similar to many protocol RFCs and other specifications, and incorporates many common protocol analysis tasks as built-in abstractions. We have specified 10 commonly used protocols in the GAPA language and found it expressive and easy to use. We measured our GAPA prototype and found that it can handle an enterprise client HTTP workload at up to 60 Mbps, sufficient performance for many end-host firewall/IDS scenarios. At the same time, the trusted code base of GAPA is an order of magnitude smaller than Ethereal.", "title": "" }, { "docid": "cd18d1e77af0e2146b67b028f1860ff0", "text": "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.", "title": "" } ]
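The transfer-learning passage that closes the list above reuses CNN layers trained on ImageNet as a mid-level image representation and retrains only a new classifier for PASCAL VOC. The snippet below is a present-day PyTorch analogue of that recipe, offered purely as an illustrative sketch; the original work used its own network and training setup.

```python
# Layer reuse in the spirit of the transfer-learning passage above:
# freeze an ImageNet-pretrained backbone and attach a new task head.
import torch.nn as nn
import torchvision.models as models

NUM_VOC_CLASSES = 20  # PASCAL VOC object classes

# ImageNet-pretrained backbone (older torchvision versions use pretrained=True)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():          # freeze the transferred representation
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_VOC_CLASSES)  # new task-specific head

# Only the new head is trained on the (much smaller) target dataset.
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```

Because PASCAL VOC images carry multiple labels, the new head would typically be trained with a sigmoid/BCE loss rather than a softmax, but the layer-reuse idea is the same.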
scidocsrr
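The CLoRW passage in the preceding record scores candidate locations with a random walk with restart from the querying user over a graph of users, friendships, and places. The sketch below shows only that scoring step; the tiny graph, node names, and restart weight are invented, and the real system also folds in preferences and current location.

```python
# Random walk with restart (personalized PageRank) over a toy LBSN graph,
# in the spirit of the CLoRW passage above. Graph and parameters are invented.
import numpy as np

nodes = ["user", "friend1", "friend2", "cafe", "museum", "gym"]
idx = {n: i for i, n in enumerate(nodes)}

edges = [  # undirected: social ties and past check-ins
    ("user", "friend1"), ("user", "friend2"),
    ("friend1", "cafe"), ("friend1", "museum"),
    ("friend2", "museum"), ("user", "gym"),
]
A = np.zeros((len(nodes), len(nodes)))
for a, b in edges:
    A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0
P = A / A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix

alpha = 0.15                                # restart probability
restart = np.zeros(len(nodes)); restart[idx["user"]] = 1.0
p = restart.copy()
for _ in range(100):                        # power iteration until (near) convergence
    p = (1 - alpha) * P.T @ p + alpha * restart

visited = {"gym"}                           # do not re-recommend known places
ranked = sorted(((p[idx[n]], n) for n in ["cafe", "museum", "gym"] if n not in visited),
                reverse=True)
print(ranked)                               # locations ranked by walk probability
```

In this toy graph the museum is reachable through both friends, so it outranks the cafe, which matches the intuition in the passage that nodes better connected to the user's current social context receive higher recommendation probability.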
92ea68353b01970235db71b485445249
Retrieval-Based Learning: Active Retrieval Promotes Meaningful Learning
[ { "docid": "2c853123a29d27c3713c8159d13c3728", "text": "Retrieval practice is a potent technique for enhancing learning, but how often do students practice retrieval when they regulate their own learning? In 4 experiments the subjects learned foreign-language items across multiple study and test periods. When items were assigned to be repeatedly tested, repeatedly studied, or removed after they were recalled, repeated retrieval produced powerful effects on learning and retention. However, when subjects were given control over their own learning and could choose to test, study, or remove items, many subjects chose to remove items rather than practice retrieval, leading to poor retention. In addition, when tests were inserted in the learning phase, attempting retrieval improved learning by enhancing subsequent encoding during study. But when students were given control over their learning they did not attempt retrieval as early or as often as they should to promote the best learning. The experiments identify a compelling metacognitive illusion that occurs during self-regulated learning: Once students can recall an item they tend to believe they have \"learned\" it. This leads students to terminate practice rather than practice retrieval, a strategy choice that ultimately results in poor retention.", "title": "" }, { "docid": "496501d679734b90dd9fd881389fcc34", "text": "Learning is often identified with the acquisition, encoding, or construction of new knowledge, while retrieval is often considered only a means of assessing knowledge, not a process that contributes to learning. Here, we make the case that retrieval is the key process for understanding and for promoting learning. We provide an overview of recent research showing that active retrieval enhances learning, and we highlight ways researchers have sought to extend research on active retrieval to meaningful learning—the learning of complex educational materials as assessed on measures of inference making and knowledge application. However, many students lack metacognitive awareness of the benefits of practicing active retrieval. We describe two approaches to addressing this problem: classroom quizzing and a computer-based learning program that guides students to practice retrieval. Retrieval processes must be considered in any analysis of learning, and incorporating retrieval into educational activities represents a powerful way to enhance learning.", "title": "" }, { "docid": "ab05a100cfdb072f65f7dad85b4c5aea", "text": "Expanding retrieval practice refers to the idea that gradually increasing the spacing interval between repeated tests ought to promote optimal long-term retention. Belief in the superiority of this technique is widespread, but empirical support is scarce. In addition, virtually all research on expanding retrieval has examined the learning of word pairs in paired-associate tasks. We report two experiments in which we examined the learning of text materials with expanding and equally spaced retrieval practice schedules. Subjects studied brief texts and recalled them in an initial learning phase. We manipulated the spacing of the repeated recall tests and examined final recall 1 week later. Overall we found that (1) repeated testing enhanced retention more than did taking a single test, (2) testing with feedback (restudying the passages) produced better retention than testing without feedback, but most importantly (3) there were no differences between expanding and equally spaced schedules of retrieval practice. 
Repeated retrieval enhanced long-term retention, but how the repeated tests were spaced did not matter.", "title": "" } ]
[ { "docid": "32b1418673edf8f7dba848621ba2eb32", "text": "A paraphrase is a restatement of the meaning of a text in other words. Paraphrases have been studied to enhance the performance of many natural language processing tasks. In this paper, we propose a novel task iParaphrasing to extract visually grounded paraphrases (VGPs), which are different phrasal expressions describing the same visual concept in an image. These extracted VGPs have the potential to improve language and image multimodal tasks such as visual question answering and image captioning. How to model the similarity between VGPs is the key of iParaphrasing. We apply various existing methods as well as propose a novel neural network-based method with image attention, and report the results of the first attempt toward iParaphrasing.", "title": "" }, { "docid": "c724fdcf7f58121ff6ad886df68e2725", "text": "The Internet of Things (IoT) is an emerging paradigm where smart objects are seamlessly connected to the overall Internet and can potentially cooperate to achieve common objectives such as supporting innovative home automation services. With reference to such a scenario, this paper presents an Intrusion Detection System (IDS) framework for IoT empowered by IPv6 over low-power personal area network (6LoWPAN) devices. In fact, 6LoWPAN is an interesting protocol supporting the realization of IoT in a resource constrained environment. 6LoWPAN devices are vulnerable to attacks inherited from both the wireless sensor networks and the Internet protocols. The proposed IDS framework which includes a monitoring system and a detection engine has been integrated into the network framework developed within the EU FP7 project `ebbits'. A penetration testing (PenTest) system had been used to evaluate the performance of the implemented IDS framework. Preliminary tests revealed that the proposed framework represents a promising solution for ensuring better security in 6LoWPANs.", "title": "" }, { "docid": "1ff1bc5bc2b9fae4a953733f1b8d0bfc", "text": "and Concrete Categories The Joy of Cats Dedicated to Bernhard Banaschewski The newest edition of the file of the present book can be downloaded from http://katmat.math.uni-bremen.de/acc The authors are grateful for any improvements, corrections, and remarks, and can be reached at the addresses Jǐŕı Adámek, email: adamek@iti.cs.tu-bs.de Horst Herrlich, email: horst.herrlich@t-online.de George E. Strecker, email: strecker@math.ksu.edu All corrections will be awarded, besides eternal gratefulness, with a piece of delicious cake! You can claim your cake at the KatMAT Seminar, University of Bremen, at any Tuesday (during terms). Copyright c © 2004 Jǐŕı Adámek, Horst Herrlich, and George E. Strecker. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”. See p. 512 ff.", "title": "" }, { "docid": "2ff290ba8bab0de760c289bff3feee06", "text": "Bayesian Networks are being used extensively for reasoning under uncertainty. Inference mechanisms for Bayesian Networks are compromised by the fact that they can only deal with propositional domains. 
In this work, we introduce an extension of that formalism, Hierarchical Bayesian Networks, that can represent additional information about the structure of the domains of variables. Hierarchical Bayesian Networks are similar to Bayesian Networks, in that they represent probabilistic dependencies between variables as a directed acyclic graph, where each node of the graph corresponds to a random variable and is quanti ed by the conditional probability of that variable given the values of its parents in the graph. What extends the expressive power of Hierarchical Bayesian Networks is that a node may correspond to an aggregation of simpler types. A component of one node may itself represent a composite structure; this allows the representation of complex hierarchical domains. Furthermore, probabilistic dependencies can be expressed at any level, between nodes that are contained in the same structure.", "title": "" }, { "docid": "fecee8738ede076bf09f1713eb57ef6e", "text": "We propose a formulation of visual localization that does not require construction of explicit maps in the form of point clouds or voxels. The goal is to learn an implicit representation of the environment at a higher, more abstract level, for instance that of objects. To study this approach we consider procedurally generated Minecraft worlds, for which we can generate visually rich images along with camera pose coordinates. We first show that Generative Query Networks (GQNs) enhanced with a novel attention mechanism can capture the visual structure of 3D scenes in Minecraft, as evidenced by their samples. We then apply the models to the localization problem, investigating both generative and discriminative approaches, and compare the different ways in which they each capture task uncertainty. Our results show that models with implicit mapping are able to capture the underlying 3D structure of visually complex scenes, and use this to accurately localize new observations, paving the way towards future applications in sequential localization. Supplementary video available at https://youtu.be/iHEXX5wXbCI.", "title": "" }, { "docid": "84b2dbea13df9e6ee70570a05f82049f", "text": "The main aim of this position paper is to identify and briefly discuss design-related issues commonly encountered with the implementation of both behaviour change techniques and persuasive design principles in physical activity smartphone applications. These overlapping issues highlight a disconnect in the perspectives held between health scientists' focus on the application of behaviour change theories and components of interventions, and the information systems designers' focus on the application of persuasive design principles as software design features intended to motivate, facilitate and support individuals through the behaviour change process. A review of the current status and some examples of these different perspectives is presented, leading to the identification of the main issues associated with this disconnection. The main behaviour change technique issues identified are concerned with: the fragmented integration of techniques, hindrances in successful use, diversity of user needs and preferences, and the informational flow and presentation. 
The main persuasive design issues identified are associated with: the fragmented application of persuasive design principles, hindrances in successful usage, diversity of user needs and preferences, informational flow and presentation, the lack of pragmatic guidance for application designers, and the maintenance of immersive user interactions and engagements. Given the common overlap across four of the identified issues, it is concluded that a methodological approach for integrating these two perspectives, and their associated issues, into a consolidated framework is necessary to address the apparent disconnect between these two independently-established, yet complementary fields.", "title": "" }, { "docid": "01f9384b33a84c3ece4db5337e708e24", "text": "Broken rails are the leading cause of major derailments in North America. Class I freight railroads average 84 mainline broken-rail derailments per year with an average track and equipment cost of approximately $525,000 per incident. The number of mainline broken-railcaused derailments has increased from 77 in 1997, to 91 in 2006; therefore, efforts to reduce their occurrence remain important. We conducted an analysis of the factors that influence the occurrence of broken rails and developed a quantitative model to predict locations where they are most likely to occur. Among the factors considered were track and rail characteristics, maintenance activities and frequency, and on-track testing results. Analysis of these factors involved the use of logistic regression techniques to develop a statistical model for the prediction of broken rail locations. For such a model to have value for railroads it must be feasible to use and provide information in a useful manner. Consequently, an optimal prediction model containing only the top eight factors related to broken rails was developed. The economic impact of broken rail events was also studied. This included the costs associated with broken rail derailments and service failures, as well as the cost of typical prevention measures. A train delay calculator was also developed based on industry operating averages. Overall, the information presented here can assist railroads to more effectively allocate resources to prevent the occurrence of broken rails. INTRODUCTION Understanding the factors related to broken rails is an important topic for U.S. freight railroads and is becoming more so because of the increase in their occurrence in recent years. This increase is due to several factors, but the combination of increased traffic and heavier axle loads are probably the most important. Broken rails are generally caused by the undetected growth of either internal or surface defects in the rail (1). Previous research has focused on both mechanistic analyses (2-8) and statistical analyses (9-13) in order to understand the factors that cause crack growth in rails and ultimately broken rails. The first objective of this analysis was to develop a predictive tool that will enable railroads to identify locations with a high probability of broken rail. The possible predictive factors that were evaluated included rail characteristics, infrastructure data, maintenance activity, operational information, and rail testing results. The second objective was to study the economic impact of broken rails based on industry operating averages. Our analysis on this topic incorporates previous work that developed a framework for the cost of broken rails (14). 
The purpose of this paper is to provide information to enable more efficient evaluation of options to reduce the occurence of broken rails. DEVELOPMENT OF SERVICE FAILURE PREDICTION MODEL The first objective of this paper was to develop a model to identify locations in the rail network with a high probability of broken rail occurrence based on broken rail service failure data and possible influence factors. All of the factors that might affect service failure occurrence and for which we had data were considered in this analysis. Several broken rail predictive models were developed and evaluated using logistic regression techniques. Data Available for Study In order to develop a predictive tool, it is desirable to initially consider as many factors as possible that might affect the occurrence of broken rails. From the standpoint of rail maintenance planning it is important to determine which factors are and are not correlated with broken rail occurence. Therefore the analysis included a wide-range of possible variables for which data were available. This included track and rail characteristics such as rail age, rail curvature, track speed, grade, and rail weight. Also, changes in track modulus due to the presence of infrastructure features such as bridges and turnouts have a potential effect on rail defect growth and were examined as well. Additionally, maintenance activities were included that can reduce the likelihood of broken rail occurrence, such as rail grinding and tie replacement. Finally, track geometry and ultrasonic testing for rail defects were used by railroads to assess the condition of track and therefore the results of these tests are included as they may provide predictive information about broken rail occurrence. The BNSF Railway provided data on the location of service failures and a variety of other infrastructure, inspection and operational parameters. In this study a “service failure” was defined as an incident where a track was taken out of service due to a broken rail. A database was developed from approximately 23,000 miles of mainline track maintained by the BNSF Railway covering the four-year period, 2003 through 2006. BNSF’s network was divided into 0.01-mile-long segments (approximately 53 feet each) and the location of each reported service failure was recorded. BNSF experienced 12,685 service failures during the four-year study period. For the case of modeling rare events it is common to sample all of the rare events and compare these with a similar sized sample of instances where the event did not occur (15). Therefore an additional 12,685 0.01-mile segments that did not experience a service failure during the four-year period were randomly selected from the same network. Each non-failure location was also assigned a random date within the four-year time period for use in evaluating certain temporal variables that might be factors. Thus, the dataset used in this analysis included a total of 25,370 segment locations and dates when a service failure did or did not occur in the railroad’s network during the study period. All available rail characteristics, infrastructure data, maintenance activity, operational information, and track testing results were linked to each of these locations, for a total of 28 unique input variables. Evaluation of Previous Service Failure Model In a previous study Dick developed a predictive model of service failures based on relevant track and traffic data for a two-year period (10, 11). 
The outcome of that study was a multivariate statistical model that could quantify the probability of a service failure at any particular location based on a number of track and traffic related variables. Dick‘s model used 11 possible predictor factors for broken rails and could correctly classify failure locations with 87.4% accuracy using the dataset provided to him. Our first step was to test this model using data from a more recent two-year period. From 2005 through 2006, the BNSF experienced 6,613 service failures and data on these, along with 6,613 randomly selected non-failure locations, were analyzed. 7,247 of the 13,226 cases were classified correctly (54.8%), considerably lower than in the earlier study causing us to ask why the predictive power seemed to have declined. Examination of the service failure dataset used previously revealed that it may not have included all the trackage from the network. This resulted in a dataset that generated the particular model and accuracy levels reported in the earlier study (10, 11). Therefore a new, updated statistical model was developed to predict service failure locations. Development of Updated Statistical Classification Model The updated model that was developed to predict service failure locations used similar logistic regression techniques. Logistic regression was selected because it is a discrete choice model that calculates the probability of failure based on available input variables. These probabilities are used to classify each case as either failure or non-failure. A statistical regression equation was developed based on the significant input parameters to determine the probability of failure. To find the best classification model, the input parameters were evaluated with and without multiple-term interactions allowed. Logistic Regression Methodology and Techniques The model was developed as a discrete choice classification problem of either failure or non-failure using the new dataset described above. The objective was to find the best combination of variables and mathematical relationships among the 28 available input variables to predict the occurrence of broken rails. The service failure probability model was developed using Statistical Analysis Software (SAS) and the LOGISTIC procedure (16). This procedure fits a discrete choice logistic regression model to the input data. The output of this model is an index value between zero and one corresponding to the probability of a service failure occurrence. Four commonly used variable selection techniques were evaluated in this analysis to find the best model. The simplest method is referred to as “full-model”, or variable selection type “none” in SAS. The full-model method uses every available input variable to determine the best regression model. The next technique examined was selection type “forward”, which evaluates each input variable and systematically adds the most significant variables to the model. The forward selection process continues adding the most significant variable until no additional variables meet a defined significance level for inclusion in the model. The entry and removal level used in this analysis for all variable selection techniques was a 0.05 significance threshold. The “backward” variable selection technique was also used. This method starts with all input variables included in the model. In the first step, the model determines the least significant variable that does not meet the defined significance level and removes it from the model. 
This process continues until no other variables included in the model meet the defined criteria for removal. The final logistic regression selection technique used was “step-wise” selection. The step-wise selection method is s", "title": "" }, { "docid": "bdefc710647c80630cb089aec9d79197", "text": "This chapter introduces a new computational intelligence paradigm to perform pattern recognition, named Artificial Immune Systems (AIS). AIS take inspiration from the immune system in order to build novel computational tools to solve problems in a vast range of domain areas. The basic immune theories used to explain how the immune system perform pattern recognition are described and their corresponding computational models are presented. This is followed with a survey from the literature of AIS applied to pattern recognition. The chapter is concluded with a trade-off between AIS and artificial neural networks as pattern recognition paradigms.", "title": "" }, { "docid": "8f3c0a8098ae76755b0e2f1dc9cfc8ea", "text": "This paper presents a new approach to structural topology optimization. We represent the structural boundary by a level set model that is embedded in a scalar function of a higher dimension. Such level set models are flexible in handling complex topological changes and are concise in describing the boundary shape of the structure. Furthermore, a wellfounded mathematical procedure leads to a numerical algorithm that describes a structural optimization as a sequence of motions of the implicit boundaries converging to an optimum solution and satisfying specified constraints. The result is a 3D topology optimization technique that demonstrates outstanding flexibility of handling topological changes, fidelity of boundary representation and degree of automation. We have implemented the algorithm with the use of several robust and efficient numerical techniques of level set methods. The benefit and the advantages of the proposed method are illustrated with several 2D examples that are widely used in the recent literature of topology optimization, especially in the homogenization based methods. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "031a07ada2160dc70380738bedd6d657", "text": "Emergent heterogeneous systems must be optimized for both power and performance at exascale. Massive parallelism combined with complex memory hierarchies form a barrier to efficient application and architecture design. These challenges are exacerbated with GPUs as parallelism increases orders of magnitude and power consumption can easily double. Models have been proposed to isolate power and performance bottlenecks and identify their root causes. However, no current models combine simplicity, accuracy, and support for emergent GPU architectures (e.g. NVIDIA Fermi). We combine hardware performance counter data with machine learning and advanced analytics to model power-performance efficiency for modern GPU-based systems. Our performance counter based approach is simpler than previous approaches and does not require detailed understanding of the underlying architecture. The resulting model is accurate for predicting power (within 2.1%) and performance (within 6.7%) for application kernels on modern GPUs. Our model can identify power-performance bottlenecks and their root causes for various complex computation and memory access patterns (e.g. global, shared, texture). 
We measure the accuracy of our power and performance models on a NVIDIA Fermi C2075 GPU for more than a dozen CUDA applications. We show our power model is more accurate and robust than the best available GPU power models - multiple linear regression models MLR and MLR+. We demonstrate how to use our models to identify power-performance bottlenecks and suggest optimization strategies for high-performance codes such as GEM, a biomolecular electrostatic analysis application. We verify our power-performance model is accurate on clusters of NVIDIA Fermi M2090s and useful for suggesting optimal runtime configurations on the Keeneland supercomputer at Georgia Tech.", "title": "" }, { "docid": "63c550438679c0353c2f175032a73369", "text": "Large screens or projections in public and private settings have become part of our daily lives, as they enable the collaboration and presentation of information in many diverse ways. When discussing the shown information with other persons, we often point to a displayed object with our index finger or a laser pointer in order to talk about it. Although mobile phone-based interactions with remote screens have been investigated intensively in the last decade, none of them considered such direct pointing interactions for application in everyday tasks. In this paper, we present the concept and design space of PointerPhone which enables users to directly point at objects on a remote screen with their mobile phone and interact with them in a natural and seamless way. We detail the design space and distinguish three categories of interactions including low-level interactions using the mobile phone as a precise and fast pointing device, as well as an input and output device. We detail the category of widgetlevel interactions. Further, we demonstrate versatile high-level interaction techniques and show their application in a collaborative presentation scenario. Based on the results of a qualitative study, we provide design implications for application designs.", "title": "" }, { "docid": "c5259f94f5dbee97edc12671db29a6df", "text": "Sentences and tweets are often annotated for sentiment simply by asking respondents to label them as positive, negative, or neutral. This works well for simple expressions of sentiment; however, for many other types of sentences, respondents are unsure of how to annotate, and produce inconsistent labels. In this paper, we outline several types of sentences that are particularly challenging for manual sentiment annotation. Next we propose two annotation schemes that address these challenges, and list benefits and limitations for both.", "title": "" }, { "docid": "a2f36e0f8abaa07124d446f6aa870491", "text": "We explore the capabilities of Auto-Encoders to fuse the information available from cameras and depth sensors, and to reconstruct missing data, for scene understanding tasks. In particular we consider three input modalities: RGB images; depth images; and semantic label information. We seek to generate complete scene segmentations and depth maps, given images and partial and/or noisy depth and semantic data. We formulate this objective of reconstructing one or more types of scene data using a Multi-modal stacked Auto-Encoder. We show that suitably designed Multi-modal Auto-Encoders can solve the depth estimation and the semantic segmentation problems simultaneously, in the partial or even complete absence of some of the input modalities. We demonstrate our method using the outdoor dataset KITTI that includes LIDAR and stereo cameras. 
Our results show that as a means to estimate depth from a single image, our method is comparable to the state-of-the-art, and can run in real time (i.e., less than 40ms per frame). But we also show that our method has a significant advantage over other methods in that it can seamlessly use additional data that may be available, such as a sparse point-cloud and/or incomplete coarse semantic labels.", "title": "" }, { "docid": "caa35f58e9e217fd45daa2e49c4a4cde", "text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 2 Morphological processingn 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “twolevel” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. 
This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. “.o.” represents composition of FSTs, “+” concatenation of FSTs. to lexical output) to one that performs generation (lexical input to surface output). This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a “guesser” analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants. For example, the Amharic verb root sbr ‘break’ can combine with roughly 50 different templates to form stems in words such as y ̃b•l y1-sEbr-al ‘he breaks’, ° ̃¤’ tEsEbbEr-E ‘it was broken’, ‰ ̃b’w l-assEbb1r-Ew , ‘let me cause him to break something’, ̃§§” sEbabar-i ‘broken into many pieces’. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup’s approach, each of the arcs in a transducer may be “weighted” with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. 
These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word. In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.", "title": "" }, { "docid": "e3853e259c3ae6739dcae3143e2074a8", "text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.", "title": "" }, { "docid": "843c99eb041fd3fc6903afda71f1a37e", "text": "The purpose of this document is to re ect on novel and upcoming methods for computer vision that might have relevance for application in robot vision and video analytics. The document covers many di erent subelds of computer vision, most of which have been addressed by our research activity at the computer vision laboratory. The report has been written based on a request of, and supported by, FOI.", "title": "" }, { "docid": "693c5cb15aea4398c95fd9d67f6615e9", "text": "With the renaissance of neural network in recent years, relation classification has again become a research hotspot in natural language processing, and leveraging parse trees is a common and effective method of tackling this problem. In this work, we offer a new perspective on utilizing syntactic information of dependency parse tree and present a position encoding convolutional neural network (PECNN) based on dependency parse tree for relation classification. First, treebased position features are proposed to encode the relative positions of words in dependency trees and help enhance the word representations. 
Then, based on a redefinition of “context”, we design two kinds of tree-based convolution kernels for capturing the semantic and structural information provided by dependency trees. Finally, the features extracted by convolution module are fed to a classifier for labelling the semantic relations. Experiments on the benchmark dataset show that PECNN outperforms state-of-the-art approaches. We also compare the effect of different position features and visualize the influence of treebased position feature by tracing back the convolution process.", "title": "" }, { "docid": "ae43fc77cfe3e88f00a519744407eed7", "text": "In this work we use the recent advances in representation learning to propose a neural architecture for the problem of natural language inference. Our approach is aligned to mimic how a human does the natural language inference process given two statements. The model uses variants of Long Short Term Memory (LSTM), attention mechanism and composable neural networks, to carry out the task. Each part of our model can be mapped to a clear functionality humans do for carrying out the overall task of natural language inference. The model is end-to-end differentiable enabling training by stochastic gradient descent. On Stanford Natural Language Inference(SNLI) dataset, the proposed model achieves better accuracy numbers than all published models in literature.", "title": "" }, { "docid": "ec7590c04dc31b1c6065ef4e15148dfc", "text": "No thesis - no graduation. Academic writing poses manifold challenges to students, instructors and institutions alike. High labor costs, increasing student numbers, and the Bologna Process (which has reduced the period after which undergraduates in Europe submit their first thesis and thus the time available to focus on writing skills) all pose a threat to students’ academic writing abilities. This situation gave rise to the practical goal of this study: to determine if, and to what extent, academic writing and its instruction can be scaled (i.e., designed more efficiently) using a technological solution, in this case Thesis Writer (TW), a domain-specific, online learning environment for the scaffolding of student academic writing, combined with an online editor optimized for producing academic text. Compared to existing automated essay scoring and writing evaluation tools, TW is not focusing on feedback but on instruction, planning, and genre mastery. While most US-based tools, particularly those also used in secondary education, are targeting on the essay genre, TW is tailored to the needs of theses and research article writing (IMRD scheme). This mixed-methods paper reports data of a test run with a first-year course of 102 business administration students. A technology adoption model served as a frame of reference for the research design. From a student’s perspective, problems posed by the task of writing a research proposal as well as the use, usability, and usefulness of TW were studied through an online survey and focus groups (explanatory sequential design). Results seen were positive to highly positive – TW is being used, and has been deemed supportive by students. In particular, it supports the scaling of writing instruction in group assignment settings.", "title": "" }, { "docid": "5d17ff397a09da24945bb549a8bfd3ec", "text": "For applications of 5G (5th generation mobile networks) communication systems, dual-polarized patch array antenna operating at 28.5 GHz is designed on the package substrate. 
To verify the radiation performance of the designed antenna itself, a test package including two patch antennas is also designed and its scattering parameters were measured. Using a large height of dielectric materials, 1.5 ∼ 2.0 GHz of antenna bandwidth is achieved, which is wide enough. Besides, the dielectric constants are reduced to reflect variances of material properties in the higher frequency region. Measured results of the test package show a good performance at the operating frequency, indicating that the fabricated antenna package will perform well, too. In future work, manufacturing variances will be investigated further.", "title": "" } ]
scidocsrr
e1e1d3af82c42a25a432902efb540b8c
Micro-Expression Recognition Using Color Spaces
[ { "docid": "ffc36fa0dcc81a7f5ba9751eee9094d7", "text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of lCA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung Die Analyse unabhfingiger Komponenten (ICA) eines Vektors beruht auf der Suche nach einer linearen Transformation, die die statistische Abh~ingigkeit zwischen den Komponenten minimiert. Zur Definition geeigneter Such-Kriterien wird die Entwicklung gemeinsamer Information als Funktion von Kumulanten steigender Ordnung genutzt. Es wird ein effizienter Algorithmus vorgeschlagen, der die Berechnung der ICA ffir Datenmatrizen innerhalb einer polynomischen Zeit erlaubt. Das Konzept der ICA kann eigentlich als Erweiterung der 'Principal Component Analysis' (PCA) betrachtet werden, die nur die Unabh~ingigkeit bis zur zweiten Ordnung erzwingen kann und deshalb Richtungen definiert, die orthogonal sind. Potentielle Anwendungen der ICA beinhalten Daten-Analyse und Kompression, Bayes-Detektion, Quellenlokalisierung und blinde Identifikation und Entfaltung.", "title": "" }, { "docid": "78ae476295aa266a170a981a34767bdd", "text": "Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.", "title": "" } ]
[ { "docid": "1abeeaa8c100e1231f3e06cad3f0ea70", "text": "Collaborative online shopping refers to an activity in which a consumer shops at an eCommerce website with remotely located shopping partners such as friends or family. Although collaborative online shopping has increased with the pervasiveness of social networking, few studies have examined how to enhance this type of shopping experience. This study examines two potential design components, embodiment and media richness, that could enhance shoppers’ experiences. Based on theories of copresence and flow, we examined whether the implementation of these two features could increase copresence, flow, and the intention to use a collaborative online shopping website. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "65fa13e16b7411c5b3ed20f6009809df", "text": "In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks (GANs). GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer. In the field of Natural Language Processing, word embeddings such as word2vec and GLoVe are state-of-the-art methods for applying neural network models on textual data. Attempts have been made for utilizing GANs with word embeddings for text generation. This work presents an approach to text generation using SkipThought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures. The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used.", "title": "" }, { "docid": "e8c7f00d775254bd6b8c5393397d05a6", "text": "PURPOSE\nVirtual reality devices, including virtual reality head-mounted displays, are becoming increasingly accessible to the general public as technological advances lead to reduced costs. However, there are numerous reports that adverse effects such as ocular discomfort and headache are associated with these devices. To investigate these adverse effects, questionnaires that have been specifically designed for other purposes such as investigating motion sickness have often been used. The primary purpose of this study was to develop a standard questionnaire for use in investigating symptoms that result from virtual reality viewing. In addition, symptom duration and whether priming subjects elevates symptom ratings were also investigated.\n\n\nMETHODS\nA list of the most frequently reported symptoms following virtual reality viewing was determined from previously published studies and used as the basis for a pilot questionnaire. The pilot questionnaire, which consisted of 12 nonocular and 11 ocular symptoms, was administered to two groups of eight subjects. One group was primed by having them complete the questionnaire before immersion; the other group completed the questionnaire postviewing only. Postviewing testing was carried out immediately after viewing and then at 2-min intervals for a further 10 min.\n\n\nRESULTS\nPriming subjects did not elevate symptom ratings; therefore, the data were pooled and 16 symptoms were found to increase significantly. The majority of symptoms dissipated rapidly, within 6 min after viewing. 
Frequency of endorsement data showed that approximately half of the symptoms on the pilot questionnaire could be discarded because <20% of subjects experienced them.\n\n\nCONCLUSIONS\nSymptom questionnaires to investigate virtual reality viewing can be administered before viewing, without biasing the findings, allowing calculation of the amount of change from pre- to postviewing. However, symptoms dissipate rapidly and assessment of symptoms needs to occur in the first 5 min postviewing. Thirteen symptom questions, eight nonocular and five ocular, were determined to be useful for a questionnaire specifically related to virtual reality viewing using a head-mounted display.", "title": "" }, { "docid": "de3ba8a5e83dc1fa153b9341ff7cbc76", "text": "The 1990s have seen a rapid growth of research interests in mobile ad hoc networking. The infrastructureless and the dynamic nature of these networks demands new set of networking strategies to be implemented in order to provide efficient end-to-end communication. This, along with the diverse application of these networks in many different scenarios such as battlefield and disaster recovery, have seen MANETs being researched by many different organisations and institutes. MANETs employ the traditional TCP/IP structure to provide end-to-end communication between nodes. However, due to their mobility and the limited resource in wireless networks, each layer in the TCP/IP model require redefinition or modifications to function efficiently in MANETs. One interesting research area in MANET is routing. Routing in the MANETs is a challenging task and has received a tremendous amount of attention from researches. This has led to development of many different routing protocols for MANETs, and each author of each proposed protocol argues that the strategy proposed provides an improvement over a number of different strategies considered in the literature for a given network scenario. Therefore, it is quite difficult to determine which protocols may perform best under a number of different network scenarios, such as increasing node density and traffic. In this paper, we provide an overview of a wide range of routing protocols proposed in the literature. We also provide a performance comparison of all routing protocols and suggest which protocols may perform best in large networks.", "title": "" }, { "docid": "2c63c39cf0e21119ecd6a471c9764fa2", "text": "CODE4 is a general-purpose knowledge management system, intended to assist with the common knowledge processing needs of anyone who desires to analyse, store, or retrieve conceptual knowledge in applications as varied as the specification, design and user documentation of computer systems; the construction of term banks, or the development of ontologies for natural language understanding. This paper provides an overview of CODE4 as follows: We first describe the general philosophy and rationale of CODE4 and relate it to other systems. Next, we discuss the knowledge representation, specifically designed to meet the needs of flexible, interactive knowledge management. The highly-developed user interface, which we believe to be critical for this type of system, is explained in some detail. We finally describe how CODE4 is being used in a number of applications.", "title": "" }, { "docid": "446a15e1dae957f1e142454e4f32db5d", "text": "Cyber attacks in the Internet are common knowledge for even nontechnical people. Same attack techniques can also be used against any military radio networks in the battlefield. 
This paper describes a test setup that can be used to test tactical radio networks against cyber vulnerabilities. The test setup created is versatile and can be adapted to any command and control system on any level of the OSI model. Test setup uses as much publicly or commercially available tools as possible. Need for custom made components is minimized to decrease costs, to decrease deployment time and to increase usability. With architecture described, same tools used in IP network testing can be used in tactical radio networks. Problems found in any level of the system can be fixed in co-operation with vendors of the system. Cyber testing should be adapted as part of acceptance tests of any new military communication system.", "title": "" }, { "docid": "6001982cb50621fe488034d6475d1894", "text": "Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.", "title": "" }, { "docid": "e92a85f7ce827f4108c2393b401c1248", "text": "Although many people are aware of the communication that occurs between the gastrointestinal (GI) tract and the central nervous system, fewer know about the ability of the central nervous system to influence the microbiota or of the microbiota's influence on the brain and behavior. Within the GI tract, the microbiota have a mutually beneficial relationship with their host that maintains normal mucosal immune function, epithelial barrier integrity, motility, and nutrient absorption. Disruption of this relationship alters GI function and disease susceptibility. Animal studies suggest that perturbations of behavior, such as stress, can change the composition of the microbiota; these changes are associated with increased vulnerability to inflammatory stimuli in the GI tract. The mechanisms that underlie these alterations are likely to involve stress-induced changes in GI physiology that alter the habitat of enteric bacteria. Furthermore, experimental perturbation of the microbiota can alter behavior, and the behavior of germ-free mice differs from that of colonized mice. Gaining a better understanding of the relationship between behavior and the microbiota could provide insight into the pathogenesis of functional and inflammatory bowel disorders.", "title": "" }, { "docid": "a8a802b8130d2b6a1b2dae84d53fb7c9", "text": "This paper addresses an open challenge in educational data mining, i.e., the problem of using observed prerequisite relations among courses to learn a directed universal concept graph, and using the induced graph to predict unobserved prerequisite relations among a broader range of courses. 
This is particularly useful to induce prerequisite relations among courses from different providers (universities, MOOCs, etc.). We propose a new framework for inference within and across two graphs---at the course level and at the induced concept level---which we call Concept Graph Learning (CGL). In the training phase, our system projects the course-level links onto the concept space to induce directed concept links; in the testing phase, the concept links are used to predict (unobserved) prerequisite links for test-set courses within the same institution or across institutions. The dual mappings enable our system to perform an interlingua-style transfer learning, e.g. treating the concept graph as the interlingua, and inducing prerequisite links in a transferable manner across different universities. Experiments on our newly collected data sets of courses from MIT, Caltech, Princeton and CMU show promising results, including the viability of CGL for transfer learning.", "title": "" }, { "docid": "35ac15f19cefd103f984519e046e407c", "text": "This paper presents a highly sensitive sensor for crack detection in metallic surfaces. The sensor is inspired by complementary split-ring resonators which have dimensions much smaller than the excitation’s wavelength. The entire sensor is etched in the ground plane of a microstrip line and fabricated using printed circuit board technology. Compared to available microwave techniques, the sensor introduced here has key advantages including high sensitivity, increased dynamic range, spatial resolution, design simplicity, selectivity, and scalability. Experimental measurements showed that a surface crack having 200-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> width and 2-mm depth gives a shift in the resonance frequency of 1.5 GHz. This resonance frequency shift exceeds what can be achieved using other sensors operating in the low GHz frequency regime by a significant margin. In addition, using numerical simulation, we showed that the new sensor is able to resolve a 10-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>-wide crack (equivalent to <inline-formula> <tex-math notation=\"LaTeX\">$\\lambda $ </tex-math></inline-formula>/4000) with 180-MHz shift in the resonance frequency.", "title": "" }, { "docid": "394f71d22294ec8f6704ad484a825b20", "text": "Despite decades of research, the roles of climate and humans in driving the dramatic extinctions of large-bodied mammals during the Late Quaternary remain contentious. We use ancient DNA, species distribution models and the human fossil record to elucidate how climate and humans shaped the demographic history of woolly rhinoceros, woolly mammoth, wild horse, reindeer, bison and musk ox. We show that climate has been a major driver of population change over the past 50,000 years. However, each species responds differently to the effects of climatic shifts, habitat redistribution and human encroachment. Although climate change alone can explain the extinction of some species, such as Eurasian musk ox and woolly rhinoceros, a combination of climatic and anthropogenic effects appears to be responsible for the extinction of others, including Eurasian steppe bison and wild horse. We find no genetic signature or any distinctive range dynamics distinguishing extinct from surviving species, underscoring the challenges associated with predicting future responses of extant mammals to climate and human-mediated habitat change. 
Toward the end of the Late Quaternary, beginning c. 50,000 years ago, Eurasia and North America lost c. 36% and 72% of their large-bodied mammalian genera (megafauna), respectively1. The debate surrounding the potential causes of these extinctions has focused primarily on the relative roles of climate and humans2,3,4,5. In general, the proportion of species that went extinct was greatest on continents that experienced the most dramatic climatic changes6, implying a major role of climate in species loss. However, the continental pattern of megafaunal extinctions in North America approximately coincides with the first appearance of humans, suggesting a potential anthropogenic contribution to species extinctions3,5. Demographic trajectories of different taxa vary widely and depend on the geographic scale and methodological approaches used3,5,7. For example, genetic diversity in bison8,9, musk ox10 and European cave bear11 declines gradually from c. 50–30,000 calendar years ago (ka BP). In contrast, sudden losses of genetic diversity are observed in woolly mammoth12,13 and cave lion14 long before their extinction, followed by genetic stability until the extinction events.
It remains unresolved whether the Late Quaternary extinctions were a cross-taxa response to widespread climatic or anthropogenic stressors, or were a species-specific response to one or both factors15,16. Additionally, it is unclear whether distinctive genetic signatures or geographic range-size dynamics characterise extinct or surviving species— questions of particular importance to the conservation of extant species. To disentangle the processes underlying population dynamics and extinction, we investigate the demographic histories of six megafauna herbivores of the Late Quaternary: woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius), horse (wild Equus ferus and living domestic Equus caballus), reindeer/caribou (Rangifer tarandus), bison (Bison priscus/Bison bison) and musk ox (Ovibos moschatus). These taxa were characteristic of Late Quaternary Eurasia and/or North America and represent both extinct and extant species. Our analyses are based on 846 radiocarbon-dated mitochondrial DNA (mtDNA) control region sequences, 1,439 directly-dated megafaunal remains, and 6,291 radiocarbon determinations associated with Upper Palaeolithic human occupations in Eurasia. We reconstruct the demographic histories of the megafauna herbivores from ancient DNA data, model past species distributions and determine the geographic overlap between humans and megafauna over the last 50,000 years. We use these data to investigate how climate change and anthropogenic impacts affected species dynamics at continental and global scales, and contributed to the extinction of some species and the survival of others. Effects of climate change differ across species and continents. The direct link between climate change, population size and species extinctions is difficult to document10. However, population size is likely controlled by the amount of available habitat and is indicated by the geographic range of a species17,18. We assessed the role of climate using species distribution models, dated megafauna fossil remains and palaeoclimatic data on temperature and precipitation. We estimated species range sizes at the time periods of 42, 30, 21 and 6 ka BP as a proxy for habitat availability (Fig. 1; Supplementary Information section S1). Range size dynamics were then compared to demographic histories inferred from ancient DNA using three distinct analyses (Supplementary Information section S3): (i) coalescent-based estimation of changes in effective population size through time (Bayesian skyride19), which allows detection of changes in global genetic diversity; (ii) serial coalescent simulation followed by Approximate Bayesian Computation, which selects among different models describing continental population dynamics; and (iii) isolation-by-distance analysis, which estimates potential population structure and connectivity within continents.
We find a positive correlation between changes in the size of available habitat and genetic diversity for the four species—horse, reindeer, bison and musk ox—for which we have range estimates spanning all four time-points (the correlation is not statistically significant for reindeer: p = 0.101) (Fig. 2; Supplementary Information section S4). Hence, species distribution modelling based on fossil distributions and climate data are congruent with estimates of effective population size based on ancient DNA data, even in species with very different life-history traits. We conclude that climate has been a major driving force in megafauna population changes over the past 50,000 years. It is noteworthy that both estimated modelled ranges and genetic data are derived from a subset of the entire fossil record (Supplementary Information sections S1 and S3). Thus, changes in effective population size and range size may change with the addition of more data, especially from outside the geographical regions covered by the present study. However, we expect that the reported positive correlation will prevail when congruent data are compared. The best-supported models of changes in effective population size in North America and Eurasia during periods of dramatic climatic change during the past 50,000 years are those in which populations increase in size (Fig. 3, Supplementary Information section S3). This is true for all taxa except bison. However, the timing is not synchronous across populations. Specifically, we find highest support for population increase beginning c. 34 ka BP in Eurasian horse, reindeer and musk ox (Fig. 3a). Eurasian mammoth and North American horse increase prior to the Last Glacial Maximum (LGM) c. 26 ka BP. Models of population increase in woolly rhinoceros and North American mammoth fit equally well before and after the LGM, and North American reindeer populations increase later still. Only North American bison shows a population decline (Fig. 3b), the intensity of which likely swamps the signal of global population increase starting at c. 35 ka BP identified in the skyride plot", "title": "" }, { "docid": "0e477f56c7f0e1c40eadbd499b226347", "text": "In this paper, the channel stacked array (CSTAR) NAND flash memory with layer selection by multi-level operation (LSM) of string select transistor (SST) is proposed and investigated to solve problems of conventional channel stacked array. In case of LSM architecture, the stacked layers can be distinguished by combinations of multi-level states of SST and string select line (SSL) bias. Due to the layer selection performed by the bias of SSL, the placement of bit lines and word lines is similar to the conventional planar structure, and proposed CSTAR with LSM has no island-type SSLs. As a result of the advantages of the proposed architecture, various issues of conventional channel stacked NAND flash memory array can be solved.", "title": "" }, { "docid": "8ab5ae25073b869ea28fc25df3cfdf5f", "text": "We present the TurkuNLP entry to the BioNLP Shared Task 2016 Bacteria Biotopes event extraction (BB3-event) subtask. We propose a deep learningbased approach to event extraction using a combination of several Long Short-Term Memory (LSTM) networks over syntactic dependency graphs. Features for the proposed neural network are generated based on the shortest path connecting the two candidate entities in the dependency graph. 
We further detail how this network can be efficiently trained to have good generalization performance even when only a very limited number of training examples are available and part-of-speech (POS) and dependency type feature representations must be learned from scratch. Our method ranked second among the entries to the shared task, achieving an F-score of 52.1% with 62.3% precision and 44.8% recall.", "title": "" }, { "docid": "1e6c497fe53f8cba76bd8b432c618c1f", "text": "inputs into digital (down or up), analog (-1.0 to 1.0), and positional (touch and • mouse cursor). By building on a solid main loop you can easily add support for detecting chorded inputs and sequence inputs.", "title": "" }, { "docid": "77b4cb00c3a72fdeefa99aa504f492d8", "text": "This article considers a short survey of basic methods of social networks analysis, which are used for detecting cyber threats. The main types of social network threats are presented. Basic methods of graph theory and data mining, that deals with social networks analysis are described. Typical security tasks of social network analysis, such as community detection in network, detection of leaders in communities, detection experts in networks, clustering text information and others are considered.", "title": "" }, { "docid": "3798374ed33c3d3255dcc7d7c78507c2", "text": "Cloud computing is characterized by shared infrastructure and a decoupling between its operators and tenants. These two characteristics impose new challenges to databases applications hosted in the cloud, namely: (i) how to price database services, (ii) how to isolate database tenants, and (iii) how to optimize database performance on this shared infrastructure. We argue that today’s solutions, based on virtual-machines, do not properly address these challenges. We hint at new research directions to tackle these problems and argue that these three challenges share a common need for accurate predictive models of performance and resource utilization. We present initial predictive models for the important class of OLTP/Web workloads and show how they can be used to address these challenges.", "title": "" }, { "docid": "3765aae3bd550c2ab5b4b32e1a969c71", "text": "This paper develops a novel algorithm, termed <italic>SPARse Truncated Amplitude flow</italic> (SPARTA), to reconstruct a sparse signal from a small number of magnitude-only measurements. It deals with what is also known as sparse phase retrieval (PR), which is <italic>NP-hard</italic> in general and emerges in many science and engineering applications. Upon formulating sparse PR as an amplitude-based nonconvex optimization task, SPARTA works iteratively in two stages: In stage one, the support of the underlying sparse signal is recovered using an analytically well-justified rule, and subsequently a sparse orthogonality-promoting initialization is obtained via power iterations restricted on the support; and in the second stage, the initialization is successively refined by means of hard thresholding based gradient-type iterations. SPARTA is a simple yet effective, scalable, and fast sparse PR solver. 
On the theoretical side, for any <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula>-dimensional <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math></inline-formula>-sparse (<inline-formula> <tex-math notation=\"LaTeX\">$k\\ll n$</tex-math></inline-formula>) signal <inline-formula><tex-math notation=\"LaTeX\"> $\\boldsymbol {x}$</tex-math></inline-formula> with minimum (in modulus) nonzero entries on the order of <inline-formula> <tex-math notation=\"LaTeX\">$(1/\\sqrt{k})\\Vert \\boldsymbol {x}\\Vert _2$</tex-math></inline-formula>, SPARTA recovers the signal exactly (up to a global unimodular constant) from about <inline-formula><tex-math notation=\"LaTeX\">$k^2\\log n$ </tex-math></inline-formula> random Gaussian measurements with high probability. Furthermore, SPARTA incurs computational complexity on the order of <inline-formula><tex-math notation=\"LaTeX\">$k^2n\\log n$</tex-math> </inline-formula> with total runtime proportional to the time required to read the data, which improves upon the state of the art by at least a factor of <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math></inline-formula>. Finally, SPARTA is robust against additive noise of bounded support. Extensive numerical tests corroborate markedly improved recovery performance and speedups of SPARTA relative to existing alternatives.", "title": "" }, { "docid": "fb89fd2d9bf526b8bc7f1433274859a6", "text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes", "title": "" }, { "docid": "bcf69b1d42d28b8ba66b133ad6421cc4", "text": "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). 
Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1", "title": "" }, { "docid": "c99b283a66d3f9a6afeaf2c74338937c", "text": "If we want to describe the action of someone who is looking out a window for an extended time, how do we choose between the words gazing, staring, and peering? What exactly is the difference between an rgument, a dispute, and a row? In this paper, we describe our research in progress on the problem of lexical choice and the representations of world knowledge and of lexical structure and meaning that the task requires. In particular, we wish to deal with nuances and subtleties of denotation and connotation--shades of meaning and of style--such as those illustrated by the examples above. We are studying the task in two related contexts: machine translation, and the generation of multilingual text from a single representation of content. This work brings together several elements of our earlier research: unilingual lexical choice (Miezitis 1988); multilingual generation (R6sner and Stede 1992a,b); representing and preserving stylistic nuances in translation (DiMarco 1990; DiMarco and Hirst 1990; Mah 1991); and, more generally, analyzing and generating stylistic nuances in text (DiMarco and Hirst 1993; DiMarco et al 1992; MakutaGiluk 1991; Maknta-Giluk and DiMarco 1993; BenHassine 1992; Green 1992a,b, 1993; Hoyt forthcoming). In the present paper, we concentrate on issues in lexical representation. We describe a methodology, based on dictionary usage notes, that we are using to discover the dimensions along which similar words can be differentiated, and we discuss a two-part representation for lexical differentiation. (Our related work on lexical choice itself and its integration with other components of text generation is discussed by Stede (1993a,b, forthcoming).) aspects of their usage. 1 Such differences can include the collocational constraints of the words (e.g., groundhog and woodchuck denote the same set of animals; yet Groundhog Day, * Woodchuck Day) and the stylistic and interpersonal connotations of the words (e.g., die, pass away, snuff it; slim, skinny; police oI~icer, cop, pig). In addition, many groups of words are plesionyms (Cruse 1986)--that is, nearly synonymous; forest and woods, for example, or stared and gazed, or the German words einschrauben, festschrauben, and festziehen. ~ The notions of synonymy and plesionymy can be made more precise by means of a notion of semantic distance (such as that invoked by Hirst (1987), for example, lexical disambiguation); but this is troublesome to formalize satisfactorily. In this paper it will suffice to rely on an intuitive understanding. We consider two dimensions along which words can vary: semantic and stylistic, or, equivalently, denotative and connotative. 
If two words differ semantically (e.g., mist, fog), then substituting one for the other in a sentence or discourse will not necessarily preserve truth conditions; the denotations are not identical. If two words differ (solely) in stylistic features (e.g., frugal, stingy), then intersubstitution does preserve truth conditions, but the connotation--the stylistic and interpersonal effect of the sentence--is changed, s Many of the semantic distinctions between plesionyms do not lend themselves to neat, taxonomic differentiation; rather, they are fuzzy, with plesionyms often having an area of overlap. For example, the boundary between forest and wood ’tract of trees’ is vague, and there are some situations in which either word might be equally appropriate. 4", "title": "" } ]
scidocsrr